Journal of the Brazilian Computer Society

Improving trend analysis using social network features

Caio Cesar Trucolo and Luciano Antonio Digiampietri

Journal of the Brazilian Computer Society 2017, 23:8. Published: 7 June 2017

In recent years, large volumes of data have been massively studied by researchers and organizations. In this context, trend analysis is one of the most important areas. Typically, good prediction results are hard to obtain because of unknown variables that could explain the behavior of the subject of the problem. This paper goes beyond standard trend identification methods, which consider only the historical behavior of the objects, by including the structure of the information sources, i.e., social network metrics, as an additional dimension to model and predict trends over time. Results from a set of experiments indicate that including such metrics improved the prediction accuracy. To evaluate the proposed trend prediction approach, our experiments considered the publication titles, as recorded in the Brazilian Lattes database, from all the Ph.D.s in Computer Science registered in the Lattes platform for the periods analyzed.

Data-driven activities are getting more and more usual in many types of organizations, and data analysis is becoming the main focus of business. In this context, trend analysis is a major application of data analysis. Organizations may try to identify trends to create strategies and plan actions; e.g., an e-commerce company may try to identify trends in order to better focus its supply chain activities. There are several approaches used for prediction, and most of them are based on the temporal behavior of the studied subject. Usually, the temporal behavior is modeled as a time series, where time explains the behavior of the relevant variable. However, when these subjects are produced or consumed by people (journalistic texts or new technology products, for example), another factor can be taken into account: the social structure of the generators or consumers, i.e., the individuals directly related to the object under analysis. A social network in this context can be modeled around these individuals. Nodes can represent producers (or consumers) and edges can represent relationships between them. Taking the content of blog posts as an example, a social network can be built based on the connections among bloggers, i.e., the hyperlinks that connect their websites. The analysis and quantification of the behaviors and relationships of the people in the social structure can be performed using social network analysis. We can calculate social metrics to understand influences, centralities, and communities in order to predict the information diffusion in the network [10, 15, 17, 18, 24, 25]. As we understand the social network characteristics given the calculated metrics, we become able to identify which individuals will be reached by the information spread, and whether the diffusion is going to take a long or a short time. For example, information propagated by a very influential node within a specific time interval can reach more nodes in the network than information propagated by a non-influential node. The social structure plays an important role in the temporal behavior of objects [8, 24].
This work differs from previous work in that, besides using the temporal behavior of the studied object, it incorporates the social structure of the individuals related to this object into the prediction models. In this paper, we present an approach that combines prediction models based on the temporal behavior of the studied object with social network metrics. This approach can be applied to improve the accuracy of trend predictions that are based only on the temporal behavior, whenever it is possible to model a social network from the interaction among the individuals related to the object. For the purpose of this article, we applied this approach to the academic co-authorship environment. Essentially, we used a corpus of titles of papers published in a certain period to predict what the major topics (represented as n-grams) will be in the future. This problem could be solved with standard trend analysis approaches that rely on predicting future frequencies from the observed ones, whereas in this work we consider the properties of the co-authorship network to enhance the predictions. In this case, the objects considered are the n-grams extracted from the paper titles and the individuals are the paper authors. The approach was tested and validated using data from the publication titles of Computer Science Ph.D.s working in Brazil and then compared with approaches that consider only the temporal behavior of the analyzed object. This paper is organized as follows. The "Related work" section describes some basic concepts and related work. The "Methodology" section details the methodology used. The results are described in the "Experiments and results" section. Finally, conclusions are presented in the "Conclusion" section.

Related work

Time series trend analysis

Usually, time is a very important feature in prediction and classification problems. Once the temporal behavior of an object is understood, it is possible to identify patterns and predict trends. Modeling a problem in which time is considered as an explanatory variable is known as time series analysis [9]. Trend analysis can be applied to several topics, such as the stock market [20], textual documents [21], and many others [22]. Trend identification in textual documents, more specifically in a corpus formed by titles of scientific papers, is the application addressed in this paper. In the context of textual documents, frequency counts are usually used as the dependent variable in time series models [1].

Social network trend analysis

There are many ways to model and explore social networks, and one of the research branches is trend analysis in social networks. How can the dynamism and impact of information flow be measured? To answer this question, it is necessary to study the characteristics of the network and its connection structure, that is, how nodes and edges are distributed in the network. Information is produced and transmitted by individuals, and their connection structure affects how information diffuses [3]. A very important characteristic of the individuals in the network is their influence. Finding influential nodes in the network can help to explain how fast the information will spread and how many nodes it will reach. There are methods developed to identify influential nodes [19]. Beyond the individual level, analysis of the size and density of groups in the network is very important to understand the dynamics of information diffusion. For this, it is necessary to identify these groups or communities, which is not a trivial task [16].
Another challenge is to identify critical points in the network where the probability of information diffusion increases [2]. Finally, social network information is being used in several ways to predict trends based on the network behavior [13]. Science and technology information systems offer several resources related to scholars, from which knowledge can be discovered in a quantitative way [11]. Research productivity, for example, can be measured by models that use citation indices and academic social network analysis [4]. The application explored in this paper also uses data from a science and technology system, aiming to identify research trends and topics. Our work differs from others by combining time series and social network analysis. The proposed approach uses these two concepts so that trends are identified based on both the temporal and the social characteristics of the individuals who generate the information.

Methodology

The methodology of this work consists of five steps: data gathering, term extraction, time series analysis, social network analysis, and trend analysis. Figure 1 illustrates the schematic data flow. The next sections describe all the steps applied to the problem of trend identification for the publications in Computer Science in the Brazilian academy context. The proposed approach can be applied to improve the accuracy of trend predictions that consider only the temporal behavior in scenarios where it is possible to recover the connections between the individuals who generate the data (e.g., trend identification of topics discussed in the blogosphere).

Figure 1. Schematic data flow of the work process

Data gathering

Brazil maintains a unique platform called the Lattes Platform1. This is a database of information on science, technology, and innovation, including publications by individual researchers, and it currently registers over 4.5 million curricula. In this work, all the information has been obtained from the Lattes Platform. For data gathering, curricula from all the Computer Science Ph.D.s were selected for the periods analyzed (comprising 5642 curricula). The pre-processing consisted of the extraction and organization of the information using the methodology described by Digiampietri et al. [6, 7]. The pre-processing activities include stop-word removal and co-authorship identification based on an entity resolution approach [7]. From these curricula, 55,710 titles were identified from papers published between the years 1991 and 2012. The variables considered to build the dataset are lattesId (researcher identification number), year (year of publication), title (title of publication), and publicationId (publication identification number).

Term extraction

In this paper, a term is an n-gram extracted from the titles of the papers. In this step, the goal was to automate the data preparation. The first stage of term extraction was to split the titles into sequences of consecutive words without stop-words. The terms extracted consist of one or more consecutive words from the titles, excluding words that were listed as stop-words. As an example, the title Social Network Analysis For Digital Media was split into the following terms: Social, Network, Analysis, Digital, Media, Social Network, Network Analysis, Digital Media, and Social Network Analysis. Terms such as Analysis Digital Media and Media Digital are not included, because they are not formed by consecutive words from a title or because they include stop-words.
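To make this splitting concrete, here is a minimal Python sketch of such a term extraction step; the tokenization and the tiny stop-word list are simplifying assumptions for illustration, not the exact procedure used by the authors.

    def extract_terms(title, stop_words, max_n=3):
        """Return all sequences of up to max_n consecutive words from the
        title that contain no stop-word."""
        words = title.split()
        terms = []
        for i in range(len(words)):
            for j in range(i + 1, min(i + max_n, len(words)) + 1):
                gram = words[i:j]
                if not any(w.lower() in stop_words for w in gram):
                    terms.append(" ".join(gram))
        return terms

    stop_words = {"for", "of", "the", "a", "an"}  # toy stop-word list
    print(extract_terms("Social Network Analysis For Digital Media", stop_words))
    # ['Social', 'Social Network', 'Social Network Analysis', 'Network',
    #  'Network Analysis', 'Analysis', 'Digital', 'Digital Media', 'Media']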
In this example, we obtained unigrams, bigrams, and trigrams; however, the process can obtain n-grams for any n. With all the possible sets of terms, we adopted a scoring system to identify the most important terms. This scoring method was based on the adjacent frequency of the words in the terms. The score used to measure the importance of each candidate term CT is

$$ LRF(CT)=f(CT)\times\left(\prod_{i=1}^{T}(LF(N_{i})+1)(RF(N_{i})+1)\right)^{1/T}, $$

and only candidate terms with a score greater than 1.0 are kept. Here, \(f(CT)\) is the frequency of the candidate term CT, and \(LF(N_{i})\) and \(RF(N_{i})\) indicate the frequencies of the left and right adjacent words of \(N_{i}\), respectively. This equation is described in detail by Nakagawa and Mori [12]. In that same work, the authors conducted evaluations demonstrating that it is possible to find meaningful terms with it. In summary, in this step we automatically extract the terms (n-grams) and then filter the most meaningful ones to build our dataset. We observed that the n-grams had more significance than the unigrams for the subjects discussed in the publications. Since our goal is to identify terms and research topics, unigrams could be very ambiguous. For example, the word Network can be ambiguous given that it can be related to Social Network, Neural Network, or even Business Network. Therefore, we selected the 1638 most important n-grams; this is the number of n-grams that occur over the whole period (1991–2012) considered in the experiments, as explained in the "Experiments and results" section.

Time series analysis

Given a dependent variable and a set of independent ones, a regression model can be formulated as

$$ Y \approx f \left (X,\beta \right ), $$

where the dependent variable Y can be approximated by the independent variables X and the respective parameters β for a function f. For the analysis in this step, we are interested in the frequency (TF-IDF) variation of each term over a target period (e.g., a year). For each term, a time series of its yearly frequency variation is built. The time series can have many types of shapes and behavior, so we used linear and nonlinear regressions (linear, exponential, logarithmic, power law, and polynomial of degree 2 to 5). We applied all of them to each term and chose the one that best fitted each series, using ordinary least squares for evaluation. The regression curves for a few terms are shown in Fig. 2.

Figure 2. Examples of regression curves

As a result, we obtained the best prediction among the regression methods cited above for each term, to be used for building the datasets for trend analysis. These results are taken as a basis for comparison with the proposed approach.

Social network analysis

The network was built from the joint publications (co-authorship relationships) as recorded in the Lattes database. The social network was modeled as a graph composed of 5642 vertices (authors) and 14,647 edges (co-authorship relationships).

The metrics

Metrics of the social network capture different characteristics that can be quantified. In this approach, some metrics have been selected to form the set of independent variables. The selection was based on assumptions about the potential of each metric to explain the information spreading [14, 23]. For example, one of the assumptions is that a node in the giant component of a network is more capable of disseminating information through the network than a node which is not in this component.
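Before moving on to the metrics themselves, here is a minimal sketch of the curve-fitting step described above: several candidate families are fitted to one term's yearly frequency series and the best fit by squared error is kept. The series values are synthetic, only a subset of the candidate families is shown, and the log-space fits are a simplification of a full least-squares fit.

    import numpy as np

    years = np.arange(2002, 2012)
    tfidf = np.array([0.10, 0.12, 0.15, 0.14, 0.18, 0.22, 0.21, 0.25, 0.28, 0.30])
    t = years - years.min() + 1          # shift so log/power fits are defined

    def sse(pred):
        """Sum of squared errors against the observed series."""
        return float(np.sum((tfidf - pred) ** 2))

    fits = {}
    for deg in (1, 2, 3):                # linear and low-degree polynomial fits
        coef = np.polyfit(t, tfidf, deg)
        fits[f"poly{deg}"] = sse(np.polyval(coef, t))
    b, log_a = np.polyfit(t, np.log(tfidf), 1)          # y = a * exp(b t)
    fits["exponential"] = sse(np.exp(log_a + b * t))
    b, a = np.polyfit(np.log(t), tfidf, 1)              # y = a + b * log t
    fits["logarithmic"] = sse(a + b * np.log(t))
    b, log_a = np.polyfit(np.log(t), np.log(tfidf), 1)  # y = a * t^b
    fits["power"] = sse(np.exp(log_a) * t ** b)

    best = min(fits, key=fits.get)
    print(best, fits[best])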
The metrics selected are giant composition, the shortest path to the most central node, degree centrality, eigenvector centrality, page rank centrality, betweenness centrality, closeness centrality, clustering coefficient, structural equivalence to the most central node, and community average centrality [10, 15, 17, 18, 24, 25]. These metrics are described as follows:

Giant composition: number of nodes in the giant component;
Shortest path to the most central node: smallest value among the shortest paths to the most central node;
Degree centrality: average degree centrality of the nodes within the community;
Eigenvector centrality: average eigenvector centrality of the nodes within the community;
Page rank centrality: average page rank centrality of the nodes within the community;
Betweenness centrality: average betweenness centrality of the nodes within the community;
Closeness centrality: average closeness centrality of the nodes within the community;
Clustering coefficient: average value of the clustering coefficient of the nodes within the community;
Structural equivalence to the most central node: average value of the structural equivalence of the nodes within the community;
Community average centrality: average centrality of all community nodes.

The centrality metrics can explain the importance of a node in the network; the shortest path metric indicates how far a node is from the most central node, while the structural equivalence quantifies the similarity of a target node to the most central node. The most important node was used as a reference. To justify this choice, Table 1 shows the difference in the degree and eigenvector centralities between the most central node and the other top ten most important nodes in the network.

Table 1. Eigenvector centrality and degree of the top ten central nodes

Each selected metric has been computed for all network nodes, and each term has been related to one or more nodes (a term may have been employed by one or more authors). If a term is related to a single author, then, in this step, its metrics will have exactly the same values as those of its related node. However, if a term is related to multiple nodes, then the metrics must be aggregated. The aggregated metric is computed as the sum of the metric values of each author related to the term. The one exception is the metric Shortest path to the most central node, for which the aggregated metric is taken as the minimum value over all authors related to the term. For example, if author 1 has a degree centrality of 10, author 2 has a degree centrality of 5, and both used a term A, the degree centrality value of term A would be 15. We also improved the approach with a so-called network community balance. Communities are characterized as groups of nodes with a high edge density [24]. In a community, information propagates quickly and tends to become general knowledge. In trend analysis, this can lead to situations such as terms that are widespread in a particular community but not in the network as a whole. Thus, it is important to evaluate whether the importance of a term occurs only within a community or in the whole network. Therefore, we decided to apply a community-level aggregation to balance the node-level aggregation in computing the metric values of each term. To identify the communities, we used the R implementation2 of the algorithm proposed by Clauset et al. [5].
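As a sketch of how the node-level ingredients of these metrics can be computed, the snippet below uses Python's networkx on a stand-in graph. The tooling choice is ours for illustration (the paper used an R implementation for community detection), but greedy_modularity_communities implements the same Clauset–Newman–Moore algorithm [5].

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.karate_club_graph()            # stand-in for the co-authorship graph

    giant = max(nx.connected_components(G), key=len)   # giant component
    node_metrics = {
        "degree": nx.degree_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "pagerank": nx.pagerank(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "clustering": nx.clustering(G),
    }
    most_central = max(node_metrics["degree"], key=node_metrics["degree"].get)
    # shortest-path distance from every node to the most central node
    dist_to_center = nx.single_source_shortest_path_length(G, most_central)

    communities = list(greedy_modularity_communities(G))  # Clauset-Newman-Moore
    print(len(giant), most_central, [len(c) for c in communities])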
Therefore, for nodes that are within the same community, the aggregated metric value is computed as the average value over the nodes; for nodes that are not in the same community, it is computed as the sum of the node metric values. For example, let us assume that a term A is used only by two authors: author 1 and author 2. If author 1 and author 2 are in the same community, the metric would be calculated as the average over the authors in this community who used term A. But if author 1 and author 2 are in different communities, the average metric value would be computed for each community and these results would finally be summed. At the end of this step, the first part of the feature matrix is finished, in which each row is a term and each column is a social network metric.

Trend analysis

With the time series analysis and social network analysis performed, we are able to model the behavior of each term. At this moment, the time series model and the social network analysis are combined. Having the social network metrics and the time series prediction calculated, we modeled the problem as the term importance index being explained by the social characteristics and the "clue" about its future importance (the TF-IDF predicted by the time series prediction methods). The feature vector built in the previous step (as described in "The metrics" section) is then enriched with the importance index (TF-IDF) predicted by the time series models. Thus, in this step, the dataset to be input into the proposed trend analysis is built. Both social network analysis and time series analysis rely on the periods considered. Therefore, for each time interval of model training, the dataset will be different. For example, the dataset relative to the period between 2002 and 2005, built to predict 2006, is different from the dataset relative to the period between 2002 and 2006, built to predict 2007; that is, one more year (2006) is included in the second case. The dataset is subjected to preprocessing methods such as normalization and feature selection, and then supervised learning methods are applied to predict the importance index of the terms for certain periods. The methods considered in the experiments were Linear Regression, Artificial Neural Network (ANN), Support Vector Machine (SVM), and Rotation Forest (RF). In this context, trends are the terms with high predicted values of the importance index.

Experiments and results

The main goals of the experiments are to evaluate the proposed approach and compare it with results from standard time series prediction. Identifying which models, periods, and variables present a better performance is also within the scope. We split the experiments into two groups. In the first group, the goal was to evaluate the best techniques, time period, and set of variables, while the second group of experiments was designed to evaluate longer prediction periods based on the best set of variables and techniques. Models were evaluated by measuring the Relative Absolute Error (RAE), comparing the true TF-IDF values observed (\(y_{i}\)) with the predicted ones (\(f_{i}\)). The equation for RAE is

$$ RAE = \frac{\sum_{i=1}^{n} \left | f_{i} - y_{i} \right |}{\sum_{i=1}^{n} \left | y_{i} - \bar{y} \right |},\qquad \bar{y} = \frac{1}{n}\sum_{i=1}^{n}{y_{i}}. $$

At first, we made experiments based only on the temporal behavior. We considered the same time series dataset model (with no social metric feature, only TF-IDF and year) to evaluate and compare two different kinds of techniques: time series methods and supervised learning methods.
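For reference, the RAE defined above is only a few lines of code; the numbers here are illustrative:

    import numpy as np

    def rae(y_true, y_pred):
        """Relative Absolute Error: total absolute error of the model divided
        by the total absolute error of the mean predictor."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return np.sum(np.abs(y_pred - y_true)) / np.sum(np.abs(y_true - y_true.mean()))

    observed = [0.30, 0.25, 0.40, 0.35]   # true TF-IDF values (illustrative)
    predicted = [0.28, 0.27, 0.38, 0.36]  # model predictions (illustrative)
    print(f"RAE = {rae(observed, predicted):.2%}")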
Table 2 shows the best RAE values relative to three periods that represent short, medium, and long periods for the time series trend analysis. With the conventional methods for time series analysis, the best result was obtained for the longest period, while with the supervised learning methods, the shortest period yielded the best results. We can see that the supervised learning methods were considerably more accurate than the regression ones.

Table 2. Time series and supervised learning methods' best results (lowest RAE) for three periods (1991–2011, 2002–2011, and 2007–2011), having 2012 as the year of prediction; columns: time series regression, time series supervised learning methods. The italicized data refer to the best result of each period.

To test the model described in this paper, we used the final dataset explained in the "Trend analysis" section. A description of this dataset is shown in Table 3.

Table 3. Basic statistical metrics (including mean and standard deviation) relative to the final dataset built for the first experiment (period 1991–2011); features: giant composition, degree centrality, page rank centrality, betweenness centrality, closeness centrality, clustering coefficient, structural equivalence, community average centrality, and time series prediction.

It is clear that some features take values on a larger scale than others (e.g., betweenness centrality). To correct these differences, we applied normalization in the preprocessing step. Before applying the prediction methods, a correlation analysis was conducted to clarify the behavior of each feature. Figure 3 depicts the scatterplots showing the pairwise correlations of the features, including the dependent variable. We can see that no feature is highly correlated with the dependent variable (importance index), but most of them have some correlation. As expected, most of the centrality metrics are highly correlated with each other, indicating that some of them can be discarded in the supervised learning step.

Figure 3. Scatterplot of each pair of explanatory features and the dependent variable (importance index) in the final dataset for the period between 1991 and 2011. The most highly correlated pairs of variables are in the center and colored red, while the least correlated pairs are towards the border and colored yellow.

In this experiment, we varied the number of features selected. We generated datasets with the instances described by all attributes (features) and datasets with attributes selected by Relief and by manual selection, which is an appropriate selection method if the analyst has knowledge about the problem domain. The most important criterion in selecting the features manually was their mutual correlation. Furthermore, we varied the parameters for each prediction model algorithm, generating 16 tests for ANN, 9 tests for SVM, and 15 tests for Rotation Forest. For ANN, we varied the parameters related to the learning rate, the momentum term, the number of nodes in the hidden layer, and the number of hidden layers. For SVM, we experimented with several kernels (including the Radial Basis Function kernel and the Polynomial kernel) and different values for the parameter C. For Rotation Forest, different tree-based methods for the ensemble approach were tested, varying their specific parameters in each case. Table 4 presents the best RAE results obtained from each model, considering the different feature selection methods, for different periods.

Table 4. Best results (lowest RAE) of all prediction methods (Linear Regression, ANN, SVM, and Rotation Forest) for the three periods (1991–2011, 2002–2011, and 2007–2011) and three different feature sets (all features, Relief, and manual selection).
As far as the techniques are concerned, the best performances, as shown in Table 4, were obtained with Rotation Forest. One observes that Rotation Forest achieved the best performances for short periods while SVM did better on longer periods, outperforming Rotation Forest in the 1991–2011 period. When analyzing the periods, the period 2002–2011 presented the best results considering an average among all techniques; however, the best single result was obtained in the 2007–2011 period (39.28%). The average RAE values for the best techniques are: 43.77% for 2002–2011; 51.57% for 2007–2011; and 69.68% for 1991–2011. There is an important difference between the two models at this point. While the time series model yielded better results on longer periods (Table 2), the proposed approach presented better results on shorter periods. This can be explained by a change in the network dynamics: metrics derived from networks modeling longer periods can be misleading, as network properties are likely to change considerably over time. Comparing the best results of the proposed approach with the time series model (Tables 2 and 4), one observes an error reduction of 45%, 70%, and 86% for the 1991–2011, 2002–2011, and 2007–2011 periods, respectively. Comparing to the supervised methods applied to the time series dataset (Tables 2 and 4), one observes an error increase of 21% for the 1991–2011 period and an error reduction of 22% for the 2002–2011 and 2007–2011 periods. The best result, relative to the 2007–2011 period with Rotation Forest, has been obtained with the set of features shown in Table 5. The best set of parameters for the Rotation Forest technique was Random Forest as the tree-based method, with 50 decision trees, 5 features for random selection, and 7 as the maximum depth.

Table 5. Set of features for the model with the best results (lowest RAE): giant component; eigenvector centrality; betweenness centrality; clustering coefficient; structural equivalence; and time series prediction.

Table 6 compares the results for 15 trending terms obtained from both models. These terms were selected based on the time series trend analysis. In this table, the real TF-IDF of each term is compared with the predicted value from the time series prediction model and with the results of the proposed approach. The prediction technique was Rotation Forest for the period 2007–2011 (the best prediction results presented, as shown in Table 4).

Table 6. Results comparison for the first 15 trends of the time series prediction model in 2012; columns: real TF-IDF, time series supervised learning, and the proposed approach; example terms include motion estimation and routing problem.

The accuracy gain displayed in Table 6 is a sample of the trend analysis improvement obtained when including social network features. The experimental results show that the error produced by the proposed approach corresponds, on average, to only 17% of the error produced by the time series regression model and 18% of the error produced by the time series supervised learning methods, which do not consider social network features. In order to verify the quality of the proposed approach in identifying trends over longer periods, additional experiments have been conducted fixing the dataset training period between the years 1991 and 2005 and varying the prediction periods between the years 2006 and 2011 for testing. Only SVM and Rotation Forest have been employed in these experiments, as they yielded the best results in the previous experiments. Table 7 shows the results.
As expected, the error rates increase with time. However, the errors do not increase dramatically for longer periods. Comparing these results with those obtained from the time series regression methods presented in Table 2, one observes that the error rates are still lower.

Table 7. RAE for tests of short, medium, and long terms for the models (SVM and Rotation Forest) trained in the period between 1991 and 2005.

Conclusion

Approaches that consider only the historical behavior of the analyzed object have been widely employed for trend prediction. However, the contents generated by people are clearly influenced by their connections. How information spreads is an important factor that can be considered in prediction. Intending to fill this gap, we presented a new approach for trend analysis that incorporates social network information into a content-based trend analysis model. The proposed approach achieved better results than the standard time series-based models. In addition to simple prediction techniques, such as linear regression, we applied more robust techniques that resulted in even more accurate models. As we supposed, these findings cast light on the issue of trend prediction: information content and the characteristics of the underlying social structure can be combined to improve the explanation of the temporal behavior of information. This work explored a concept still little studied and, thus, some shortcomings remain to be addressed. The dynamics of the social network is one of them. We worked with a fixed time window for the social network modeling. However, slicing the time interval would probably improve the prediction models by capturing the transient characteristics of the social structures over time. Another improvement could be achieved by grouping the extracted terms by topics, which can be more relevant than analyzing each term alone. In conclusion, we found that looking at the social structure of data sources alongside the main analyzed data can help to better understand the temporal behavior of information.

1 http://lattes.cnpq.br/. 2 https://www.r-project.org/.

This work was partially funded by FAPESP, CAPES, and CNPq. CCT developed, tested, and validated the approach presented in this paper. LAD was Caio's advisor and contributed to the specification of the approach and the design of the experiments. Both authors read and approved the final manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

University of São Paulo, São Paulo, Brazil

References

Abe H, Tsumoto S (2009) Evaluating a method to detect temporal trends of phrases in research documents. In: 2009 8th IEEE International Conference on Cognitive Informatics, 378–383. IEEE.

Altshuler Y, Pan W, Pentland AS (2012) Trends prediction using social diffusion models. In: International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction, 97–104. Springer, Berlin Heidelberg. doi:10.1007/978-3-642-29047-3_12.

Bakshy E, Rosenn I, Marlow C, Adamic L (2012) The role of social networks in information diffusion. In: Proceedings of the 21st International Conference on World Wide Web, 519–528. ACM. doi:10.1145/2187836.2187907.
Cimenler O, Reeves KA, Skvoretz J (2014) A regression analysis of researchers' social network metrics on their citation performance in a college of engineering. J Informetrics 8(3): 667–682. doi:10.1016/j.joi.2014.06.004.

Clauset A, Newman ME, Moore C (2004) Finding community structure in very large networks. Physical Review E 70(6): 066111.

Digiampietri LA, Alves CM, Trucolo CC, Oliveira RA (2014) Análise da rede dos doutores que atuam em computação no Brasil. In: CSBC 2014 - BRASNAM, 33–44.

Digiampietri LA, Mena-Chalco JP, Melo POV, Malheiros AP, Meira DNO, Franco LF, Oliveira LB (2014) BraX-Ray: an x-ray of the Brazilian computer science graduate programs. PLoS ONE 9(4): e94541.

Glänzel W, Schubert A (2004) Analysing scientific networks through coauthorship. In: Handbook of quantitative science and technology research, 257–276. Kluwer Academic Publishers.

Hamilton JD (1994) Time series analysis, vol 2. Princeton University Press, Princeton. ISBN: 9780691042893.

Lemieux V, Ouimet M (2008) Análise Estrutural das Redes Sociais. Instituto Piaget.

Moed HF, Glänzel W, Schmoch U (2004) Editors' introduction. In: Handbook of quantitative science and technology research, 1–15. Springer Netherlands.

Nakagawa H, Mori T (2002) A simple but powerful automatic term extraction method. In: COLING-02 on COMPUTERM 2002: Second International Workshop on Computational Terminology, COMPUTERM '02, 1–7. Association for Computational Linguistics, Stroudsburg. doi:10.3115/1118771.1118778.

Pan W, Aharony N, Pentland A (2011) Composite social network for predicting mobile apps installation. In: AAAI. arXiv:1106.0359.

Pandit S, Yang Y, Chawla NV (2012) Maximizing information spread through influence structures in social networks. In: 2012 IEEE 12th International Conference on Data Mining Workshops, 258–265. IEEE. doi:10.1109/ICDMW.2012.140.

Poblacion D, Mugnaini R, Ramos L (2009) Redes sociais e colaborativas em informação científica, 1st ed. Angellara Editora, São Paulo.

Pourkazemi M, Keyvanpour M (2013) A survey on community detection methods based on the nature of social networks. ICCKE 2013 5(1): 114–120. doi:10.1109/ICCKE.2013.6682855.

Prell C (2012) Social network analysis: history, theory & methodology. SAGE, Los Angeles/London.

Scott J (2009) Social network analysis: a handbook, 2nd ed. SAGE.

Singh S, Mishra N, Sharma S (2013) Survey of various techniques for determining influential users in social networks. In: Emerging Trends in Computing, Communication and Nanotechnology (ICE-CCN), 2013 International Conference on, 398–403. doi:10.1109/ICE-CCN.2013.6528531.

Teixeira LA, de Oliveira ALI (2009) Predicting stock trends through technical analysis and nearest neighbor classification. In: 2009 IEEE International Conference on Systems, Man and Cybernetics, 3094–3099. IEEE. doi:10.1109/ICSMC.2009.5345944.

Trucolo CC, Digiampietri LA (2014) Trend analysis of the Brazilian scientific production in Computer Science. FSMA 14: 2–9.

Trucolo CC, Digiampietri LA (2014) Uma revisão sistemática acerca das técnicas de identificação e análise de tendências. In: X Simpósio Brasileiro de Sistemas de Informação (SBSI 2014), 639–650, Londrina.
Wang D, Wen Z, Tong H, Lin CY, Song C, Barabási AL (2011) Information spreading in context. In: Proceedings of the 20th International Conference on World Wide Web, WWW '11, 735–744. ACM, New York. doi:10.1145/1963405.1963508.

Wasserman S, Faust K (2009) Social network analysis: methods and applications, 19th ed. Cambridge University Press.

Wasserman S, Galaskiewicz J (1994) Advances in social network analysis: research in the social and behavioral sciences. SAGE. doi:10.4135/9781452243528.
FYKOS – 5. Series, 28. Year

Problems

(2 points) 1. stiffness of Mr. Planck
Maybe you have heard about the so-called Planck units, i.e., units expressed in terms of fundamental physical constants – the speed of light $c≈3.00\cdot 10^{8}\;\mathrm{m}\cdot \mathrm{s}^{-1}$, the gravitational constant $G=6.67\cdot 10^{-11}\;\mathrm{m}^{3}\cdot \mathrm{kg}^{-1}\cdot \mathrm{s}^{-2}$, and the reduced Planck constant $\hbar=1.05\cdot 10^{-34}\;\mathrm{kg}\cdot \mathrm{m}^{2}\cdot \mathrm{s}^{-1}$. In this way, the Planck time, the Planck length, and the Planck mass are often mentioned. What if we were interested in a "Planck spring constant"? Using dimensional analysis with $c$, $G$, and $\hbar$, determine the expression with the unit of a spring constant, $[k]=\mathrm{kg}\cdot \mathrm{s}^{-2}$. To determine the expression, assume that the unknown dimensionless constant, which cannot be determined from dimensional analysis, is equal to 1.
Karel was learning quantum dots.

(2 points) 2. I hear well, I can't say
At a distance $d=5\;\mathrm{m}$ from a point-like source of sound we hear a noise with the level of intensity $L_{1}=90\;\mathrm{dB}$. At what distance from the source of the sound is the level of intensity of the sound $L_{2}=50\;\mathrm{dB}$?
wave mechanics – Karel wanted to have something from acoustics here again.

(4 points) 3. matfyz tag
$N$ people decide to play tag, but not the normal variety. At the start they stand at the vertices of a regular $N$-gon with side $a$. The game then proceeds so that everyone chases (moving towards him in a straight line) his neighbour on the right (anti-clockwise). Everyone moves with the same constant velocity $v$. Describe the progress of the game (the trajectory on which the players move) and determine how quickly the game will end, depending on the parameters $N$, $a$, $v$.
mechanics of a point mass – Kuba Vosmera, graduate.

(4 points) 4. heavy rain
Autumn weather is sometimes as unstable as spring weather, and so it often happens that we can be surprised by an unforeseen torrent of rain. A happy few carried umbrellas. Approximate how large the pressure of heavy rain can be, and compare the force of the rain with the gravitational force with which the umbrella is pulled down. Choose the parameters of the umbrella appropriately.
Mirek was looking for excuses not to be envious of protected passersby.

(5 points) 5. a lens was floating on water
On the surface of water a thin biconcave lens made from a light-weight material is floating. The radii of both surfaces are $R=20\;\mathrm{cm}$. Determine the distance between the two focal points of the lens, if the index of refraction of the air above the lens is $n_{a}=1$, the index of refraction of the lens is $n_{l}=1.5$, and the index of refraction of water is $n_{w}=1.3$. Bonus: Assume that it is a lens of width $T=3\;\mathrm{cm}$, and within it an air bubble in the shape of a biconcave lens with radii of curvature $r=50\;\mathrm{cm}$ and width $t=1\;\mathrm{cm}$ is symmetrically placed.
geometrical optics – Mirek didn't forget about everyone's favourite optics.

(5 points) P. splashed
Would it be possible to swim in a pool if the water in it behaved as a completely ideal incompressible liquid whose viscosity approached zero? How would the movement of the swimmer differ from that of a swimmer in regular water? What would happen with the energy of the system if water could flow out of the pool over the edges? At the beginning, the water is level with the edge of the pool.
hydromechanics – Chemical physicist floats.

(8 points) E. Sweetening
Determine the dependence of the solidification temperature of an aqueous sucrose solution on its concentration, at atmospheric pressure.
Pikos was sweetening the sidewalk.

(6 points) S. mapping
Show that for arbitrary values of the parameters $K$ and $T$ you can express the Standard map from the series as
$$x_{n} = x_{n-1} + y_{n-1},\qquad y_{n} = y_{n-1} + K \sin(x_{n}),$$
where $x$, $y$ are suitably rescaled $\varphi$, $\mathrm{d}\varphi/\mathrm{d}t$, and that the physical parameters combine into the single parameter $K$.
Take the model of the kicked rotor from the series and consider this time the passed impulse $I(\varphi)=I_{0}$, after the period $T$ then $I(\varphi)=-I_{0}$, after another period again $I_{0}$, and keep on kicking the rotor alternately in this way. Construct the map giving $\varphi_{n}$, $(\mathrm{d}\varphi/\mathrm{d}t)_{n}$ on the basis of the values $\varphi_{n-1}$, $(\mathrm{d}\varphi/\mathrm{d}t)_{n-1}$ before the double kick $\pm I_{0}$. Can you solve for $\varphi_{n}$, $(\mathrm{d}\varphi/\mathrm{d}t)_{n}$ on the basis of some initial conditions $\varphi_{0}$, $(\mathrm{d}\varphi/\mathrm{d}t)_{0}$ for an arbitrary $n$? Why, or why not?
Bonus: Try using the ingredients from this series to design a kicking scheme which will result in chaotic dynamics. Take care, though, because $\varphi$ is periodic with period $2\pi$ and $\mathrm{d}\varphi/\mathrm{d}t$ shouldn't grow without bound through the kicking.
mechanics of a point mass; other
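As a quick illustration of the Standard map in problem S, here is a minimal Python sketch iterating the update rule reconstructed above (the ordering of the angle advance and the kick is an assumption of this sketch):

    import math

    def standard_map(x0, y0, K, steps):
        """Iterate x_n = x_{n-1} + y_{n-1}, y_n = y_{n-1} + K*sin(x_n),
        keeping the angle x in [0, 2*pi)."""
        x, y = x0, y0
        orbit = [(x, y)]
        for _ in range(steps):
            x = (x + y) % (2 * math.pi)  # the angle advances by the momentum
            y = y + K * math.sin(x)      # the kick changes the momentum
            orbit.append((x, y))
        return orbit

    # Small K gives regular orbits; K of order 1 and above gives growing
    # chaotic regions of phase space.
    for K in (0.5, 1.2):
        print(K, standard_map(0.1, 0.1, K, 3))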
Dynamical analysis of a giving up smoking model with time delay

Zizhen Zhang, Ruibin Wei & Wanjun Xia

Advances in Difference Equations, volume 2019, Article number: 505 (2019)

In this paper, we are concerned with a delayed smoking model in which the population is divided into five classes. Sufficient conditions guaranteeing the local stability and existence of Hopf bifurcation for the model are established by taking the time delay as a bifurcation parameter and employing the Routh–Hurwitz criteria. Furthermore, the direction and stability of the Hopf bifurcation are investigated by applying the center manifold theorem and normal form theory. Finally, computer simulations are implemented to support the analytic results and to analyze the effects of some parameters on the dynamical behavior of the model.

In China and around the world, one of the public health problems that has been recognized in recent years is smoking addiction, which has developed into an epidemic causing many deaths. Taking China as an example, the data from the Global Tobacco Epidemic Report published on 26 July 2019 by the World Health Organization show that smoking-related diseases kill one million people in China every year and that 100,000 non-smokers die from exposure to second-hand smoke [1]. From a global perspective, according to the survey, smoking kills about six million people each year, and by 2030 ten million people will die every year from smoking-related diseases [2,3,4]. Consequently, it is essential to help people quit smoking and to reduce tobacco use and related deaths. In order to reduce the future effects of smoking on the health of people, the World Health Organization has promoted, since 2008, a set of control policy measures known as the Framework Convention on Tobacco Control (FCTC). As stated in the Global Tobacco Epidemic Report (2019), about five billion people all over the world have been covered by at least one comprehensive tobacco control measure, although there are still 59 countries in which none of the tobacco control measures has reached the highest level of implementation [1]. On the other hand, mathematicians have also made efforts to inform people about the control of smoking by using mathematical models, considering that smoking can spread through social contact, ever since the pioneering work of Castillo-Garsow et al. in [5]. In [5], Castillo-Garsow et al. formulated a giving up smoking model including three population classes: the potential smokers (P), the smokers (S), and the quit smokers (Q). Then Sharomi and Gumel [6] developed a model taking into account the temporarily quit smokers (\(Q_{t}\)) and the permanently quit smokers (\(Q_{p}\)) in the model formulated by Castillo-Garsow et al. [5]. Afterwards, some scholars [4, 7,8,9,10,11,12,13] proposed different forms of giving up smoking models including the occasional smoker class. Rahman et al. [14] proposed a giving up smoking model with continuous age structure in the chain smokers and studied the local and global stability of the model; the optimal control strategy on potential smokers is also presented. Fei and Liu [15] presented a giving up smoking model with birth and death rates on complex heterogeneous networks. They examined the stability and attractivity of the proposed model. For the analytical study of stochastic giving up smoking models or some other giving up smoking models, we can refer to [16,17,18,19,20].
As stated in [12], smoking contributes to a number of human diseases such as lung cancer, heart disease, alimentary canal effects, and so on. Thus, it is reasonable to include a compartment of smokers associated with some illness in a giving up smoking model. Based on this point, the following smoking model has been proposed by Din et al. [21]:

$$ \textstyle\begin{cases} \frac{dP(t)}{dt}=\alpha-\beta\sqrt{P(t)S(t)}-\gamma P(t), \\ \frac{dS(t)}{dt}=\beta\sqrt{P(t)S(t)}-(\gamma+\delta+\varepsilon)S(t)+\zeta X(t), \\ \frac{dX(t)}{dt}=\delta(1-\eta)S(t)-(\gamma+\zeta)X(t), \\ \frac{dY(t)}{dt}=\delta\eta S(t)-\gamma Y(t), \\ \frac{dZ(t)}{dt}=\varepsilon S(t)-(\gamma+\vartheta)Z(t), \end{cases} $$

where \(P(t)\), \(S(t)\), \(X(t)\), \(Y(t)\), and \(Z(t)\) denote the numbers of the potential smokers, smokers, temporarily quit smokers, permanently quit smokers, and smokers associated with some illness at time t, respectively. α is the recruitment rate of the potential smokers; β is the transmission coefficient; γ is the natural death rate; \(\delta(1-\eta)\) is the temporary quit rate of smoking; δη is the permanent quit rate of smoking; ε is the rate at which smokers develop some smoking-associated illness; ζ is the relapse rate from the temporarily quit smokers to the smokers; ϑ is the death rate related to smoking illness. Din et al. [21] investigated the stability of system (1). In fact, there is usually a fixed duration of temporary immunity due to self-control, after which the temporarily quit smokers return to the class of smokers. That is, the temporarily quit smokers begin to quit smoking at \(t-\tau\) and start smoking again at t. On the other hand, it is worth noticing that delay differential equations exhibit much more complicated dynamics than ordinary differential equations, since a time delay can cause a population to fluctuate [22,23,24]. Yuan et al. demonstrated that time delay can affect the stability of a dynamical system and cause nonlinear phenomena such as Hopf bifurcation and periodic solutions [25, 26]. For some other works about dynamical systems, one can refer to [27,28,29,30]. Therefore, it is crucial to examine the effect of the time delay τ on system (1). To this end, we incorporate the time delay due to the immunity period, after which the temporarily quit smokers return to the class of smokers, and investigate the following delayed system:

$$ \textstyle\begin{cases} \frac{dP(t)}{dt}=\alpha-\beta\sqrt{P(t)S(t)}-\gamma P(t), \\ \frac{dS(t)}{dt}=\beta\sqrt{P(t)S(t)}-(\gamma+\delta+\varepsilon)S(t)+\zeta X(t-\tau), \\ \frac{dX(t)}{dt}=\delta(1-\eta)S(t)-\gamma X(t)-\zeta X(t-\tau), \\ \frac{dY(t)}{dt}=\delta\eta S(t)-\gamma Y(t), \\ \frac{dZ(t)}{dt}=\varepsilon S(t)-(\gamma+\vartheta)Z(t), \end{cases} $$

where τ is the length of the immunity period after which the temporarily quit smokers return to the class of smokers. The flow diagram of system (2) is shown in Fig. 1.

Figure 1. The flow diagram of system (2)

The outline of this article is arranged as follows. In Sect. 2, local stability and existence of Hopf bifurcation are discussed in detail. In Sect. 3, the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are determined. In order to validate the theoretical analysis and the effect of some crucial parameters on the behavior of the model, some numerical simulations are carried out in Sect. 4. Finally, conclusions are offered in Sect. 5.
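Although the analysis below is analytical, it may help to see how the delayed system (2) can be integrated numerically. The following is a minimal Python sketch using a fixed-step Euler scheme with a history buffer for the delayed term X(t−τ); all parameter values and initial data are illustrative placeholders, not values taken from any fitted dataset.

    import numpy as np

    # Illustrative placeholder parameters and initial data (not fitted values)
    alpha, beta, gamma, delta = 0.5, 0.2, 0.1, 0.3
    eta, eps, zeta, vartheta = 0.4, 0.15, 0.25, 0.05
    tau, h, T = 15.0, 0.01, 400.0

    lag = int(round(tau / h))             # delay expressed in steps
    n = int(round(T / h))
    P, S, X, Y, Z = (np.empty(n + 1) for _ in range(5))
    P[0], S[0], X[0], Y[0], Z[0] = 1.0, 0.5, 0.3, 0.2, 0.1

    for k in range(n):
        Xd = X[max(k - lag, 0)]           # X(t - tau), constant initial history
        root = np.sqrt(max(P[k] * S[k], 0.0))
        P[k + 1] = P[k] + h * (alpha - beta * root - gamma * P[k])
        S[k + 1] = S[k] + h * (beta * root - (gamma + delta + eps) * S[k] + zeta * Xd)
        X[k + 1] = X[k] + h * (delta * (1 - eta) * S[k] - gamma * X[k] - zeta * Xd)
        Y[k + 1] = Y[k] + h * (delta * eta * S[k] - gamma * Y[k])
        Z[k + 1] = Z[k] + h * (eps * S[k] - (gamma + vartheta) * Z[k])

    # For tau below the critical delay the trajectories settle to E*; past it,
    # sustained oscillations (the bifurcating periodic solution) appear.
    print(S[-5:])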
Local stability and existence of Hopf bifurcation

In view of [21], we can conclude that system (2) has a unique positive equilibrium \(E^{*}(P^{*}, S^{*}, X^{*}, Y^{*}, Z^{*})\), where

$$ \begin{gathered} P^{*}=\frac{\alpha(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon))}{\beta^{2}(\gamma+\zeta)+\gamma(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon))}, \\ S^{*}=\frac{\alpha\beta^{2}(\gamma+\zeta)^{2}}{(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon))(\beta^{2}(\gamma+\zeta)+\gamma(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon)))}, \\ X^{*}=\frac{\alpha\beta^{2}\delta(1-\eta)(\gamma+\zeta)}{(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon))(\beta^{2}(\gamma+\zeta)+\gamma(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon)))}, \\ Y^{*}=\frac{\alpha\beta^{2}\delta\eta(\gamma+\zeta)^{2}}{\gamma(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon))(\beta^{2}(\gamma+\zeta)+\gamma(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon)))}, \\ Z^{*}=\frac{\alpha\beta^{2}\varepsilon(\gamma+\zeta)^{2}}{(\gamma+\vartheta)(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon))(\beta^{2}(\gamma+\zeta)+\gamma(\gamma^{2}+\gamma(\delta+\zeta+\varepsilon)+\zeta(\delta\eta+\varepsilon)))}.\end{gathered} $$

The linearized system of system (2) at \(E^{*}\) is

$$ \textstyle\begin{cases} \frac{dP(t)}{dt}=g_{11}P(t)+g_{12}S(t), \\ \frac{dS(t)}{dt}=g_{21}P(t)+g_{22}S(t)+h_{23}X(t-\tau), \\ \frac{dX(t)}{dt}=g_{32}S(t)+g_{33}X(t)+h_{33}X(t-\tau), \\ \frac{dY(t)}{dt}=g_{42}S(t)+g_{44}Y(t), \\ \frac{dZ(t)}{dt}=g_{52}S(t)+g_{55}Z(t), \end{cases} $$

where

$$\begin{aligned}& g_{11}=-\frac{\beta\sqrt{S^{*}}}{2\sqrt{P^{*}}}-\gamma,\qquad g_{12}=-\frac{\beta\sqrt{P^{*}}}{2\sqrt{S^{*}}}, \\& g_{21}=\frac{\beta\sqrt{S^{*}}}{2\sqrt{P^{*}}},\qquad g_{22}=\frac{\beta\sqrt{P^{*}}}{2\sqrt{S^{*}}}-(\gamma+\delta+\varepsilon),\qquad h_{23}=\zeta, \\& g_{32}=\delta(1-\eta),\qquad g_{33}=-\gamma,\qquad h_{33}=-\zeta, \\& g_{42}=\delta\eta,\qquad g_{44}=-\gamma,\qquad g_{52}=\varepsilon,\qquad g_{55}=-(\gamma+\vartheta).
\end{aligned}$$

The characteristic equation of system (3) is given by

$$ \text{det} \left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \lambda-g_{11} & -g_{12} & {0} & {0} & {0} \\ -g_{21} & \lambda-g_{22} & -h_{23}e^{-\lambda\tau} & {0} & {0} \\ {0} & -g_{32} & \lambda-g_{33}-h_{33}e^{-\lambda\tau} & {0} & {0} \\ {0} & -g_{42} & {0} & \lambda-g_{44} & {0} \\ {0} & -g_{52} & {0} & {0} & \lambda-g_{55} \end{array}\displaystyle \right ]=0, $$

which leads to

$$ \lambda^{5}+G_{4}\lambda^{4}+G_{3}\lambda^{3}+G_{2}\lambda^{2}+G_{1}\lambda+G_{0}+\bigl(H_{4}\lambda^{4}+H_{3}\lambda^{3}+H_{2}\lambda^{2}+H_{1}\lambda+H_{0}\bigr)e^{-\lambda\tau}=0, $$

where

$$\begin{aligned}& G_{0}=g_{33}g_{44}g_{55}(g_{12}g_{21}-g_{11}g_{22}), \\& \begin{aligned}G_{1}={}&(g_{33}g_{44}+g_{33}g_{55}+g_{44}g_{55}) (g_{11}g_{22}-g_{12}g_{21}) \\ &+g_{33}g_{44}g_{55}(g_{11}+g_{22}),\end{aligned} \\& \begin{aligned}G_{2}={}&(g_{33}+g_{44}+g_{55}) (g_{12}g_{21}-g_{11}g_{22}) \\ &-(g_{11}+g_{22}) (g_{33}g_{44}+g_{33}g_{55}+g_{44}g_{55}), \end{aligned} \\& \begin{aligned}G_{3}={}&(g_{11}+g_{22}) (g_{33}+g_{44}+g_{55})+g_{33}g_{44}+g_{33}g_{55} \\ &+g_{44}g_{55}-g_{12}g_{21},\end{aligned} \\& G_{4}=-(g_{11}+g_{22}+g_{33}+g_{44}+g_{55}), \\& H_{0}=g_{44}g_{55}(g_{11}g_{32}h_{23}-g_{11}g_{22}h_{33}+g_{12}g_{21}h_{33}), \\& \begin{aligned}H_{1}={}&g_{55}h_{33}(g_{11}g_{22}+g_{11}g_{44}+g_{22}g_{44})+g_{11}g_{22}g_{44}h_{33} \\ &-g_{32}h_{23}(g_{11}g_{44}+g_{11}g_{55}+g_{44}g_{55})-g_{12}g_{21}h_{33}(g_{44}+g_{55}), \end{aligned} \\& \begin{aligned}H_{2}={}&g_{32}h_{23}(g_{11}+g_{44}+g_{55})-g_{55}h_{33}(g_{11}+g_{22}+g_{44}) \\ &+g_{12}g_{21}h_{33}-h_{33}(g_{11}g_{22}+g_{11}g_{44}+g_{22}g_{44}), \end{aligned} \\& H_{3}=h_{33}(g_{11}+g_{22}+g_{44}+g_{55})-g_{32}h_{23},\qquad H_{4}=-h_{33}. \end{aligned}$$

When \(\tau=0\), Eq. (5) becomes

$$ \lambda^{5}+G_{04}\lambda^{4}+G_{03}\lambda^{3}+G_{02}\lambda^{2}+G_{01}\lambda+G_{00}=0, $$

where

$$\begin{aligned}& G_{00}=G_{0}+H_{0},\qquad G_{01}=G_{1}+H_{1},\qquad G_{02}=G_{2}+H_{2}, \\& G_{03}=G_{3}+H_{3},\qquad G_{04}=G_{4}+H_{4}. \end{aligned}$$

Based on the discussion in [21], it can be concluded that all the roots of Eq. (6) have negative real parts. Thus, according to the Hurwitz criterion, we have the following result.

Lemma 1 ([21])

The unique positive equilibrium \(E^{*}(P^{*}, S^{*}, X^{*}, Y^{*}, Z^{*})\) of system (2) is locally asymptotically stable when \(\tau=0\).

For \(\tau>0\), let \(\lambda=i\omega\) (\(\omega>0\)) be a root of Eq. (5). Then

$$ \textstyle\begin{cases} (H_{1}\omega-H_{3}\omega^{3})\sin\tau\omega+(H_{4}\omega^{4}-H_{2}\omega^{2}+H_{0})\cos\tau\omega=G_{2}\omega^{2}-G_{4}\omega^{4}-G_{0}, \\ (H_{1}\omega-H_{3}\omega^{3})\cos\tau\omega-(H_{4}\omega^{4}-H_{2}\omega^{2}+H_{0})\sin\tau\omega=G_{3}\omega^{3}-\omega^{5}-G_{1}\omega. \end{cases} $$

It follows from Eq. (7) that

$$ \omega^{10}+J_{4}\omega^{8}+J_{3}\omega^{6}+J_{2}\omega^{4}+J_{1}\omega^{2}+J_{0}=0, $$

where

$$\begin{aligned}& J_{0}=G_{0}^{2}-H_{0}^{2}, \\& J_{1}=G_{1}^{2}-2G_{0}G_{2}-H_{1}^{2}+2H_{0}H_{2}, \\& J_{2}=G_{2}^{2}+2G_{0}G_{4}-2G_{1}G_{3}+2H_{1}H_{3}-H_{2}^{2}-2H_{0}H_{4}, \\& J_{3}=G_{3}^{2}+2G_{1}-2G_{2}G_{4}-H_{3}^{2}+2H_{2}H_{4}, \\& J_{4}=G_{4}^{2}-2G_{3}-H_{4}^{2}. \end{aligned}$$

Let \(\omega^{2}=\nu\); then Eq. (8) becomes

$$ \nu^{5}+J_{4}\nu^{4}+J_{3}\nu^{3}+J_{2}\nu^{2}+J_{1}\nu+J_{0}=0. $$

In order to establish the main results of this paper, we make the following necessary assumptions:

\((S_{1})\): Eq. (9) has at least one positive root.
\((S_{2})\): \(f^{\prime}(\nu_{0})\neq0\), where \(f(\nu)=\nu^{5}+J_{4}\nu^{4}+J_{3}\nu^{3}+J_{2}\nu^{2}+J_{1}\nu+J_{0}\).

It follows from \((S_{1})\) that Eq. (9) has at least one positive root, and without loss of generality we assume that Eq. (9) has five positive roots, denoted by \(\nu_{1}\), \(\nu_{2}\), \(\nu_{3}\), \(\nu_{4}\), and \(\nu_{5}\). Then \(\omega_{l}=\sqrt{\nu_{l}}\) (\(l=1, 2, 3, 4, 5\)) are the roots of Eq. (8). Based on Eq. (7), one can obtain

$$ \tau_{l}^{(n)}=\frac{1}{\omega_{l}} \biggl[\arccos\biggl\{ \frac{(G_{2}\omega_{l}^{2}-G_{4}\omega_{l}^{4}-G_{0})(H_{4}\omega_{l}^{4}-H_{2}\omega_{l}^{2}+H_{0})+(G_{3}\omega_{l}^{3}-\omega_{l}^{5}-G_{1}\omega_{l})(H_{1}\omega_{l}-H_{3}\omega_{l}^{3})}{(H_{1}\omega_{l}-H_{3}\omega_{l}^{3})^{2}+(H_{4}\omega_{l}^{4}-H_{2}\omega_{l}^{2}+H_{0})^{2}}\biggr\} +2n\pi \biggr] $$

with \(l=1, 2, 3, 4, 5\) and \(n=0, 1, 2, \dots\). Denote

$$\tau_{0}=\tau_{l_{0}}^{(0)}=\min\bigl\{ \tau_{l}^{(0)}\mid l=1, 2, 3, 4, 5\bigr\} ,\qquad \omega_{0}=\omega|_{\tau=\tau_{0}}. $$

Lemma 2

Let \(\lambda(\tau)=\tilde{\alpha}(\tau)+i\tilde{\beta}(\tau)\) be the root of Eq. (5) near \(\tau=\tau_{0}\) satisfying \(\tilde{\alpha}(\tau_{0})=0\), \(\tilde{\beta}(\tau_{0})=\omega_{0}\). Then \(\operatorname{Re}[d\lambda/d\tau]_{\tau=\tau_{0}}\neq0\).

Proof

Differentiating Eq. (5) with respect to τ leads to

$$ \begin{aligned}[b]\biggl[\frac{d\lambda}{d\tau} \biggr]^{-1}={}&{-} \frac{5\lambda^{4}+4G_{4}\lambda^{3}+3G_{3}\lambda^{2}+2G_{2}\lambda+G_{1}}{\lambda(\lambda^{5}+G_{4}\lambda^{4}+G_{3}\lambda^{3}+G_{2}\lambda^{2}+G_{1}\lambda+G_{0})} \\ &+\frac{4H_{4}\lambda^{3}+3H_{3}\lambda^{2}+2H_{2}\lambda+H_{1}}{\lambda(H_{4}\lambda^{4}+H_{3}\lambda^{3}+H_{2}\lambda^{2}+H_{1}\lambda+H_{0})}-\frac{\tau}{\lambda}.\end{aligned} $$

Then

$$ \operatorname{Re} \biggl[\frac{d\lambda}{d\tau} \biggr]^{-1}_{\tau=\tau_{0}}= \frac{f^{\prime}(\nu_{0})}{(H_{1}\omega_{0}-H_{3}\omega_{0}^{3})^{2}+(H_{4}\omega_{0}^{4}-H_{2}\omega_{0}^{2}+H_{0})^{2}}. $$

It follows from \((S_{2})\) that \(\operatorname{Re}[d\lambda/d\tau]_{\tau=\tau_{0}}\neq0\). This ends the proof of Lemma 2. □

Based on the discussion above and Lemmas 1 and 2, one has the following result.

Theorem 1

For system (2), if \((S_{1})\)–\((S_{2})\) hold, then \(E^{*}(P^{*}, S^{*}, X^{*}, Y^{*}, Z^{*})\) is locally asymptotically stable when \(\tau\in [0, \tau_{0})\); system (2) undergoes a Hopf bifurcation at \(E^{*}(P^{*}, S^{*}, X^{*}, Y^{*}, Z^{*})\) when \(\tau=\tau_{0}\), and a family of periodic solutions bifurcates from \(E^{*}(P^{*}, S^{*}, X^{*}, Y^{*}, Z^{*})\). Here \(\tau_{0}\) is defined as in Eq. (10).

Direction and stability of Hopf bifurcation

In this section, we investigate the direction and stability of the Hopf bifurcation. By Hassard et al. [31], we have the following theorem for system (2).

Theorem 2

The Hopf bifurcation exhibited by system (2) can be determined by the parameters \(\mu_{2}\), \(\beta_{2}\), and \(T_{2}\): (i) if \(\mu_{2}>0\) (\(\mu_{2}<0\)), then the Hopf bifurcation is supercritical (subcritical); (ii) if \(\beta_{2}<0\) (\(\beta_{2}>0\)), then the bifurcating periodic solutions are stable (unstable); (iii) if \(T_{2}>0\) (\(T_{2}<0\)), then the period of the bifurcating periodic solutions increases (decreases).
The parameters \(\mu_{2}\), \(\beta_{2}\), and \(T_{2}\) can be found using the following formulas: $$ \begin{gathered} C_{1}(0)=\frac{i}{2\tau_{0}\omega_{0}} \biggl(v_{11}v_{20}-2 \vert v_{11} \vert ^{2}-\frac{ \vert v_{02} \vert ^{2}}{3} \biggr)+\frac{v_{21}}{2}, \\ \mu_{2} =-\frac{\operatorname{Re}\{C_{1}(0)\}}{\operatorname{Re}\{ \lambda^{\prime}(\tau_{0})\}}, \\ \beta_{2}=2{\operatorname{Re}\bigl\{ C_{1}(0)\bigr\} }, \\ T_{2}=-\frac{\operatorname{Im}\{C_{1}(0)\}+\mu_{2}\operatorname{Im}\{ \lambda^{\prime}(\tau_{0})\}}{\tau_{0}\omega_{0}}, \end{gathered} $$ in which the expressions of \(v_{20}\), \(v_{11}\), \(v_{02}\), and \(v_{21}\) can be found in the following. Proof of Theorem 2 Introduce a new perturbation parameter \(\tau=\tau_{0}+\mu\) with \(\mu \in R\), then \(\mu=0\) is the Hopf bifurcation value of system (2). Let \(u_{1}(t)=P(t)-P^{*}\), \(u_{2}(t)=S(t)-S^{*}\), \(u_{3}(t)=X(t)-X^{*}\), \(u_{4}(t)=Y(t)-Y^{*}\), \(u_{5}(t)=Z(t)-Z^{*}\), and \(u_{i}(t)=u_{i}(\tau t)\), \(i=1,2,\ldots, 5\). Then system (2) can be written as a functional differential equation in \(C=C([-1,0],R^{5})\) as follows: $$ \dot{u}(t)=L_{\mu}(u_{t})+F(\mu, u_{t}), $$ where \(L_{\mu}: C\rightarrow R^{5}\), \(F: R\times C\rightarrow R^{5}\), and $$\begin{aligned}& L_{\mu}\phi=(\tau_{0}+\mu) \bigl(G_{\text{max}}\phi(0)+H_{\text{max}}\phi(-1) \bigr), \end{aligned}$$ $$\begin{aligned}& F(\mu,\phi)=(\tau_{0}+\mu) (F_{1}, F_{2}, 0, 0, 0)^{T}, \end{aligned}$$ $$G_{\text{max}}=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} g_{11} &g_{12} &{0} &{0} &{0}\\ g_{21} &g_{22} &{0} &{0} &{0}\\ {0} &g_{32} &g_{33} &{0} &{0}\\ {0} &g_{42} &{0} &g_{44} &{0}\\ {0} &g_{52} &{0} &{0} &g_{55} \end{array}\displaystyle \right ),\qquad H_{\text{max}}=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} {0} &{0} &{0} &{0} &{0}\\ {0} &{0} &h_{23} &{0} &{0}\\ {0} &{0} &h_{33} &{0} &{0}\\ {0} &{0} &{0} &{0} &{0}\\ {0} &{0} &{0} &{0} &{0} \end{array}\displaystyle \right ), $$ $$\begin{aligned}& \begin{aligned} F_{1}={}&g_{13}\phi_{1}^{2}(0)+g_{14} \phi_{1}(0)\phi_{2}(0)+g_{15}\phi_{2}^{2}(0)+g_{16} \phi_{1}^{3}(0)+g_{17}\phi_{1}^{2}(0) \phi_{2}(0) \\ &+g_{18}\phi_{1}(0)\phi_{2}^{2}(0)+g_{19} \phi_{2}^{3}(0)+\cdots,\end{aligned} \\& \begin{aligned}F_{2}={}&g_{23}\phi_{1}^{2}(0)+g_{24} \phi_{1}(0)\phi_{2}(0)+g_{25}\phi_{2}^{2}(0)+g_{26} \phi_{1}^{3}(0)+g_{27}\phi_{1}^{2}(0) \phi_{2}(0) \\ &+g_{28}\phi_{1}(0)\phi_{2}^{2}(0)+g_{29} \phi_{2}^{3}(0)+\cdots,\end{aligned} \end{aligned}$$ $$\begin{aligned}& g_{13}=\frac{\beta\sqrt{S^{*}}}{8P^{*}\sqrt{P^{*}}},\qquad g_{14}=-\frac {\beta}{2\sqrt{P^{*}S^{*}}},\qquad g_{15}=\frac{\beta\sqrt{P^{*}}}{8S^{*}\sqrt{S^{*}}},\qquad g_{16}=-\frac{\beta \sqrt{S^{*}}}{16(P^{*})^{2}\sqrt{P^{*}}}, \\& g_{17}=\frac{\beta}{16P^{*}\sqrt{P^{*}S^{*}}},\qquad g_{18}=\frac{\beta }{16S^{*}\sqrt{P^{*}S^{*}}},\qquad g_{19}=-\frac{\beta\sqrt{P^{*}}}{16(S^{*})^{2}\sqrt{S^{*}}}, \\& g_{23}=-\frac{\beta\sqrt{S^{*}}}{8P^{*}\sqrt{P^{*}}},\qquad g_{24}=\frac {\beta}{2\sqrt{P^{*}S^{*}}},\qquad g_{25}=-\frac{\beta\sqrt{P^{*}}}{8S^{*}\sqrt{S^{*}}},\qquad g_{26}=\frac{\beta \sqrt{S^{*}}}{16(P^{*})^{2}\sqrt{P^{*}}}, \\& g_{27}=-\frac{\beta}{16P^{*}\sqrt{P^{*}S^{*}}},\qquad g_{28}=-\frac{\beta }{16S^{*}\sqrt{P^{*}S^{*}}},\qquad g_{29}=\frac{\beta\sqrt{P^{*}}}{16(S^{*})^{2}\sqrt{S^{*}}}. \end{aligned}$$ By using the Riesz representation theorem, let \(\eta(\theta, \mu ):[-1,0]\rightarrow R^{5\times5}\) be a function of bounded variation. 
For \(\phi\in C([-1,0], R^{5})\), let

$$ L_{\mu}\phi= \int_{-1}^{0}d\eta(\theta, \mu)\phi(\theta). $$

Moreover, we can choose

$$\eta(\theta, \mu)= \textstyle\begin{cases} (\tau_{0}+\mu)G_{\text{max}},& \theta=0,\\ 0, &\theta\in(-1,0),\\ (\tau_{0}+\mu)H_{\text{max}}, &\theta=-1. \end{cases} $$

Define

$$A(\mu)\phi= \textstyle\begin{cases} \frac{d\phi(\theta)}{d\theta},& -1\leq\theta< 0, \\ \int_{-1}^{0}d\eta(\theta,\mu)\phi(\theta),& \theta=0, \end{cases} $$

and

$$R(\mu)\phi= \textstyle\begin{cases} 0,& -1\leq\theta< 0, \\ F(\mu,\phi), &\theta=0. \end{cases} $$

Then system (14) can be written as follows:

$$ \dot{u}(t)=A(\mu)u_{t}+R(\mu)u_{t}. $$

For \(\varphi\in C^{1}([0,1],(R^{5})^{*})\), define the adjoint operator of \(A(0)\) as

$$A^{*}(\varphi)= \textstyle\begin{cases} -\frac{d\varphi(s)}{ds},& 0< s\leq1, \\ \int_{-1}^{0}d{\eta}^{T}(s,0)\varphi(-s),& s=0, \end{cases} $$

and a bilinear product

$$ \bigl\langle \varphi(s),\phi(\theta)\bigr\rangle =\bar{\varphi}(0) \phi(0)- \int_{\theta=-1}^{0} \int_{\xi=0}^{\theta}\bar{\varphi}(\xi-\theta)\,d\eta(\theta) \phi(\xi)\,d\xi, $$

where \(\eta(\theta)=\eta(\theta, 0)\). According to the analysis in Sect. 2, \(\pm i\tau_{0}\omega_{0}\) are eigenvalues of \(A(0)\), so they are also eigenvalues of \(A^{*}\). Then \(A(0)q(\theta)=i\tau_{0}\omega_{0}q(\theta)\) and \(A^{*}q^{*}(s)=-i\tau_{0}\omega_{0}q^{*}(s)\). Suppose that \(q(\theta)=(1,q_{2},q_{3},q_{4},q_{5})^{T}e^{i\tau_{0}\omega_{0}\theta}\) and \(q^{*}(s)=D(1,q_{2}^{*},q_{3}^{*},q_{4}^{*},q_{5}^{*})e^{i\tau_{0}\omega_{0}s}\) are the corresponding eigenvectors. By calculation we can obtain

$$\begin{gathered} q_{2}=\frac{g_{21}(i\omega_{0}-g_{33}-h_{33}e^{-i\tau_{0}\omega_{0}})}{(i\omega_{0}-g_{22})(i\omega_{0}-g_{33}-h_{33}e^{-i\tau_{0}\omega_{0}})-g_{32}h_{23}e^{-i\tau_{0}\omega_{0}}}, \\ q_{3}=\frac{g_{32}q_{2}}{i\omega_{0}-g_{33}-h_{33}e^{-i\tau_{0}\omega_{0}}},\qquad q_{4}=\frac{g_{42}q_{2}}{i\omega_{0}-g_{44}},\qquad q_{5}=\frac{g_{52}q_{2}}{i\omega_{0}-g_{55}}, \\ q_{2}^{*}=-\frac{i\omega_{0}+g_{11}}{g_{21}},\qquad q_{3}^{*}=\frac{h_{23}e^{i\tau_{0}\omega_{0}}(i\omega_{0}+g_{11})}{g_{21}(i\omega_{0}+g_{33}+h_{33}e^{i\tau_{0}\omega_{0}})},\qquad q_{4}^{*}=0,\qquad q_{5}^{*}=0.\end{gathered} $$

From \(\langle q^{*}(s),q(\theta)\rangle=1\), we have

$$\bar{D}=\Biggl[1+\sum_{i=1}^{5}\bar{q}_{i}^{*}q_{i}+ \tau_{0}e^{-i\tau_{0}\omega_{0}}q_{3}\bigl(h_{23} \bar{q}_{2}^{*}+h_{33}\bar{q}_{3}^{*}\bigr) \Biggr]^{-1}.
$$

In the following, according to the algorithm given in [31] and the computation process as in [24, 32,33,34], we can obtain

$$\begin{aligned}& v_{20}=2\tau_{0}\bar{D}\bigl[g_{13}+g_{14}q_{2}+g_{15}q_{2}^{2}+ \bar{q}_{2}^{*}\bigl(g_{23}+g_{24}q_{2}+g_{25}q_{2}^{2} \bigr)\bigr], \\& v_{11}=\tau_{0}\bar{D}\bigl[2g_{13}+g_{14}(q_{2}+ \bar{q}_{2})+2g_{15}q_{2}\bar{q}_{2}+ \bar{q}_{2}^{*}\bigl(2g_{23}+g_{24}(q_{2}+ \bar{q}_{2})+2g_{25}q_{2}\bar{q}_{2}\bigr) \bigr], \\& v_{02}=2\tau_{0}\bar{D}\bigl[g_{13}+g_{14} \bar{q}_{2}+g_{15}\bar{q}_{2}^{2}+ \bar{q}_{2}^{*}\bigl(g_{23}+g_{24} \bar{q}_{2}+g_{25}\bar{q}_{2}^{2}\bigr) \bigr], \\& \begin{aligned}v_{21}={}&2\tau_{0}\bar{D} \biggl[g_{13} \bigl(2W_{11}^{(1)}(0)+W_{20}^{(1)}(0)\bigr) \\ &+g_{14} \biggl(W_{11}^{(1)}(0)q_{2}+ \frac{1}{2}W_{20}^{(1)}(0)\bar{q}_{2}+W_{11}^{(2)}(0)+ \frac{1}{2}W_{20}^{(2)}(0) \biggr) \\ &+g_{15}\bigl(2W_{11}^{(2)}(0)q_{2}+W_{20}^{(2)}(0) \bar{q}_{2}\bigr) \\ &+3g_{16}+g_{17}(\bar{q}_{2}+2q_{2})+g_{18} \bigl(q_{2}^{2}+2q_{2}\bar{q}_{2} \bigr)+3g_{19}q_{2}^{2}\bar{q}_{2} \\ &+\bar{q}_{2}^{*} \biggl(g_{23}\bigl(2W_{11}^{(1)}(0)+W_{20}^{(1)}(0) \bigr) \\ &+g_{24} \biggl(W_{11}^{(1)}(0)q_{2}+ \frac{1}{2}W_{20}^{(1)}(0)\bar{q}_{2}+W_{11}^{(2)}(0)+ \frac{1}{2}W_{20}^{(2)}(0) \biggr) \\ &+g_{25}\bigl(2W_{11}^{(2)}(0)q_{2}+W_{20}^{(2)}(0) \bar{q}_{2}\bigr) \\ &+3g_{26}+g_{27}(\bar{q}_{2}+2q_{2})+g_{28} \bigl(q_{2}^{2}+2q_{2}\bar{q}_{2} \bigr)+3g_{29}q_{2}^{2}\bar{q}_{2} \biggr) \biggr]\end{aligned} \end{aligned}$$

with

$$\begin{aligned}& W_{20}(\theta)=\frac{iv_{20}q(0)}{\tau_{0}\omega_{0}}e^{i\tau_{0}\omega_{0}\theta}+ \frac{i\bar{v}_{02}\bar{q}(0)}{3\tau_{0}\omega_{0}}e^{-i\tau_{0}\omega_{0}\theta}+E_{1}e^{2i\tau_{0}\omega_{0}\theta}, \\& W_{11}(\theta)=-\frac{iv_{11}q(0)}{\tau_{0}\omega_{0}}e^{i\tau_{0}\omega_{0}\theta}+ \frac{i\bar{v}_{11}\bar{q}(0)}{\tau_{0}\omega_{0}}e^{-i\tau_{0}\omega_{0}\theta}+E_{2}, \end{aligned}$$

where

$$\begin{aligned}& E_{1}=2\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} g_{11}^{*} &-g_{12} &{0} &{0} &{0}\\ -g_{21} &g_{22}^{*} &-h_{23}e^{-2i\tau_{0}\omega_{0}} &{0} &{0}\\ {0} &-g_{32} &g_{33}^{*} &{0} &{0}\\ {0} &-g_{42} &{0} &g_{44}^{*} &{0}\\ {0} &-g_{52} &{0} &{0} &g_{55}^{*} \end{array}\displaystyle \right )^{-1}\times \left ( \textstyle\begin{array}{c} g_{13}+g_{14}q_{2}+g_{15}q_{2}^{2}\\ g_{23}+g_{24}q_{2}+g_{25}q_{2}^{2}\\ {0}\\ {0}\\ {0} \end{array}\displaystyle \right ), \\& E_{2}=\left ( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} g_{11} &g_{12} &{0} &{0} &{0}\\ g_{21} &g_{22} &h_{23} &{0} &{0}\\ {0} &g_{32} &g_{33}+h_{33} &{0} &{0}\\ {0} &g_{42} &{0} &g_{44} &{0}\\ {0} &g_{52} &{0} &{0} &g_{55} \end{array}\displaystyle \right )^{-1}\times \left ( \textstyle\begin{array}{c} 2g_{13}+g_{14}(q_{2}+\bar{q}_{2})+2g_{15}q_{2}\bar{q}_{2}\\ 2g_{23}+g_{24}(q_{2}+\bar{q}_{2})+2g_{25}q_{2}\bar{q}_{2}\\ {0}\\ {0}\\ {0} \end{array}\displaystyle \right ), \end{aligned}$$

and

$$\begin{aligned}& g_{11}^{*}=2i\omega_{0}-g_{11},\qquad g_{22}^{*}=2i\omega_{0}-g_{22}, \qquad g_{33}^{*}=2i\omega_{0}-g_{33}-h_{33}e^{-2i\tau_{0}\omega_{0}}, \\& g_{44}^{*}=2i\omega_{0}-g_{44},\qquad g_{55}^{*}=2i\omega_{0}-g_{55}. \end{aligned}$$

Thus, we can conclude that \(v_{20}\), \(v_{11}\), \(v_{02}\), and \(v_{21}\) in Eq. (13) can be obtained. The proof is completed. □

Numerical simulations

In this section, we verify the correctness of the obtained theoretical results by using numerical simulations.
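Before turning to the specific example, the following is a minimal sketch (our own illustration, not the authors' code) of how such a delayed system can be integrated numerically: a fixed-step explicit Euler scheme with a history buffer supplies the delayed state \(X(t-\tau)\) (the method of steps). The coefficients match system (21) quoted below; the step size, time horizon and constant initial history are illustrative assumptions.

```python
import numpy as np

# Delay below the critical value tau0 ~ 118.14 (stable case);
# try tau = 122.905 to see the bifurcating oscillation instead.
tau = 114.685
n_delay = 2000                # steps per delay interval (assumed resolution)
h = tau / n_delay             # step size, so the delay is exactly n_delay steps
n_steps = 400000              # total steps (illustrative horizon)

def rhs(P, S, X, Y, Z, X_del):
    """Right-hand side of system (21); X_del stands for X(t - tau)."""
    dP = 0.8 - 0.005 * np.sqrt(P * S) - 0.0000391 * P
    dS = 0.005 * np.sqrt(P * S) - 0.0137491 * S + 0.02 * X_del
    dX = 0.009121 * S - 0.0000391 * X - 0.02 * X_del
    dY = 9.13e-6 * S - 0.0000391 * Y
    dZ = 0.00458 * S - 0.0457391 * Z
    return np.array([dP, dS, dX, dY, dZ])

# Constant initial history near (but not at) the positive equilibrium E*.
state = np.array([140.0, 165.0, 75.0, 38.0, 16.0])
X_hist = np.full(n_delay + 1, state[2])    # X over the last tau time units

for _ in range(n_steps):
    X_del = X_hist[0]                      # oldest sample is X(t - tau)
    state = state + h * rhs(*state, X_del)
    X_hist = np.roll(X_hist, -1)           # slide the history window
    X_hist[-1] = state[2]

print("state after t = %.0f:" % (n_steps * h), np.round(state, 4))
# For tau < tau0 the trajectory slowly settles toward
# E*(147.6003, 170.9480, 77.8076, 39.9170, 17.1176).
```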
Choosing \(\alpha=0.8\), \(\beta=0.005\), \(\gamma=0.0000391\), \(\delta=0.00913\), \(\varepsilon=0.00458\), \(\zeta=0.02\), \(\eta=0.001\), \(\vartheta=0.0457\), we obtain the following specific case of system (2):

$$ \textstyle\begin{cases} \frac{dP(t)}{dt}=0.8-0.005\sqrt{P(t)S(t)}-0.0000391 P(t), \\ \frac{dS(t)}{dt}=0.005\sqrt{P(t)S(t)}-0.0137491 S(t)+0.02 X(t-\tau), \\ \frac{dX(t)}{dt}=0.009121 S(t)-0.0000391 X(t)-0.02 X(t-\tau), \\ \frac{dY(t)}{dt}=9.13\times10^{-6} S(t)-0.0000391 Y(t), \\ \frac{dZ(t)}{dt}=0.00458 S(t)-0.0457391 Z(t). \end{cases} $$

Thus, the unique positive equilibrium is \(E^{*}(147.6003, 170.9480, 77.8076, 39.9170, 17.1176)\). By calculating, we can obtain \(\nu_{0}=0.00023516\), \(\omega_{0}=0.01533623\), \(\tau_{0}=118.1368\), and \(f^{\prime}(\nu_{0})=0.00052229>0\). Obviously, the parameters in system (21) fulfill assumptions \((S_{1})\) and \((S_{2})\). From Theorem 1, when \(\tau\in(0, \tau_{0})\), \(E^{*}(147.6003, 170.9480, 77.8076, 39.9170, 17.1176)\) is locally asymptotically stable, as illustrated in Figs. 2–3. When τ is increased past \(\tau_{0}\), the time delay destabilizes system (21): a Hopf bifurcation occurs and a periodic oscillation appears around \(E^{*}(147.6003, 170.9480, 77.8076, 39.9170, 17.1176)\), as shown in Figs. 4–5.

Figure 2: The equilibrium \(E^{*}\) of system (21) is asymptotically stable for \(\tau=114.685<\tau_{0}\).
Figure 3: The phase plots of system (21) for \(\tau=114.685<\tau_{0}\).
Figure 4: The equilibrium \(E^{*}\) of system (21) is unstable for \(\tau=122.905>\tau_{0}\).
Figure 5: The phase plots of system (21) for \(\tau=122.905>\tau_{0}\).

Now, we are interested in studying the effect of some other parameters on the dynamics of system (21). (i) The number of smokers associated with some illness decreases as the value of β decreases or as the value of η increases, as demonstrated by Figs. 6–7. (ii) The number of smokers associated with some illness decreases when the value of ζ decreases, as depicted by Fig. 8. In addition, it is easy to check in Fig. 9 that system (21) moves from the stable state to limit cycle behavior as ζ increases.

Figure 6: Time plot of Z for different β at \(\tau=114.685\). The rest of the parameters are taken as given in the text.
Figure 7: Time plot of Z for different η at \(\tau=114.685\). The rest of the parameters are taken as given in the text.
Figure 8: Time plot of Z for different ζ at \(\tau=114.685\). The rest of the parameters are taken as given in the text.
Figure 9: Dynamic behavior of system (21): projection on S–X–Z for different ζ at \(\tau=122.905\). The rest of the parameters are taken as given in the text.

Conclusions

In the current paper, a delayed smoking model in which the population is divided into five classes is investigated by incorporating the time delay due to the immunity period, after which the temporarily quit smokers return to the class of smokers, into the model proposed by Din et al. [21]. It is found that the delayed smoking model is locally asymptotically stable when the time delay is suitably small under certain conditions. In this case, it is easy to control smoking. However, once the value of the time delay passes through the critical value \(\tau_{0}\), a Hopf bifurcation occurs and smoking will be out of control. In particular, properties such as the direction and stability of the Hopf bifurcation are examined with the aid of the center manifold theorem and normal form theory.
It has been observed from our simulations that the number of smokers associated with some illness decreases as we decrease the value of β or increase the value of η. Therefore, it can be concluded that we should actively publicize the harm of smoking, so that more and more people can stay away from tobacco and quit smoking timely and permanently. It has also been shown that the number of smokers associated with some illness decreases when we decrease the value of ζ, and the model changes its behavior from a stable focus to a limit cycle as we increase the value of ζ. Thus, it is strongly recommended that smokers who have quit smoking should have a strong will and resolutely prevent relapse, which is also meaningful for controlling the tobacco epidemic. At last, it should be noted that, similar to smoking addiction, another public health problem is excessive drinking, which is not only harmful to personal health but also leads to a range of negative social effects [35,36,37]. Therefore, we will try to complete some work on drinking models in the near future.

References

1. World Health Organization report on the global tobacco epidemic (2019). https://apps.who.int/iris/bitstream/handle/10665/326043/9789241516204-eng.pdf
2. Sun, C.X., Jia, J.W.: Optimal control of a delayed smoking model with immigration. J. Biol. Dyn. 13, 447–460 (2019)
3. Khan, S.A., Shah, K., Zaman, G., Jarad, F.: Existence theory and numerical solutions to smoking model under Caputo–Fabrizio fractional derivative. Chaos 29, Article ID 013128 (2019). https://doi.org/10.1063/1.5079644
4. Rahman, G., Agarwal, R.P., Din, Q.: Mathematical analysis of giving up smoking model via harmonic mean type incidence rate. Appl. Math. Comput. 354, 128–148 (2019)
5. Garsow, C.C., Salivia, G.J., Herrera, A.R.: Mathematical models for dynamics of tobacco use, recovery and relapse. Technical report BU-1505-M, Cornell University, Ithaca, NY (2000)
6. Sharomi, O., Gumel, A.B.: Curtailing smoking dynamics: a mathematical modeling approach. Appl. Math. Comput. 195, 475–499 (2008)
7. Zaman, G.: Qualitative behavior of giving up smoking models. Bull. Malays. Math. Soc. 34, 403–415 (2011)
8. Zeb, A., Zaman, G., Momani, S.: Square-root dynamics of a giving up smoking model. Appl. Math. Model. 37, 5326–5334 (2013)
9. Huo, H.F., Zhu, C.C.: Influence of relapse in a giving up smoking model. Abstr. Appl. Anal. 2013, Article ID 525461 (2013)
10. Bushnaq, S., Maayah, B., Alhabees, A.: Application of multistep reproducing kernel Hilbert space method for solving giving up smoking model. Int. J. Pure Appl. Math. 109, 311–324 (2016)
11. Singh, J., Kumar, D., Qurashi, M.A., Baleanu, D.: A new fractional model for giving up smoking dynamics. Adv. Differ. Equ. 2017, Article ID 88 (2017)
12. Haq, F., Shah, K., Rahman, G., Shahzad, M.: Numerical solution of fractional order smoking model via Laplace Adomian decomposition method. Alex. Eng. J. 57, 1061–1069 (2018)
13. Labzai, A., Balatif, O., Rachik, M.: Optimal control strategy for a discrete time smoking model with specific saturated incidence rate. Discrete Dyn. Nat. Soc. 2018, Article ID 5949303 (2018)
14. Rahman, G., Agarwal, R.P., Liu, L.L., Khan, A.: Threshold dynamics and optimal control of an age-structured giving up smoking model. Nonlinear Anal., Real World Appl. 43, 96–120 (2018)
15. Fei, Y.L., Liu, X.D.: Spreading dynamic of a PLSGP giving up smoking model on scale-free network. Open Access Libr. J. 5, Article ID e4365 (2018)
16. Sharma, A., Misra, A.K.: Backward bifurcation in a smoking cessation model with media campaigns. Appl. Math. Model. 39, 1087–1098 (2015)
17. Zhang, X.K., Zhang, Z.Z., Tong, J.Y., Dong, M.: Ergodicity of stochastic smoking model and parameter estimation. Adv. Differ. Equ. 2016, Article ID 274 (2016)
18. Zaman, G., Kang, Y.H., Jung, I.H.: Dynamics of a smoking model with smoking death rate. Appl. Math. 44, 281–295 (2017)
19. Pulecio-Montoya, A.M., Lopez-Montenegro, L.E., Benavides, L.M.: Analysis of a mathematical model of smoking. Contemp. Eng. Sci. 12, 117–129 (2019)
20. Matintu, S.: Smoking as epidemic: modeling and simulation study. Am. J. Appl. Math. 5, 31–38 (2017)
21. Din, Q., Ozair, M., Hussain, T., Saeed, U.: Qualitative behavior of a smoking model. Adv. Differ. Equ. 2016, Article ID 96 (2016)
22. Wang, L.S., Xu, R., Feng, G.H.: Modelling and analysis of an eco-epidemiological model with time delay and stage structure. J. Appl. Math. Comput. 50, 175–197 (2016)
23. Bai, Y.Z., Li, Y.Y.: Stability and Hopf bifurcation for a stage-structured predator–prey model incorporating refuge for prey and additional food for predator. Adv. Differ. Equ. 2019, Article ID 42 (2019)
24. Xu, C.J.: Delay-induced oscillations in a competitor–competitor–mutualist Lotka–Volterra model. Complexity 2017, Article ID 2578043 (2017)
25. Yuan, S.L., Song, Y.L.: Stability and Hopf bifurcations in a delayed Leslie–Gower predator–prey system. J. Math. Anal. Appl. 355, 82–100 (2009)
26. Zhang, J.F.: Bifurcation analysis of a modified Holling–Tanner predator–prey model with time delay. Appl. Math. Model. 36, 1219–1231 (2012)
27. Meng, X.Y., Wang, J.G.: Analysis of a delayed diffusive model with Beddington–Deangelis functional response. Int. J. Biomath. 12, Article ID 1950047 (2019)
28. Kundu, S., Maitra, S.: Dynamics of a delayed predator–prey system with stage structure and cooperation for preys. Chaos Solitons Fractals 114, 453–460 (2018)
29. Sun, X.G., Wei, J.J.: Stability and bifurcation analysis in a viral infection model with delays. Adv. Differ. Equ. 2015, Article ID 332 (2015)
30. Keshri, N., Mishra, B.K.: Two time-delay dynamic model on the transmission of malicious signals in wireless sensor network. Chaos Solitons Fractals 68, 151–158 (2014)
31. Hassard, B.D., Kazarinoff, N.D., Wan, Y.H.: Theory and Applications of Hopf Bifurcation. Cambridge University Press, Cambridge (1981)
32. Bianca, C., Ferrara, M., Guerrini, L.: The Cai model with time delay: existence of periodic solutions and asymptotic analysis. Appl. Math. Inf. Sci. 7, 21–27 (2013)
33. Zhao, T., Bi, D.J.: Hopf bifurcation of a computer virus spreading model in the network with limited anti-virus ability. Adv. Differ. Equ. 2017, Article ID 183 (2017)
34. Meng, X.Y., Huo, H.F., Zhang, X.B., Xiang, H.: Stability and Hopf bifurcation in a three species system with feedback delays. Nonlinear Dyn. 64, 349–364 (2011)
35. Huo, H.F., Chen, Y.L., Xiang, H.: Stability of a binge drinking model with delay. J. Biol. Dyn. 11, 210–225 (2017)
36. Xiang, H., Wang, Y., Huo, H.F.: Analysis of the binge drinking models with demographics and nonlinear infectivity on networks. J. Appl. Anal. Comput. 8, 1535–1554 (2018)
37. Huo, H.F., Zhang, X.M.: Complex dynamics in an alcoholism model with the impact of Twitter. Math. Biosci. 281, 24–35 (2016)

The authors are very thankful to the anonymous reviewers for their insightful comments and suggestions, which helped us to improve the manuscript considerably and further open doors for future work. All of the authors declare that all the data can be accessed in our manuscript in the numerical simulation section.
This research was supported by the Project of Support Program for Excellent Youth Talent in Colleges and Universities of Anhui Province (No. gxyqZD2018044) and the Natural Science Foundation of the Higher Education Institutions of Anhui Province (Nos. KJ2019A0655, KJ2019A0656, KJ2019A0662).

School of Management Science and Engineering, Anhui University of Finance and Economics, Bengbu, China: Zizhen Zhang, Ruibin Wei & Wanjun Xia. All authors read and approved the final manuscript. Correspondence to Zizhen Zhang. The authors declare that there is no conflict of interests.

Zhang, Z., Wei, R. & Xia, W.: Dynamical analysis of a giving up smoking model with time delay. Adv. Differ. Equ. 2019, 505 (2019). https://doi.org/10.1186/s13662-019-2450-4

Keywords: Hopf bifurcation; Smoking model
A review on low-dimensional physics-based models of systemic arteries: application to estimation of central aortic pressure

Shuran Zhou1, Lisheng Xu ORCID: orcid.org/0000-0001-8360-36051,2, Liling Hao1, Hanguang Xiao3, Yang Yao1, Lin Qi1 & Yudong Yao1,2

The physiological processes and mechanisms of an arterial system are complex and subtle. Physics-based models have been proven to be a very useful tool to simulate the actual physiological behavior of the arteries. The current physics-based models include high-dimensional models (2D and 3D models) and low-dimensional models (0D, 1D and tube-load models). High-dimensional models can describe the local hemodynamic information of arteries in detail. With regard to an exact model of the whole arterial system, however, a high-dimensional model is computationally impracticable, since the complex geometry, viscosity or elastic properties and complex vectorial output need to be provided. For low-dimensional models, only the structure, centerline and viscosity or elastic properties need to be provided. Therefore, low-dimensional modeling, with its lower computational costs, might be a more applicable approach to represent the hemodynamic properties of the entire arterial system, and these three types of low-dimensional models have been extensively used in the study of cardiovascular dynamics. In recent decades, the application of physics-based models to estimate central aortic pressure has attracted increasing interest. However, to the best of our knowledge, there have been few review papers on the reconstruction of central aortic pressure using these physics-based models. In this paper, three types of low-dimensional physical models (0D, 1D and tube-load models) of systemic arteries are reviewed, the application of the three types of models to the estimation of central aortic pressure is taken as an example to discuss their advantages and disadvantages, and the proper choice of models for specific research goals and applications is advised.

Cardiovascular diseases have become a dominant factor of mortality all over the world [1]. Nearly 17.5 million people die of cardiovascular diseases every year [2] and billions of dollars are spent on related healthcare [3]. Nowadays, cardiovascular research has become an important topic that is paid significant attention by researchers. The cardiovascular system is a complex circulatory system consisting of the heart, arteries and veins [4]. In recent years, due to the significant improvements in computer technology, modeling based on physical principles has become a powerful tool to simulate the hemodynamic properties of the cardiovascular system and has been playing an increasingly important role in the diagnosis of cardiovascular diseases and the development of medical devices [5,6,7]. Current physics-based models can be divided into two categories, high-dimensional models and low-dimensional models, as shown in Fig. 1. High-dimensional models, including 2D models and 3D models, can give detailed descriptions of the local flow field of the blood. These models describe the complex hemodynamic phenomena of a specific region in the cardiovascular system. 2D models are generally used to describe changes of the radial blood flow velocity in an axisymmetric tube [8, 9]. 3D models are usually applied to simulate the fluid–structure interaction between the vascular walls and blood [10, 11].
To establish a 3D model of the entire arterial tree, complex geometrical and mechanical information needs to be provided, which results in enormous computational complexity, so that such a model cannot be readily implemented in practice. Consequently, high-dimensional models can generally be used to simulate the local hemodynamics of specific arterial sites, instead of the whole arterial tree.

Figure 1: The structure diagram of physics-based models in the cardiovascular system.

In contrast to high-dimensional models, low-dimensional models with small computational costs can readily reproduce the pulse wave propagation phenomenon and realize patient-specific modeling. Thus, low-dimensional modeling can be an effective way to describe the hemodynamic properties of the entire arterial tree in practical applications. At present, the available low-dimensional models mainly consist of 0D models, 1D models and tube-load models. 0D models, also called lumped parameter models, can describe global properties of the arterial system. The lumped parameter models are characterized by their pulse waveforms as a function of time only. The most well-known lumped parameter model is the Windkessel model [12], which includes mono-compartment models and multi-compartment models [13, 14]. 1D models and tube-load models are distributed parameter models, which can represent distributed properties of the arterial system. In the latter two types of models, the pulse waveforms depend on both time and space. Among the distributed parameter models, the 1D model based on the simplified Navier–Stokes equations is commonly used to reproduce pressure and flow at any position in the entire arterial tree [15,16,17,18,19]. The Windkessel model is computationally simple but less accurate. On the other hand, 1D models can represent the wave propagation phenomenon accurately but need a relatively large amount of computation. Taking advantage of both Windkessel models (simplicity) and 1D models (accuracy), some researchers developed tube-load models [20, 21]. Tube-load models can monitor multiple arterial hemodynamic parameters such as pulse transit time, arterial compliance, pulse wave velocity, and so on. So far, these three types of low-dimensional models have been extensively used in the study of cardiovascular dynamics, as shown in Table 1. In particular, applying physics-based models to estimate central aortic pressure has attracted much attention in recent decades [22,23,24,25,26,27]. However, to the best of our knowledge, there have been few review papers on the reconstruction of central aortic pressure using these physics-based models. Additionally, estimating central aortic pressure is a common application of these three types of models, which makes it possible to compare their advantages and disadvantages fully. Thus, this paper reviews three types of low-dimensional physics-based models (0D models, 1D models and tube-load models) of the arterial system and takes the application of estimating central aortic pressure as an example to compare their advantages and disadvantages. To begin with, the theories and applications of Windkessel models, including mono-compartment and multi-compartment models, are described. Then, the theories and applications of two types of distributed parameter models, namely 1D models and tube-load models, are elaborated. Next, the advantages and disadvantages of these three models are discussed. Finally, future challenges and final conclusions are presented.
Table 1: Main applications of Windkessel models, 1D models and tube-load models.

In 0D models (lumped parameter models), the Windkessel theory is applied to the modeling of the arterial system [12, 28, 29]. Windkessel models are divided into two classes: mono-compartment models and multi-compartment models. The theories and applications of the two categories of models are elaborated in this part. Furthermore, a comparison of different Windkessel models is made in Table 2.

Table 2: Comparison of different Windkessel models.

Model descriptions

Mono-compartment models

The mono-compartment model is a combination of inductance, compliance and resistance. According to the number of elements included, current mono-compartment models are classified into four main types: two-element, three-element, four-element and complex mono-compartment Windkessel models.

a. Two-element Windkessel model

The two-element Windkessel model is the simplest mono-compartment model, presented by Frank [12], which is made up of a resistor (R) and a capacitor (C), as shown in Fig. 2a. In this model, the resistor describes the resistance of small peripheral vessels and the capacitor describes the distensibility of large arteries. The two-element Windkessel model simply describes the pressure decay of the aorta in diastole. This model cannot signify high frequency effects because there is merely a single time constant in the model. Owing to its simplicity, this model can readily be used in clinical practice, for example for total arterial compliance estimation [30] and blood pressure estimation [31].

Figure 2: The mono-compartment models. a Two-element Windkessel model; b three-element Windkessel model; c four-element Windkessel model.

b. Three-element Windkessel model

Adding a characteristic impedance (\(Z_c\)) to the two-element Windkessel model, the three-element Windkessel model is formed, as shown in Fig. 2b [15]. The characteristic impedance is equal to oscillatory pressure divided by oscillatory flow. Although the characteristic impedance is numerically close to a resistance, it is conceptually different: it is merely used to signify oscillatory phenomena. Owing to the inclusion of the characteristic impedance, this model can simulate high frequency effects. Simultaneously, the introduction of the characteristic impedance also results in some errors at low frequencies. In contrast with the two-element Windkessel model, the three-element Windkessel model can achieve a higher accuracy. Therefore, the three-element Windkessel model has been extensively used in theoretical research [32,33,34].

c. Four-element Windkessel model

Taking the inertance of blood flow into consideration on the basis of the three-element Windkessel model, Stergiopulos et al. [35] proposed a four-element Windkessel model, as shown in Fig. 2c. Due to the addition of the inertance, this model can represent middle frequency effects. In other words, the four-element Windkessel model can simulate effects over the whole frequency range. It has been validated that the four-element model can give a better description of the impedance characteristics [36]. Some nonlinear regression analysis methods are applied to the estimation of the four-element model parameters. Compared with the two-element and three-element Windkessel models, it is more difficult to identify the model parameters of the four-element Windkessel model. Consequently, only a few researchers make use of this model [23, 37, 38]. A minimal simulation of such lumped models is sketched below.
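The following minimal sketch (our own example; the parameter values are typical illustrative magnitudes, not patient data) integrates a three-element Windkessel model driven by a synthetic half-sine aortic inflow and reports the resulting pressure range.

```python
import numpy as np

# Illustrative parameter values (typical orders of magnitude only)
Zc = 0.05    # characteristic impedance [mmHg*s/mL]
R  = 1.0     # peripheral resistance    [mmHg*s/mL]
C  = 1.5     # arterial compliance      [mL/mmHg]
T  = 0.8     # cardiac period [s]
Ts = 0.3     # systolic ejection time [s]
SV = 70.0    # stroke volume [mL]

def inflow(t):
    """Synthetic half-sine aortic inflow; zero during diastole."""
    t = t % T
    return (np.pi * SV / (2.0 * Ts)) * np.sin(np.pi * t / Ts) if t < Ts else 0.0

# State variable: Pc, the pressure across the compliance.
# The proximal (aortic) pressure follows as P = Pc + Zc * Q.
dt = 1e-4
n = int(10 * T / dt)               # ten beats to reach a periodic steady state
Pc = 80.0                          # initial guess [mmHg]
P = np.empty(n)
for i in range(n):
    Q = inflow(i * dt)
    Pc += dt * (Q - Pc / R) / C    # C * dPc/dt = Q - Pc/R
    P[i] = Pc + Zc * Q

last = P[-int(T / dt):]            # final beat
print("pressure range over one beat: %.1f to %.1f mmHg" % (last.min(), last.max()))
```

Setting \(Z_c=0\) recovers the two-element variant, while additionally including an inertance gives the four-element configuration.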
d. Complex mono-compartment models

For the sake of further improvement of the arterial model, a few researchers developed more complex Windkessel models in which more resistive, inductive and capacitive components were introduced [39, 40]. By including more resistive and inductive elements, the laminar oscillatory flow impedance can be simulated. Owing to the high complexity of these models, there has been no further development by other investigators. The models described above focus on simulating the pressure and flow characteristics of the arterial vessels without considering the effect of the venous vessels. In fact, with regard to the coronary and pulmonary circulation, the pressure and flow of the veins have a significant impact on the global hemodynamics [41]. Under this circumstance, the venous side cannot be ignored. In order to describe the characteristics of the veins, extra resistance, inertance and compliance are added to form more complex (five, six and seven element) arterial models. In contrast with the two, three and four element Windkessel models, the five-element model simulates the characteristics of microcirculation hemodynamics more accurately, the six-element model accounts for the hemodynamic contribution of the venous vessels in the cardiovascular system more precisely, and the seven-element model further improves the representation of the systemic circulation through a better description of the venous system.

Multi-compartment models

Ignoring spatial information, the mono-compartment model regards all arteries as a single block. In order to represent the distribution of flow and pressure, multi-compartment models composed of a series of mono-compartment models were established. Figure 3 is an example of a simple multi-compartment model of the systemic arteries [42]. Every mono-compartment model is a combination of resistance (R), inertance (L) and compliance (C). At present, there are four typical compartmental configurations in the multi-compartment model: the L, inverted L, T and \(\Pi\) elements [13]. The appropriate compartmental configuration should be chosen according to the characteristics of the particular arteries. Since the multi-compartment model represents position information only roughly and in general does not signify the nonlinear convective acceleration term of the 1D model, this model is usually seen as a first-order discretization of the one-dimensional linear model [43].

Figure 3: A multi-compartment model of systemic circulation. ao aortic root, at artery, ar arteriole, cp capillary, vn vein.

Mono-compartment models are simplified descriptions of an arterial system, simulating physiological properties of the arterial vessels with a few parameters. Multi-compartment models construct a full arterial network by connecting several mono-compartment models, describing particular information of different vessel compartments. Due to the simplicity of mono-compartment and multi-compartment models, merely a few researchers have used them to reconstruct central aortic pressure. In Windkessel models, proximal flow or peripheral pressure measurements are frequently used as the inflow condition. Many researchers chose aortic flow as the model input [22, 44, 45]. Flow at other positions has also been selected as model input, such as carotid flow [46], left ventricle flow [23] and mitral valve mean flow [47]. Few researchers chose brachial and radial pressure as model input [48].
For model parameter acquisition, the majority of researchers calculated model parameters by population averaging [22, 23, 44,45,46, 48] and only a small number of researchers adopted partially individualized model parameters [47]. Mono-compartment models are more commonly applied to the estimation of central aortic pressure than multi-compartment models. For example, the pressure waveform in the aorta was reconstructed by Stergiopulos et al. [44] and Struijk et al. [45] using a two-element Windkessel model. Cai et al. [22], Zala et al. [46] and Vennin et al. [22] employed a three-element Windkessel model to calculate central aortic pressure. A four-element Windkessel model was adopted by Her et al. [23] to reproduce the aortic pressure waveform of patients undergoing counter-pulsation control by ventricular assist devices. Revie et al. [47] used a six-chamber lumped parameter model consisting of the left ventricle and right ventricle, aorta, pulmonary artery, vena cava and pulmonary vein to monitor aortic pressure changes. It has been verified on clinical data [22, 23, 47] that Windkessel models can describe the general shape of the pressure waveform in the ascending aorta; however, it is difficult for them to show details of the pressure waveform, such as the dicrotic notch. Therefore, Windkessel models have limited accuracy for the estimation of central aortic pressure.

1D models are distributed parameter models. The theory and applications of 1D models are described in this part, which mainly focuses on methods that solve the one-dimensional equations and on the boundary conditions (including inflow, bifurcation and outflow conditions).

Model derivations

The one-dimensional arterial flow theory was proposed by several researchers [16, 17]. Euler [49] first established a 1D model using the one-dimensional theory. Although the assumptions of the model are simple, it laid the foundations for further studies. Reymond et al. [50] extended the existing 1D model to a more detailed model including the foot and hand circulation. Moreover, a ventricular–arterial coupling model was developed and the 1D model of the circulation was validated. 1D models are commonly used to represent the pulse wave propagation phenomena of large arteries [29]. In the 1D model, the blood is assumed to be an incompressible Newtonian fluid and the vessel an axisymmetric cylindrical tube. The 1D model is governed by two equations [51]: a continuity equation (Eq. 1) and a momentum equation (Eq. 2), which together describe the motion of the blood flow and the vessel wall. The formulas are as follows:

$$\begin{aligned} \frac{\partial q}{\partial x}+\frac{\partial A}{\partial t}= & {} 0. \end{aligned}$$

$$\begin{aligned} \frac{\partial q}{\partial t}+\frac{4}{3}\frac{\partial \frac{q^2}{A}}{\partial x}= & {} -\frac{A}{\rho }\frac{\partial p}{\partial x}-\frac{8\mu }{\rho r^2}q. \end{aligned}$$

where x is the distance along the vessel, t is time, q is the blood flow rate, p is the blood pressure, A is the cross-sectional area, r is the vessel radius, \(\rho\) is the blood density and \(\mu\) is the viscosity.

Solving methods

For solving the 1D Navier–Stokes equations, there are two types of methods: time domain and frequency domain methods. The time domain method can solve linear or nonlinear equations, whereas the frequency domain method can only solve linear equations.

a. Time domain method

Generally, the Navier–Stokes equations of 1D models are nonlinear and are solved in the time domain using numerical methods.
At present, there are many numerical methods for solving the partial differential equations, and each method has its scope of application. The method of characteristics, the finite difference method, the finite volume method, the finite element method and the spectral method are frequently used to solve the 1D pulse propagation equations.

The method of characteristics is a basic method for solving partial differential equations. The essence of this method is integration along the characteristic lines of the partial differential equations to simplify the form of the equations. The method of characteristics has a clear physical meaning and a wide application scope. For solving differential equations of three independent variables, however, the method of characteristics can be very complicated, and there are still some problems to be solved. The governing equations can be solved by making use of the method of characteristics [52,53,54].

The finite difference method is a numerical method for solving complex partial differential equations by approximating the derivatives with finite differences. The principle of the finite difference method is simple, and it can give the corresponding difference equation for any complex partial differential equation. Difference equations can only be considered as mathematical approximations of the differential equations. This method has been used by a number of researchers. For instance, Olufsen et al. applied the two-step Lax–Wendroff method to solving the continuity and momentum equations [18, 19].

The finite volume method is developed on the basis of the finite difference method. To begin with, the calculated region is divided into a series of control volumes, with a control volume surrounding each grid point. Then each control volume is integrated and a set of discrete equations is obtained. Finally, the discrete equations need to be solved. The finite volume method is suitable for fluid computations. This method has a high computing speed and low requirements for the grid, but its precision is limited. To solve the differential equations, the finite volume method is often used [55, 56].

The finite element method uses the variational principle to minimize the error function. The advantage of the finite element method is that it can simulate complex curve or surface boundaries accurately. Furthermore, the mesh division is arbitrary and general programs can be designed easily. Nevertheless, the finite element method cannot give a reasonable physical explanation and some errors in the calculation are still difficult to reduce. Recently, some investigators have used the finite element method to solve the differential equations [57,58,59].

The spectral method is a class of computing techniques using orthogonal functions or intrinsic functions as approximate functions to solve certain differential equations. The superiority of the spectral method is that it obtains a higher precision using fewer grid points. Poor stability and high complexity in the treatment of boundary conditions are the major weaknesses of this method. The spectral method has been utilized to solve the one-dimensional pulse wave propagation equations by a few researchers [60, 61].

b. Frequency domain method

In order to reduce the computational complexity of the nonlinear model, a transmission line method is used to solve the Navier–Stokes equations in the frequency domain. The method requires that the 1D Navier–Stokes equations be linearized.
According to the similarity between electromagnetic propagation theory and pulse wave propagation theory, the linear 1D Navier–Stokes equations (Eqs. 3, 4) in hemodynamics are converted into the electrical transmission line equations (Eqs. 5, 6) of circuit theory [62]. Subsequently, methods for solving electrical circuits can be employed to solve the 1D Navier–Stokes equations. A transmission line equivalent circuit of an arterial segment is represented as shown in Fig. 4.

$$\begin{aligned} -\frac{\partial q}{\partial x}= \, {} \frac{dA}{dp}\frac{\partial p}{\partial t}. \end{aligned}$$

$$\begin{aligned} -\frac{\partial p}{\partial x}= \, {} \frac{\rho }{A}\frac{\partial q}{\partial t}+\frac{8\mu }{\pi r^4}q. \end{aligned}$$

$$\begin{aligned} -\frac{\partial I}{\partial x}= \, {} UG+C\frac{\partial U}{\partial t}. \end{aligned}$$

$$\begin{aligned} -\frac{\partial U}{\partial x}= \, {} IR+L\frac{\partial I}{\partial t}. \end{aligned}$$

where U is the voltage, I is the current, \(R=\frac{8\mu }{\pi r^4}\) is the resistance, \(L=\frac{\rho }{A}=\frac{\rho }{\pi r^2}\) is the inductance, \(C=\frac{dA}{dp}=\frac{3 \pi r^2}{2Eh}\) is the capacitance, E is the Young's modulus, and h is the arterial wall thickness. G is the conductance, describing blood flow leakage, which is usually neglected. The electrical circuit is comprised of resistive, inductive and capacitive elements. The values of these elements are calculated from mechanical and geometric parameters of the arterial tree.

Figure 4: Transmission line equivalent circuit. a Arterial segment of unit length; b transmission line segment. Here \(Z_{input}\) is the input impedance, \(Z_L\) is the terminal impedance, \(\gamma = \sqrt{(R+jwL)(G+jwC)}\) is the propagation constant and \(Z_c= \sqrt{(R+jwL)/(G+jwC)}\) is the characteristic impedance. When the transmission line is lossless (\(R=G=0\)), \(\gamma = w \sqrt{LC}\) and \(Z_c= \sqrt{L/C}\).

For the 1D pulse wave propagation equations, boundary conditions, commonly including inflow, bifurcation and outflow boundary conditions, need to be determined.

a. Inflow conditions

The flow waveform measured in vivo at the ascending aorta or aortic root, using magnetic resonance imaging or ultrasound equipment, can serve as the inflow condition. Alternatively, a flow function derived from a simple model of the heart can serve as the inflow condition. The function is periodic and is mainly determined by the cardiac period and cardiac output parameters [18]:

$$\begin{aligned} q(0,t)= \, {} CO\frac{t}{\tau ^2}e^{\frac{-t^2}{2\tau ^2}}, \quad 0\le t<T. \end{aligned}$$

$$\begin{aligned} q(0,t+jT)= \, {} q(0,t), \quad j=1,2,3,\ldots \end{aligned}$$

where CO denotes the cardiac output, \(\tau\) denotes the time for the cardiac output to reach its maximum and T denotes the cardiac period.

The other inflow condition is obtained when the left ventricle of the heart is coupled to the 1D arterial tree model. Two main heart models have been developed to represent the relationship between ventricular pressure and volume. In the time-varying elastance model, the heart is seen as an elastance varying with time [63]. This left ventricular model indicates the instantaneous change of pressure and volume in the left ventricle:

$$\begin{aligned} E(t)=\frac{P(t)}{V(t)-V_0}. \end{aligned}$$

where E is the elastance, P is the instantaneous pressure of the left ventricle, V is the instantaneous volume of the left ventricle and \(V_0\) is the volume intercept of the end-systolic line.
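As a small illustration of Eq. (9), the sketch below (our own example; the raised-cosine normalized elastance shape and all numerical values are assumptions for illustration, not a validated heart model) computes left-ventricular pressure from a prescribed volume trace via \(P(t)=E(t)(V(t)-V_0)\).

```python
import numpy as np

T, Ts = 0.8, 0.3             # cardiac period and systolic duration [s] (assumed)
Emin, Emax = 0.06, 2.0       # diastolic/end-systolic elastance [mmHg/mL] (assumed)
V0 = 10.0                    # volume intercept of the end-systolic line [mL]

def elastance(t):
    """Time-varying elastance E(t); raised-cosine systolic bump (assumed shape)."""
    t = t % T
    if t < Ts:
        return Emin + 0.5 * (Emax - Emin) * (1.0 - np.cos(2.0 * np.pi * t / Ts))
    return Emin

def volume(t):
    """Prescribed illustrative LV volume: linear ejection, then linear filling."""
    t = t % T
    if t < Ts:
        return 120.0 - 70.0 * t / Ts              # eject from 120 to 50 mL
    return 50.0 + 70.0 * (t - Ts) / (T - Ts)      # refill back to 120 mL

t = np.linspace(0.0, T, 400, endpoint=False)
P = np.array([elastance(tk) * (volume(tk) - V0) for tk in t])   # Eq. (9) rearranged
print("peak LV pressure: %.0f mmHg" % P.max())
```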
The one-fiber model is another heart model, in which the heart is described as a rotationally symmetric cylindrical or spherical cavity [64]. The left-ventricular pump function is reflected by the wall tissue function. It is assumed that fiber stress and strain are homogeneously distributed in the thick wall. The ratio of the fiber stress (\(\tau _f\)) to the pressure of the left ventricle (\(P_{lv}\)) is closely related to the ratio of the wall volume (\(V_w\)) to the cavity volume (\(V_{lv}\)):

$$\begin{aligned} \frac{P_{lv}}{\tau _f}=\frac{1}{3}ln\left( 1+\frac{V_w}{V_{lv}}\right) . \end{aligned}$$

b. Bifurcation conditions

In arterial networks, vessel branching is another important sort of boundary condition. At a bifurcation, the principle of pressure and flow continuity is applied [32, 55, 59, 65]. It is assumed that each bifurcation is situated at a point and the effect of the bifurcation angles is ignored. Without any blood leakage, according to the conservation of mass, the outlet flow of a parent vessel is equal to the sum of the inlet flows of the two daughter vessels at the bifurcation (Eq. 11). Considering the continuity of pressure, the pressure of the parent vessel and the pressure of each daughter vessel are identical at the bifurcation (Eq. 12). In reality, there exist energy losses at bifurcations. Generally, the energy loss at a bifurcation is quite small and is often neglected. At the aortic arch bifurcation, however, large energy losses are brought about: because of the nearly right-angled turn and the high blood velocity at this bifurcation, remarkable vortices are produced. In order to represent the loss at the aortic arch bifurcation, a loss coefficient K is introduced [19] (Eq. 13).

$$\begin{aligned} q_{pa}= \, {} q_{d1}+q_{d2}. \end{aligned}$$

$$\begin{aligned} p_{pa}= \, {} p_{d1}=p_{d2}. \end{aligned}$$

$$\begin{aligned} p_{di}= \, {} p_{pa}+\frac{\rho }{2}((\bar{u_x})^2_{pa}- (\bar{u_x})^2_{di})-\frac{K_{di}}{2}(\bar{u_x})^2_{pa}, \quad i=1,2. \end{aligned}$$

where \(\bar{u_x}\) denotes the average axial velocity and \(\rho\) denotes the density. The subscripts pa and d indicate the parent and the daughter vessel, respectively.

c. Outflow conditions

The simplest outflow boundary condition treats the terminal vessel as a pure resistive load [18]. Nevertheless, a precise peripheral resistance value is not easy to determine. Assuming a constant relation between pressure and flow forces pressure and flow to be in phase, which is not physiologically reasonable for large arteries. The pure resistance model is merely suitable for small arteries. To overcome these weaknesses, a phase shift between pressure and flow should be applied at the downstream boundary. The terminal impedance for the pure resistance is as follows:

$$\begin{aligned} Z_L(w)=R_T \end{aligned}$$

where \(Z_L(w)\) denotes the terminal impedance of large arteries and \(R_T\) denotes the peripheral resistance.

Alternatively, the distal network of a truncated vessel can be represented by a terminal impedance modeled as a three-element Windkessel model [66]. The three-element Windkessel model is made up of a resistance \(R_1\) in series with a parallel combination of a capacitor \(C_T\) and another resistance \(R_2\). This model cannot represent wave propagation effects. The frequency dependent impedance of the Windkessel model is given by

$$\begin{aligned} Z_L(w)=\frac{R_1+R_2+iwC_TR_1R_2}{1+iwC_TR_2}. \end{aligned}$$
The relation between pressure and flow at the truncated arteries is given by the following differential equation:

$$\begin{aligned} \frac{\partial p}{\partial t}=R_1\frac{\partial q}{\partial t}-\frac{p}{R_2C_T}+\frac{q(R_1+R_2)}{R_2C_T}. \end{aligned}$$

In recent years, the structured-tree model presented by Olufsen [67] has become a popular outflow boundary condition. Compared with the resistance and Windkessel models, the structured-tree model can simulate the impedances of small arteries more accurately. At the terminal branches of the truncated arterial tree, the structured-tree model, which is based on the linear one-dimensional Navier–Stokes equations, provides a dynamic boundary condition for the large arteries. The model can describe the phase lag between pressure and flow and the high frequency oscillations. Meanwhile, it can also represent the wave propagation effects of the arterial system. According to the convolution theorem, the outflow boundary condition is obtained by

$$\begin{aligned} p(x,t)=\frac{1}{T}\int _{-T/2}^{T/2}z(x,t-\tau )q(x,\tau )d\tau . \end{aligned}$$

The root impedance is computed from the relationship between pressure and flow as follows:

$$\begin{aligned} Z_L(w)=\frac{ig^{-1}sin(wL/c)+Z(L,w)cos(wL/c)}{cos(wL/c)+igZ(L,w)sin(wL/c)}. \end{aligned}$$

where \(g=cC\), c denotes the wave propagation velocity, \(Z_L(w)\) denotes the root impedance, namely, the terminal impedance of the large arteries, Z(L, w) denotes the terminal impedance of the small arteries, L denotes the vessel length and w denotes the angular frequency.

1D models can simulate pressure and flow waveforms at any point of the arterial network according to their distributed properties. Since 1D models include many vascular parameters, they have not been extensively used to reconstruct central aortic pressure up to now. By limiting the number of personalized parameters, 1D models may have great potential for the estimation of central aortic pressure in clinical practice. For 1D models, the aortic flow waveform is the most common inflow condition [24, 68]. Few researchers have used peripheral pressure measurements (e.g. brachial pressure or radial pressure) as model input [25, 69]. The aortic flow waveform can be measured by ultrasound equipment, but not accurately, while it is very expensive to obtain aortic flow waveforms with MRI equipment. Stable pressure waveforms can readily be recorded using peripheral pressure sensors such as applanation tonometry. Generally, the vascular geometric parameters of 1D models are measured by MRI or CT equipment, which is costly and complex, so many researchers have used population averages for these geometric parameters. Nevertheless, Harana et al. [24] measured aortic geometry parameters including the ascending aorta, descending aorta and three supra-aortic branches using MRI equipment. Meanwhile, pulse wave velocity (PWV) and vascular resistance and compliance parameters for each subject were calculated from measured data. The remaining blood flow parameters, such as density and viscosity, were assumed to be constants. In recent years, 1D models with different degrees of complexity have been utilized by several researchers to reconstruct central aortic pressure. For example, Bárdossy et al. [69] presented a "backward" calculation method to derive the central aortic pressure waveform in a 1D model comprising 50 arterial segments. A personalized transfer function between the aorta and the radial artery was established by Jiang et al.
[24] to estimate central aortic pressure based on a 1D model including 55 large arteries and 28 small arteries. Khalifé et al. [68] estimated absolute pressure in the aorta by combining a reduced 1D model, including an ascending aorta branch and a descending aorta branch, with MRI. A non-invasive personalized estimation method of central aortic pressure was developed by Harana et al. [24] using a 1D aortic blood flow model. Owing to the detailed description provided by 1D models, the details of the pressure waveform can be reproduced easily [24]. If all vascular geometric parameters are measured by noninvasive equipment, 1D models can provide accurate estimation of central aortic pressure.

Tube-load models

Tube-load models are distributed parameter models. The theory and applications of tube-load models are described in this part, which mainly focuses on the various tube models built on different assumptions.

Tube-load models are a kind of highly simplified transmission line model, made up of multiple parallel tubes with loads [70]. The simplest tube-load model, whose tube is taken as lossless, linear and uniform, consists of only a single tube and a load, as shown in Fig. 5. The tube signifies the wave transmission pathway of the large arteries and the load signifies the wave reflection site at the arterial terminal. The formulas are as follows:

$$\begin{aligned} T_d = \, {} \sqrt{LC} \end{aligned}$$

$$\begin{aligned} Z_c = \, {} \sqrt{L/C} \end{aligned}$$

$$\begin{aligned} \Gamma (jw) = \, {} \frac{Z_L(jw)-Z_c}{Z_L(jw)+Z_c} \end{aligned}$$

$$\begin{aligned} P(x,jw) = \, {} P_f(0,jw)e^{jwT_d x/d}+P_b(0,jw)e^{-jwT_d x/d} \end{aligned}$$

$$\begin{aligned} Q(x,jw) = \, {} \frac{1}{Z_c}\bigl(P_f(0,jw)e^{jwT_d x/d}-P_b(0,jw)e^{-jwT_d x/d}\bigr) \end{aligned}$$

$$\begin{aligned} P_p(jw) = \, {} \frac{\left( jw+\frac{1}{RC}+\frac{1}{2Z_c C}\right) e^{jwT_d} +\frac{1}{2Z_c C}e^{-jwT_d}}{jw+\frac{1}{RC}+\frac{1}{Z_c C}}P_c(jw) \end{aligned}$$

where L denotes the large artery inertance, C denotes the large artery compliance, d denotes the length of the tube, \(T_d\) denotes the time delay, \(Z_c\) denotes the characteristic impedance, \(Z_L\) denotes the terminal impedance, \(\Gamma\) denotes the wave reflection coefficient, P denotes the pressure and Q denotes the flow. In addition, the subscripts f, b, c and p are for the forward wave, backward wave, aorta and peripheral artery, respectively.

Figure 5: Single tube model with a load. R the peripheral resistance, C the load compliance, \(Z_c\) the characteristic impedance and \(T_d\) the time delay.

For tube-load models, there are three main types of loads: pure resistance, generic pole-zero and three-element Windkessel models. The pure resistance load is a strong simplification of the small arterial vessels, accounting for the peripheral resistance [26, 71]. The greatest advantage of this kind of load is its simplicity; its disadvantage is that it differs considerably from the real vascular structure. The generic pole-zero model as a terminal impedance can change the order of the system flexibly [72]; however, its weakness is that the model parameters have no physiological significance. The most frequently used load is the three-element Windkessel model, which consists of a characteristic impedance, a resistance and a compliance [73, 74]. Although the Windkessel model fails to provide detailed anatomical and mechanical information of the arterial network, it can describe the lumped properties of the terminal arterial vessels well.
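To show how these relations translate into a computation, the following is a minimal frequency-domain sketch (our own illustration with assumed parameter values and a toy central waveform) that applies the single-tube transfer function above to a sampled central pressure signal via the FFT; dividing by the transfer function per frequency bin instead would give the reverse, peripheral-to-central, estimate used for central pressure reconstruction.

```python
import numpy as np

# Assumed illustrative tube-load parameters
Td = 0.05    # aorta-to-measurement-site wave travel time [s]
Zc = 0.1     # characteristic impedance [mmHg*s/mL]
R  = 1.0     # peripheral resistance    [mmHg*s/mL]
C  = 0.3     # load compliance          [mL/mmHg]

def transfer(w):
    """Peripheral/central pressure ratio of the single-tube model; w in rad/s."""
    s = 1j * w
    num = (s + 1.0/(R*C) + 1.0/(2*Zc*C)) * np.exp(s*Td) + np.exp(-s*Td) / (2*Zc*C)
    den = s + 1.0/(R*C) + 1.0/(Zc*C)
    return num / den          # equals 1 at w = 0, so mean pressure is preserved

fs, T = 200.0, 0.8                         # sampling rate [Hz], cardiac period [s]
t = np.arange(0.0, T, 1.0/fs)
pc = 90 + 25*np.sin(2*np.pi*t/T) + 8*np.sin(4*np.pi*t/T)   # toy central wave

Pc = np.fft.rfft(pc)                       # central pressure spectrum
w = 2*np.pi*np.fft.rfftfreq(len(pc), 1.0/fs)
pp = np.fft.irfft(Pc * transfer(w), n=len(pc))   # predicted peripheral wave
print("central vs peripheral pulse pressure: %.1f vs %.1f mmHg"
      % (pc.max() - pc.min(), pp.max() - pp.min()))
```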
Based on different assumptions, tube-load models have been developed into T-tube, lossy tube-load, nonlinear tube-load and non-uniform tube-load models, which are summarized in Table 3. In comparison with the simplest tube-load model, these models have a higher accuracy in hemodynamic simulation.

Table 3: Summary of tube-load models based on different assumptions.

T-tube model

The T-tube model is a frequently used tube-load model, in which there are two tubes with two terminal loads, as shown in Fig. 6 [75, 76]. While the tubes signify the head-end and body-end travel paths, the loads represent the head-end and body-end reflection sites, respectively. The advantages of the T-tube model are that it is a simple model and that it can indicate the main features of pressure and flow waveforms in the large blood vessels. However, it cannot represent wave reflection that depends strongly on frequency.

Figure 6: The T-tube model. R the peripheral resistance, C the load compliance, \(Z_c\) the characteristic impedance; subscripts b and h are for the body load and head load, respectively.

Lossy tube-load model

The tube-load models mentioned above are lossless tube-load models. The lossless tube-load model is an ideal and simple model, and in some situations ignoring the losses of the tube can bring large errors [77,78,79,80]. For example, for reconstructing the central aortic pressure waveform from peripheral pressure waveforms, it is generally assumed that the mean pressure is the same at every position. In fact, the blood pressure loss along the arterial tree is large in pathophysiologic conditions or in the postoperative period. In order to improve the accuracy of tube-load models, Abdollahzade et al. [20] proposed a lossy tube-load model of the arterial tree in humans, as shown in Fig. 7. Compared with lossless tube-load models, lossy tube-load models have smaller errors and greater efficacy.

Figure 7: The lossy tube-load model. P the blood pressure, \(\gamma\) the wave propagation constant, \(l_0\) the length of the tube, R the peripheral resistance, C the load compliance, \(Z_c\) the characteristic impedance; subscripts i and o are for the inlet and outlet of the tube, respectively.

Nonlinear tube-load model

Previously, tube-load models were regarded as linear models. In recent years, Gao et al. [81, 82] have developed a nonlinear tube-load model based on the exponential relationship between blood pressure and compliance, as shown in Fig. 8. In the nonlinear tube-load model, arterial compliance is no longer a constant but a function of blood pressure. In contrast with the linear tube-load model, the nonlinear tube-load model has a higher accuracy in estimating pulse transit time [21].

Figure 8: The nonlinear tube-load model. P the blood pressure, \(\alpha\) a constant, L the large artery inertance, R the peripheral resistance, \(C_0\) the large artery compliance, C the load compliance, \(Z_c\) the characteristic impedance; subscripts p and d are for the inlet and outlet of the tube, respectively.

Non-uniform tube-load model

The electrical transmission line theory is applied to the mathematical modeling of arterial vessels under the general assumption that the transmission line is a uniform tube with a terminal load. Taylor [83] explored the wave propagation properties of a non-uniform transmission line, since the uniform tube is too simple to reflect the real characteristics of the arteries. Subsequently, Einav et al.
[84] proposed an exponentially tapered transmission line model of the arterial system, in which the geometrical properties and wall elasticity of the tube taper exponentially along the length of the arterial vessels. At present, there are two methods to describe the taper effects of a non-uniform tube. One method is that the inductance and capacitance of the tube change exponentially with position, as shown in Fig. 9a [85,86,87]. The other method is that an artery is separated into several smaller segments and each segment is viewed as a uniform tube, as shown in Fig. 9b [88, 89].

Figure 9: The non-uniform tube-load model. a A non-uniform tube tapering exponentially with position; b a non-uniform tube consisting of multiple uniform tubes with a successive decrease in diameter.

Tube-load models include tubes and loads, describing the wave propagation and reflection phenomena with only a few parameters. Combining the advantages of Windkessel models (simplicity) and 1D models (accuracy), tube-load models have become an attractive tool for the reconstruction of the central aortic pressure waveform. For tube-load models, inflow conditions are obtained from one or two measured peripheral pressure waveforms. A radial [71, 73, 74], brachial [26] or femoral [72] pressure measurement is commonly used as model input. Moreover, some researchers [27, 90,91,92] chose two measured peripheral pressure waveforms as inflow conditions, for example at the radial and femoral arteries. Because the intravascular and extravascular pressures at the radial artery are very close, the radial pressure waveform can be accurately recorded using applanation tonometry. The brachial cuff-based measurement is a very convenient approach for obtaining brachial pressure, especially suitable for longer term outpatient monitoring of central aortic pressure or potential general hospital ward usage. Single tube or T-tube models with different loads are used to describe the relation between central and peripheral pressures. For example, Swamy et al. [72] employed a single tube with a generic pole-zero load to establish an adaptive transfer function between aortic and femoral pressures. A single tube model with a resistance load was applied by Gao et al. [71] and Natarajan et al. [26] to the estimation of central aortic pressure. Individualized transfer functions were built by Sugimachi et al. [73] and Hahn et al. [74] using a single tube model with a three-element Windkessel load. Ghasemi et al. [27, 90], Lee [91] and Kim et al. [92] utilized a T-tube model with a three-element Windkessel load to reconstruct central aortic pressure from two measured peripheral pressures. In order to acquire an adaptive or individualized transfer function, most researchers measured the pulse transit time for each subject through noninvasive approaches and calculated the remaining parameters by population averaging [26, 27, 71,72,73,74, 92, 93]. Blind system identification is another method of estimating model parameters, which can reconstruct the central aortic pressure waveform from two distinct peripheral pressure waveforms [90, 91]. The blind system identification method can obtain fully individualized parameters. The weakness of this method is that it is inconvenient to measure multiple distinct peripheral pressure waveforms in clinical practice.
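Since the pulse transit time is the parameter most studies individualize, the following minimal sketch (our own illustration on synthetic signals; with real recordings, foot-to-foot detection or full system identification would be used instead) estimates it from two simultaneously sampled waveforms by cross-correlation.

```python
import numpy as np

fs = 1000.0                           # sampling rate [Hz]
t = np.arange(0.0, 2.0, 1.0/fs)       # two seconds of data

def pulse(t, delay):
    """Synthetic periodic pulse waveform shifted by a transit delay [s]."""
    return np.maximum(np.sin(2.0*np.pi*(t - delay)/0.8), 0.0)**2

prox = pulse(t, 0.00)                 # proximal site (e.g., carotid)
dist = pulse(t, 0.06)                 # distal site (e.g., radial), 60 ms later

# Cross-correlate the zero-mean signals; the lag of the peak estimates the PTT.
xc = np.correlate(dist - dist.mean(), prox - prox.mean(), mode="full")
lag = int(np.argmax(xc)) - (len(prox) - 1)
print("estimated pulse transit time: %.0f ms" % (1000.0 * lag / fs))
```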
To examine the performance of individualization, Hahn [93] made a comparative study of central pressure estimation among a fully individualized, two partially individualized and a fully generalized transfer function, all based on tube-load models. Results from experiments on 9 swine showed that the fully individualized transfer function was more accurate than the two partially individualized functions and the fully generalized one. Since only one parameter of the tube-load model, the pulse transit time, could be readily individualized, tube-load models had moderate accuracy for estimating central aortic pressure.

Discussions and conclusions

Comparisons of three types of models

In this paper, recent progress on Windkessel, 1D and tube-load models of the arterial system is reviewed. Windkessel models have developed increasingly complicated and detailed structures, and a variety of Windkessel models have been established [39, 94]. 1D models including more arterial segments and coupling the heart have been set up in recent years [29, 50]. Tube-load models with various types of tubes based on different assumptions have been investigated [77, 81, 83]. To select an appropriate model, the three types of low-dimensional models are compared below.

The Windkessel model gives a global description of the arterial system, and every model element has a particular physiological meaning [15, 28]. Windkessel models include only a few parameters, which are usually estimated from measured aortic pressure and flow waveforms. Due to this simplicity, Windkessel models have low accuracy. Iterative and system identification techniques are adopted by most researchers, such as the linear least-squares method [95] and the subspace model identification method [96]. Since the Windkessel model is established by a set of ordinary differential equations, it is simpler than the distributed parameter models built from sets of partial differential equations, and it is well suited to calculating hemodynamic parameters and simulating the whole circulatory system.

1D models and tube-load models are distributed parameter models, which capture the distributed properties along the arterial vessels. 1D models can accurately predict flow and pressure in the entire arterial tree, and their parameters truly reflect the physiological properties of arterial vessels [49]. A 1D model is determined by a set of partial differential equations with many parameters, and it is very difficult to determine such a large number of parameters using system identification methods. The majority of geometrical and mechanical parameters can be measured directly by MRI, CT or Doppler ultrasound equipment; the rest are approximated or treated as constants, such as the thickness of the arterial wall and the width of the boundary layer. The 1D model is an appropriate approach for studying pulse wave propagation in the arterial system.

Tube-load models are a parallel connection of multiple tubes with parametric loads, i.e., a simplified 1D model. The tube indicates the path of pulse wave propagation, and the load indicates the effective reflection point [75, 76]. The transit time of pressure and flow waves is described by the constant time-delay parameter of the model. Tube-load models can represent the relationship between central and peripheral arteries with a few parameters.
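Returning to the Windkessel parameter estimation mentioned above: the least-squares idea can be illustrated with a small sketch. This is an illustrative two-element Windkessel fit, not the specific procedure of [95]. With measured aortic pressure P(t) and inflow Q(t), the model C·dP/dt + P/R = Q is linear in the pair (−1/(RC), 1/C), so a single regression recovers R and C. All numbers and units below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 200.0
t = np.arange(0, 10, 1 / fs)

R_true, C_true = 1.0, 1.5                      # illustrative units
Q = np.clip(np.sin(2 * np.pi * 1.25 * t), 0, None) * 400   # pulsatile inflow

# simulate the forward model with forward Euler
P = np.empty_like(t)
P[0] = 80.0
for k in range(t.size - 1):
    dP = (Q[k] - P[k] / R_true) / C_true
    P[k + 1] = P[k] + dP / fs
P_meas = P + rng.normal(0, 0.1, P.shape)       # add measurement noise

# dP/dt = a*P + b*Q with a = -1/(R*C), b = 1/C: one linear least-squares fit
dPdt = np.gradient(P_meas, 1 / fs)
A = np.column_stack([P_meas, Q])
(a, b), *_ = np.linalg.lstsq(A, dPdt, rcond=None)
C_hat = 1 / b
R_hat = -1 / (a * C_hat)
print(f"R = {R_hat:.3f} (true {R_true}), C = {C_hat:.3f} (true {C_true})")
```

The same recipe extends to the three- and four-element variants, at the cost of more regressors and more sensitivity to derivative noise, which is one reason subspace identification methods are also used.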
Once proximal and distal waveforms of the arterial tree are obtained, the parameters of tube-load models can be determined by system identification techniques. In comparison with 1D models, tube-load models have a lower computational cost; their shortcoming is that they are less accurate. Tube-load models are suitable for investigating wave propagation and reflection.

In general, the computational time of all three types of models is short, so they have the potential for real-time, general-use clinical monitoring in intensive care. In specific applications, however, these models may need to be optimized to obtain more accurate clinical parameters, for example through personalization of the model, which is discussed in detail later. A personalized model requires that its parameters can be obtained individually. The real-time character depends on the required patient-specific model parameters, and two cases should be distinguished. If the patient-specific model parameters cannot be obtained in real time, real-time monitoring of central aortic pressure is impossible. For example, the pulse transit time parameter in tube-load models can be obtained from various combinations of physiological signals, such as two pulse waveforms (e.g. at the carotid and radial sites) or a combination of ECG and a pulse waveform (e.g. at the radial site). With the former method (two pulse waveforms), the pulse transit time cannot be obtained in real time, since the carotid pulse waveform is difficult to measure over long periods; in this case, central aortic pressure cannot be monitored in real time. With the latter method (ECG and pulse waveform), the pulse transit time can be estimated in real time, because ECG and radial pulse waveforms can be measured simultaneously over long periods, so central aortic pressure can be monitored in real time. Windkessel models and tube-load models can be used for long-term monitoring of central aortic pressure and potential general hospital ward usage, but 1D models cannot, because the geometric parameters of a 1D model need to be measured by CT or MRI equipment, which is complex and costly. Detailed comparisons of the three types of low-dimensional models are summarized in Table 4.

Table 4 Comparison of Windkessel, 1D and tube-load models

Future challenges

Although a variety of physics-based models have been developed, a number of challenging problems remain to be solved. Multi-scale modeling, coupling of various systems and patient-specific modeling are very significant research subjects at present.

Multi-scale modeling

A model at each scale has its scope of application [29, 97, 98]. Low-dimensional models have low computational cost but poor accuracy, which makes them suitable for representing the global properties of arterial networks. High-dimensional models, by contrast, offer high-accuracy simulation at the price of greater complexity; they are commonly used to describe the local properties of arterial vessels in detail. Therefore, coupling models of different scales can combine the advantages of different dimensional models [43, 99, 100]. Multi-scale modeling of arterial vessels can be a powerful tool with potential applications in clinical practice.

Coupling of various systems

There are many biological systems in the human body, such as the nervous system and the respiratory system, which run simultaneously and interactively.
The effects of other biological systems are usually ignored in the physical modeling of the cardiovascular system. As a matter of fact, the nervous system has a significant impact on the cardiovascular system [101]. For example, as the blood pressure changes, the sympathetic and parasympathetic nerves are activated to regulate the cardiovascular response and avoid dysfunction [101]. Smith et al. [102] proposed a modified cardiovascular model that includes a minimal model of autonomic nervous system activation. This modified model can simulate various cardiovascular diseases such as hypovolemic shock, septic shock, cardiogenic shock, pericardial tamponade and pulmonary embolism. Because both the respiratory system and the cardiovascular system are located in the thoracic cavity, the cardiovascular system is strongly influenced by the respiratory system [103]. For example, an integrated model of the cardiopulmonary system was presented by Albanese et al. [104], which included cardiovascular circulation, respiratory mechanics, gas exchange and neural control mechanisms; the physiological parameters in normal and pathological conditions were simulated, and the interactions between the cardiovascular and respiratory systems were explained. Trenhago et al. [105] proposed a refined coupled model of the cardiovascular and respiratory systems, consisting of 19 compartments, in which the respiratory system was extended to include a complex subsystem for gas exchange and transport. The advantage of the refined model is that it can simulate situations that existing models cannot, and it helps us to understand complex mechanisms better. The respiratory effects should therefore be considered when modeling the cardiovascular system. In comparison with previous cardiovascular models, a modified cardiovascular model integrating neurological and respiratory components has better accuracy. Hence, combining the nervous system and the respiratory system with the cardiovascular system might be a good way to improve the accuracy of models.

Patient-specific modeling

Patient-specific models provide opportunities for improving accuracy [106,107,108]. The patient-specific parameters can be obtained from imaging techniques such as magnetic resonance imaging, computed tomography and ultrasound. A properly personalized model can predict physiological or pathological status more accurately [109, 110]. Personalized modeling of the arterial system can play an increasingly key role in the development of medical instruments. This paper takes the estimation of central aortic pressure as an example of the personalization of physics-based models. Although several individualized estimation methods for central aortic pressure have been proposed, these methods have not yet been sufficiently verified in clinical practice. An accurate and convenient method of reconstructing the central aortic pressure waveform, with sufficient verification, is a current hot topic. Physics-based models with clear physiological meaning may provide great potential for individualized estimation of central aortic pressure. Of the three types of physics-based models, 1D models are the most accurate and the most complicated. Under the condition of guaranteeing high accuracy, reducing the complexity of the model as much as possible is the preferred approach: by examining the influence of the complexity of the arterial tree on the accuracy of the model, the complex model can be greatly simplified.
For example, a 1D aortic model consisting of the ascending aorta, aortic arch, thoracic aorta and abdominal aorta, with Windkessel outflow conditions, may be created to reconstruct the aortic pressure waveform with high accuracy. Another feasible way is to use body size parameters to replace complex blood vessel parameters. Since 1D models have too many parameters, it is practically impossible to measure them all; human body size parameters, however, are easily available. A statistical analysis has shown that blood vessel sizes correlate closely with the sex, age, height, and weight of a subject [111]. Furthermore, Young's modulus, representing vascular stiffness, has a strong correlation with age. It might therefore be possible to build relationships between the geometrical and mechanical parameters of 1D models and the body size parameters of the subject: the geometrical and mechanical parameters can first be set to population averages, and the average parameters can then be calibrated with body size parameters. Combining the two methods above, a modified 1D model may be the best choice for estimating central aortic pressure. In conclusion, different physics-based models of the cardiovascular system have different traits, and the selection of a model mainly depends on the aim of the modeling, including the required complexity and accuracy. By discussing the advantages and disadvantages of various physics-based models, this review contributes to a better understanding of physiological mechanisms in the arterial system and provides effective guidance on low-dimensional physics-based modeling. Nejad SE, Carey JP, Mcmurtry MS, Hahn J-O. Model-based cardiovascular disease diagnosis: a preliminary in-silico study. Biomech Model Mechanobiol. 2016;16(2):549–60. Mendis S, Puska P, Norrving B. Global atlas on cardiovascular disease prevention and control. Geneva: World Health Organization; 2011. Weintraub WS, Daniels SR, Burke LE, Franklin BA Jr, Hayman LL, Lloyd-Jones D, Pandey DK, Sanchez EJ, Schram AP. Value of primordial and primary prevention for cardiovascular disease: a policy statement from the American Heart Association. Circulation. 2011;124(8):967–90. Quarteroni A, Formaggia L. Mathematical modelling and numerical simulation of the cardiovascular system. Handb Numer Anal. 2004;12:7–9. Quarteroni A, Manzoni A, Vergara C. The cardiovascular system: mathematical modelling, numerical algorithms and clinical applications. Acta Numer. 2017;26:365–590. Capoccia M, Marconi S, Singh SA, Pisanelli DM, De CL. Simulation as a preoperative planning approach in advanced heart failure patients. A retrospective clinical analysis. Biomed Eng Online. 2018;17(1):52–72. Tang D, Li ZY, Gijsen F, Giddens DP. Cardiovascular diseases and vulnerable plaques: data, modeling, predictions and clinical applications. Biomed Eng Online. 2015;14(1):1–7. Ghigo AR, Fullana JM, Lagree PY. A 2D nonlinear multiring model for blood flow in large elastic arteries. J Comput Phys. 2016;350:136–65. Boujena S, Kafi O, El Khatib N. A 2D mathematical model of blood flow and its interactions in an atherosclerotic artery. Math Model Nat Phenom. 2014;09(6):46–68. Lopezperez A, Sebastian R, Ferrero JM. Three-dimensional cardiac computational modelling: methods, features and applications. Biomed Eng Online. 2015;14(1):35–65. Xie X, Zheng M, Wen D, Li Y, Xie S. A new CFD based non-invasive method for functional diagnosis of coronary stenosis. Biomed Eng Online.
2018;17(1):36–48. Frank O. Grundform des arteriellen pulses. Z Biol. 1899;37:483–526. Shi Y, Lawford P, Hose R. Review of zero-D and 1-D models of blood flow in the cardiovascular system. Biomed Eng Online. 2011;10(1):33–70. Malatos S, Raptis A, Xenos M. Advances in low-dimensional mathematical modeling of the human cardiovascular system. J Hypertens Manag. 2016;2(2):1–10. Westerhof N, Bosman F, De Vries CJ, Noordergraaf A. Analog studies of the human systemic arterial tree. J Biomech. 1969;2(2):121–43. Lambert Wallace J. Fluid flow in a nonrigid tube. PhD thesis, Purdue University, Mechanical Engineering Department. 1956. Hughes TJR, Lubliner J. On the one-dimensional theory of blood flow in the larger vessels. Math Biosci. 1973;18(1):161–70. Olufsen MS. Modeling the arterial system with reference to an anesthesia simulator. PhD thesis, Roskilde University, Mathematics Department. 1998. Olufsen MS, Peskin CS, Kim WY, Pedersen EM, Nadim A, Larsen J. Numerical simulation and experimental validation of blood flow in arteries with structured-tree outflow conditions. Ann Biomed Eng. 2000;28(11):1281–99. Abdollahzade M, Kim CS, Fazeli N, Finegan BA, Sean MM, Hahn J-O. Data-driven lossy tube-load modeling of arterial tree: in-human study. J Biomech Eng. 2014;136(10):101011–7. Gao M, Cheng H, Sung S, Chen C, Olivier NB, Mukkamala R. Estimation of pulse transit time as a function of blood pressure using a nonlinear arterial tube-load model. IEEE Trans Biomed Eng. 2017;64(7):1524–34. Vennin S, Li Y, Willemet M, Fok H, Gu H, Charlton P, Alastruey J, Chowienczyk P. Identifying hemodynamic determinants of pulse pressure. Hypertension. 2017;70(5):1176–82. Her K, Kim JY, Lim KM, Choi SW. Windkessel model of hemodynamic state supported by a pulsatile ventricular assist device in premature ventricle contraction. Biomed Eng Online. 2018;17(1):18–30. Harana MJ. Non-invasive, MRI-based calculation of the aortic blood pressure waveform by 0-dimensional flow modelling: development and testing using in silico and in vivo data. Master's thesis, King's college London, Department of Biomedical Engineering. 2017. Jiang S, Zhiqiang, Wang F, Wu J-K. A personalized-model-based central aortic pressure estimation method. J Biomech. 2016;49(16):4098–106. Natarajan K, Cheng H-M, Liu J, Gao M, Sung S-H, Chen C-H, Hahn J-O, Mukkamala R. Central blood pressure monitoring via a standard automatic arm cuff. Sci Rep. 2017;7(1):14441–52. Ghasemi Z, Lee JC, Kim C-S, Cheng H-M, Sung S-H, Chen C-H, Mukkamala R, Hahn J-O. Estimation of cardiovascular risk predictors from non-invasively measured diametric pulse volume waveforms via multiple measurement information fusion. Sci Rep. 2018;8(1):10433–43. Westerhof N, Lankhaar JW, Westerhof BE. The arterial Windkessel. Med Biol Eng Comput. 2009;47(2):131–41. Liu H, Liang F, Wong J, Fujiwara T, Ye W, Tsubota KI, Sugawara M. Multi-scale modeling of hemodynamics in the cardiovascular system. Acta Mech Sin. 2015;31(4):446–64. Liu Z, Brin KP, Yin FC. Estimation of total arterial compliance: an improved method and evaluation of current methods. Am J Physiol. 1986;251(2):588–600. Lee J, Sohn JJ, Park J, Yang SM, Lee S, Kim HC. Novel blood pressure and pulse pressure estimation based on pulse transit time and stroke volume approximation. Biomed Eng Online. 2018;17(1):81–100. Wesseling KH, Jansen JR, Settels JJ, Schreuder JJ. Computation of aortic flow from pressure in humans using a nonlinear, three-element model. J Appl Physiol. 1993;74(5):2566–73. 
Laskey WK, Parker HG, Ferrari VA, Kussmaul WG, Noordergraaf A. Estimation of total systemic arterial compliance in humans. J Appl Physiol. 1990;69(1):112–9. Stergiopulos N, Young DF, Rogge TR. Computer simulation of arterial flow with applications to arterial and aortic stenoses. J Biomech. 1992;25(12):1477–88. Stergiopulos N, Westerhof BE, Westerhof N. Total arterial inertance as the fourth element of the Windkessel model. Am J Physiol. 1999;276(1 Pt 2):81–8. Deswysen B, Charlier AA, Gevers M. Quantitative evaluation of the systemic arterial bed by parameter estimation of a simple model. Med Biol Eng Comput. 1980;18(2):153–6. Burattini R, Di SP. Development of systemic arterial mechanical properties from infancy to adulthood interpreted by four-element windkessel models. J Appl Physiol. 2007;103(1):66–79. Segers P, Georgakopoulos D, Afanasyeva M, Champion HC, Judge DP, Millar HD, Verdonck P, Kass DA, Stergiopulos N, Westerhof N. Conductance catheter-based assessment of arterial input impedance, arterial function, and ventricular-vascular interaction in mice. Am J Physiol Heart Circ Physiol. 2005;288(3):1157–64. Jager GN, Westerhof N, Noordergraaf A. Oscillatory flow impedance in electrical analog of arterial system: representation of sleeve effect and non-newtonian properties of blood. Circ Res. 1965;16(2):121–33. Avanzolini G, Barbini P, Cappello A, Cevenini G, Moller D, Pohl V, Sikora T. Electrical analogs for monitoring vascular properties in artificial heart studies. IEEE Trans Biomed Eng. 1989;36(4):462–70. Frasch HF, Kresh JY, Noordergraaf A. Two-port analysis of microcirculation: an extension of Windkessel. Am J Physiol. 1996;270(2):376–85. Shi Y, Lawford PV, Hose DR. Numerical modeling of hemodynamics with pulsatile impeller pump support. Ann Biomed Eng. 2010;38(8):2621–34. Quarteroni A, Veneziani A, Vergara C. Geometric multiscale modeling of the cardiovascular system, between theory and practice. Comput Meth Appl Mech Eng. 2016;302:193–252. Stergiopulos N, Westerhof N. Role of total arterial compliance and peripheral resistance in the determination of systolic and diastolic aortic pressure. Pathologie Biologie. 1999;47(6):641–7. Struijk PC, Mathews VJ, Loupas T, Stewart PA, Clark EB, Steegers EAP, Wladimiroff JW. Blood pressure estimation in the human fetal descending aorta. Ultrasound Obstet Gynecol. 2008;32(5):673–81. Zala D. Arterial flow based transfer function and ascending aorta pressure waveform estimation. Master's thesis, The State University of New Jersey, The Graduate School of Biomedical Science. 2017. Revie JA, Stevenson D, Chase JG, Pretty CJ, Lambermont BC, Ghuysen A, Kolh P, Shaw Geoffrey M, Desaive T. Evaluation of a model-based hemodynamic monitoring method in a porcine study of septic shock. Comput Math Method Med. 2013;2013:1–17. Cai Y, Ma C, Zhang P, Liu J. A novel method to reconstruct central aortic pressure signal using dual-peripheral pressure waves. In: IEEE international conference on information science & technology; 2014. p. 565–8. Euler L. Principia pro motu sanguinis per arterias determinando. J Biol Phys. 1775;2(1):814–23. Reymond P, Merenda F, Perren F, Rufenacht D, Stergiopulos N. Validation of a one-dimensional model of the systemic arterial tree. Am J Physiol Heart Circ Physiol. 2009;297(1):208–22. Saito M, Ikenaga Y, Matsukawa M, Watanabe Y, Asada T, Lagreee P-Y. One-dimensional model for propagation of a pressure wave in a model of the human arterial network: comparison of theoretical and experimental results. J Biomech Eng. 
2011;133(12):121005–13. Parker KH, Jones CJH. Forward and backward running waves in the arteries: analysis using the method of characteristics. J Biol Phys. 1990;112(3):322–6. Wang JJ, O'Brien AB, Shrive NG, Parker KH, Tyberg JV. Time-domain representation of ventricular-arterial coupling as a Windkessel and wave system. Am J Physiol Heart Circ Physiol. 2003;284(4):1358–68. Wang JPK. Wave propagation in a model of the arterial circulation. J Biomech. 2004;37(4):457–70. Brook BS, Falle SAEG, Pedley TJ. Numerical solutions for unsteady gravity-driven flows in collapsible tubes: evolution and roll-wave instability of a steady state. J Fluid Mech. 1999;396(396):223–56. Brook BS, Pedley TJ. A model for time-dependent flow in (giraffe jugular) veins: uniform tube properties. J Biomech. 2002;35(1):95–107. Porenta G, Young DF, Rogge TR. A finite-element model of blood flow in arteries including taper, branches, and obstructions. J Biomech Eng. 1986;108(2):161–7. Formaggia L, Gerbeau J-F, Nobile F, Quarteroni A. On the coupling of 3D and 1D Navier-Stokes equations for flow problems in compliant vessels. Comput Meth Appl Mech Eng. 2001;191(6–7):561–82. Wan J, Steele B, Spicer SA, Strohband S, Feijoo G, Hughes TJR, Taylor CA. A one-dimensional finite element method for simulation-based medical planning for cardiovascular disease. J Biomech Eng. 2002;5(3):195–206. Sherwin SJ, Franke V, Peireo J, Parker K. One-dimensional modelling of a vascular network in space-time variables. J Eng Math. 2003;47(3–4):217–50. Bessems D, Giannopapa CG, Rutten MCM, Vosse FNVD. Experimental validation of a time-domain-based wave propagation model of blood flow in viscoelastic vessels. J Biomech. 2008;41(2):284–91. John LR. Forward electrical transmission line model of the human arterial system. Med Biol Eng Comput. 2004;42(3):312–21. Sunagawa K, Sagawa K. Models of ventricular contraction based on time-varying elastance. Crit Rev Biomed Eng. 1982;7(3):193–228. Cox LG, Loerakker S, Rutten MC, de Mol BA, van de Vosse F. A mathematical model to evaluate control strategies for mechanical circulatory support. Artif Organs. 2009;33(8):593–603. Lighthill J. Mathematical biofluiddynamics. Comput Methods Appl Mech Eng. 1975;386(4):1–3. Vosse FNVD, Stergiopulos N. Pulse wave propagation in the arterial tree. Annu Rev Fluid Mech. 2011;43(1):467–99. Olufsen MS. Structured tree outflow condition for blood flow in larger systemic arteries. Am J Physiol. 1999;276(1 Pt 2):257–68. Khalife M, Decoene A, Caetano F, Rochefort L, Durand E, Rodriguez D. Estimating absolute aortic pressure using MRI and a one-dimensional model. J Biomech. 2014;47(13):3390–9. Bardossy G, Halesz G. A "backward" calculation method for the estimation of central aortic pressure wave in a 1D arterial model network. Comput Fluids. 2013;73:134–44. Zhang G, Hahn J-O, Mukkamala R. Tube-load model parameter estimation for monitoring arterial hemodynamics. Front Physiol. 2011;2:72–89. Gao M, Rose William C, Fetics B, Kass DA, Chen C-H, Mukkamala R. A simple adaptive transfer function for deriving the central blood pressure waveform from a radial blood pressure waveform. Sci Rep. 2016;6(1):33230–8. Swamy G, Xu D, Olivier NB, Mukkamala R. An adaptive transfer function for deriving the aortic pressure waveform from a peripheral artery pressure waveform. Am J Physiol Heart Circ Physiol. 2009;297(5):1956–63. Sugimachi M, Shishido T, Miyatake K, Sunagawa K. A new model-based method of reconstructing central aortic pressure from peripheral arterial pressure.
Jpn J Physiol. 2001;51(2):217–22. Hahn J-O, Reisner AT, Jaffer FA, Harry H. Subject-specific estimation of central aortic blood pressure using an individualized transfer function: a preliminary feasibility study. IEEE Trans Inform Technol Biomed. 2011;16(2):212–20. Campbell KB, Burattini R, Bell DL, Kirkpatrick RD, Knowlen GG. Time-domain formulation of asymmetric T-tube model of arterial system. Am J Physiol. 1990;258(6 Pt 2):1761–74. Burattini R, Campbell KB. Modified asymmetric T-tube model to infer arterial wave reflection at the aortic root. IEEE Trans Biomed Eng. 2002;36(8):805–14. Gravlee GP, Brauer SD, O'Rourke MF, Avolio AP. A comparison of brachial, femoral, and aortic intra-arterial pressures before and after cardiopulmonary bypass. Anaesth Intensive Care. 1989;17(3):305–11. Nakayama R, Goto T, Kukita I, Sakata R. Sustained effects of plasma norepinephrine levels on femoral–radial pressure gradient after cardiopulmonary bypass. J Anesth. 1993;7(1):8–15. Pauca AL, Wallenhaupt SL, Kon ND, Tucker WY. Does radial artery pressure accurately reflect aortic pressure? Chest. 1992;102(4):1193–8. Young DF, Cholvin NR, Roth AC. Pressure drop across artificially induced stenoses in the femoral arteries of dogs. Circ Res. 1975;36(6):735–43. Gao M, Mukkamala R. Perturbationless calibration of pulse transit time to blood pressure. Conf Proc IEEE Eng Med Biol Soc. 2012;2012(2012):232–5. Hughes DJ, Babbs CF, Geddes LA, Bourland JD. Measurements of Young's modulus of elasticity of the canine aorta with ultrasound. Ultrason Imaging. 1979;1(4):356–67. Taylor MG. Wave-travel in a non-uniform transmission line, in relation to pulses in arteries. Phys Med Biol. 1965;10(4):539–50. Einav S, Aharoni S, Manoach M. Exponentially tapered transmission line model of the arterial system. IEEE Trans Biomed Eng. 1988;35(5):333–9. Chang KC, Tseng YZ, Lin YJ, Kuo TS, Chen HI. Exponentially tapered T-tube model of systemic arterial system in dogs. Med Eng Phys. 1994;16(5):370–8. Chang KC, Tseng YZ, Kuo TS, Chen HI. Impedance and wave reflection in arterial system: simulation with geometrically tapered T-tubes. Med Biol Eng Comput. 1995;33(5):652–60. Fogliardi R, Burattini R, Campbell KB. Identification and physiological relevance of an exponentially tapered tube model of canine descending aortic circulation. Med Eng Phys. 1997;19(3):201–11. Segers P, Carlier S, Pasquet A, Rabben SI, Hellevik LR, Remme E, De BT, De SJ, Thomas JD, Verdonck P. Individualizing the aorto-radial pressure transfer function: feasibility of a model-based approach. Am J Physiol Heart Circ Physiol. 2000;279(2):542–9. Matonick JP, Li KJ. A new nonuniform piecewise linear viscoelastic model of the aorta with propagation characteristics. Cardiovasc Eng. 2001;1(1):37–47. Ghasemi Z, Kim C-S, Ginsberg E, Gupta A, Hahn J-O. Model-based blind system identification approach to estimation of central aortic blood pressure waveform from noninvasive diametric circulatory signals. J Dyn Syst Meas Control. 2016;139:1–10. Lee J. Subject-specific multichannel blind system identification of human arterial tree via cuff oscillation measurements. Master's thesis, University of Maryland, Department of Mechanical Engineering. 2016. Kim C-S, Fazeli N, McMurtry MS, Finegan BA, Hahn J-O. Quantification of wave reflection using peripheral blood pressure waveforms. IEEE J Biomed Health Inform. 2015;19(1):309–16. Hahn J-O, Reisner AT, Jaffer FA, Harry H. Individualized estimation of the central aortic blood pressure waveform: a comparative study. 
IEEE J Biomed Health Inform. 2014;18(1):215–21. Wilde RBPD, Schreuder JJ, Berg PCMVD, Jansen JRC. An evaluation of cardiac output by five arterial pulse contour techniques during cardiac surgery. Anaesthesia. 2007;62(8):760–8. Shim Y, Pasipoularides A, Straley CA, Hampton TG, Soto PF, Owen CH, Davis JW, Glower DD. Arterial windkessel parameter estimation: a new time-domain method. Ann Biomed Eng. 1994;22(1):66–77. Taco K, Faes TJC, Jan-Willem L, Anton VN, Michel V. Estimation of three- and four-element windkessel parameters using subspace model identification. IEEE Trans Biomed Eng. 2010;57(7):1531–8. Alastruey J, Parker KH, Peiro J, Sherwin SJ. Lumped parameter outflow models for 1-D blood flow simulations: effect on pulse waves and parameter estimation. Commun Comput Phys. 2008;4(2):317–36. Taelman L, Degroote J, Verdonck P, Vierendeels J, Segers P. Modeling hemodynamics in vascular networks using a geometrical multiscale approach: numerical aspects. Ann Biomed Eng. 2013;41(7):1445–58. Chen WW, Gao H, Luo XY, Hill NA. Study of cardiovascular function using a coupled left ventricle and systemic circulation model. J Biomech. 2016;49(12):2445–54. Zhang Y, Barocas VH, Berceli SA, Clancy CE, Eckmann DM, Garbey M, Kassab GS, Lochner DR, Mcculloch AD, Tran-Son-Tay R. Multi-scale modeling of the cardiovascular system: disease development, progression, and clinical intervention. Ann Biomed Eng. 2016;44(9):1–19. Liang F. An integrated computational study of multi-scale hemodynamics and multi-mechanism physiology in human cardiovascular system. PhD thesis, Chiba University, Artificial System Science Department. 2007. Smith BW, Andreassen S, Shaw GM, Jensen PL, Rees SE, Chase JG. Simulation of cardiovascular system diseases by including the autonomic nervous system into a minimal model. Comput Methods Programs Biomed. 2007;86(2):153–60. Gutta S, Cheng Q, Benjamin BA. Control mechanism modeling of human cardiovascular-respiratory system. In: Global S. I. P; 2016. p. 918–22. Albanese A, Cheng L, Ursino M, Chbat NW. An integrated mathematical model of the human cardiopulmonary system: model development. Am J Physiol Heart Circ Physiol. 2015;310(7):899–921. Trenhago PR, Fernandes LG, Mueller LO, Blanco PJ, Feijoo RA. An integrated mathematical model of the cardiovascular and respiratory systems. Int J Numer Methods Biomed. 2016;32(1):1–25. Kiselev IN, Kolpakov FA, Biberdorf EA, Baranov VI, Komlyagina TG, Suvorova IY, Melnikov VN, Krivoschekov SG. Patient-specific 1D model of the human cardiovascular system. In: Proceeding of international conference biomedical engineering computer technology; 2015. p. 76–81. Taylor CA, Figueroa CA. Patient-specific modeling of cardiovascular mechanics. Annu Rev Biomed Eng. 2009;11:109–34. Tay WB, Tseng YH, Lin LY, Tseng WY. Towards patient-specific cardiovascular modeling system using the immersed boundary technique. Biomed Eng Online. 2011;10(1):52–68. Poleszczuk J, Debowska M, Dabrowski W, Wojcik-Zaluska A, Zaluska W, Waniewski J. Patient-specific pulse wave propagation model identifies cardiovascular risk characteristics in hemodialysis patients. PLoS Comput Biol. 2018;14(9):1–15. Zheng D, Yin M, Fan X, Yang X, Luo X. A patient-specific lumped-parameter model of coronary circulation. Sci Rep. 2018;8(1):874–83. Mortensen JD, Talbot S, Burkart JA. Cross-sectional internal diameters of human cervical and femoral blood vessels: relationship to subject's sex, age, body size. Anat Rec. 1990;226(1):115–24. Parlikar TA, Heldt T, Ranade GV, Verghese GC. 
Model-based estimation of cardiac output and total peripheral resistance. In: Computers in cardiology; 2007. p. 379–82. Stergiopulos NMJWN. Evaluation of methods for estimation of total arterial compliance. Am J Physiol. 1995;268(2):1540–8. Burkhoff D, Alexander J, Schipke J. Assessment of Windkessel as a model of aortic input impedance. Am J Physiol. 1988;255(2):742–53. Gnudi G. Analytical relationship between arterial input impedance and the three-element Windkessel series resistance. Med Biol Eng Comput. 1998;36(4):480–4. Kamoi S, Pretty C, Balmer J, Davidson S, Pironet A, Desaive T, Shaw GM, Chase JG. Improved pressure contour analysis for estimating cardiac stroke volume using pulse wave velocity measurement. Biomed Eng Online. 2017;16(1):16–51. Kamoi S, Pretty C, Chiew YS, Davidson S, Pironet A, Desaive T, Shaw GM, Chase JG. Relationship between stroke volume and pulse wave velocity. In: 9th IFAC symposium on biological and medical systems; 2015. p. 285–90. Myers TG, Ripoll VR, de Tejada Cuenca AS, Mitchell SL, McGuinness MJ. Modelling the cardiovascular system for assessing the blood pressure curve. Math Ind Case Stud. 2017;8(1):2–17. Huppert TJ, Allen MS, Benav H, Devor A, Jones P, Dale A, Boas DA. A multi-compartment vascular model for inferring arteriole dilation and cerebral metabolic changes during functional activation. J Cereb Blood Flow Metab. 2007;27(6):1262–79. Huppert TJ, Allen MS, Diamond SG, Boas DA. Estimating cerebral oxygen metabolism from fMRI with a dynamic multi-compartment Windkessel model. Hum Brain Mapp. 2009;30(5):1548–67. Huberts W, Bode AS, Kroon W, Planken RN, Tordoir JH, Fn vdV, Bosboom EM. A pulse wave propagation model to support decision-making in vascular access planning in the clinic. Med Eng Phys. 2012;34(2):233–48. Jalali A, Jones G, Licht D, Nataraj C. Application of mathematical modeling for simulation and analysis of hypoplastic left heart syndrome (HLHS) in pre- and postsurgery conditions. Biomed Res Int. 2015;2015:1–14. Frolov SV, Sindeev SV, Lischouk VA, Gazizova DS, Liepsch D, Balasso A. A lumped parameter model of cardiovascular system with pulsating heart for diagnostic studies. Biomed Res Int. 2016;17(3):1750056–76. Bodley WE. The non-linearities of arterial blood flow. Phys Med Biol. 1971;16(4):663–72. Streeter VL, Keitzer WF, Bohr DF. Pulsatile pressure and flow through distensible vessels. Circ Res. 1963;13(1):3–20. Reymond P, Merenda F, Perren F, Rufenacht D, Stergiopulos N. Validation of a one-dimensional model of the systemic arterial tree. Am J Physiol Heart Cric Physiol. 2009;297(1):208–22. Pan Q, Wang R, Reglin B, Cai G, Yan J, Pries AR, Ning G. A one-dimensional mathematical model for studying the pulsatile flow in microvascular networks. J Biomech Eng. 2014;136(1):011009–19. Sun YH, Anderson TJ, Parker KH, Tyberg JV. Wave-intensity analysis: a new approach to coronary hemodynamics. J Appl Physiol. 2000;89(4):1636–44. Hollander EH, Dobson GM, Wang JJ, Parker KH, Tyberg JV. Direct and series transmission of left atrial pressure perturbations to the pulmonary artery: a study using wave-intensity analysis. Am J Physiol Heart Circ Physiol. 2004;286(1):267–75. Zambanini A, Cunningham SL, Parker KH, Khir AW, McG Thom SA, Hughes AD. Wave-energy patterns in carotid, brachial, and radial arteries: a noninvasive approach using wave-intensity analysis. Am J Physiol Heart Circul Physiol. 2005;289(1):270–6. Charlton P, Aresu M, Spear J, Chowienczyk P, Alastruey J. 
Indices to assess aortic stiffness from the finger photoplethysmogram: in silico and in vivo testing. Artery Res. 2018;24:128. Willemet M, Chowienczyk P, Alastruey J. A database of virtual healthy subjects to assess the accuracy of foot-to-foot pulse wave velocities for estimation of aortic stiffness. Am J Physiol Heart Circul Physiol. 2015;309(4):663–75. Vennin S, Mayer A, Li Y, Fok H, Clapp B, Alastruey J, Chowienczyk P. Noninvasive calculation of the aortic blood pressure waveform from the flow velocity waveform: a proof of concept. Am J Physiol Heart Circul Physiol. 2015;309(5):969–1000. Mukkamala R, Hahn J-O, Inan OT, Mestha LK, Kim C-S, Toreyin H, Kyal S. Toward ubiquitous blood pressure monitoring via pulse transit time: theory and practice. IEEE Trans Biomed Eng. 2015;62(8):1879–901. Gao M, Zhang G, Olivier NB, Mukkamala R. Improved pulse wave velocity estimation using an arterial tube-load model. IEEE Trans Biomed Eng. 2014;61(3):848–58. Swamy G, Olivier B, Mukkamala R. Calculation of forward and backward arterial waves by analysis of two pressure waveforms. IEEE Trans Biomed Eng. 2010;57(12):2833–9. Kim C, Fazeli N, McMurtry MS, Finegan BA, Hahn J-O. Quantification of wave reflection using peripheral blood pressure waveforms. IEEE J Biomed Health Inform. 2015;19(1):309–16. Leung M, Dumont G, Sandor GGS, Potts JE. Estimating arterial stiffness using a transmission line model. In: Conference proceedings of the IEEE Engineering in Medicine and Biology Society; 2006. p. 1375–8. Mo LY, Bascom PA, Ritchie K, McCowan LME. A transmission line modelling approach to the interpretation of uterine Doppler waveforms. Ultrasound Med Biol. 1988;14(5):365–76.

SZ performed the literature review and drafted the manuscript; LX conceived the study and provided the theory of mathematical modeling; LH, HX and YaY checked the manuscript; LQ and YuY gave suggestions and revised the manuscript. All authors read and approved the final manuscript. All authors consent to the publication of this manuscript. This work was supported by the National Natural Science Foundation of China (Nos. 61773110, 61374015, 61202258 and 61701099), the Fundamental Research Funds for the Central Universities (N161904002 and N172008008), and the Open Program of Neusoft Research of Intelligent Healthcare Technology (NRIHTOP1801).

Author affiliations: Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, 110819, China (Shuran Zhou, Lisheng Xu, Liling Hao, Yang Yao, Lin Qi, Yudong Yao); Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, 110167, China (Lisheng Xu, Yudong Yao); Chongqing Key Laboratory of Modern Photoelectric Detection Technology and Instrument, School of Optoelectronic Information, Chongqing University of Technology, Chongqing, 400054, China (Hanguang Xiao). Correspondence to Lisheng Xu.

Zhou, S., Xu, L., Hao, L. et al. A review on low-dimensional physics-based models of systemic arteries: application to estimation of central aortic pressure. BioMed Eng OnLine 18, 41 (2019). https://doi.org/10.1186/s12938-019-0660-3

Keywords: physics-based model; systemic arteries; central aortic pressure; tube-load model
Are there ways to estimate the size of the "whole universe"?

Words escape me, but by "whole universe" (I think) I mean everything that's spatially connected to the observable universe in a conventional sense. If there is a better term for it, please let me know! I was surprised to find out that theories and measurements about the big bang seem to mostly relate to the size of the observable universe via expansion, and if I understand correctly don't say much about the size of the whole universe. Have I got this right? Are there any ways at all to try to estimate the size of the whole universe? Does the term even mean anything? See @RobJeffries' excellent answer to When will the number of stars be a maximum? and @Acccumulation's comment for background on this question.

Tags: universe, cosmology, cosmological-inflation, cosmological-horizon

If you can't observe something you certainly can't measure it, and if you can't make relevant measurements you cannot verify a theory. Without data to base a theory on, you aren't doing much more than elaborate guessing. – StephenG

@StephenG In science we can use one kind of data to formulate a theory that predicts a different kind of data. I think that there are numerous examples of predictions which precede measurements. General Relativity and all its implications might be a good example of that. "We can't measure it so we can't predict or even estimate it" is an oversimplification.

There are unfortunately lots of theories that later measurements proved wrong. For every theory that is proven right (or, more accurately, right within acceptable error) there are many, many more that sounded perfectly reasonable but, when they eventually became testable, proved wrong. Prediction alone is not enough. – StephenG

@StephenG I haven't asked for "right theories". In fact, I don't think there is such a thing. Usually there are theories that seem to work, and ones that don't. Here in my question though, I've only asked "Are there any ways at all to try to estimate..."

A major limitation is that current understanding places that outside the realm of physics. I think the proper term is unfalsifiable. Cosmology indeed has a special status: beyond a certain limit it cannot go, which will always leave room for philosophical questions. – Alchimista

tl;dr: The universe is probably infinite, but if that's the case it's impossible to verify. If the universe is finite, and small enough, and the global curvature is equal to the curvature of our observable universe, then we will be able to estimate its size.

If the global curvature of the universe isn't positive, then the size of the universe is infinite (and it has always been infinite, since the dawn of time at the Big Bang), assuming the topology of the universe is trivial. Measurements of the observable universe indicate that the curvature may be zero (giving a flat universe), or even negative; if the curvature is positive, then its value is very small. We assume that the observable universe is typical of the whole universe, but of course that's impossible to verify. Wikipedia says that experimental data from various independent sources (WMAP, BOOMERanG, and Planck, for example) confirm that the observable universe is flat with only a 0.4% margin of error. The latest research shows that even the most powerful future experiments (like SKA and Planck)
will not be able to distinguish between a flat, open and closed universe if the true value of the cosmological curvature parameter is smaller than $10^{-4}$. If the true value of the cosmological curvature parameter is larger than $10^{-3}$, we will be able to distinguish between these three models even now. So even if the total universe does have positive curvature, its size is much larger than the observable universe, assuming that our observable universe isn't a patch that's abnormally flat. According to How Big is the Entire Universe? by Ethan Siegel, if the curvature is positive, then the diameter of the total universe is at least 14 trillion light years, but that article is from 2012; more recent calculations may give a larger value. – PM 2Ring

The tl;dr is very helpful, thank you! I'm not a cosmologist, so it is helpful to have a guide to where the explanation is going before diving in, and I am sure it will be helpful to some future readers as well!

So far our estimates of the size of the Universe come from what it is expected to be (i.e. calculations) rather than from what we see. But there are several problems.

Age of the Universe

We are pretty sure of the age of the Universe, 13.8 billion years, and of the time when the first light was emitted. This gives us a relatively good idea of the size of the Universe, since we consider that the furthest edge of our Universe is the light it has emitted so far which has gone outward and never hit anything. Assuming a perfect sphere, a.k.a. a perfect Euclidean Universe (which it isn't), you would think the Universe is a sphere with a radius of about 13.8 billion light years, and so its size would be: $$size_{universe} = \frac{4 \pi r^3}{3}$$ with r = 13.8 billion light years.

Hubble's Discovery: Expansion of the Universe

Only we know that our Universe is expanding, and there is nothing that says it's going to stop expanding. This affects the size of the Universe. Unfortunately, when we look around the cosmos, we can't see the expansion happening; we inferred it from what we call redshift. Most of the far away galaxies are moving away from us: the further they are, the longer the wavelength of their light. The speed at which the expansion happens is not yet set in stone, though. Therefore, many argue about the current size of the Universe.

Is there a Horizon?

One problem we currently have is the mass of the observable Universe. We calculated a mass which is much higher than what we see... so we added Dark Matter, matter which we can't see because it doesn't emit any kind of radiation, and it's not a small amount: 85% of the calculated mass is missing!
Another theory in that regard is the possibility that the Expansion of the Universe has already moved most of the would-be visible Universe over the Horizon. When you go to the ocean and look at the water, at some point you see the edge of the Earth; this is called the horizon. Whatever is behind that line, you can't see unless you travel toward the horizon (when we observe the Universe, though, we stay in our solar system...). At such a size, the Universe is expanding faster than the speed of light, so light from beyond the Horizon will never reach us, because it can't outrun the Expansion. One more theory which is going to be hard to prove...

Limited Size?

There is a funny word circulating these days: circumnavigation. This is like Pacman going out on the left side of the screen to re-appear on the right side. There are theories, which so far have been disproved, that the Universe would be finite and that things going in one direction reappear on the other side. (If I'm correct, this is directly linked to string theory.) I guess some people think that this would have been great, since that would mean our Universe has a specific size and things will always be around; they just float and go around, come back where they were... in an infinite loop. So far, we have not been able to see the light of galaxies coming from the opposite direction. This is also quite contradictory with the possibility of a Horizon. I also personally think this is contradictory with the Expansion theory: if the Universe had a limited size, how could it also expand?

So what's the size?

We actually calculate a size from what we can observe, using a radius of 46.5 billion light years. This gives us an estimate of $$3.57\times 10^{80}\ \mathrm{m}^3$$ This size takes our best estimate of the redshift into account. For additional details about how the size is calculated, I would suggest reading the Size of the Observable Universe section on Wikipedia. – Alexis Wilke

Well I recommended something a little shorter but okay, thank you!

@uhoh, I know. I just couldn't see how to get the answer without proper explanations... – Alexis Wilke
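Both answers lend themselves to a quick numerical sanity check. The sketch below first verifies the quoted $3.57\times 10^{80}\ \mathrm{m}^3$ volume for a 46.5-billion-light-year comoving radius, then turns the curvature bounds from the first answer into lower bounds on the curvature radius of a simply connected closed FRW universe via the standard relation $R_k = (c/H_0)/\sqrt{|\Omega_k|}$. The $H_0$ value is an assumed round number, and this is not a reproduction of Siegel's 14-trillion-light-year estimate, which used different inputs.

```python
import math

LY_M = 9.461e15                          # metres per light year
C = 299_792_458.0                        # speed of light, m/s
H0 = 67.7 * 1000 / 3.0857e22             # assumed 67.7 km/s/Mpc, in 1/s

# 1) Volume of a sphere of comoving radius 46.5 billion light years.
r = 46.5e9 * LY_M
print(f"observable volume ~ {4 / 3 * math.pi * r**3:.3e} m^3")   # ~3.57e80

# 2) Lower bound on the curvature radius of a closed universe,
#    R_k = (c/H0) / sqrt(|Omega_k|), for the bounds quoted above.
hubble_radius_ly = C / H0 / LY_M         # ~1.44e10 ly
for omega_k in (4e-3, 1e-3, 1e-4):
    r_curv = hubble_radius_ly / math.sqrt(omega_k)
    print(f"|Omega_k| < {omega_k:g}: curvature radius > {r_curv:.2e} ly")
```

The smaller the curvature bound that experiments can reach, the larger the minimum size a closed universe must have, which is why tightening that bound is the main observational handle on the question.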
Sum of all products of subarrays For any three-dimensional array $A$ of size $n_1 \times n_2 \times n_3$ let $P(A)$ be the product of all its elements, i.e. $$P(A) = \prod_{i_1 = 1}^{n_1} \prod_{i_2 = 1}^{n_2} \prod_{i_3 = 1}^{n_3} a_{i_1,i_2,i_3}.$$ A subarray (cf. submatrix) of $A$ is obtained by deleting rows in any of the three dimensions. For example, from the $3 \times 3 \times 3$ array $B$ with elements $b_{ijk}$: $$B = \left( \begin{array}{ccc} \left( \begin{array}{c} b_{1,1,1} \\ b_{1,1,2} \\ b_{1,1,3} \end{array} \right) & \left( \begin{array}{c} b_{1,2,1} \\ b_{1,2,2} \\ b_{1,2,3} \end{array} \right) & \left( \begin{array}{c} b_{1,3,1} \\ b_{1,3,2} \\ b_{1,3,3} \end{array} \right) \\ \left( \begin{array}{c} b_{2,1,1} \\ b_{2,1,2} \\ b_{2,1,3} \end{array} \right) & \left( \begin{array}{c} b_{2,2,1} \\ b_{2,2,2} \\ b_{2,2,3} \end{array} \right) & \left( \begin{array}{c} b_{2,3,1} \\ b_{2,3,2} \\ b_{2,3,3} \end{array} \right) \\ \left( \begin{array}{c} b_{3,1,1} \\ b_{3,1,2} \\ b_{3,1,3} \end{array} \right) & \left( \begin{array}{c} b_{3,2,1} \\ b_{3,2,2} \\ b_{3,2,3} \end{array} \right) & \left( \begin{array}{c} b_{3,3,1} \\ b_{3,3,2} \\ b_{3,3,3} \end{array} \right) \end{array} \right)$$ one can form a subarray $B'$ of $B$ by deleting the first row in the first dimension, the second row in the second dimension, and keeping all of the third dimension: $$B' = \left( \begin{array}{cc} \left( \begin{array}{c} b_{2,1,1} \\ b_{2,1,2} \\ b_{2,1,3} \end{array} \right) & \left( \begin{array}{c} b_{2,3,1} \\ b_{2,3,2} \\ b_{2,3,3} \end{array} \right) \\ \left( \begin{array}{c} b_{3,1,1} \\ b_{3,1,2} \\ b_{3,1,3} \end{array} \right) & \left( \begin{array}{c} b_{3,3,1} \\ b_{3,3,2} \\ b_{3,3,3} \end{array} \right) \end{array} \right).$$ For a given three-dimensional array $A$ with elements only being $1$ or $-1$, I want to compute the sum of $P(A')$ over all subarrays $A'$ of $A$: $$\sum_{A' \text{ subarray of } A} P(A').$$ Is there a clever way to do this? Can it be done in polynomial time? In other words, I want to compute $$\sum_{S_1,S_2,S_3} \prod_{i_1 \in S_1} \prod_{i_2 \in S_2} \prod_{i_3 \in S_3} a_{i_1,i_2,i_3},$$ where $S_i$ ranges over all non-empty subsets of $[n_i] = \{1,\dots,n_i\}$. Some observations I have made: A subarray is uniquely determined by which rows are kept from the original array. For example, in the case above $B'$ is determined by saying that we keep rows $2,3$ in the first dimension, rows $1,3$ in the second dimension and rows $1,2,3$ in the third dimension. So, for a $n_1 \times n_2 \times n_3$ array $A$, we get one subarray for each triple of subsets $(l_1,l_2,l_3)$, where $l_1 \subseteq \{1, \dots, n_1\}$ and so on. This gives a total of $(2^{n_1} - 1)(2^{n_2} -1)(2^{n_3} -1)$ subarrays, so a naive approach is out of the question if we want to compute the sum in polynomial time. If a subarray is given by $(l_1,l_2,l_3)$ then another subarray is given by the complements $([n_1] \setminus l_1, [n_2] \setminus l_2, [n_3] \setminus l_3)$ (when these are all non-empty), so if you compute $P$ for one subarray, you get the other one "for free", provided you know $P$ for the whole array. Maybe this can be used to do something clever, but I can't see it.
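Before looking for anything clever, it may help to pin the quantity down with a brute-force reference implementation. This is a straightforward exponential-time Python sketch of the sum defined above (the helper names are mine, not from any standard library); it is only useful for cross-checking faster methods on small inputs.

```python
import itertools
import numpy as np

def nonempty_subsets(n):
    """Yield every non-empty subset of {0, ..., n-1} as a list of indices."""
    for r in range(1, n + 1):
        for combo in itertools.combinations(range(n), r):
            yield list(combo)

def subarray_product_sum(a):
    """Brute-force S(A): sum of P(A') over all subarrays A' obtained by
    keeping a non-empty subset of indices in each of the 3 dimensions.
    Runs in time ~2^(n1+n2+n3); reference implementation only."""
    n1, n2, n3 = a.shape
    total = 0
    for s1 in nonempty_subsets(n1):
        for s2 in nonempty_subsets(n2):
            for s3 in nonempty_subsets(n3):
                total += np.prod(a[np.ix_(s1, s2, s3)])
    return total

rng = np.random.default_rng(1)
A = rng.choice([-1, 1], size=(2, 2, 2))
print(subarray_product_sum(A))   # sums the 27 products tabulated below
```

For a $2 \times 2 \times 2$ array this enumerates exactly the $(2^2-1)^3 = 27$ products listed in the table that follows.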
Here is a table of the products for the $2 \times 2 \times 2$ case; each row lists the elements to be multiplied together in one product, and the result we want is the sum of all these products: $$\begin{array}{l} b_{1,1,1} \\ b_{1,1,2} \\ b_{1,2,1} \\ b_{1,2,2} \\ b_{2,1,1} \\ b_{2,1,2} \\ b_{2,2,1} \\ b_{2,2,2} \\ b_{1,1,1}\, b_{1,1,2} \\ b_{1,1,1}\, b_{1,2,1} \\ b_{1,1,1}\, b_{2,1,1} \\ b_{1,1,2}\, b_{1,2,2} \\ b_{1,1,2}\, b_{2,1,2} \\ b_{1,2,1}\, b_{1,2,2} \\ b_{1,2,1}\, b_{2,2,1} \\ b_{1,2,2}\, b_{2,2,2} \\ b_{2,1,1}\, b_{2,1,2} \\ b_{2,1,1}\, b_{2,2,1} \\ b_{2,1,2}\, b_{2,2,2} \\ b_{2,2,1}\, b_{2,2,2} \\ b_{1,1,1}\, b_{1,1,2}\, b_{1,2,1}\, b_{1,2,2} \\ b_{1,1,1}\, b_{1,1,2}\, b_{2,1,1}\, b_{2,1,2} \\ b_{1,1,1}\, b_{1,2,1}\, b_{2,1,1}\, b_{2,2,1} \\ b_{1,1,2}\, b_{1,2,2}\, b_{2,1,2}\, b_{2,2,2} \\ b_{1,2,1}\, b_{1,2,2}\, b_{2,2,1}\, b_{2,2,2} \\ b_{2,1,1}\, b_{2,1,2}\, b_{2,2,1}\, b_{2,2,2} \\ b_{1,1,1}\, b_{1,1,2}\, b_{1,2,1}\, b_{1,2,2}\, b_{2,1,1}\, b_{2,1,2}\, b_{2,2,1}\, b_{2,2,2} \end{array}$$

The one-dimensional case

If we have a one-dimensional array $A$, we can just as well consider $A$ to be a set and compute $$\sum_{\emptyset \neq A' \subseteq A} \prod_{x \in A'} x,$$ which can be computed by considering the polynomial $$p(x) = \prod_{a \in A} (x+a).$$ If we expand this polynomial as $$p(x) = \sum_{j=0}^n \alpha_j x^j,$$ the coefficient $\alpha_j$ is the sum of all products of $n - j$ elements of $A$, so to get the total sum we just need to sum the $\alpha_j$ and subtract $1$ for the empty subset; in other words, the answer is $p(1) - 1$. Computing the coefficients $\alpha_j$ can be done in polynomial time.

Tags: complexity-theory, time-complexity, arrays, arithmetic, matrices

Have you worked out the cases for one-dimensional and two-dimensional $A$? – mhum Dec 29 '15 at 23:38

@mhum, I have added some information on the one-dimensional case. I have not worked out the two-dimensional case. – Calle Dec 30 '15 at 23:08

Ok.
Are you more interested in the case of general $A$ or particularly in the case where $A$ has entries in $\{+1,-1\}$? If the latter, then I believe there is a further simplification for the one-dimensional case: if $A$ has only $+1$ entries, then the sum is $2^n - 1$; if $A$ has even one $-1$, then the sum is $-1$. – mhum Dec 30 '15 at 23:11

@mhum, I am mostly interested in the case with entries from $\{ \pm 1\}$, but I wouldn't mind a solution to a more general case. Yes, the one-dimensional case is special, and there doesn't seem to be such a rule for the two-dimensional case. – Calle Dec 31 '15 at 13:35

I have a trick for the two-dimensional case with $\pm 1$ entries. I'll write up a sketch. – mhum Dec 31 '15 at 23:28

This is a partial solution for the case of a two-dimensional $A$ with entries in $\{+1, -1\}$.

First, some notation. As above, for a given array $A$, let $P(A)$ denote the product of all the entries in $A$. Further, we define $S(A)$ to be the sum of $P(B)$ over all subarrays $B$ of $A$. For a given array $A$ with index sets $I_1 \times I_2 \times \ldots$, we identify a subarray $B$ of $A$ by a tuple of subsets $S_1 \times S_2 \times \dots$ where $\emptyset \neq S_j \subseteq I_j$. We will denote $B = A[S_1, S_2, \ldots]$.

Lemma 1: Let $A$ be a one-dimensional array (i.e. vector) of length $n$ with entries in $\{+1, -1\}$. If $A$ consists solely of $+1$ entries, then $S(A) = 2^n-1$; otherwise, if $A$ contains any $-1$ entries, then $S(A) = -1$.

Proof: Left as an exercise to the reader.

Let $A$ be a $n_1 \times n_2$ two-dimensional array (i.e. matrix) with entries in $\{+1, -1\}$. For a fixed $T \subseteq [n_2]$, we can easily compute the sum $$S_T := \sum_{\emptyset \neq S \subseteq [n_1]} P(A[S, T])$$ by reducing the two-dimensional problem to the one-dimensional one. Note that $$S(A) = \sum_{\emptyset \neq T \subseteq [n_2]} S_T.$$

Let us treat $A$ as a set of $n_2$ vectors, each of length $n_1$. In this way, the subarray $A[[n_1], T]$ (with $T \subseteq [n_2]$) can be considered a set of vectors $\{v_i \mid i \in T\}$. Consider the vector $u$ where $u[j] = \prod_{i \in T} v_i[j]$, i.e. the $j^{th}$ entry of $u$ is the product of the $j^{th}$ entries of all the $v_i$. From Lemma 1, we know that if all the elements of $u$ are $+1$, then the required sum $S_T$ is $2^{n_1}-1$; otherwise, it is $-1$. We'll call $T$ degenerate in the former case and non-degenerate in the latter.

Of course, this alone doesn't quite help us enough, because there are still exponentially many subsets $T \subseteq [n_2]$. So, let's try to understand exactly which subsets $T$ are degenerate. Evidently, $T$ is degenerate iff for all $1\leq j\leq n_1$ there is an even number of $-1$s in $\{v_i[j] \mid i \in T\}$. Now, when we see situations involving the parity of entries over an array, the first thing we should think about is translating things over to $\mathbb{Z}_2$, the field with 2 elements.

Define: For an array $A$ with entries in $\{+1,-1\}$, we define the array $A'$ by mapping each entry in $A$ according to $+1\mapsto 0$ and $-1 \mapsto 1$. We will typically understand the entries of $A'$ to be elements of $\mathbb{Z}_2$.

Lemma 2: A subset $T \subseteq [n_2]$ is degenerate iff the vector sum $\sum_{i\in T} v'_i$ equals the zero vector (over $\mathbb{Z}_2$).

Proof: This follows directly from the fact that an even number of $1$s in $\mathbb{Z}_2$ sums to $0$.
Corollary 2: If $T$ is such that the set of vectors $\{v'_i \mid i\in T\}$ has full rank over $\mathbb{Z}_2$, then every $\emptyset \neq U \subseteq T$ is non-degenerate.

We see now that the degeneracy of a subset $T$ is intimately tied up with the rank of the associated matrix $A'[[n_1], T]$. So, let us take $r$ to be the rank of $A'$ over $\mathbb{Z}_2$. Let us re-order the vectors $\{v_1, v_2,\ldots, v_{n_2}\}$ so that $\{v'_1, v'_2, \ldots, v'_r\}$ forms a maximal, linearly independent subset of $\{v'_1,v'_2,\ldots,v'_{n_2}\}$. By Corollary 2, we know that if $T \subseteq [r]$, then $T$ is non-degenerate. Now, consider some non-empty subset $U$ of $\{r+1, r+2, \ldots, n_2\}$. By some basic properties of linear algebra, we know that there is a unique (possibly empty) subset $V \subseteq [r]$ such that $\sum_{j\in V} v'_j = \sum_{j \in U} v'_j$. And, since we're operating in $\mathbb{Z}_2$, we can re-arrange this equality into $\sum_{j\in U \cup V} v'_j = 0$. So, we conclude that for every non-empty subset $U$ of $\{r+1, r+2, \ldots, n_2\}$ we can construct a unique degenerate subset $T \subseteq [n_2]$. Conversely, we claim that every degenerate subset $T$ is accounted for in this manner (proof left to the reader).

Theorem: Let $A$ be an $n_1 \times n_2$ two-dimensional array with entries in $\{+1, -1\}$. Let $r$ be the rank of $A'$ over $\mathbb{Z}_2$. Then, $$ S(A) = (2^{n_2-r}-1)\cdot(2^{n_1}-1) - (2^{n_2} - 2^{n_2-r}). $$ Proof: The first summand corresponds to the $2^{n_2-r}-1$ degenerate subsets formed by extending the non-empty subsets of $\{r+1, r+2, \ldots, n_2\}$, each of which contributes $2^{n_1}-1$. The second summand corresponds to all the other (non-degenerate) subsets, each of which contributes $-1$.

– mhum

This is a very interesting answer, thank you! – Calle Jan 2 '16 at 16:32

I've so far been unable to find an extension of this trick into three dimensions. But, if this is a practical question rather than a theoretical one, you might be able to get away with doing $2^{n_3}-1$ rank calculations of order $n_1 \times n_2$ if $n_3$ is sufficiently small. – mhum Jan 4 '16 at 20:12
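To complement the answer above, here is a short Python sketch (an editorial addition, not part of mhum's post) that implements the theorem, computing the rank of $A'$ over $\mathbb{Z}_2$ by Gaussian elimination on bitmasks, and checks the closed form against brute-force enumeration on small random $\pm 1$ matrices:

```python
import random
from itertools import combinations

def gf2_rank(vectors):
    """Rank over Z_2 of a collection of vectors encoded as int bitmasks."""
    rank, vecs = 0, list(vectors)
    while vecs:
        pivot = vecs.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot
        # eliminate that bit from every remaining vector
        vecs = [v ^ pivot if v & low else v for v in vecs]
    return rank

def S_closed_form(M):
    """S(A) for a +/-1 matrix M, via the theorem above."""
    n1, n2 = len(M), len(M[0])
    # columns of A': bit i of column j is set iff M[i][j] == -1
    cols = [sum(1 << i for i in range(n1) if M[i][j] == -1) for j in range(n2)]
    r = gf2_rank(cols)
    return (2 ** (n2 - r) - 1) * (2 ** n1 - 1) - (2 ** n2 - 2 ** (n2 - r))

def S_brute_force(M):
    n1, n2 = len(M), len(M[0])
    total = 0
    for i in range(1, n1 + 1):
        for rs in combinations(range(n1), i):
            for j in range(1, n2 + 1):
                for cs in combinations(range(n2), j):
                    p = 1
                    for a in rs:
                        for b in cs:
                            p *= M[a][b]
                    total += p
    return total

random.seed(0)
for _ in range(200):
    n1, n2 = random.randint(1, 4), random.randint(1, 4)
    M = [[random.choice([1, -1]) for _ in range(n2)] for _ in range(n1)]
    assert S_closed_form(M) == S_brute_force(M)
```

The brute force is exponential and only serves as a check; the closed form itself costs a single rank computation over $\mathbb{Z}_2$.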
NSF Public Access Accepted Manuscript

Title: High-precision measurements of half-lives for 69Ge, 73Se, 83Sr, 85mSr, and 63Zn radionuclides relevant to the astrophysical p-process via photoactivation at the Madison Accelerator Laboratory
Hain, T. A.; Pendleton, S. J.; Silano, S. A.; Banu, A. (December 2020, Journal of radioanalytical and nuclear chemistry: an international journal dealing with all aspects and applications of nuclear chemistry, 327(3))
https://doi.org/10.1007/s10967-020-07589-5 | https://par.nsf.gov/biblio/10251461

The ground state half-lives of 69Ge, 73Se, 83Sr, 63Zn, and the half-life of the 1/2− isomer in 85Sr have been measured with high precision using the photoactivation technique at an unconventional bremsstrahlung facility that features a repurposed medical electron linear accelerator. The γ-ray activity was counted over about 6 half-lives with a high-purity germanium detector, enclosed in an ultra-low-background lead shield. The measured half-lives are: T1/2(69Ge) = 38.82 ± 0.07 (stat) ± 0.06 (sys) h; T1/2(73Se) = 7.18 ± 0.02 (stat) ± 0.004 (sys) h; T1/2(83Sr) = 31.87 ± 1.16 (stat) ± 0.42 (sys) h; T1/2(85mSr) = 68.24 ± 0.84 (stat) ± 0.11 (sys) min; T1/2(63Zn) = 38.71 ± 0.25 (stat) ± 0.10 (sys) min. These high-precision half-life measurements will contribute to a more accurate determination of corresponding ground-state photoneutron reaction rates, which are part of a broader effort of constraining statistical nuclear models needed to calculate stellar nuclear reaction rates relevant for the astrophysical p-process nucleosynthesis.

Related records:

Measuring the β-decay Properties of Neutron-rich Exotic Pm, Sm, Eu, and Gd Isotopes to Constrain the Nucleosynthesis Yields in the Rare-earth Region
https://doi.org/10.3847/1538-4357/ac80fc
Kiss, G. G.; Vitéz-Sveiczer, A.; Saito, Y.; Tarifeño-Saldivia, A.; Pallas, M.;
Tain, J. L.; Dillmann, I.; Agramunt, J.; Algora, A.; Domingo-Pardo, C.; et al. (September 2022, The Astrophysical Journal)

The β-delayed neutron-emission probabilities of 28 exotic neutron-rich isotopes of Pm, Sm, Eu, and Gd were measured for the first time at RIKEN Nishina Center using the Advanced Implantation Detector Array (AIDA) and the BRIKEN neutron detector array. The existing β-decay half-life (T1/2) database was significantly increased toward more neutron-rich isotopes, and uncertainties for previously measured values were decreased. The new data not only constrain the theoretical predictions of half-lives and β-delayed neutron-emission probabilities, but also allow for probing the mechanisms of formation of the high-mass wing of the rare-earth peak located at A ≈ 160 in the r-process abundance distribution through astrophysical reaction network calculations. An uncertainty quantification of the calculated abundance patterns with the new data shows a reduction of the uncertainty in the rare-earth peak region. The newly introduced variance-based sensitivity analysis method offers valuable insight into the influence of important nuclear physics inputs on the calculated abundance patterns. The analysis has identified the half-lives of 168Sm and of several gadolinium isotopes as some of the key variables among the current experimental data to understand the remaining abundance uncertainty at A = 167–172.

3D–1D hydro-nucleosynthesis simulations – I. Advective–reactive post-processing method and its application to H ingestion into He-shell flash convection in rapidly accreting white dwarfs
https://doi.org/10.1093/mnras/stab500
Stephens, David; Herwig, Falk; Woodward, Paul; Denissenkov, Pavel; Andrassy, Robert; Mao, Huaqing (April 2021, Monthly Notices of the Royal Astronomical Society)

We present two mixing models for post-processing of 3D hydrodynamic simulations applied to convective–reactive i-process nucleosynthesis in a rapidly accreting white dwarf (RAWD) with [Fe/H] = −2.6, in which H is ingested into a convective He shell. A 1D advective two-stream model adopts physically motivated radial and horizontal mixing coefficients constrained by 3D hydrodynamic simulations. A simpler approach uses diffusion coefficients calculated from the same simulations. All 3D simulations include the energy feedback of the 12C(p, γ)13N reaction from the H entrainment. Global oscillations of shell H ingestion in two of the RAWD simulations cause bursts of entrainment of H and non-radial hydrodynamic feedback. With the same nuclear network as in the 3D simulations, the 1D advective two-stream model reproduces the rate and location of the H burning within the He shell closely matching the 3D simulation predictions, as well as qualitatively displaying the asymmetry of the XH profiles between the upstream and downstream. With a full i-process network the advective mixing model captures the difference in the n-capture nucleosynthesis in the upstream and downstream. For example, 89Kr and 90Kr with half-lives of $3.18\,\mathrm{min}$ and $32.3\,\mathrm{s}$ differ by a factor 2–10 in the two streams. In this particular application, the diffusion approach provides globally the same abundance distribution as the advective two-stream mixing model.
The resulting i-process yields are in excellent agreement with observations of the exemplary CEMP-r/s star CS31062-050.

Cross-conjugation controls the stabilities and photophysical properties of heteroazoarene photoswitches
https://doi.org/10.1039/d1ob02026a
Adrion, Daniel M.; Lopez, Steven A. (January 2022, Organic & Biomolecular Chemistry)

Azoarene photoswitches are versatile molecules that interconvert from their E-isomer to their Z-isomer with light. Azobenzene is a prototypical photoswitch but its derivatives can be poorly suited for in vivo applications such as photopharmacology due to undesired photochemical reactions promoted by ultraviolet light and the relatively short half-life (t1/2) of the Z-isomer (2 days). Experimental and computational studies suggest that these properties (λmax of the E isomer and t1/2 of the Z-isomer) are inversely related. We identified isomeric azobisthiophenes and azobisfurans from a high-throughput screening study of 1540 azoarenes as photoswitch candidates with improved λmax and t1/2 values relative to azobenzene. We used density functional theory to predict the activation free energies and vertical excitation energies of the E- and Z-isomers of 2,2- and 3,3-substituted azobisthiophenes and azobisfurans. The half-lives depend on whether the heterocycles are π-conjugated or cross-conjugated with the diazo π-bond. The 2,2-substituted azoarenes both have t1/2 values on the scale of 1 hour, while the 3,3-analogues have computed half-lives of 40 and 230 years (thiophene and furan, respectively). The 2,2-substituted heteroazoarenes have significantly higher λmax absorptions than their 3,3-substituted analogues: 76 nm for azofuran and 77 nm for azothiophene.

Carbon–titanium dioxide (C/TiO2) nanofiber composites for chemical oxidation of emerging organic contaminants in reactive filtration applications
https://doi.org/10.1039/D0EN00975J
Greenstein, Katherine E.; Nagorzanski, Matthew R.; Kelsay, Bailey; Verdugo, Edgard M.; Myung, Nosang V.; Parkin, Gene F.; Cwiertny, David M. (March 2021, Environmental Science: Nano)

The recalcitrance of some emerging organic contaminants through conventional water treatment systems may necessitate advanced technologies that use highly reactive, non-specific hydroxyl radicals. Here, polyacrylonitrile (PAN) nanofibers with embedded titanium dioxide (TiO2) nanoparticles were synthesized via electrospinning and subsequently carbonized to produce mechanically stable carbon/TiO2 (C/TiO2) nanofiber composite filters. Nanofiber composites were optimized for reactivity in flow-through treatment systems by varying their mass loading of TiO2, adding phthalic acid (PTA) as a dispersing agent for nanoparticles in electrospinning sol gels, comparing different types of commercially available TiO2 nanoparticles (Aeroxide® P25 and 5 nm anatase nanoparticles), and through functionalization with gold (Au/TiO2) as a co-catalyst. High bulk and surface TiO2 concentrations correspond with enhanced nanofiber reactivity, while PTA as a dispersant makes it possible to fabricate materials at very high P25 loadings (∼80 wt%).
The optimal composite formulation (50 wt% P25 with 2.5 wt% PTA) combining high reactivity and material stability was then tested across a range of variables relevant to filtration applications including filter thickness (300–1800 μm), permeate flux (540–2700 L m−2 h−1), incident light energy (UV-254 and simulated sunlight), flow configuration (dead-end and cross-flow filtration), presence of potentially interfering co-solutes (dissolved organic matter and carbonate alkalinity), and across a suite of eight organic micropollutants (atrazine, benzotriazole, caffeine, carbamazepine, DEET, metoprolol, naproxen, and sulfamethoxazole). During cross-flow recirculation under UV-irradiation, 300 μm thick filters (30 mg total mass) produced micropollutant half-lives ∼45 min, with 40–90% removal (from an initial 0.5 μM concentration) in a single pass through the filter. The initial reaction rate coefficients of micropollutant transformation did not clearly correlate with reported second order rate coefficients for reaction with hydroxyl radical (kOH), implying that processes other than reaction with photogenerated hydroxyl radical (e.g., surface sorption) may control the overall rate of transformation. The materials developed herein represent a promising next-generation filtration technology that integrates photocatalytic activity in a robust platform for nanomaterial-enabled water treatment.
January 2017, Volume 16, Issue 1

Global weak solution to 3D compressible flows with density-dependent viscosity and free boundary
Mei Wang, Zilai Li and Zhenhua Guo
2017, 16(1): 1-24 doi: 10.3934/cpaa.2017001
In this paper, we obtain the global weak solution to the 3D spherically symmetric compressible isentropic Navier-Stokes equations with arbitrarily large, vacuum data and free boundary when the shear viscosity $\mu$ is a positive constant and the bulk viscosity $\lambda(\rho)=\rho^\beta$ with $\beta>0$. The analysis of the upper and lower bound of the density is based on some well-chosen functionals. In addition, the free boundary can be shown to expand outward at an algebraic rate in time.
Mei Wang, Zilai Li, Zhenhua Guo. Global weak solution to 3D compressible flows with density-dependent viscosity and free boundary. Communications on Pure & Applied Analysis, 2017, 16(1): 1-24. doi: 10.3934/cpaa.2017001.

Invariant tori of a nonlinear Schrödinger equation with quasi-periodically unbounded perturbations
Jie Liu and Jianguo Si
2017, 16(1): 25-68 doi: 10.3934/cpaa.2017002
This paper is concerned with the derivative nonlinear Schrödinger equation with quasi-periodic forcing under periodic boundary conditions $$ {\rm{i}} u_t+u_{xx}+{\rm{i}} (B+\epsilon g(\beta t))(f(|u|^2)u)_x=0, \quad x\in \mathbb{T}:=\mathbb{R}/2\pi\mathbb{Z}. $$ Assume that the frequency vector $\beta$ is co-linear with a fixed Diophantine vector $\bar{\beta}\in \mathbb{R}^{m}$, that is, $\beta=\lambda \bar{\beta}$, $\lambda \in [1/2, 3/2]$. We show that the above equation possesses a Cantorian branch of invariant $n$-tori and admits many smooth quasi-periodic solutions with $(m+n)$ non-resonant frequencies $(\lambda\bar{\beta}, \omega_{\ast})$. The proof is based on a Kolmogorov-Arnold-Moser (KAM) iterative procedure for quasi-periodically unbounded vector fields and partial Birkhoff normal form.
Jie Liu, Jianguo Si. Invariant tori of a nonlinear Schrödinger equation with quasi-periodically unbounded perturbations. Communications on Pure & Applied Analysis, 2017, 16(1): 25-68. doi: 10.3934/cpaa.2017002.

An isomorphism theorem for parabolic problems in Hörmander spaces and its applications
Valerii Los, Vladimir A. Mikhailets and Aleksandr A. Murach
We investigate a general parabolic initial-boundary value problem with zero Cauchy data in some anisotropic Hörmander inner product spaces. We prove that the operators corresponding to this problem are isomorphisms between appropriate Hörmander spaces. As an application of this result, we establish a theorem on the local increase in regularity of solutions to the problem. We also obtain new sufficient conditions under which the generalized derivatives, of a given order, of the solutions should be continuous.
Valerii Los, Vladimir A. Mikhailets, Aleksandr A. Murach. An isomorphism theorem for parabolic problems in Hörmander spaces and its applications. Communications on Pure & Applied Analysis, 2017, 16(1): 69-98. doi: 10.3934/cpaa.2017003.

Existence and symmetry result for fractional p-Laplacian in $\mathbb{R}^{n}$
CÉSAR E.
TORRES LEDESMA
2017, 16(1): 99-114 doi: 10.3934/cpaa.2017004
In this article we are interested in the following fractional $p$-Laplacian equation in $\mathbb{R}^n$ $$ (-\Delta)_{p}^{s}u + V(x)|u|^{p-2}u = f(x,u) \mbox{ in } \mathbb{R}^{n}, $$ where $p\geq 2$, $0 < s < 1$, $n\geq 2$ and $f$ is $p$-superlinear. By using the mountain pass theorem with the Cerami condition we prove the existence of a nontrivial solution. Furthermore, we show that this solution is radially symmetric.
CÉSAR E. TORRES LEDESMA. Existence and symmetry result for fractional p-Laplacian in $\mathbb{R}^{n}$. Communications on Pure & Applied Analysis, 2017, 16(1): 99-114. doi: 10.3934/cpaa.2017004.

A complete classification of ground-states for a coupled nonlinear Schrödinger system
Chuangye Liu and Zhi-Qiang Wang
In this paper, we establish the existence of nontrivial ground-state solutions for a coupled nonlinear Schrödinger system $$-\Delta u_j+ u_j=\sum_{i=1}^m b_{ij}u_i^2u_j, \quad\text{in}\ \mathbb{R}^n, \qquad u_j(x)\to 0\ \text{as}\ |x|\to \infty, \quad j=1,2,\cdots, m,$$ where $n=1, 2, 3$, $m\geq 2$ and $b_{ij}$ are positive constants satisfying $b_{ij}=b_{ji}$. By nontrivial we mean a solution that has all components non-zero. Due to possible systems collapsing it is important to classify ground state solutions. For $m=3$, we get a complete picture that describes whether nontrivial ground-state solutions exist or not for all possible cases according to some algebraic conditions of the matrix $B = (b_{ij})$. In particular, there is a nontrivial ground-state solution provided that all coupling constants $b_{ij}, i\neq j$ are sufficiently large, as opposed to cases in which any ground-state solution has at least a zero component when $b_{ij}, i\neq j$ are all sufficiently small. Moreover, we prove that any ground-state solution is synchronized when the matrix $B=(b_{ij})$ is positive semi-definite.
Chuangye Liu, Zhi-Qiang Wang. A complete classification of ground-states for a coupled nonlinear Schrödinger system. Communications on Pure & Applied Analysis, 2017, 16(1): 115-130. doi: 10.3934/cpaa.2017005.

Asymptotic behavior and uniqueness of traveling wave fronts in a delayed nonlocal dispersal competitive system
Kun Li, Jianhua Huang and Xiong Li
This paper is concerned with the asymptotic behavior and uniqueness of traveling wave fronts connecting two half-positive equilibria in a delayed nonlocal dispersal competitive system. We first prove the existence results by applying abstract theories. And then, we show that the traveling wave fronts decay exponentially at both infinities. At last, the strict monotonicity and uniqueness of traveling wave fronts are obtained by using the sliding method in the absence of intraspecific competitive delays. Based on the uniqueness, the exact decay rate of the stronger competitor is established under certain conditions.
Kun Li, Jianhua Huang, Xiong Li. Asymptotic behavior and uniqueness of traveling wave fronts in a delayed nonlocal dispersal competitive system. Communications on Pure & Applied Analysis, 2017, 16(1): 131-150. doi: 10.3934/cpaa.2017006.

A comparison between random and stochastic modeling for a SIR model
Tomás Caraballo and Renato Colucci
In this article, a random and a stochastic version of a SIR nonautonomous model previously introduced in [19] are considered.
In particular, the existence of a random attractor is proved for the random model and the persistence of the disease is analyzed as well. In the stochastic case, we consider some environmental effect on the model; in fact, we assume that one of the coefficients of the system is affected by some stochastic perturbation, and analyze the asymptotic behavior of the solutions. The paper is concluded with a comparison between the two different modeling strategies.
Tomás Caraballo, Renato Colucci. A comparison between random and stochastic modeling for a SIR model. Communications on Pure & Applied Analysis, 2017, 16(1): 151-162. doi: 10.3934/cpaa.2017007.

Resonant problems for fractional Laplacian
Yutong Chen and Jiabao Su
In this paper we consider the following fractional Laplacian equation $$ \left\{\begin{array}{ll} (-\Delta)^{s} u=g(x, u) & x\in\Omega,\\ u=0, & x \in \mathbb{R}^{N}\setminus\Omega,\end{array} \right. $$ where $s\in (0, 1)$ is fixed, $\Omega$ is an open bounded set of $\mathbb{R}^{N}$, $N > 2s$, with smooth boundary, and $(-\Delta)^{s}$ is the fractional Laplace operator. By Morse theory we obtain the existence of nontrivial weak solutions when the problem is resonant at both infinity and zero.
Yutong Chen, Jiabao Su. Resonant problems for fractional Laplacian. Communications on Pure & Applied Analysis, 2017, 16(1): 163-188. doi: 10.3934/cpaa.2017008.

Continuity of cost functional and optimal feedback controls for the stochastic Navier Stokes equation in 2D
Kerem Uǧurlu
We show the continuity of a specific cost functional $J(\phi) =\mathbb{E} \sup_{t \in [0, T]}(\varphi(\mathcal{L}[t, u_\phi(t), \phi(t)]))$ of the SNSE in 2D on an open bounded nonperiodic domain $\mathcal{O}$ with respect to a special set of feedback controls $\{\phi_n\}_{n \geq 0}$, where $\varphi(x) =\log(1 + x)^{1-\epsilon}$ with $0 < \epsilon < 1$.
Kerem Uǧurlu. Continuity of cost functional and optimal feedback controls for the stochastic Navier Stokes equation in 2D. Communications on Pure & Applied Analysis, 2017, 16(1): 189-208. doi: 10.3934/cpaa.2017009.

Zero dissipation limit to rarefaction wave with vacuum for a one-dimensional compressible non-Newtonian fluid
Li Fang and Zhenhua Guo
In this paper, we study the zero dissipation limit toward rarefaction waves for solutions to a one-dimensional compressible non-Newtonian fluid for general initial data, whose far fields are connected by a rarefaction wave to the corresponding Euler equations with one end state being vacuum. Given a rarefaction wave with one-side vacuum state to the compressible Euler equations, we construct a sequence of solutions to the one-dimensional compressible non-Newtonian fluid which converge to the above rarefaction wave with vacuum as the viscosity coefficient $\epsilon$ tends to zero. Moreover, the uniform convergence rate is obtained, based on one fact that the viscosity constant can control the degeneracies caused by the vacuum in rarefaction waves and another fact that the energy estimates are obtained under some a priori assumption.
Li Fang, Zhenhua Guo. Zero dissipation limit to rarefaction wave with vacuum for a one-dimensional compressible non-Newtonian fluid. Communications on Pure & Applied Analysis, 2017, 16(1): 209-242. doi: 10.3934/cpaa.2017010.
Trudinger-Moser type inequality and existence of solution for perturbed non-local elliptic operators with exponential nonlinearity
Anouar Bahrouni
In this paper we consider the following perturbed nonlocal problem with exponential nonlinearity $$\begin{cases}-\mathcal{L}_{K}u+ \left|u\right|^{p-2}u+h(u)= f &\mbox{in } \Omega,\\ u=0 &\mbox{in } \mathbb{R}^{N}\setminus \Omega,\end{cases} \tag{1}$$ where $s\in (0, 1)$, $N=ps$, $p\geq 2$ and $f\in L^{\infty}(\mathbb{R}^{N})$. First, we generalize a suitable Trudinger-Moser inequality to a fractional functional space. Then, using Ekeland's variational principle, we prove the existence of a solution of problem (1).
Anouar Bahrouni. Trudinger-Moser type inequality and existence of solution for perturbed non-local elliptic operators with exponential nonlinearity. Communications on Pure & Applied Analysis, 2017, 16(1): 243-252. doi: 10.3934/cpaa.2017011.

Higher order asymptotic for Burgers equation and Adhesion model
Engu Satynarayana, Manas R. Sahoo and Manasa M
This paper is focused on the study of the large time asymptotic for solutions to the viscous Burgers equation and also to the adhesion model via heat equation. Using a generalization of the truncated moment problem to a complex measure space, we construct an asymptotic N-wave approximate solution to the heat equation subject to the initial data whose moments exist up to the order $2n+m$ and whose $i$-th order moment vanishes, for $i=0, 1, 2,\dots, m-1$. We provide a different proof for a theorem given by Duoandikoetxea and Zuazua [3], which plays a crucial role in error estimations. In addition to this we describe a simple way to construct an initial data in Schwartz class whose $m$ moments are equal to the $m$ moments of given initial data.
Engu Satynarayana, Manas R. Sahoo, Manasa M. Higher order asymptotic for Burgers equation and Adhesion model. Communications on Pure & Applied Analysis, 2017, 16(1): 253-272. doi: 10.3934/cpaa.2017012.

Quasineutral limit for the quantum Navier-Stokes-Poisson equations
Min Li, Xueke Pu and Shu Wang
In this paper, we study the quasineutral limit and asymptotic behaviors for the quantum Navier-Stokes-Poisson equation. We apply a formal expansion according to Debye length and derive the neutral incompressible Navier-Stokes equation. To establish this limit mathematically rigorously, we derive uniform (in Debye length) estimates for the remainders, for well-prepared initial data. It is demonstrated that the quantum effect does play important roles in the estimates and the norm introduced depends on the Planck constant $\hbar>0$.
Min Li, Xueke Pu, Shu Wang. Quasineutral limit for the quantum Navier-Stokes-Poisson equations. Communications on Pure & Applied Analysis, 2017, 16(1): 273-294. doi: 10.3934/cpaa.2017013.

The vanishing pressure limits of Riemann solutions to the Chaplygin gas equations with a source term
Lihui Guo, Tong Li and Gan Yin
2017, 16(1): 295-310 doi: 10.3934/cpaa.2017014
We study the vanishing pressure limits of Riemann solutions to the Chaplygin gas equations with a source term. The phenomena of concentration and cavitation to Chaplygin gas equations with a friction term are identified and analyzed as the pressure vanishes. Due to the influence of the source term, the Riemann solutions are no longer self-similar.
When the pressure vanishes, the Riemann solutions to the inhomogeneous Chaplygin gas equations converge to the Riemann solutions to the pressureless gas dynamics model with a friction term.
Lihui Guo, Tong Li, Gan Yin. The vanishing pressure limits of Riemann solutions to the Chaplygin gas equations with a source term. Communications on Pure & Applied Analysis, 2017, 16(1): 295-310. doi: 10.3934/cpaa.2017014.

Struwe's decomposition for a polyharmonic operator on a compact Riemannian manifold with or without boundary
Saikat Mazumdar
Given a high-order elliptic operator on a compact manifold with or without boundary, we perform the decomposition of Palais-Smale sequences for a nonlinear problem as a sum of bubbles. This is a generalization of the celebrated 1984 result of Struwe [19]. Unlike the case of second-order operators, bubbles close to the boundary might appear. Our result includes the case of a smooth bounded domain of $\mathbb{R}^{n}$.
Saikat Mazumdar. Struwe's decomposition for a polyharmonic operator on a compact Riemannian manifold with or without boundary. Communications on Pure & Applied Analysis, 2017, 16(1): 311-330. doi: 10.3934/cpaa.2017015.

Periodic solutions for nonlocal fractional equations
Vincenzo Ambrosio and Giovanni Molica Bisci
The purpose of this paper is to study the existence of (weak) periodic solutions for nonlocal fractional equations with periodic boundary conditions. These equations have a variational structure and, by applying a critical point result coming out from a classical Pucci-Serrin theorem in addition to a local minimum result for differentiable functionals due to Ricceri, we are able to prove the existence of at least two periodic solutions for the treated problems. As far as we know, all these results are new.
Vincenzo Ambrosio, Giovanni Molica Bisci. Periodic solutions for nonlocal fractional equations. Communications on Pure & Applied Analysis, 2017, 16(1): 331-344. doi: 10.3934/cpaa.2017016.

Global well posedness for the ghost effect system
Bilal Al Taki
The aim of this paper is to discuss the issue of global existence of weak solutions of the so-called ghost effect system, which has been derived recently in [C. D. LEVERMORE, W. SUN, K. TRIVISA, SIAM J. Math. Anal. 2012]. We extend the local existence of solutions proved in [C. D. LEVERMORE, W. SUN, K. TRIVISA, Indiana Univ. J., 2011] to a global existence result. The key tool in this paper is a new functional inequality inspired by the one proposed in [A. JÜNGEL, D. MATTHES, SIAM J. Math. Anal., 2008]; such an inequality was adapted in [D. BRESCH, A. VASSEUR, C. YU, 2016] to be useful for compressible Navier-Stokes equations with degenerate viscosities. Our strategy to prove the global existence of solutions builds upon the framework developed in [D. BRESCH, V. GIOVANGILI, E. ZATORSKA, J. Math. Pures Appl., 2015] for a low Mach number system.
Bilal Al Taki. Global well posedness for the ghost effect system. Communications on Pure & Applied Analysis, 2017, 16(1): 345-368. doi: 10.3934/cpaa.2017017.

Erratum: "On the nonlocal Cahn-Hilliard-Brinkman and Cahn-Hilliard-Hele-Shaw systems" [Comm. Pure Appl. Anal. 15 (2016), 299--317]
FRANCESCO DELLA PORTA and Maurizio Grasselli
FRANCESCO DELLA PORTA, Maurizio Grasselli. Erratum: "On the nonlocal Cahn-Hilliard-Brinkman and Cahn-Hilliard-Hele-Shaw systems" [Comm. Pure Appl. Anal. 15 (2016), 299--317]. Communications on Pure & Applied Analysis, 2017, 16(1): 369-372. doi: 10.3934/cpaa.2017018.
Platelet lysate as a novel serum-free media supplement for the culture of equine bone marrow-derived mesenchymal stem cells
Maria C. Naskou1, Scarlett M. Sumner1, Anna Chocallo1, Hannah Kemelmakher1, Merrilee Thoresen1, Ian Copland2, Jacques Galipeau3 & John F. Peroni1
Stem Cell Research & Therapy volume 9, Article number: 75 (2018)

Mesenchymal stem cells (MSCs) produced for clinical purposes rely on culture media containing fetal bovine serum (FBS) which is xenogeneic and has the potential to significantly alter the MSC phenotype, rendering these cells immunogenic. As a result of bovine-derived exogenous proteins expressed on the cell surface, MSCs may be recognized by the host immune system as non-self and be rejected. Platelet lysate (PL) may obviate some of these concerns and shows promising results in human medicine as a possible alternative to FBS. Our goal was to evaluate the use of equine platelet lysate (ePL) pooled from donor horses in place of FBS to culture equine MSCs. We hypothesized that ePL, produced following apheresis, will function as the sole media supplement to accelerate the expansion of equine bone marrow-derived MSCs without altering their phenotype and their immunomodulatory capacity. Platelet concentrate was obtained via plateletpheresis and ePL were produced via freeze-thaw and centrifugation cycles. Population doublings (PD) and doubling time (DT) of bone marrow-derived MSCs (n = 3) cultured with FBS or ePL media were calculated. Cell viability, immunophenotypic analysis, and trilineage differentiation capacity of MSCs were assessed accordingly. To assess the ability of MSCs to modulate inflammatory responses, E. coli lipopolysaccharide (LPS)-stimulated monocytes were cocultured with MSCs cultured in the two different media formulations, and cell culture supernatants were assayed for the production of tumor necrosis factor (TNF)-α. Our results showed that MSCs cultured in ePL media exhibited similar proliferation rates (PD and DT) compared with those cultured in FBS at individual time points. MSCs cultured in ePL showed a statistically significant increased viability following a single washing step, expressed similar levels of MSC markers compared to FBS, and were able to differentiate towards the three lineages. Finally, MSCs cultured in ePL efficiently suppressed the release of TNF-α when exposed to LPS-stimulated monocytes similar to those cultured in FBS. ePL has the potential to be used for the expansion of MSCs before clinical application, avoiding the concerns associated with the use of FBS.

Mesenchymal stem cells (MSCs) are multipotent self-renewing cells that have been implicated in orchestrating the repair of damaged tissues by modulating the endogenous repair process through interacting with the inflammatory response of the injured tissue [1,2,3]. The preparation of MSCs before clinical application requires their primary isolation and ex vivo expansion to propagate an adequate number of cells for transplantation. Fetal bovine serum (FBS) is the current gold standard culture additive used as a source of growth factors, hormones, and vital nutrients to support MSC expansion in the laboratory [4,5,6,7,8,9]. Unfortunately, there is concerning evidence to show that FBS contains endotoxins (such as lipopolysaccharide (LPS)) and xenogeneic antigens that may alter the phenotype of MSCs grown in FBS, rendering these cells immunogenic [10,11,12].
This may prompt the immune system to reject MSCs following introduction into the recipient, even when the delivered MSCs are autologous to the host. The transplantation of MSCs cultured with traditional culture techniques is also a potential route of transmission of FBS-derived animal pathogens, such as prions and viruses [13,14,15]. Furthermore, the Food and Drug Administration (FDA) has encouraged the use of xenoprotein-free culture conditions for the expansion of MSCs in humans to avoid adverse effects related to FBS [16]. These facts, together with the rising cost of FBS and ethical concerns related to the manufacturing of FBS, underpin the rationale behind the development of FBS-free media to support the expansion of MSCs for clinical purposes. To this end, several studies have investigated the use of platelet-derived products, such as platelet lysate (PL), obtained following the lysis of platelets from platelet concentrates or platelet-rich plasma (PRP) as a media supplement for the in vitro culture of various types of cells [4]. Human PL is a reportedly superior alternative to FBS and serum for the ex vivo expansion of MSCs which, in the presence of PL, maintain their differentiation potential, immune-phenotype, and immunomodulatory activities [9, 17,18,19]. In addition to the major role platelets play in hemostasis, they are a principal source of growth factors such as platelet-derived growth factor (PDGF), transforming growth factor (TGF)-β1, vascular endothelial growth factor (VEGF), epidermal growth factor (EGF), attachment factors, and enzymes found in serum. These factors can enhance the recruitment, proliferation, and differentiation of MSCs, but also exhibit anti-inflammatory and angiogenic properties [20,21,22]. In veterinary medicine, equine PL (ePL) obtained from whole blood via two-step centrifugation can be used instead of FBS for the culture of equine bone marrow-derived MSCs and the short-term expansion of equine cord blood MSCs [6, 23]. Differences in the preparation process of PL such as platelet separation methods (apheresis versus two-step centrifugation), platelet activation (freeze/thaw cycles versus calcium chloride), and removal of platelet fragments, can affect PL growth factor concentrations and therefore influence the proliferative rate and the differentiation capacity of MSCs [5, 24, 25]. We have recently shown that ePL can be safely generated after performing plateletpheresis in awake and standing horses [26]. Furthermore, our studies suggested that ePL suppresses the release of proinflammatory cytokines from LPS-stimulated equine monocytes and that it can be successfully used as a media supplement for the culture of cells without triggering immune responses [27]. There are, however, no detailed long-term MSC functionality studies evaluating the use of ePL obtained from platelet concentrates via plateletpheresis as a media supplement for the ex vivo culture of equine bone marrow-derived MSCs. Therefore, the main objective of this study was to evaluate the use of ePL pooled from donor horses as a homologous media supplement to rapidly expand equine bone marrow-derived MSCs in culture. As part of the evaluation, we wanted to compare the phenotypic characteristics, trilineage differentiation, and immunomodulatory properties of equine MSCs cultured in ePL compared to MSCs cultured in FBS. 
We hypothesized that ePL, produced via apheresis, could function as the sole media supplement to culture expand equine bone marrow-derived MSCs in a manner comparable to FBS and without altering their phenotype, trilineage differentiation, or immunomodulatory capacities.

Preparation of ePL

The preparation of ePL was conducted as previously described [26]. Briefly, platelet concentrates were obtained following plateletpheresis (COBE Spectra Dual-Needle) performed in five mix-breed horses belonging to the University of Georgia equine blood donors. The study protocol (IACUC approval #A2015 02–023-Y1-A1) was approved by the University of Georgia Institutional Animal Care and Use Committee. The platelets were fractured using two freeze-thaw cycles followed by three centrifugation cycles. The ePL was then filtered through a 40-μm Falcon strainer (Corning Inc., Corning, New York) and a 0.45-μm cellulose acetate membrane (EMD Millipore, Billerica, Massachusetts) to remove cellular debris. An equal portion of lysates from each horse was combined to obtain a pooled product after thawing at 37 °C, thorough mixing, and centrifugation at 3485 g for 10 min at 4 °C [28].

Isolation and culture of equine bone marrow-derived MSCs with FBS or ePL media supplement

Bone marrow was obtained from three healthy mix-breed horses ranging from 3 to 20 years old and cultured under standard conditions. Specifically, bone marrow was aseptically harvested from the sternum of three horses using a bone marrow collection device (Jamshidi, Jorgensen Laboratories, Inc., Loveland, CO) and 15 ml of marrow was aspirated in two syringes containing 2500 units of heparin (Hospira, West-Ward, Eatontown, NJ). Bone marrow-derived MSCs were expanded according to standardized plate adherence techniques [29, 30]. Bone marrow aspirates from each horse were mixed thoroughly and plated equally in two 150-mm culture dishes (TPP, Trasadingen, Switzerland). MSC basal media, containing low-glucose Dulbecco's modified Eagle's medium with 4.5 g/L glucose and sodium pyruvate without l-glutamine (DMEM; Cellgro, Mediatech Inc., Manassas, VA), 2 mM l-glutamine (Gibco, Invitrogen, Auckland, New Zealand), 50 U/ml penicillin (Gibco, Invitrogen), 50 μg/ml streptomycin (Gibco, Invitrogen) and 10% FBS, was added to each plate and MSCs were cultured under standard conditions (37 °C and 5% CO2). Bone marrow was replated and fresh standard cell culture media was added every 3 days until the formation of adherent bone marrow MSC colonies was observed. Upon reaching 80–90% confluency, the cells were harvested with 0.05% trypsin-EDTA (Gibco, Invitrogen), counted using a hemocytometer, and cryopreserved. MSCs were thawed and reseeded as "Passage 1" (P1) at a density of 6000 cells/cm2 in the presence of standard cell culture media and allowed to recover. Upon reaching 80–90% confluency, the cells were passaged via digestion with 0.05% trypsin-EDTA (Gibco, Invitrogen) and counted with an automated cell counter (Bio-Rad, Hercules, CA). Experimental cell lines were established by plating MSCs (P2; n = 3) at a density of 6000 cells/cm2 in 150-mm culture dishes with MSC basal media supplemented with either 10% FBS (FBS culture media) or 10% ePL (ePL culture media). Heparin (2 IU/ml) was added to the ePL culture media to prevent in vitro gel formation. Cells were incubated at 37 °C with 5% CO2 and media were replaced every 2 days.
For the subsequent passages, cells, upon reaching 80% confluence, were imaged with an inverted microscope, passaged, replated, and cryopreserved with either FBS or ePL culture media containing 10% DMSO for future use.

Cell growth kinetics: population doublings and doubling time

For long-term cell proliferation studies, MSCs from three individual horses (P4; n = 3) were plated in triplicate at a density of 1000 cells/cm2 in six-well culture plates (Corning™ Costar™, Thermo Scientific, Hampton, NH) with 10% FBS or 10% ePL culture media and permitted to grow under standard cell culture conditions for 32 days. Every 4 days, MSCs in each media formulation were harvested via digestion with 0.05% trypsin and counted via an automatic cell counter (Bio-Rad Laboratories, Hercules, CA). Population doublings (PD) and doubling time (DT) for each passage were calculated using the following two formulae [31]:

$$ PD = \ln(N_f/N_i)/\ln 2 $$

$$ DT = CT/PD $$

where DT is the doubling time in days, CT is the cell culture time, PD is the population doublings, $N_f$ is the final number of cells, and $N_i$ is the initial number of cells. All counts were performed in triplicate.
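As a worked illustration of the two formulae above (an editorial sketch; the cell counts below are made-up numbers, not data from this study):

```python
import math

def population_doublings(n_initial, n_final):
    """PD = ln(N_f / N_i) / ln 2"""
    return math.log(n_final / n_initial) / math.log(2)

def doubling_time(culture_time_days, pd):
    """DT = CT / PD, in days per doubling"""
    return culture_time_days / pd

# hypothetical example: 9,570 cells grow to 153,120 cells over 4 days
pd = population_doublings(9_570, 153_120)
dt = doubling_time(4, pd)
print(f"PD = {pd:.2f}, DT = {dt:.2f} days")  # PD = 4.00, DT = 1.00
```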
Cell viability

Cell viability was assessed both with the trypan blue exclusion test and Live/Dead flow cytometry. For the flow cytometry analysis, MSCs in each media formulation were harvested at P5 via digestion with 0.05% trypsin and transferred into a 50-ml conical tube for centrifugation at 200 g for 4 min at room temperature. Following aspiration of excess media, cells were either washed three times with phosphate-buffered saline (PBS) with calcium and magnesium (+/+) and PBS without calcium and magnesium (−/−) or once with PBS (−/−), followed each time by a centrifugation cycle. MSCs were counted using an automated cell counter and stained with 0.4% Trypan blue solution (VWR, Radnor, PA). One million MSCs cultured in FBS or ePL culture media were resuspended in 1 ml PBS and stained with 4 μM ethidium homodimer (Biotium, Fremont, CA) and 2 μM Calcein Blue AM (Thermo Fisher Scientific, Waltham, MA). MSCs stained with either ethidium homodimer or Calcein Blue AM alone were used as control groups. As a negative control, MSCs were harvested, fixed with 4% paraformaldehyde (PFA) for 20 min on ice, washed with PBS, and stained with both ethidium homodimer and Calcein Blue AM. Samples were analyzed by flow cytometry and 50,000 events were collected per sample. Data were analyzed by FlowJo software.

Trilineage differentiation assays

To ensure that equine MSCs cultured in ePL were capable of trilineage differentiation, MSCs at P5 or P6 (n = 3), expanded with FBS or ePL culture media, were used for differentiation assays. Undifferentiated MSCs, cultured under standard cell culture conditions, were used as negative controls in all experiments. All experiments were performed in triplicate for each biological replicate.

Osteogenesis

Equine MSCs (n = 3) were plated at 100,000 cells/well in six-well plates in FBS or ePL culture media until reaching 90% confluency. Cell culture medium was replaced by HyClone AdvanceSTEM osteogenic medium supplemented with 50 μg/ml streptomycin and 50 U/ml penicillin, exchanged every 2–3 days for 28 days. Osteocytes were identified using Van Kossa staining. Specifically, cultures were fixed with 4% PFA on ice for 15 min and stained with 1% silver nitrate for 20 min under ultraviolet light. Plates were washed with distilled water, and unreacted silver was removed by the addition of 5% sodium thiosulfate for 5 min at room temperature. The plates were then washed again, and cultures were imaged using a Leica inverted microscope [32]. For the quantification of calcium deposition, equine MSCs (n = 3) were plated at 21,000 cells/cm2 in a flat-bottom 96-well plate and cultured with media supplemented with FBS or ePL. Upon reaching 90% confluency, cell culture medium was replaced by HyClone AdvanceSTEM osteogenic medium supplemented with 50 μg/ml streptomycin and 50 U/ml penicillin, exchanged every 2–3 days for 28 days. Differentiation of MSCs to osteocytes was determined using the Calcium Liquicolor® Test (StanBio) according to the instructions of the manufacturer. Calcium was extracted by the addition of 0.6 N HCl and stored overnight at 4 °C; supernatants were combined at a ratio of 1:20 with an equal-portion mixture of the color and the base reagent, and plates were read at 550 nm (SpectraMax).

Adipogenesis

Equine MSCs (n = 3) were plated at 100,000 cells/well in six-well plates and cultured with FBS or ePL culture media until reaching 90% confluency. Medium was then replaced by adipogenic media consisting of DMEM, 10% FBS, 5% rabbit serum, 0.5 μΜ dexamethasone, 60 μΜ indomethacin, 0.5 mM IBMX, 1 μM insulin, and 50 U/ml penicillin and 50 μg/ml streptomycin [31]. Medium was exchanged every 2–3 days for 21 days. Cultures were fixed with 4% PFA for 15 min over ice and rinsed with 60% isopropanol. For the identification of lipid droplets, an Oil Red O (Sigma, St. Louis, MO) working solution in 60% isopropanol was added to the cultures for 20 min at room temperature. Cultures were imaged using a Leica inverted microscope.

Chondrogenesis

One million equine MSCs (n = 3) cultured with FBS or ePL culture media were pelleted in sterile polypropylene 15-ml centrifuge tubes and incubated for 48 h with their respective media. The medium was then discharged and replaced with HyClone AdvanceSTEM chondrogenic medium supplemented with 50 μg/ml streptomycin and 50 U/ml penicillin, with a medium change every 2–3 days for 28 days. Cultures were fixed with 4% PFA for 15 min over ice, rinsed with PBS, and submitted to histology for staining with Alcian Blue 8GX. Samples were visualized using an Olympus microscope. For quantification of Alcian Blue staining, equine MSCs (n = 3) were plated at 100,000 cells/well in conical-bottomed 96-well plates and centrifuged for 10 min; they remained in the presence of ePL or FBS media for 48 h. Cell culture medium was replenished with HyClone AdvanceSTEM chondrogenic medium every 2–3 days for 28 days. A 0.2% Alcian Blue 8GX in 0.1 M HCl solution was applied to the fixed chondrogenic pellets and incubated overnight at room temperature. Pellets were rinsed with PBS and the Alcian Blue stain was extracted by the addition of 6 M guanidine/HCl for 24 h at 4 °C; absorbance was measured at 650 nm (Biotek Synergy).

Phenotypic analysis

The impact of ePL medium on MSC surface molecule expression levels was evaluated by immunophenotypic analysis of MSCs (n = 3; P4) expanded with FBS or ePL cell culture media for the expression levels of CD44, CD90, CD105, CD45, and MHC-II markers using flow cytometry. MSCs were harvested, washed three times with PBS by centrifugation at 200 × g for 4 min, and fixed with 4% PFA for 15 min over ice. Following three more washes with PBS, cells were pelleted and a blocking solution (10% goat serum (Sigma-Aldrich, St.
Louis, MO) diluted in PBS) was added at a final concentration of 1 × 10⁶ cells/ml for 45 min at room temperature. The antibodies used are listed in Table 1. All antibodies used in this study were validated in equine fibroblasts and peripheral blood mononuclear cells (PBMCs).

Table 1 List of primary unconjugated antibody characteristics

Aliquots of 200 μl containing 2 × 10⁵ cells were centrifuged at 200 × g for 4 min to obtain a dry pellet. After decanting the supernatant, 100 μl of primary unconjugated antibody diluted in blocking solution was added for 1 h at room temperature. Next, samples were washed three times with blocking solution and a secondary fluorescent-conjugated goat anti-mouse IgG antibody (FITC, Sigma-Aldrich, St. Louis, MO) or fluorescent goat anti-mouse IgM antibody (FITC, Sigma-Aldrich, St. Louis, MO) was added to the samples and allowed to further incubate for 1 h at room temperature. Cells were washed three times with blocking solution by centrifugation. MSCs from all animals expanded in FBS or ePL culture media stained with only fluorescence-conjugated secondary antibody were used as control groups for the detection of background autofluorescence staining. To identify nonspecific fluorescence staining, MSCs were stained with unconjugated mouse IgG1 K isotype (1:500, Biolegend, San Diego, CA) or mouse IgM K isotype (1:500, Washington State University Monoclonal Antibody Center, Pullman, WA) followed by the addition of the corresponding fluorescence-conjugated secondary antibody. Samples were reconstituted in 200 μl blocking buffer, analyzed by flow cytometry (BD Accuri™ C6), and 10,000 events were collected per sample. The mean percentage of positive cells was calculated by subtracting the percentage of positive cells of the fluorescence-conjugated secondary antibody from the percentage of each cell surface marker.

Effect of MSCs on LPS-driven monocyte activation

Equine PBMCs were isolated according to validated protocols [33]. Briefly, 120 ml of peripheral blood was obtained in two 60-ml syringes each containing 1.5 ml 100 μM EDTA. The leukocyte-rich plasma was layered onto leukocyte separation media (Corning Cellgro® Inc., Manassas, VA) and centrifuged for 30 min. PBMCs were collected and viability was assessed via trypan blue exclusion (> 95% for all three biologicals). Cells were resuspended in media consisting of RPMI-1640 supplemented with 10% donor horse serum (DHS) and plated in 150-mm plates for 2 h at 37 °C under 5% CO2. After 2 h of incubation, adherent PBMCs, now mainly monocytes [34], were harvested and used for the following experiments. To assess the immunomodulatory ability of MSCs cultured with either 10% FBS or 10% ePL culture media, a cytokine production assay was performed according to established protocols [35]. Equine MSCs (n = 3) at P6 were placed at the bottom of 12-well transwell plates at 100,000 cells/well and allowed to adhere for 12 h in the presence of their corresponding medium. The following day, medium was aspirated and replaced with 1.5 ml of RPMI-1640 supplemented with 10% DHS. Half a milliliter containing 400,000 equine monocytes (ratio 1:4) from the three individuals (n = 3) was added to each insert of a transwell plate with insert pore size 0.4 μm (Corning, NY), stimulated with 50 ng/ml of E. coli 0111:B4 LPS (List Biologicals Inc.), and incubated at 37 °C under 5% CO2.
At 6 and 18 h time points, cell culture supernatants were collected and assayed for the production of the proinflammatory cytokine tumor necrosis factor (TNF)-α as an indicator of inflammatory responses [27].

Statistical analysis

Normality of the data was evaluated by visual examination of histograms of the residuals, normal plots of residuals, and by using the Shapiro-Wilk test. Equality of variances was assessed using Levene's test and by plotting residuals against the fitted values. Statistically significant differences for viability, semiquantification of osteogenesis and chondrogenesis, and immunophenotypic profile of MSCs cultured in FBS or ePL were detected by paired t test. A mixed model or a two-way repeated-measures analysis of variance (ANOVA) was used to assess the effect of the medium on MSC proliferation and immunomodulatory capacity, respectively. Multiple pairwise comparisons, if necessary, were obtained using the Tukey-Kramer test or Sidak test. All data were analyzed by commercially available statistical packages (Stata version 13.1, StataCorp LP, College Station, TX, or GraphPad Prism 7.0c, La Jolla, CA). The level of significance was set at P < 0.05. All results are reported as mean ± standard deviation (SD) unless otherwise stated.

Cell growth kinetics: PD and DT

Cells in both media conditions exhibited similar morphology at every passage, showing spindle-shaped characteristics (Fig. 1). These findings were observed in all cell lines and were consistent among triplicates.

Cellular morphology of equine bone marrow-derived MSCs cultured in 10% fetal bovine serum (FBS) or equine platelet lysate (ePL) culture media from passage P2 to P5. Images are shown from one representative cell line. Scale bars = 100 μm

A statistically significant difference in proliferation rates for equine MSCs cultured in ePL or FBS at different time points was not identified (P > 0.05). Specifically, the PD for MSCs cultured in ePL was 4.17 ± 0.25 at day 4, 6.35 ± 0.49 at day 8, 7.04 ± 0.32 at day 12, 6.97 ± 0.4 at day 16, 7.26 ± 0.43 at day 20, 7.09 ± 0.47 at day 24, 7.17 ± 0.63 at day 28, and 7.14 ± 0.11 at day 32, whereas PD for MSCs cultured in FBS was 4.43 ± 0.95 at day 4, 6.36 ± 0.92 at day 8, 6.58 ± 0.84 at day 12, 6.51 ± 0.71 at day 16, 6.91 ± 0.62 at day 20, 7.02 ± 0.72 at day 24, 6.66 ± 0.73 at day 28, and 6.70 ± 0.66 at day 32 (Fig. 2a). Moreover, DT (in days) for MSCs in ePL was 0.96 ± 0.1 at day 4, 1.27 ± 0.1 at day 8, 1.7 ± 0.08 at day 12, 2.3 ± 0.13 at day 16, 2.76 ± 0.16 at day 20, 3.4 ± 0.23 at day 24, 3.93 ± 0.37 at day 28, and 4.48 ± 0.07 at day 32, while DT for the FBS control group was 0.93 ± 0.2 at day 4, 1.28 ± 0.1 at day 8, 1.85 ± 0.24 at day 12, 2.48 ± 0.27 at day 16, 2.9 ± 0.28 at day 20, 3.44 ± 0.37 at day 24, 4.24 ± 0.50 at day 28, and 4.80 ± 0.5 at day 32 (Fig. 2b).

Cell growth kinetics of equine bone marrow-derived mesenchymal stem cells (MSCs) grown in 10% fetal bovine serum (FBS) or equine platelet lysate (ePL) culture media from day 4 (D4) to day 32 (D32). a Population doublings (PD) and b doubling time (DT) in days of cells cultured with FBS (MSCs-FBS) or with ePL (MSCs-ePL). Data are shown as mean ± standard deviation; n = 3. All data were combined from triplicate cell cultures

MSCs cultured with ePL culture medium exhibited similar percentages of viable cells (64.6 ± 7.67) compared with MSCs cultured in FBS culture medium (61.83 ± 10.42), as evaluated by flow cytometry following extensive washes with PBS (Fig. 3a).
The percentage of dead cells in the negative control was 98.1% (data not shown). A decreased percentage of viable MSCs was noticed when cells underwent extensive washes compared to the baseline trypan blue viability assessment (data not shown), regardless of the culture medium used. We chose to perform extensive washes after collecting the cells from the plate in order to mimic the conditions that are commonly used in preparation for the clinical use of MSCs. After noticing a decline in the recovery of viable cells after extensive washes, we chose to include flow cytometry viability data from MSCs collected after a single washing step. Our data revealed that MSCs in ePL had a statistically significantly higher percentage of viable cells (84.33 ± 3.45%) compared to those in FBS (74.73 ± 6.18%) (Fig. 3b).

Flow cytometric analysis of the viability of MSCs cultured with different media formulations. The percentage of viable cells cultured with fetal bovine serum (FBS) or equine platelet lysate (ePL) media supplement following (a) extensive washing steps or (b) a single washing step. a Regardless of the media used, there was no statistically significant difference in the percentage of viable cells between FBS and ePL following extensive washing steps. b Culture of MSCs with ePL medium resulted in a statistically significantly higher percentage of viable cells compared with those in FBS following a single washing step. Data are shown as mean ± standard deviation; n = 3. *P < 0.05

The in vitro differentiation assays were performed in equine bone marrow-derived MSCs (n = 3) cultured in FBS or ePL culture media at P5 or P6 following their culture in the appropriate differentiation medium. Undifferentiated MSCs cultured for the same period were used as negative controls and failed to differentiate, as indicated by the lack of specific stain uptake and of alteration of cellular morphology. Our assays revealed that MSCs from all three cell lines cultured in FBS or ePL media were able to differentiate towards all three lineages following exposure to the corresponding induction medium (Fig. 4). Specifically, MSCs cultured in both media differentiated into osteocytes, as shown by increased von Kossa silver staining for calcium deposition following 28 days of osteogenic induction compared with the undifferentiated group (Fig. 4a, d, g). Our quantification data revealed no statistically significant differences in the amount of calcium production for MSCs grown in FBS (0.81 ± 0.06 OD) compared with ePL (0.80 ± 0.06 OD) (Fig. 5a). For adipogenesis, MSCs cultured in both media differentiated to adipocytes 21 days following induction compared with the undifferentiated cells, as indicated by Oil Red O staining for the deposition of lipid droplets (Fig. 4b, e, h). Finally, MSCs in FBS or ePL, following 28 days of chondrogenic media exposure, showed increased proteoglycans by Alcian Blue staining compared with the undifferentiated group (Fig. 4c, f, i). MSCs cultured with ePL exhibited a statistically significantly higher amount of proteoglycan staining (0.14 ± 0.03 OD), as indicated by quantification of Alcian Blue uptake in cell pellets, compared with those cultured in FBS (0.11 ± 0.01 OD) (Fig. 5b).

Trilineage differentiation capacity of MSCs (n = 3) grown in fetal bovine serum (FBS) or equine platelet lysate (ePL) media supplement. a von Kossa, b Oil Red O, and c Alcian Blue staining of control undifferentiated cells (U/D). d von Kossa, e Oil Red O, and f Alcian Blue staining of MSCs previously cultured with FBS media.
g von Kossa, h Oil Red O, and i Alcian Blue staining of MSCs cultured with ePL media. MSCs cultured in both types of media were able to undergo trilineage differentiation. Images are shown from one representative cell line. Scale bars = 100 μm

Quantification data of a osteogenic and b chondrogenic capacity of equine bone marrow-derived MSCs previously cultured in fetal bovine serum (FBS) or equine platelet lysate (ePL). Undifferentiated (UD) MSCs were used as negative control. a No statistically significant differences were detected in the amount of calcium deposition during osteogenic differentiation of MSCs grown in ePL (ePL DF) compared with FBS (FBS DF). b Statistically significantly elevated levels of proteoglycans were found for MSCs previously grown in ePL (ePL DF) compared with FBS (FBS DF) during chondrogenic differentiation. Data are shown as mean ± standard deviation; n = 3. *P < 0.05. OD optical density

The results of the phenotypic analysis are shown in Table 2, as analyzed by flow cytometry for the expression levels of the positive markers CD44, CD90, and CD105 and the negative markers CD45 and MHC-II.

Table 2 Cell surface marker expression of equine bone marrow-derived mesenchymal stem cells cultured in fetal bovine serum (FBS) or equine platelet lysate (ePL) media supplement by flow cytometry (n = 3)

No statistical significance was detected for the expression levels of CD44, CD105, and MHC-II. The percentage of positive cells for CD45 was 18.89 ± 12.37% in MSCs cultured in ePL compared with 32.29 ± 12.58% in MSCs cultured in FBS, a statistically significant reduction (P = 0.0109) of this negative marker. However, MSCs in FBS expressed a statistically significantly higher percentage of positive cells for the marker CD90 (87.87 ± 3.29% versus 80.08 ± 1.48%) compared with MSCs in ePL (P = 0.0199).

The ability of MSCs to modulate inflammation was tested according to protocols previously validated in our laboratory [35]. After 6 h of incubation with LPS, equine monocytes produced TNF-α concentrations (771.6 ± 246.31 pg/ml) that were markedly greater than those found in the supernatants of the nonstimulated monocytes (100.5 ± 174.04 pg/ml) (Fig. 6). This trend was even more obvious after 18 h, when TNF-α concentrations were significantly increased in supernatants from LPS-stimulated monocytes (4025 ± 943.07 pg/ml) compared to nonstimulated controls (382.4 ± 346.98 pg/ml).

The effect of mesenchymal stem cells (MSCs) cultured in different expansion media on cytokine production from lipopolysaccharide (LPS)-stimulated monocytes. Tumor necrosis factor-α (TNF-α) expression from LPS-stimulated equine monocytes alone or following the addition of MSCs (n = 3) cultured in fetal bovine serum (FBS) or equine platelet lysate (ePL) 6 and 18 h following incubation. Unstimulated monocytes (n = 3) were used as negative control (Mono). No effect of MSCs was seen on the production levels of TNF-α 6 h following their addition to LPS-stimulated monocytes. A statistically significant decrease in TNF-α was detected when MSCs grown in FBS or ePL were added to LPS-stimulated monocytes 18 h following incubation. Regardless of the expansion media, MSCs retain their ability to modulate the production of proinflammatory cytokines from LPS-stimulated monocytes. Data are shown as mean ± standard deviation; n = 3.
*P < 0.05; #P < 0.05, statistically significant compared with all other groups

No significant reduction in TNF-α production was measured when LPS-stimulated monocytes were coincubated for 6 h with MSCs cultured in either FBS or ePL media (MSCs in FBS, P = 0.9999; MSCs in ePL, P = 0.9829). In contrast, after 18 h of coculture, MSCs cultured in FBS or ePL media were able to significantly suppress TNF-α production from LPS-stimulated monocytes (P = 0.0017 and P = 0.0064, respectively) compared with LPS-stimulated monocytes incubated alone. Specifically, LPS-stimulated monocytes alone produced 4025 ± 943.07 pg/ml of TNF-α, whereas when MSCs cultured in ePL culture medium were added to LPS-stimulated monocytes, TNF-α production was 2286 ± 983.79 pg/ml. Coculture of LPS-stimulated monocytes with MSCs grown in FBS resulted in the production of 1798 ± 669.75 pg/ml of TNF-α, with no significant difference between the suppressive effects of MSCs cultured in ePL and FBS (P = 0.6218).

In this study, we were able to show that ePL pooled from donor horses and produced via apheresis can be successfully used as a homologous medium supplement for the in vitro expansion of equine bone marrow-derived MSCs. Moreover, our data support the notion that prolonged culture in ePL medium without FBS preserves the MSC cell surface marker expression and functional characteristics such as trilineage differentiation and immunomodulatory capacity. One of the major concerns that hampers the clinical application of equine MSCs is the use of FBS for the ex vivo expansion of the cells prior to introduction into the host. Avoiding the use of FBS in cell culture eliminates the concerns related to xeno-immunization of the recipients, transmission of bovine pathogens, and ethical controversies related to the collection methods for FBS [4]. We completed this study because we feel it is of paramount importance to develop MSC cell culture supplements homologous to the species of interest prior to the clinical application of stem cell-based clinical trials or biological therapies. By establishing the use of equine-derived media supplements for the ex vivo expansion of equine MSCs, researchers will be able to proceed to clinical trials without the concerns related to the presence of xenoantigens found in FBS, as well as to better standardize an "off-the-shelf" stem cell therapeutic product according to international regulations [36,37,38]. Earlier and recent efforts have focused on the development of xenoprotein-free media for the expansion of MSCs. Studies have shown that the use of serum-free media for the culture of canine and equine MSCs leads to inferior cell proliferation rates and altered immunomodulatory capacity of MSCs compared to standard FBS-based culture media [39]. These findings further prompted the need to develop a homologous supplement rich in growth factors and chemokines that supports the proliferation of MSCs and preserves their functional immunomodulatory characteristics. Only a few studies have investigated the use of ePL produced from whole blood for the expansion of equine MSCs [6, 23]. One of the major advantages of generating PL from platelet concentrates obtained via a standardized plateletpheresis technique is that the final product has a high concentration of platelets with very low leukocyte contamination [4]. Our data indicate that MSCs cultured in ePL exhibit proliferation rates comparable to those seen with FBS, as evaluated by calculation of standard growth kinetic parameters such as PD and DT.
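As a brief arithmetic aside, the PD and DT values reported in the Results are internally consistent with the standard growth-kinetics definitions, in which cumulative doubling time is elapsed culture time divided by cumulative population doublings. The following minimal Python sketch checks this against the reported ePL group means; it is illustrative only and is not the authors' analysis code.

```python
# Consistency check: cumulative DT ≈ elapsed days / cumulative PD.
# PD and DT values below are the reported means for the ePL group (see Results);
# small discrepancies reflect rounding of the published means.
days = [4, 8, 12, 16, 20, 24, 28, 32]
pd_epl = [4.17, 6.35, 7.04, 6.97, 7.26, 7.09, 7.17, 7.14]        # cumulative PD
dt_reported = [0.96, 1.27, 1.70, 2.30, 2.76, 3.40, 3.93, 4.48]   # DT in days

for d, pd, dt in zip(days, pd_epl, dt_reported):
    print(f"day {d:2d}: days/PD = {d / pd:.2f}  (reported DT = {dt:.2f})")
```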
It is well documented that culturing human bone marrow- and adipose-derived MSCs with human PL increases the proliferation of MSCs over time compared to FBS [5, 18, 40, 41]. Our results seem to agree with those findings and are comparable to those published in the veterinary literature suggesting that ePL can be used for the expansion of equine MSCs in place of FBS [6, 23]. However, a study conducted by Russell and Koch [6] showed that ePL can be used as a media supplement only for the short-term expansion of equine umbilical cord-derived MSCs. Notably, the ePL used in that study was prepared from whole blood and tested on the proliferation abilities of umbilical cord-derived MSCs. As mentioned earlier, differences in the preparation methods of PL used for cell culture and in the source of MSCs can affect the proliferative capacity of the cells [5, 24, 25, 42]. It is important to note that our growth kinetic studies showed progressively improved proliferation times the longer that MSCs remained in culture. The fact that significant differences were not detected between MSCs cultured in FBS or ePL at every time point may be attributable to the percentage of ePL we used (10%) for the supplementation of the basal media. Griffiths and colleagues have shown that supplementation of basal media with 5% human PL resulted in a statistically significant increase in the proliferation rates of human MSCs compared to 10% FBS [28]. Future studies should include detailed investigations of escalating concentrations of ePL for the culture of equine MSCs.

Regarding the viability of MSCs following culture expansion, we found that the yield of recovered viable cells depended markedly on the methods we used. It is not uncommon for laboratories to include an extensive series of washes with PBS after harvesting MSCs from the culture plates. One of the reasons for this practice is to ensure that cells used in clinical applications do not carry any FBS-derived xenoantigens that would render the cells subject to immune recognition once introduced to the recipient [43, 44]. When we followed this practice in our experiments and compared pre- and postwash recovery numbers, we found a sharp decline in the percentage of viable cells regardless of whether they were cultured in FBS or ePL. In subsequent experiments, we conducted cell viability analyses after only one wash prior to performing viability assays and recorded much higher viability counts, similar to those found before washing the cells. Most interestingly, following only one wash, MSCs cultured in ePL showed significantly higher viability scores than those cultured in FBS. Based on these results, we suggest that ePL as a homologous medium requires less extensive postculture manipulation, resulting in a superior recovery of viable cells.

Trilineage differentiation is one way in which the stem cell research community has attempted to ensure that cultured cells are indeed MSCs [45, 46]. In keeping with this convention, we wanted to verify that MSCs in ePL retained their trilineage differentiation capacity and showed that osteogenic and adipogenic differentiation occurred with no significant differences in cellular morphology compared to MSCs in FBS, in accord with previous studies [5, 9, 41, 47]. With respect to chondrogenic differentiation, we noticed that MSCs in ePL produced statistically significantly greater amounts of proteoglycans, which may indicate that ePL promotes a chondrogenic differentiation pattern different to that induced by FBS.
This may have profound implications for future clinical applications and especially for the treatment of cartilage defects. The literature has suggested that MSCs grown in the presence of platelet-derived biologicals such as PRP or PL express high levels of chondrogenic markers and extracellular cartilage matrix [48,49,50,51,52], likely because of the release of platelet-derived chondrogenic growth factors such as TGF-β, VEGF, PDGF, insulin-like growth factor (IGF)-1, and fibroblast growth factor (FGF)-2 [53]. TGF-β seems to be especially important for the synthesis of proteoglycans and collagen type II [54, 55] and for the differentiation of MSCs into chondrocytes, a process that has been documented by measuring the chondrogenesis-related transcription factor Sox9 and the mRNA expression of collagen type II [56]. It is possible that the relatively high concentrations of TGF-β present in ePL [26] might have been responsible for favoring MSC chondrogenic differentiation.

Another criterion that has been proposed as essential by the International Society for Cellular Therapy (ISCT) for the characterization of human MSCs is the positive identification of the markers CD73, CD90, and CD105, and the absence of CD34, CD45, CD11b or CD14, CD79α or CD19, and HLA class II [46]. Unfortunately, and although much needed, such a consensus has not been reached in equine research regarding the panel of CD markers that should be tested to characterize equine MSCs [57]. Although we believe the veterinary research community should achieve a unanimous opinion on the characterization of MSCs, one obstacle that has hampered this effort is the absence of reliable commercially available monoclonal antibodies specific for equine cells. Regardless, in an attempt to further characterize our cells, we evaluated the MSC phenotypic profile by quantifying the expression of CD44, CD90, CD105, CD45, and MHC-II, which are markers commonly used in equine stem cell research and have been previously validated in our laboratory. Our immunophenotypic analysis showed that MSCs grown in both media supplements exhibited no statistically significant differences for CD44, CD105, and MHC-II. However, MSCs were characterized by statistically significantly lower percentages of the negative marker CD45 when cultured in ePL compared to FBS. In addition, for the positive marker CD90, we saw a decrease in the percentage of positive cells for the MSCs cultured in ePL compared with FBS. There has been strong evidence suggesting that different types of culture media can affect or even alter the MSC phenotypic profile [58]. In fact, these characteristics can be affected by the isolation techniques and the media used for culture expansion [59, 60]. Most importantly, it is well documented that contamination of MSC cultures with other cell types is possible, especially when plastic adherence methodologies are used for their initial isolation from bone marrow aspirates, resulting in an unexpectedly heterogeneous cell population [61]. Even though in this study our initial MSC isolation was performed using the plastic adherence method, we were satisfied to find a relatively uniform cell population. It is not unlikely that our isolation technique, although widely applied across laboratories, might be responsible for the increased percentage of MSCs positive for CD45 in the FBS group.

MSCs are clinically attractive because of their reported ability to modulate immune responses and influence inflammatory processes.
Specifically, it is well documented that activated MSCs can interact with cells of the immune system such as B cells, T cells, natural killer cells, monocytes/macrophages, dendritic cells, and neutrophils via either direct cell-to-cell contact or the expression of soluble factors [62, 63]. Equine monocytes are highly responsive immune cells that are very sensitive to a variety of factors including LPS, which stimulates monocytes to secrete proinflammatory cytokines such as TNF-α via a Toll-like receptor (TLR)-mediated pathway. Relevant to our functionality testing, it has been shown that MSCs suppress the activation of LPS-stimulated monocytes and thus the production of proinflammatory cytokines such as TNF-α [35, 64, 65]. We chose to study this immunomodulatory effect as a platform to test differences in TNF-α release between MSCs cultured in FBS or ePL. We conducted these experiments with an established transwell coculture system, which allowed us to expose LPS-stimulated monocyte cultures to MSCs grown in FBS or ePL. It is relevant to note that these coculture experiments were conducted in the presence of standard RPMI media appropriate for monocyte proliferation, supplemented with 10% DHS. It was important to include DHS because it contains LPS-binding protein (LBP), an essential component for LPS and TLR4 coupling and for the efficient stimulation of monocytes to release their inflammatory payload, including TNF-α [66]. Secondly, by using only standard monocyte medium, we ensured that any effect on monocyte activation would likely be due to the MSCs and not the FBS or ePL culture media in which they had been expanded. We found an interesting temporal effect in our experiments, noting that MSCs had no effect on TNF-α production following 6 h of coincubation with stimulated monocytes. However, 18 h of coincubation resulted in a significant difference in the expression levels of TNF-α following the addition of MSCs cultured in either FBS or ePL compared with monocyte cultures without MSCs, confirming our hypothesis that MSCs cultured in ePL can modulate inflammation. A trend noticed when we compared the ability of MSCs to reduce TNF-α production was that those cultured in FBS tended to suppress TNF-α release from LPS-stimulated monocytes more than those cultured in ePL. Studies have suggested that human PL obtained from plateletpheresis products, in which 10% acid citrate dextrose (ACD) was added to the donors' plasma, failed to support the immunomodulatory capacities of MSCs [67, 68]. Additionally, a detailed study published by Copland and colleagues showed that the presence of fibrinogen in human PL results in an inferior immunosuppressive activity of MSCs compared with those expanded in FBS [69]. In the context of perfecting the processing and manufacturing of our ePL, it may be important to consider collection methods that avoid ACD and recovery methods that eliminate fibrinogen from the final product.

The results of this study provide evidence that ePL can be used instead of FBS for the culture of equine bone marrow-derived MSCs without affecting their characteristics/phenotype and functional properties. We have shown that the ePL medium supplement supports the proliferation and increases the viability of MSCs following a single washing step.
Moreover, ePL not only does not impair the differentiation capacity of MSCs but, according to our data, improves their chondrogenic differentiation potential, with potentially profound implications for future clinical applications and especially for the treatment of cartilage defects. MSCs cultured with ePL exhibit an immunophenotype and immunomodulatory capacity comparable to those of MSCs in standard cell culture medium. Our results indicate that ePL is an attractive alternative for the ex vivo expansion of equine MSCs before clinical administration, avoiding issues of xeno-immunization related to the use of FBS.

ACD: Acid citrate dextrose
DHS: Donor horse serum
DT: Doubling time
ePL: Equine platelet lysate
MSC: Mesenchymal stem cell
PBMC: Peripheral blood mononuclear cell
PD: Population doublings
PDGF: Platelet-derived growth factor
PFA: Paraformaldehyde
PL: Platelet lysate
PRP: Platelet-rich plasma
RPMI: Roswell Park Memorial Institute
TGF: Transforming growth factor
VEGF: Vascular endothelial growth factor

Glenn JD, Whartenby KA. Mesenchymal stem cells: emerging mechanisms of immunomodulation and therapy. World J Stem Cells. 2014;6:526–39.
Farini A, Sitzia C, Erratico S, Meregalli M, Torrente Y. Clinical applications of mesenchymal stem cells in chronic diseases. Stem Cells Int. 2014;2014:1–11. https://www.ncbi.nlm.nih.gov/pubmed/24876848.
Paul G, Anisimov SV. The secretome of mesenchymal stem cells: potential implications for neuroregeneration. Biochimie. 2013;95:2246–56.
Burnouf T, Strunk D, Koh MBC, Schallmoser K. Human platelet lysate: replacing fetal bovine serum as a gold standard for human cell propagation? Biomaterials. 2016;76:371–87.
Doucet C, Ernou I, Zhang Y, Llense J-R, Begot L, Holy X, Lataillade J-J. Platelet lysates promote mesenchymal stem cell expansion: a safety substitute for animal serum in cell-based therapy applications. J Cell Physiol. 2005;205:228–36.
Russell KA, Koch TG. Equine platelet lysate as an alternative to fetal bovine serum in equine mesenchymal stromal cell culture—too much of a good thing? Equine Vet J. 2016;48:261–4.
Del Bue M, Riccò S, Conti V, Merli E, Ramoni R, Grolli S. Platelet lysate promotes in vitro proliferation of equine mesenchymal stem cells and tenocytes. Vet Res Commun. 2007;31:289–92.
Perez-Ilzarbe M, Diez-Campelo M, Aranda P, Tabera S, Lopez T, del Canizo C, Merino J, Moreno C, Andreu EJ, Prosper F, Perez-Simon JA. Comparison of ex vivo expansion culture conditions of mesenchymal stem cells for human cell therapy. Transfusion. 2009;49:1901–10.
Schallmoser K, Bartmann C, Rohde E, Reinisch A, Kashofer K, Stadelmeyer E, Drexler C, Lanzer G, Linkesch W, Strunk D. Human platelet lysate can replace fetal bovine serum for clinical-scale expansion of functional mesenchymal stromal cells. Transfusion. 2007;47:1436–46.
Kirikae T, Tamura H, Hashizume M, Kirikae F, Uemura Y, Tanaka S, Yokochi T, Nakano M. Endotoxin contamination in fetal bovine serum and its influence on tumor necrosis factor production by macrophage-like cells J774.1 cultured in the presence of the serum. Int J Immunopharmacol. 1997;19:255–62.
Azouna NB, Jenhani F, Regaya Z, Berraeis L, Othman TB, Ducrocq E, Domenech J. Phenotypical and functional characteristics of mesenchymal stem cells from bone marrow: comparison of culture using different media supplemented with human platelet lysate or fetal bovine serum. Stem Cell Res Ther. 2012;3:1–14.
Patrikoski M, et al. Different culture conditions modulate the immunological properties of adipose stem cells. Stem Cells Transl Med. 2014;3:1220–30.
Horwitz EM, Gordon PL, Koo WKK, Marx JC, Neel MD, McNall RY, Muul L, Hofmann T. Isolated allogeneic bone marrow-derived mesenchymal cells engraft and stimulate growth in children with osteogenesis imperfecta: implications for cell therapy of bone. Proc Natl Acad Sci U S A. 2002;99:8932.
Hemeda H, Giebel B, Wagner W. Evaluation of human platelet lysate versus fetal bovine serum for culture of mesenchymal stromal cells. Cytotherapy. 2014;16:170–80.
Even MS, Sandusky CB, Barnard ND. Serum-free hybridoma culture: ethical, scientific and safety considerations. Trends Biotechnol. 2006;24:105–8.
Guidance for industry: Guidance for human somatic cell therapy and gene therapy. US Food and Drug Administration, Vaccines, Blood & Biologics; 1998. Available at: https://www.fda.gov/downloads/BiologicsBloodVaccines/GuidanceComplianceRegulatoryInformation/Guidances/CellularandGeneTherapy/UCM081670.pdf.
Castegnaro S, Chieregato K, Maddalena M, Albiero E, Visco C, Madeo D, Pegoraro M, Rodeghiero F. Effect of platelet lysate on the functional and molecular characteristics of mesenchymal stem cells isolated from adipose tissue. Curr Stem Cell Res Ther. 2011;6:105–14.
Lange C, Cakiroglu F, Spiess AN, Cappallo-Obermann H, Dierlamm J, Zander AR. Accelerated and safe expansion of human mesenchymal stromal cells in animal serum-free medium for transplantation and regenerative medicine. J Cell Physiol. 2007;213:18–26.
Bieback K, Hecker A, Kocaomer A, Lannert H, Schallmoser K, Strunk D, Kluter H. Human alternatives to fetal bovine serum for the expansion of mesenchymal stromal cells from bone marrow. Stem Cells. 2009;27:2331–41.
Anitua E, Andia I, Sanchez M, Azofra J, del Mar ZM, de la Fuente M, Nurden P, Nurden AT. Autologous preparations rich in growth factors promote proliferation and induce VEGF and HGF production by human tendon cells in culture. J Orthop Res. 2005;23:281–6.
Herrmann M, Binder A, Menzel U, Zeiter S, Alini M, Verrier S. CD34/CD133 enriched bone marrow progenitor cells promote neovascularization of tissue engineered constructs in vivo. Stem Cell Res. 2014;13:465–77.
Lippross S, Loibl M, Hoppe S, Meury T, Benneker L, Alini M, Verrier S. Platelet released growth factors boost expansion of bone marrow derived CD34(+) and CD133(+) endothelial progenitor cells for autologous grafting. Platelets. 2011;22:422–32.
Seo JP, Tsuzuki N, Haneda S, Yamada K, Furuoka H, Tabata Y, Sasaki N. Comparison of allogeneic platelet lysate and fetal bovine serum for in vitro expansion of equine bone marrow-derived mesenchymal stem cells. Res Vet Sci. 2013;95:693–8.
Kocaoemer A, Kern S, Kluter H, Bieback K. Human AB serum and thrombin-activated platelet-rich plasma are suitable alternatives to fetal calf serum for the expansion of mesenchymal stem cells from adipose tissue. Stem Cells. 2007;25:1270–8.
Textor JA, Tablin F. Activation of equine platelet-rich plasma: comparison of methods and characterization of equine autologous thrombin. Vet Surg. 2012;41:784–94.
Sumner SM, Naskou MC, Thoresen M, Copland I, Peroni JF. Platelet lysate obtained via plateletpheresis performed in standing and awake equine donors. Transfusion. 2017;57:1755–62.
Naskou MC, Norton NA, Copland I, Galipeau J, Peroni JF. Innate immune responses of equine monocytes cultured in equine platelet lysate. Vet Immunol Immunopathol. 2017;195:65–71.
Griffiths S, Baraniak PR, Copland IB, Nerem RM, McDevitt TC. Human platelet lysate stimulates high-passage and senescent human multipotent mesenchymal stromal cell growth and rejuvenation in vitro. Cytotherapy. 2013;15:1469–83.
Borjesson DL, Peroni JF. The regenerative medicine laboratory: facilitating stem cell therapy for equine disease. Clin Lab Med. 2011;31:109–23.
Mumaw J, Jordan E, Sonnet C, Olabisi R, Olmsted-Davis E, Davis A, Peroni J, West J, West F, Lu Y, Stice S. Rapid heterotopic ossification with cryopreserved poly(ethylene glycol)-microencapsulated BMP2-expressing MSCs. Int J Biomater. 2012:1–11.
Vidal MA, Kilroy GE, Johnson JR, Lopez MJ, Moore RM, Gimble JM. Cell growth characteristics and differentiation frequency of adherent equine bone marrow-derived mesenchymal stromal cells: adipogenic and osteogenic capacity. Vet Surg. 2006;35:601–10.
Mumaw JL, Schmiedt CW, Breidling S, Sigmund A, Norton NA, Thoreson M, Peroni JF, Hurley DJ. Feline mesenchymal stem cells and supernatant inhibit reactive oxygen species production in cultured feline neutrophils. Res Vet Sci. 2015;103:60–9.
Figueiredo MD, Moore JN, Vandenplas ML, Sun WC, Murray TF. Effects of the second-generation synthetic lipid A analogue E5564 on responses to endotoxin in [corrected] equine whole blood and monocytes. Am J Vet Res. 2008;69:796–803.
Henry MM, Moore JN. Endotoxin-induced procoagulant activity in equine peripheral blood monocytes. Circ Shock. 1988;26:297–309.
Scharf A, Holmes SP, Thoresen M, Mumaw J, Stumpf A, Peroni J. MRI-based assessment of intralesional delivery of bone marrow-derived mesenchymal stem cells in a model of equine tendonitis. Stem Cells Int. 2016;2016:8610964.
WHO. WHO guidelines on tissue infectivity distribution in transmissible spongiform encephalopathies. Geneva: WHO (World Health Organization); 2006. http://www.who.int/bloodproducts/cs/TSEPUBLISHEDREPORT.pdf?ua=1.
European Medicines Agency. Note for guidance on the use of bovine serum in the manufacture of human biological medicinal products. London: European Medicines Agency; 2003. CPMP/BWP/1793/02 October. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/06/WC500143930.pdf.
European Commission. ESAC statement on the use of FCS and other animal-derived supplements. Brussels: European Commission; 2008. https://eurl-ecvam.jrc.ec.europa.eu/about-ecvam/archive-publications/publication/ESAC28_statement_FCS_20080508.pdf.
Clark KC, Kol A, Shahbenderian S, Granick JL, Walker NJ, Borjesson DL. Canine and equine mesenchymal stem cells grown in serum free media have altered immunophenotype. Stem Cell Rev. 2016;12:245–56.
Trojahn Kolle SF, Oliveri RS, Glovinski PV, Kirchhoff M, Mathiasen AB, Elberg JJ, Andersen PS, Drzewiecki KT, Fischer-Nielsen A. Pooled human platelet lysate versus fetal bovine serum-investigating the proliferation rate, chromosome stability and angiogenic potential of human adipose tissue-derived stem cells intended for clinical use. Cytotherapy. 2013;15:1086–97.
Capelli C, Domenghini M, Borleri G, Bellavita P, Poma R, Carobbio A, Mico C, Rambaldi A, Golay J, Introna M. Human platelet lysate allows expansion and clinical grade production of mesenchymal stromal cells from small samples of bone marrow aspirates or marrow filter washouts. Bone Marrow Transplant. 2007;40:785–91.
Fazzina R, Iudicone P, Fioravanti D, Bonanno G, Totta P, Zizzari IG, Pierelli L. Potency testing of mesenchymal stromal cell growth expanded in human platelet lysate from different human tissues. Stem Cell Res Ther. 2016;7:122.
Selvaggi TA, Walker RE, Fleisher TA. Development of antibodies to fetal calf serum with Arthus-like reactions in human immunodeficiency virus-infected patients given syngeneic lymphocyte infusions. Blood. 1997;89:776–9.
Shih DT, Burnouf T. Preparation, quality criteria, and properties of human blood platelet lysate supplements for ex vivo stem cell expansion. New Biotechnol. 2015;32:199–211.
Carrade DD, Borjesson DL. Immunomodulation by mesenchymal stem cells in veterinary species. Comp Med. 2013;63:207–17.
Dominici M, Le Blanc K, Mueller I, Slaper-Cortenbach I, Marini F, Krause D, Deans R, Keating A, Prockop D, Horwitz E. Minimal criteria for defining multipotent mesenchymal stromal cells. The International Society for Cellular Therapy position statement. Cytotherapy. 2006;8:315–7.
Ben Azouna N, Jenhani F, Regaya Z, Berraeis L, Ben Othman T, Ducrocq E, Domenech J. Phenotypical and functional characteristics of mesenchymal stem cells from bone marrow: comparison of culture using different media supplemented with human platelet lysate or fetal bovine serum. Stem Cell Res Ther. 2012;3:6.
Gottipamula S, Sharma A, Krishnamurthy S, Majumdar AS, Seetharam RN. Human platelet lysate is an alternative to fetal bovine serum for large-scale expansion of bone marrow-derived mesenchymal stromal cells. Biotechnol Lett. 2012;34:1367–74.
Mishra A, Tummala P, King A, Lee B, Kraus M, Tse V, Jacobs CR. Buffered platelet-rich plasma enhances mesenchymal stem cell proliferation and chondrogenic differentiation. Tissue Eng Part C Methods. 2009;15:431–5.
Prins HJ, Rozemuller H, Vonk-Griffioen S, Verweij VG, Dhert WJ, Slaper-Cortenbach IC, Martens AC. Bone-forming capacity of mesenchymal stromal cells when cultured in the presence of human platelet lysate as substitute for fetal bovine serum. Tissue Eng Part A. 2009;15:3741–51.
Rubio-Azpeitia E, Andia I. Partnership between platelet-rich plasma and mesenchymal stem cells: in vitro experience. Muscles Ligaments Tendons J. 2014;4:52–62.
Shih DT, Chen JC, Chen WY, Kuo YP, Su CY, Burnouf T. Expansion of adipose tissue mesenchymal stromal progenitors in serum-free medium supplemented with virally inactivated allogeneic human platelet lysate. Transfusion. 2011;51:770–8.
Kabiri A, Esfandiari E, Esmaeili A, Hashemibeni B, Pourazar A, Mardani M. Platelet-rich plasma application in chondrogenesis. Adv Biomed Res. 2014;3:138.
Grimaud E, Heymann D, Redini F. Recent advances in TGF-beta effects on chondrocyte metabolism. Potential therapeutic roles of TGF-beta in cartilage disorders. Cytokine Growth Factor Rev. 2002;13:241–57.
Fan H, Hu Y, Qin L, Li X, Wu H, Lv R. Porous gelatin-chondroitin-hyaluronate tri-copolymer scaffold containing microspheres loaded with TGF-beta1 induces differentiation of mesenchymal stem cells in vivo for enhancing cartilage repair. J Biomed Mater Res A. 2006;77:785–94.
Yu DA, Han J, Kim BS. Stimulation of chondrogenic differentiation of mesenchymal stem cells. Int J Stem Cells. 2012;5:16–22.
De Schauwer C, Meyer E, Van de Walle GR, Van Soom A. Markers of stemness in equine mesenchymal stem cells: a plea for uniformity. Theriogenology. 2011;75:1431–43.
Hagmann S, Moradi B, Frank S, Dreher T, Kammerer PW, Richter W, Gotterbarm T. Different culture media affect growth characteristics, surface marker distribution and chondrogenic differentiation of human bone marrow-derived mesenchymal stromal cells. BMC Musculoskelet Disord. 2013;14:223.
Kassem M, Kristiansen M, Abdallah BM. Mesenchymal stem cells: cell biology and potential use in therapy. Basic Clin Pharmacol Toxicol. 2004;95:209–14.
Herzog EL, Chai L, Krause DS. Plasticity of marrow-derived stem cells. Blood. 2003;102:3483–93.
Haack-Sorensen M, Friis T, Bindslev L, Mortensen S, Johnsen HE, Kastrup J. Comparison of different culture conditions for human mesenchymal stromal cells for clinical stem cell therapy. Scand J Clin Lab Invest. 2008;68:192–203.
Brandau S, Jakob M, Hemeda H, Bruderek K, Janeschik S, Bootz F, Lang S. Tissue-resident mesenchymal stem cells attract peripheral blood neutrophils and enhance their inflammatory activity in response to microbial challenge. J Leukoc Biol. 2010;88:1005–15.
Peroni JF, Borjesson DL. Anti-inflammatory and immunomodulatory activities of stem cells. Vet Clin North Am Equine Pract. 2011;27:351–62.
Lei J, McLane LT, Curtis JE, Temenoff JS. Characterization of a multilayer heparin coating for biomolecule presentation to human mesenchymal stem cell spheroids. Biomater Sci. 2014;2:666–73.
Yang Z, Concannon J, Ng KS, Seyb K, Mortensen LJ, Ranganath S, Gu F, Levy O, Tong Z, Martyn K, et al. Tetrandrine identified in a small molecule screen to activate mesenchymal stem cells for enhanced immunomodulation. Sci Rep. 2016;6:30263.
Figueiredo MD, Salter CE, Hurley DJ, Moore JN. A comparison of equine and bovine sera as sources of lipopolysaccharide-binding protein activity in equine monocytes incubated with lipopolysaccharide. Vet Immunol Immunopathol. 2008;121:275–80.
Abdelrazik H, Spaggiari GM, Chiossone L, Moretta L. Mesenchymal stem cells expanded in human platelet lysate display a decreased inhibitory capacity on T- and NK-cell proliferation and function. Eur J Immunol. 2011;41:3281–90.
Bernardo ME, Avanzini MA, Perotti C, Cometa AM, Moretta A, Lenta E, Del Fante C, Novara F, de Silvestri A, Amendola G, et al. Optimization of in vitro expansion of human multipotent mesenchymal stromal cells for cell-therapy approaches: further insights in the search for a fetal calf serum substitute. J Cell Physiol. 2007;211:121–30.
Copland IB, Garcia MA, Waller EK, Roback JD, Galipeau J. The effect of platelet lysate fibrinogen on the functionality of MSCs in immunotherapy. Biomaterials. 2013;34:7840–50.

The authors would like to thank Annie Bullington for her technical support during the platelet apheresis procedure and Dr. Roy Berghaus and Dr. Steeve Giguère for their help in statistical analysis. This research was funded by the Morris Animal Foundation (grant number D17EQ-021). All data generated and/or analyzed during this study are included in this published article.

Department of Large Animal Medicine, Veterinary Medical Center, College of Veterinary Medicine, University of Georgia, 2200 College Station Road, Athens, GA, 30602, USA: Maria C. Naskou, Scarlett M. Sumner, Anna Chocallo, Hannah Kemelmakher, Merrilee Thoresen & John F. Peroni
Emory Personalized Immunotherapy Center (EPIC), Emory University School of Medicine, 100 Woodruff Circle, Atlanta, GA, 30322, USA
Department of Medicine and Carbone Comprehensive Cancer Center, University of Wisconsin, 600 Highland Ave., Madison, WI, 53792, USA: Jacques Galipeau

MCN contributed to the conception and experimental design of this study, performed cell culture, laboratory techniques, sample and data collection, and statistical analysis, and wrote the manuscript. SMS, AC, and HK contributed to cell culture and characterization, performance of laboratory techniques, and data collection. MT was involved in experimental design and provided technical advice and support. IC and JG contributed to the conception and study design.
JFP contributed to the conception and experimental design, grant writing, student mentoring, writing, and final approval of the manuscript. All authors except IC read and approved the final manuscript.

Correspondence to John F. Peroni.

The study protocol (IACUC approval #A2015 02–023-Y1-A1) was approved by the University of Georgia Institutional Animal Care and Use Committee.

Ian Copland is deceased. This paper is dedicated to his memory.

Naskou, M.C., Sumner, S.M., Chocallo, A., et al. Platelet lysate as a novel serum-free media supplement for the culture of equine bone marrow-derived mesenchymal stem cells. Stem Cell Res Ther 9, 75 (2018). https://doi.org/10.1186/s13287-018-0823-3

Equine platelet apheresis
Photoisomerization of Azobenzene

Rotation around the double bond of azobenzene is restricted because it would distort the p orbital overlap between the nitrogen atoms. However, in the $n \rightarrow \pi^*$ excited state ($S_1$), the rotation is no longer hindered and it can isomerize. I don't understand why this is the case. You are taking an electron from the non-bonding orbital and moving it into the $\pi$-antibonding orbital. However, the $\pi$-bonding is still present, so I see no reason why the rotation would be unrestrained. Is it something about the interaction between the electrons in the $\pi^*$ with those in the $\pi$ that weakens the double bond? And if so, how?

organic-chemistry orbitals photochemistry

Answer (Klaus-Dieter Warzecha):

Azobenzenes are one of the most widely used moieties in photochemically switchable systems. Nevertheless, the pathways for the E-Z isomerisation of azobenzene in the $S_1$ and the $S_2$ state have been in debate for decades and I'm not quite sure whether this is finally solved.

However, in the $n \rightarrow \pi^*$ excited state ($S_1$), the rotation is no longer hindered and it can isomerize.

It is true that the $S_1$ state arises from a $n \rightarrow \pi^*$ transition in a weak, broad band with $\lambda_{\mathrm{max}} = 432\,\mathrm{nm}$ and $\epsilon \approx 400\,\mathrm{L\,mol^{-1}\,cm^{-1}}$ in hexane. By the way, E-azobenzene exhibits another, much stronger absorption at $\lambda_{\mathrm{max}} = 319\,\mathrm{nm}$ with $\epsilon \approx 22800\,\mathrm{L\,mol^{-1}\,cm^{-1}}$ in the same solvent. This $\pi\rightarrow\pi^*$ transition to the $S_2$ state would indeed allow for an isomerisation through a one-bond-flip rotational mechanism. (For the UV data, see 1 and 2).

But - you saw that coming after this lengthy introduction - the situation is much more complicated! You have neglected the fact that E-Z isomerisations can proceed by other mechanisms than just the simple one-bond-flip that is usually postulated for alkenes!

Forgive me for not getting into all the details on this, but for polyenes, to give an example, the so-called Bicycle Pedal (BP) and Hula Twist (HT) pathways have been proposed and confirmed as space-conserving alternatives to the one-bond-flip mechanism. The latter can be visualized as "let's just flip the other side of the former double bond around like a rotor".

Back to the azobenzenes. Even from more recent works (2, 3, 4, 5), it seems that the E-Z isomerisation is still not fully explained, although a plain rotation around the former double bond can be excluded. The two main factions in the debate support mechanisms in which a contribution of inversion through a change of the $\ce{C-N-N}$ angle in the excited state is assumed. Whether this is the only effect and the reaction proceeds through a transition state with a linear $\ce{C-N-N}$ arrangement, or whether an inversion-assisted torsional pathway provides the better explanation, is still unclear.

1 George Zimmerman, Lue-Yung Chow, and Un-Jin Paik, J. Am. Chem. Soc., 1958, 80, 3528-3531.
2 Alessandro Cembran, Fernando Bernardi, Marco Garavelli, Laura Gagliardi, and Giorgio Orlandi, J. Am. Chem. Soc., 2004, 126, 3234-3243.
3 Eric Wei-Guang Diau, J. Phys. Chem. A, 2004, 108, 950-956.
4 Eric M. M.
Tan, Saeed Amirjalayer, Szymon Smolarek, Alexander Vdovin, Francesco Zerbetto, and Wybren Jan Buma, Nature Communications, 2015, 6, Article number: 5860.
5 Junfeng Shao, Yibo Lei, Zhenyi Wen, Yusheng Dou, and Zhisong Wang, J. Chem. Phys., 2008, 129, 164111.

Comments:

Aha! Thank you very much Klaus, this is very insightful. I will look into the papers and get further insight. I am curious to learn about the BP and HT pathways, and why they would occur in the S1 but not in the S0 state. – Max, May 20 '15 at 7:14

@Max BP and HT modes are usually discussed in the context of polyenes; think of double-bond homologues of stilbenes. If you're interested in these, look for publications of Robert Liu, Jack Saltiel and Werner Fuß. As far as azobenzene is concerned, have a look at references 2-4 and dig your way down from there. Or, even better, just use the azobenzene as a building block in synthesis and don't care for the gory details ;-) – Klaus-Dieter Warzecha, May 20 '15 at 7:23

Thanks again for the advice! I'm looking into this for a molecular simulations class project, and the gory details are the most important part :) – Max, May 20 '15 at 7:29

Unless it is already among the references in the Nature article, dx.doi.org/10.1063/1.3000008 might be interesting too. – Klaus-Dieter Warzecha, May 20 '15 at 7:37

I would like to encourage you to give the citations in addition to the DOI; I had to click all the links just to find out that I had already read two of the papers. I also think proper acknowledgement, at least a name, is a treat in the sciences. – Martin - マーチン♦, May 21 '15 at 4:10
The Daily Brew
Selected astro-ph abstracts for Friday 2017 November 10

The MUSE Hubble Ultra Deep Field Survey VI: The Faint-End of the Lya Luminosity Function at 2.91 < z < 6.64 and Implications for Reionisation
A. B. Drake (1), T. Garel (1), L. Wisotzki (2), F. Leclercq (1), T. Hashimoto (1), J. Richard (1), R. Bacon (1), J. Blaizot (1), J. Caruana (3, 4), S. Conseil (1), T. Contini (5), B. Guiderdoni (1), E. C. Herenz (2, 9), H. Inami (1), J. Lewis (1), G. Mahler (1), R. A. Marino (7), R. Pello (5), J. Schaye (6), A. Verhamme (8, 1), E. Ventou (5), P. M. Weilbacher (2) ((1) Univ Lyon, Univ Lyon1, Ens de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon, Saint-Genis-Laval, France, (2) Leibniz-Institut fur Astrophysik Potsdam (AIP), Potsdam, Germany, (3) Department of Physics, University of Malta, Malta, (4) Institute of Space Sciences and Astronomy, University of Malta, Malta, (5) Institut de Recherche en Astrophysique et Planetologie (IRAP), Universite de Toulouse, CNRS, Toulouse, France, (6) Leiden Observatory, Leiden, The Netherlands, (7) Institute for Astronomy, ETH Zurich, Zurich, Switzerland, (8) Observatoire de Geneve, Universite de Geneve, Switzerland, (9) Stockholms universitet, Stockholm, Sweden)
[ arXiv:1711.03095v1 | PDF File ]

Deriving the contribution of blazars to the Fermi-LAT Extragalactic $\gamma$-ray background at $E > 10$ GeV with efficiency corrections and photon statistics
Mattia Di Mauro, Silvia Manconi, Hannes-S. Zechlin, Marco Ajello, Eric Charles, Fiorenza Donato

The Sloan Digital Sky Survey Reverberation Mapping Project: H$\alpha$ and H$\beta$ Reverberation Measurements From First-year Spectroscopy and Photometry
C. J. Grier, J. R. Trump, Yue Shen, Keith Horne, Karen Kinemuchi, Ian D. McGreer, D. A. Starkey, W. N. Brandt, P. B. Hall, C. S. Kochanek, Yuguang Chen, K. D. Denney, Jenny E. Greene, L. C. Ho, Y. Homayouni, Jennifer I-Hsiu Li, Liuyi Pei, B. M. Peterson, P. Petitjean, D. P. Schneider, Mouyuan Sun, Yusura AlSayyad, Dmitry Bizyaev, Jonathan Brinkmann, Joel R. Brownstein, Kevin Bundy, K. S. Dawson, Sarah Eftekharzadeh, J. G. Fernandez-Trincado, Yang Gao, Timothy A. Hutchinson, Siyao Jia, Linhua Jiang, Daniel Oravetz, Kaike Pan, Isabelle Paris, Kara A. Ponder, Christina Peters, Jesse Rogerson, Audrey Simmons, Robyn Smith, Ran Wang

SDSS-V: Pioneering Panoptic Spectroscopy
Juna A. Kollmeier (OCIS), Gail Zasowski (Utah), Hans-Walter Rix (MPIA), Matt Johns (UA), Scott F. Anderson (UW), Niv Drory (UT), Jennifer A. Johnson (OSU), Richard W. Pogge (OSU), Jonathan C. Bird (Vanderbilt), Guillermo A. Blanc (OCIS), Joel R. Brownstein (Utah), Jeffrey D. Crane (OCIS), Nathan M. De Lee (NKU/Vanderbilt), Mark A. Klaene (APO), Kathryn Kreckel (MPIA), Nick MacDonald (UCSC), Andrea Merloni (MPE), Melissa K. Ness (MPIA), Thomas O'Brien (OSU), Jose R. Sanchez-Gallego (UW), Conor C. Sayres (UW), Yue Shen (UIUC), Ani R. Thakar (JHU), Andrew Tkachenko (KU Leuven), Conny Aerts (KU Leuven), Michael R. Blanton (NYU), Daniel J. Eisenstein (Harvard), Jon A. Holtzman (NMSU), Dan Maoz (TAU), Kirpal Nandra (MPE), Constance Rockosi (UCSC), David H. Weinberg (OSU), Jo Bovy (Toronto), et al. (17 additional authors not shown)

Prospects for unseen planets beyond Neptune
Renu Malhotra
Climate Science without Climate Models

In June 2012, more than 3,000 daily maximum temperature records were broken or tied in the United States, according to the National Climatic Data Center (NCDC) of the U.S. National Oceanic and Atmospheric Administration (NOAA). Meteorologists commented at that time that this number was very unusual. By comparison, in June 2013, only about 1,200 such records were broken or tied. Was that number "normal"? Was it perhaps lower than expected? Was June 2012 (especially the last week of that month) perhaps just an especially warm time period, something that should be expected to happen every now and then? Also in June 2013, about 200 daily minimum temperature records were broken or tied in the United States. Shouldn't that number be comparable to the number of record daily highs, if everything was "normal"?

Surprisingly, it is possible to make fairly precise mathematical statements about such temperature extremes (or for that matter, about many other record-setting events) simply by reasoning, almost without any models. Well, not quite. The mathematical framework is that individual numerical observations are random variables. One then has to make a few assumptions. The two main assumptions are that (1) the circumstances under which observations are made do not change, and (2) observations are stochastically independent, that is, knowledge of some observations does not convey any information about any of the other observations. Let's work with these assumptions for the moment and see what can be said about records.

Suppose N numerical observations of a certain phenomenon have already been made and a new observation is added. What is the probability that this new observation exceeds all the previous ones? Think about it this way: Each of these N+1 observations has a rank, 1 for the largest value, and N+1 for the smallest value. (For the time being, let's assume that there are no ties.) Thus any particular sequence of N+1 observations defines a sequence of ranks, that is, a permutation of the numbers from 1 to N+1. Since observations are independent and have the same probability distribution (that's what the two assumptions from above imply), all possible (N+1)! permutations are equally likely. A new record is observed during the last observation if its rank equals 1. There are N! permutations that have this additional property. Therefore, the probability that the last observation is a new record is N!/(N+1)! = 1/(N+1).

This reasoning makes it possible to compute the expected number of record daily high temperatures for a given set of weather stations. For example, there are currently about 5,700 weather stations in the United States at which daily high temperatures are observed. In 1963, there were about 3,000 such stations and in 1923 only about 220. Assuming for simplicity that each of the current stations has been recording daily temperatures for 50 years, one would expect a new daily high record to be set at about 2% of all stations on a typical day, resulting in about 3,000 new daily high records per month on average – if the circumstances of temperature measurements remain the same and if the observations at any particular station are independent of each other.
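These back-of-the-envelope numbers are easy to check. Here is a minimal sketch in Python of the calculation just described; the station count and record length are the round figures quoted above.

```python
# Expected number of new daily-high records, assuming independent and
# identically distributed observations at every station (the two working
# assumptions of this post). With N = 50 prior years of data, the chance
# that today's maximum is a new record at one station is 1/(N + 1).
N_YEARS = 50
P_NEW_RECORD = 1 / (N_YEARS + 1)      # about 0.02, i.e. 2%
N_STATIONS = 5_700
DAYS_PER_MONTH = 30

expected_per_month = N_STATIONS * DAYS_PER_MONTH * P_NEW_RECORD
print(f"P(new record at one station on a given day) = {P_NEW_RECORD:.3f}")
print(f"Expected new record highs per month = {expected_per_month:.0f}")  # about 3,350
```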
It is fairly clear that temperature records for the same date are indeed independent of one another for the same station: Knowing the maximum temperature at a particular location on August 27, 2013 does not give one any information about the maximum temperature on the same day a year later. However, circumstances of observations could indeed change for many different reasons. What if new equipment is used to record temperatures? What if the location of the station is changed? For example, until 1945, daily temperatures in Washington, DC, were recorded at a downtown location (24th and M St.). Since then, measurements have been made at National Airport. National Airport is adjacent to a river, which lowered daily temperature measurements compared to downtown. The area around the airport has however become more urban over the last decades, possibly leading to higher temperature readings (the well-known urban heat island effect). And what about climate change?

Perhaps it is better to use a single climate record and not thousands. Consider for example the global mean temperature record that is shown in the blog post of August 20. It shows that the largest global mean temperature for the 50 years from 1950 to 1999 (recorded in 1998) was exceeded twice in the 11 years from 2000 to 2010. The second-highest global mean temperature for these 50 years (that of 1997) was exceeded in 10 out of 11 years between 2000 and 2010. Can this be a coincidence?

There is a mathematical theory to study such questions. Given a reference value equal to the $m$th largest out of $N$ observations, any observation out of $n$ additional ones that exceeds this reference value is called an "exceedance". For example, we might be interested in the probability of observing two exceedances of the largest value out of 50 during 12 additional observations. A combinatorial argument implies that the probability of seeing $k$ exceedances of the $m$th largest observation out of $N$ when $n$ additional observations are made equals
\[ \frac{C(m+k-1,\,m-1)\; C(N-m+n-k,\,N-m)}{C(N+n,\,N)} , \]
where $C(r,s)$ is the usual binomial coefficient. (As a sanity check, setting $m = 1$, $n = 1$, and $k = 1$ recovers the record probability $1/(N+1)$ derived above.) The crucial assumption is again that observations are independent and come from the same probability distribution.

Applied to the global mean temperature record, the formula implies that the probability of two or more exceedances of a 50-year record during an 11-year period is no more than 3%. The probability of 10 exceedances of the second-highest observation from 50 years during an 11-year period is tiny – of the order of 0.0000001%. Yet these exceedances were actually observed during the last decade. Clearly, at least one of the assumptions of stochastic independence and of identical distribution must be violated.

The plot of August 20 already shows that distributions may vary from year to year, due to El Niño/La Niña effects. La Niña years in particular tend to be cooler when averaged over the entire planet. The assumption of stochastic independence is also questionable, since global weather patterns can persist over months and therefore influence more than one year. Could it be that more exceedances than plausible were observed because global mean temperatures became generally more variable during the past decades? In that case, low exceedances of the minimum temperature would also have been observed more often than predicted by the formula. That's clearly not the case, so that particular effect is unlikely to be solely responsible for what has been observed.
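For readers who want to verify the probabilities quoted above, here is a short sketch in Python. It simply evaluates the combinatorial formula; no climate data or model is involved.

```python
# Exceedance probabilities for the m-th largest of N past observations
# during n additional observations, assuming independent, identically
# distributed observations (the formula given above).
from math import comb

def p_exceedances(k, m, N, n):
    """P(exactly k of n new observations exceed the m-th largest of N)."""
    return comb(m + k - 1, m - 1) * comb(N - m + n - k, N - m) / comb(N + n, N)

# Two or more exceedances of a 50-year maximum (m = 1) in 11 further years:
p = 1 - p_exceedances(0, 1, 50, 11) - p_exceedances(1, 1, 50, 11)
print(f"P(two or more exceedances of the maximum) = {p:.3f}")  # 0.030, i.e. 3%

# Ten exceedances of the second-highest value (m = 2) in 11 years:
print(f"P(ten exceedances of the 2nd highest) = {p_exceedances(10, 2, 50, 11):.1e}")
# about 1.3e-09, i.e. of the order of 0.0000001%
```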
We see that even this fairly simple climate record leads to serious questions and even partial answers about possible climate change, without any particular climate model.

Part of this contribution is adapted from the forthcoming book Mathematics and Climate by Hans Kaper and Hans Engler; SIAM, Philadelphia, Pennsylvania, USA (OT131, October 2013).
Sequential use of bronchial aspirates, biopsies and washings in the preoperative management of lung cancers
Piaton Eric, Djelid Djamal, Duvert Bernard, Perrichon Marielle. CytoJournal, 2007. DOI: 10.1186/1742-6413-4-11
Abstract: Background: The combination of cytology and biopsies improves the recognition and typing of small cell (SCLC) versus non small cell (NSCLC) lung cancers in the fiberoptic bronchoscopy assessment of centrally located tumours. Methods: We studied whether bronchial aspirates performed before biopsies (BA) and washings performed after biopsies (BW) could increase the diagnostic yield of fiberoptic bronchoscopy. A series of 334 consecutive samples taken in patients with suspicious fiberoptic bronchoscopy findings was studied. Two hundred primary tumours were included in the study. The actual diagnosis was based on surgical tissue specimen analysis and/or imaging techniques. The typing used was that of the 1999 WHO/IASLC classification. Results: The diagnosis of malignancy and tumour typing were analyzed according to the sequential (combined) or single use of tests. Malignancy was assessed by cytology in 144/164 (87.8%) positive biopsy cases and in 174/200 tumour cases (87.0%). BA before biopsies allowed 84.0% of cancers to be diagnosed, whereas BW after biopsies allowed 79.0% of cancers to be found (p = ns). However, combining biopsies with BW allowed 94.0% of cancers to be diagnosed, whereas 82.0% were diagnosed by biopsies alone (p < 0.001). The highest diagnostic yield was obtained with the combination of BA, biopsies and BW, with 97.0% sensitivity. Exact concordance in typing was obtained in 83.8% of cases. The six surgically resected cases (3.0%) with negative cytology and biopsy results included four squamous cell carcinomas with necrotizing or fibrous surfaces and two adenocarcinomas, pT1 stage. Conclusion: Fiberoptic bronchoscopy may reach a yield of close to 100% in the diagnosis and typing of centrally located, primary lung cancers by combining bronchial aspirates, biopsies and washings.

Allergic bronchopulmonary aspergillosis among patients with bronchial asthma
Al-Najada M, Al-Nadi K, Sharara AM. Egyptian Journal of Hospital Medicine, 2010.
Abstract: To determine the different presentations encountered upon diagnosis of ABPA among patients with bronchial asthma and the two-year follow-up results. Patients and methods: All patients with bronchial asthma and ABPA were included in the study. A specially formulated sheet was used, recording age, gender, duration of bronchial asthma, and new clinical, radiological, and laboratory findings suggestive of ABPA, together with two years of follow-up. Diagnosis of ABPA was based on the Rosenberg-Patterson criteria. Results: Fifteen patients with ABPA (3.9%) out of 385 patients with bronchial asthma were included in our study (5 males and 10 females); their mean age was 28.8 years, the mean duration of asthma was 8.9 years, and they represented all stages of asthma severity. Fleeting shadows, mainly in the upper lobes, were the most common radiological findings, observed in nine patients (60%); five patients (33.3%) had proximal bronchiectasis detected by high-resolution chest CT scan, and one of our patients had collapsed consolidation. All patients had moderate to severe eosinophilia and a positive immediate skin test for Aspergillus.
Conclusion: As ABPA is not uncommon among patients with bronchial asthma, regardless of the severity and level of control of the asthma, a high index of suspicion for ABPA should be maintained when following up any patient with bronchial asthma.

Increased staining for phospho-Akt, p65/RELA and cIAP-2 in pre-neoplastic human bronchial biopsies. Jay W Tichelaar, Yu Zhang, Jean C leRiche, Paul W Biddinger, Stephen Lam, Marshall W Anderson. BMC Cancer, 2005, DOI: 10.1186/1471-2407-5-155. Abstract: We used immunohistochemistry to determine the presence, relative levels, and localization of proteins that mediate anti-apoptotic pathways in developing human bronchial neoplasia. Bronchial epithelial protein levels of the phosphorylated (active) form of AKT kinase and the caspase inhibitor cIAP-2 were increased in more advanced grades of bronchial IEN lesions than in normal bronchial epithelium. Additionally, the percentage of biopsies with nuclear localization of p65/RELA in epithelial cells increased with advancing pathology grade, suggesting that NF-κB transcriptional activity was induced more frequently in advanced IEN lesions. Our results indicate that anti-apoptotic pathways are elevated in bronchial IEN lesions prior to the onset of invasive carcinoma and that targeting these pathways therapeutically may offer promise in prevention of non-small cell lung carcinoma. Lung cancer is the leading cause of cancer mortality in both men and women in the United States [1]. Non-small cell lung carcinomas arise from the respiratory epithelium and progress through well-defined pathological stages prior to becoming invasive and metastatic tumors. While many studies have identified lung tumor markers of clinical or prognostic significance, survival rates for this deadly disease have remained essentially unchanged for the past 30 years. The slow advance in treating lung cancer is due in part to continued gaps in our understanding of the molecular mechanisms of lung tumorigenesis. Thus, studies that aid in our understanding of molecular mechanisms of lung tumorigenesis are important steps towards developing better detection, prevention and treatment of this disease. Evasion of apoptosis by tumor cells is a critical step during tumorigenesis. The serine/threonine kinase Akt is a critical mediator of anti-apoptotic signaling in eukaryotic cells and is activated in a signaling cascade downstream of Ras activation and phosphoinositide-3-kinase (PI3K) [2]. Amplification of PI3K is c…

BRONCHOPULMONARY COMPLICATIONS OF INDOOR POLLUTION IN IRANIAN RUSTIC POPULATION. Kazem Amoli. Acta Medica Iranica, 1994. Abstract: Chronic bronchopulmonary disorders occurred in a large number of rustic females who used to bake bread at their dwellings under unhygienic conditions. Bronchoscopy revealed advanced pathological changes, with characteristic black areas infiltrating the bronchial walls. Findings in ten patients who were referred with acute-on-chronic respiratory symptoms and a positive history of indoor pollution are described, with an emphasis on their bronchoscopic changes.

Allergic bronchopulmonary aspergillosis. Oak J, Yavgal D, Chakore R. Journal of Postgraduate Medicine, 1996. Abstract: A 38-year-old male was diagnosed to have allergic bronchopulmonary aspergillosis, which responded remarkably to prednisolone therapy.
High attenuation mucoid impaction in allergic bronchopulmonary aspergillosis. World Journal of Radiology, 2010. Abstract: Allergic bronchopulmonary aspergillosis (ABPA) is a complex hypersensitivity syndrome triggered against antigens of Aspergillus fumigatus, a fungus that most commonly colonizes the airways of patients with bronchial asthma and cystic fibrosis. It presents clinically with refractory asthma, hemoptysis and systemic manifestations including fever, malaise and weight loss. Radiologically, it presents with central bronchiectasis and recurrent episodes of mucus plugging. The mucus plugs in ABPA are generally hypodense, but in up to 20% of patients the mucus can be hyperdense on computed tomography. This paper reviews the literature on the clinical significance of hyperattenuated mucus in patients with ABPA.

Immunopathology and Immunogenetics of Allergic Bronchopulmonary Aspergillosis. Alan P. Knutsen. Journal of Allergy, 2011, DOI: 10.1155/2011/785983. Abstract: Allergic bronchopulmonary aspergillosis (ABPA) is a Th2 hypersensitivity lung disease in response to Aspergillus fumigatus that affects asthmatic and cystic fibrosis (CF) patients. Sensitization to A. fumigatus is common in both atopic asthmatic and CF patients, yet only 1%–2% of asthmatic and 7%–9% of CF patients develop ABPA. ABPA is characterized by wheezing and pulmonary infiltrates which may lead to pulmonary fibrosis and/or bronchiectasis. The inflammatory response is characterized by Th2 responses to Aspergillus allergens, increased serum IgE, and eosinophilia. A number of genetic risks have recently been identified in the development of ABPA. These include HLA-DR and HLA-DQ, IL-4 receptor alpha chain (IL-4RA) polymorphisms, IL-10 -1082GA promoter polymorphisms, surfactant protein A2 (SP-A2) polymorphisms, and cystic fibrosis transmembrane conductance regulator gene (CFTR) mutations. The studies indicate that ABPA patients are genetically at risk of developing skewed and heightened Th2 responses to A. fumigatus antigens. These genetic risk studies and their consequences of elevated biologic markers may aid in identifying asthmatic and CF patients who are at risk of developing ABPA. Furthermore, these studies suggest that immune modulation with medications such as anti-IgE, anti-IL-4, and/or IL-13 monoclonal antibodies may be helpful in the treatment of ABPA. 1. Introduction: Allergic bronchopulmonary aspergillosis (ABPA) is a hypersensitivity lung disease due to bronchial colonization by Aspergillus fumigatus that occurs in susceptible patients with asthma and cystic fibrosis (CF). The first published description of ABPA as an entity came from the United Kingdom in 1952 [1], while the first cases in the United States were reported a decade later [2, 3]. ABPA affects approximately 1%–2% of asthmatic patients and 7%–9% of CF patients [4–6]. If unrecognized or poorly treated, ABPA leads to airway destruction, bronchiectasis, and/or pulmonary fibrosis, resulting in significant morbidity and mortality. The diagnosis of ABPA is based on clinical and immunologic reactivity to A. fumigatus. The minimal criteria required for the diagnosis of ABPA are: (1) asthma or cystic fibrosis with deterioration of lung function, for example, wheezing, (2) immediate Aspergillus skin test reactivity, (3) total serum IgE ≥ 1000 IU/mL, (4) elevated Aspergillus-specific IgE and IgG antibodies, and (5) chest radiographic infiltrates.
Additional criteria may include peripheral blood eosinophilia, Aspergillus serum precipitating antibodies, central bronchiectasis, and …

Game Brush Number. William B. Kinnersley, Pawel Pralat. Mathematics, 2014. Abstract: We study a two-person game based on the well-studied brushing process on graphs. Players Min and Max alternately place brushes on the vertices of a graph. When a vertex accumulates at least as many brushes as its degree, it sends one brush to each neighbor and is removed from the graph; this may in turn induce the removal of other vertices. The game ends once all vertices have been removed. Min seeks to minimize the number of brushes played during the game, while Max seeks to maximize it. When both players play optimally, the length of the game is the game brush number of the graph $G$, denoted $b_g(G)$. By considering strategies for both players and modelling the evolution of the game with differential equations, we provide an asymptotic value for the game brush number of the complete graph; namely, we show that $b_g(K_n) = (1+o(1))n^2/e$. Using a fractional version of the game, we couple the game brush numbers of complete graphs and the binomial random graph $\mathcal{G}(n,p)$. It is shown that for $pn \gg \ln n$, asymptotically almost surely $b_g(\mathcal{G}(n,p)) = (1 + o(1))p\, b_g(K_n) = (1 + o(1))pn^2/e$. Finally, we study the relationship between the game brush number and the (original) brush number.

Allergic Bronchopulmonary Aspergillosis in Asthma and Cystic Fibrosis. Alan P. Knutsen, Raymond G. Slavin. Journal of Immunology Research, 2011, DOI: 10.1155/2011/843763. Abstract: Allergic bronchopulmonary aspergillosis (ABPA) is a Th2 hypersensitivity lung disease in response to Aspergillus fumigatus that affects asthmatic and cystic fibrosis (CF) patients. Sensitization to A. fumigatus is common in both atopic asthmatic and CF patients, yet only 1-2% of asthmatic and 7–9% of CF patients develop ABPA. ABPA is characterized by wheezing and pulmonary infiltrates which may lead to pulmonary fibrosis and/or bronchiectasis. The inflammatory response is characterized by Th2 responses to Aspergillus allergens, increased serum IgE and eosinophilia. A number of genetic risks have recently been identified in the development of ABPA. These include HLA-DR and HLA-DQ, IL-4 receptor alpha chain (IL-4RA) polymorphisms, IL-10 -1082GA promoter polymorphisms, surfactant protein A2 (SP-A2) polymorphisms, and cystic fibrosis transmembrane conductance regulator gene (CFTR) mutations. The studies indicate that ABPA patients are genetically at risk of developing skewed and heightened Th2 responses to A. fumigatus antigens. These genetic risk studies and their consequences of elevated biologic markers may aid in identifying asthmatic and CF patients who are at risk of developing ABPA. Furthermore, these studies suggest that immune modulation with medications such as anti-IgE, anti-IL-4 and/or IL-13 monoclonal antibodies may be helpful in the treatment of ABPA. 1. Introduction: Allergic bronchopulmonary aspergillosis (ABPA) is a hypersensitivity lung disease due to bronchial colonization by Aspergillus fumigatus that occurs in susceptible patients with asthma and cystic fibrosis (CF). The first published description of ABPA as an entity came from the United Kingdom in 1952 [1], while the first cases in the United States were reported a decade later [2, 3]. ABPA affects approximately 1-2% of asthmatic patients and 7–9% of CF patients [4–6].
If unrecognized or poorly treated, ABPA leads to airway destruction, bronchiectasis, and/or pulmonary fibrosis, resulting in significant morbidity and mortality. 2. Biology of Aspergillus fumigatus: A number of fungi may lead to allergic bronchopulmonary mycoses (ABPM), but the genus Aspergillus contains the predominant organisms causing these pulmonary disorders; Aspergillus fumigatus is the species most commonly associated with ABPM. It is a ubiquitous, saprophytic mold found in both outdoor and indoor air, in potting soil, crawl spaces, compost piles, mulches, freshly cut grass, decaying vegetation, and sewage treatment facilities [7, 8]. A. fumigatus is found worldwide including the United States, where it is …

Congenital bronchial atresia: a case report with radiographic and pathologic correlation. Wesselius LJ, Muhm JR, Tazelaar HD. Southwest Journal of Pulmonary and Critical Care, 2011. Abstract: Bronchial atresia is a rare congenital disorder characterized by localized atresia or stenosis of a segmental bronchus. Imaging features typically include mucus impaction in distal airways associated with regional lung hyperlucency. Pathologic features of bronchial atresia have rarely been reported. This case demonstrates CT features of this disorder as well as the unusual finding of increased lung uptake of 18F-fluorodeoxyglucose on PET scan. This finding led to a surgical lung biopsy to exclude infectious or neoplastic disorders. This case provides radiologic-pathologic correlation in a patient with congenital bronchial atresia and demonstrates that localized, mildly increased uptake on PET scan can be associated with bronchial atresia.
CommonCrawl
Find the Laurent expansion of $(1-z)e^{1/z}$ - When can we use Taylor series to find Laurent series?

I'm currently taking a course in mathematical tools, where we are covering complex analysis (note that this course is not very rigorous and we cover complex analysis in only 4 lectures). The Laurent series has been introduced as: $$ \begin{array}{l} f(z)=\sum_{n=-\infty}^{\infty} a_{n}\left(z-z_{0}\right)^{n} \\ a_{n}=\frac{1}{2 \pi i} \oint_{C} \frac{f\left(z^{\prime}\right) d z^{\prime}}{\left(z^{\prime}-z_{0}\right)^{n+1}} \end{array} $$ However, I noticed that the above integral isn't used in the solutions to any of the problems regarding Laurent series, so I'm trying to understand why we can avoid using it. What I understood so far is that the difference between the Taylor and Laurent series is that the Laurent series also contains negative powers. Where it is not possible to expand a function as a Taylor series around a point $z_0$ at which f is not analytic, the opposite applies for the Laurent series. In the case where we expand f around a point where f is analytic, the Laurent series and Taylor expansion will be the same. If that is the correct understanding, please help me understand the solution to the following problem: Find the Laurent series for $f(z)=(1-z)e^{1/z}$ about $z=0$. What I thought should be my approach is using the formula above and solving the contour integral, since $f(z)$ is non-analytic at $z=0$. However, the solution uses directly that $e^{1/z}=\sum_{n=0}^{\infty} \frac{z^{-n}}{n !}$. So here comes my first question: why is the Laurent series of $e^{1/z}$ equal to $\sum_{n=0}^{\infty} \frac{z^{-n}}{n !}$? It seems to me that they have just substituted $z \rightarrow 1/z$ in the Taylor series for $e^z$. But $e^{1/z}$ is not analytic at $z=0$, so the Taylor expansion around that point shouldn't exist? And if that is not the Taylor series, how do we know it's the Laurent series? After this they multiply the two expressions together: $(z-1) e^{1 / z}=(z-1) \sum_{n=0}^{\infty} \frac{z^{-n}}{n !}=z-\sum_{n=1}^{\infty}\left(\frac{n}{n+1}\right) \frac{z^{-n}}{n !}$. Is that because $z-1$ is a polynomial, and thus is analytic around $z=0$ and is its own Taylor series and Laurent series? So we can just find the Laurent series of each factor and multiply them together to find the final Laurent series? E.g.: consider $h(z)=f(z)g(z)$, where we want to find the Laurent series around $z=z_0$. If the Laurent series of $f(z)$ around $z_0$ is $\sum_{n=-\infty}^{\infty} a_{n}\left(z-z_{0}\right)^{n}$ and that of $g(z)$ is $\sum_{n=-\infty}^{\infty} b_{n}\left(z-z_{0}\right)^{n}$, is the Laurent series of $h(z)$ then $h(z)=\left(\sum_{n=-\infty}^{\infty} a_{n}\left(z-z_{0}\right)^{n}\right) \left(\sum_{n=-\infty}^{\infty} b_{n}\left(z-z_{0}\right)^{n}\right)$?

complex-analysis taylor-expansion laurent-series – sjm23

But $e^{1/z}$ is not analytic at $z=0$, so the Taylor expansion around that point shouldn't exist? No. Not being analytic at a point $z_0$ does not necessarily mean that a Taylor expansion around $z_0$ does not exist. Consider the example $\frac{\sin z}{z}$ at $0$: the singularity is removable. (Though $z=0$ is not a removable singularity of $e^{1/z}$.) On the other hand, you are looking for the Laurent series of $e^{1/z}$, not the "Taylor series". All you do in this case is to look at the Taylor series (or definition, depending on how you define the exponential function) of $e^w$ at $w=0$: $$ e^w=\sum_{n=0}^\infty \frac{w^n}{n!}\tag{1} $$ This expansion is valid for any $w\in\mathbb{C}$.
So you can do the substitution $w=\frac{1}{z}$ for any $z\ne 0$. How do you know this is the Laurent series in your definition? The substitution above gives you the coefficients $a_n$. You can verify that $$ a_n=\frac{1}{2\pi i}\oint_C\frac{f(z)}{z^{n+1}}dz $$ with $f(z)=e^{1/z}$. Or, you can simply use the uniqueness of the coefficients. Think about what finding the Laurent series of $g(z):=(1-z)e^{1/z}$ about $z=0$ really means. What you are really looking for is a double-sided series $$ \sum_{n=-\infty}^\infty a_nz^n\tag{2} $$ such that at every $z$ with $0<|z|<R$ (for some $R$), (2) is equal to $g(z)$. On the one hand, you can use your integral definition to find the coefficients. On the other hand, you can directly find the coefficients so that $$ g(z)=\sum_{n=-\infty}^\infty a_nz^n $$ at every $z$ with $0<|z|<R$. Since the coefficients of the Laurent series are unique, you would have the same answer no matter which way you take.

Since, for each $z\in\Bbb C\setminus\{0\}$, $$e^{1/z}=1+\frac1z+\frac1{2!z^2}+\frac1{3!z^3}+\cdots,$$ you have (again, for each $z\in\Bbb C\setminus\{0\}$) \begin{align}(1-z)e^{1/z}&=-z+(1-1)+\left(1-\frac1{2!}\right)\frac1z+\left(\frac1{2!}-\frac1{3!}\right)\frac1{z^2}+\left(\frac1{3!}-\frac1{4!}\right)\frac1{z^3}+\cdots\\&=-z+\sum_{n=0}^\infty\left(\frac1{n!}-\frac1{(n+1)!}\right)\frac1{z^n}.\end{align} Concerning your final question: $\displaystyle\left(\sum_{n=-\infty}^\infty a_n\left(z-z_{0}\right)^{n}\right)\left(\sum_{n=-\infty}^\infty b_n\left(z-z_{0}\right)^{n}\right)$ is not a Laurent series. – José Carlos Santos

Comment (sjm23): But how do I know that that is the Laurent series we have arrived at? It makes sense to me that $e^{1/z}=\sum_{n=0}^{\infty} \frac{z^{-n}}{n!}$ for $z \in \mathbb{C} \setminus \{0\}$, since we are just substituting into an already known equality. But is $e^{1/z}=\sum_{n=0}^{\infty} \frac{z^{-n}}{n!}$ a Taylor expansion of $e^{1/z}$ around $z=0$? That shouldn't exist. (Isn't it rather an expansion around $z=\infty$, considering the substitution, if that even makes sense?) And why would that be a Laurent series?

Comment (José Carlos Santos): If $z_0$ is an isolated singularity of an analytic function, then there is one and only one Laurent series which converges to $f$ on a disk $D_r(z_0)$, which is the Laurent series of $f$. So, the way by which the Laurent series is obtained is not relevant, as long as you get some Laurent series converging to $f(z)$. Just like you deduce from the equality $$|z|<1\implies\frac1{1-z}=\sum_{n=0}^\infty z^n$$ that the power series $\sum_{n=0}^\infty z^n$ is the Taylor series of $\frac1{1-z}$ in the neighborhood of $0$.

Comment (sjm23): Concerning the second question: if that's not the case, what is it we are exploiting? How do we know that one function multiplied with a series gives the Laurent series? Why is the function $1-z$ multiplied with the series of $e^{1/z}$ necessarily a Laurent series?

Comment (José Carlos Santos): I think that I have already provided an answer to that question.

Comment (sjm23): Okay, so the Laurent series is unique in the sense that if I can express the function in the form of a Laurent series (the equality holding using known expressions), then I know that has to be its Laurent series?

Let us first briefly try to understand the differences between the Laurent and the Taylor series.
You have correctly observed that the Taylor series is only defined in a region in which the function is analytic. To be a little bit more specific: if $f$ is analytic at $z_0$, its Taylor series converges uniformly in the biggest circle centered around $z_0$ in which $f$ is analytic, i.e. the radius of the circle is the distance to the closest singularity. The Laurent series is, however, defined around points where $f$ need not be analytic. In fact, we are interested in Laurent series precisely because they have this property. So suppose $f$ has an isolated singularity at $z_0$; then we can expand $f$ in a Laurent series which converges uniformly on a punctured circle around $z_0$ or an annulus. As you have observed, the most apparent difference between the two series is that the Laurent series contains negative powers. This really means that it is the sum of two series, one converging in a region $|z-z_0| < R$ and another converging in a region $|z-z_0| > r$. The sum will converge in the annulus $r < |z-z_0| < R$. Intuitively, the negative powers "make up for" the behaviour of $f$ close to the singularity. However, if $f$ is actually analytic at $z_0$, the Laurent series will equal the Taylor series. You can easily see this by studying your expression for $a_n$: if $f$ is analytic at $z_0$, this equals $0$ for all $n<0$. Now, to the computational aspect of your question. We hardly ever use the integral formula for $a_n$, simply because it is tedious and unnecessary. It is used in proofs, but rarely to find Laurent expansions. It is a legitimate question why you can simply substitute $1/z$ for $z$ in the Taylor series of $e^z$. Think of it this way: let $w \in \mathbb{C}$. You know that $e^w = \sum_{k=0}^{\infty} \frac{w^k}{k!}$. Now, let $z \neq 0$. $\frac{1}{z}$ is a complex number, so substituting $w = \frac{1}{z}$ is completely legitimate, and we get $$e^{\frac{1}{z}} = \sum_{k=0}^{\infty} \frac{1}{z^k k!}.$$ Notice that this works whenever $z \neq 0$, and that the series has now turned into a Laurent series. We claim that it converges for all $z \neq 0$, which follows immediately from how we constructed the series. The "annulus" in which the series converges is $\{ z \in \mathbb{C} \, | \, |z| > 0 \} = \mathbb{C} \setminus \{ 0 \}$. Using this we can easily find the Laurent series you are asked to find: $$(1-z)e^{1/z}=e^{1/z}-ze^{1/z}=\sum_{k=0}^{\infty} \frac{1}{z^k k!}-z \sum_{k=0}^{\infty} \frac{1}{z^k k!} = \sum_{k=0}^{\infty} \frac{1}{z^k k!} - \sum_{k=0}^{\infty} \frac{1}{z^{k-1} k!}$$ Rearranging this, we get $$(1-z)e^{1/z} = -z + \sum_{k=0}^{\infty} \bigg(\frac{1}{k!}-\frac{1}{(k+1)!} \bigg) \frac{1}{z^k}.$$ At last, how do we know that this is the Laurent series and not some other series converging to $(1-z)e^{1/z}$? Well, this is a theorem one would usually prove in an introductory course in complex analysis: a series representation of a function of the form $$\sum_{n=-\infty}^{\infty} a_n z^n$$ … – Möb
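As an editorial aside, the coefficients derived in the answers above can be checked numerically against the contour-integral definition by discretising the integral over the unit circle with the trapezoidal rule. The following sketch is our own illustration (the function names and the use of NumPy are our choices, not part of the original posts):

```python
import numpy as np
from math import factorial

def laurent_coeff(f, n, num_points=4096, radius=1.0):
    """Approximate a_n = (1/(2*pi*i)) * integral of f(z)/z^(n+1) dz over |z| = radius.

    Parametrising z = r*exp(i*theta) gives dz = i*z*dtheta, so a_n reduces to
    the mean of f(z) * z^(-n) over the circle; the trapezoidal rule is
    spectrally accurate for smooth periodic integrands like this one."""
    theta = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    z = radius * np.exp(1j * theta)
    return np.mean(f(z) * z ** (-n))

f = lambda z: (1.0 - z) * np.exp(1.0 / z)

# The coefficient of z^(-n) should equal 1/n! - 1/(n+1)!
for n in range(6):
    numeric = laurent_coeff(f, -n)
    exact = 1.0 / factorial(n) - 1.0 / factorial(n + 1)
    print(n, round(numeric.real, 12), round(exact, 12))

# The coefficient of z^(+1) should be -1, and of z^(+2) zero.
print(round(laurent_coeff(f, 1).real, 12), round(laurent_coeff(f, 2).real, 12))
```

With a few thousand quadrature points the numerical and exact coefficients should agree to near machine precision, consistent with the uniqueness of the Laurent coefficients invoked in the answers.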
CommonCrawl
Methodology Article

Efficient representation of uncertainty in multiple sequence alignments using directed acyclic graphs

Joseph L Herman, Ádám Novák, Rune Lyngsø, Adrienn Szabó, István Miklós & Jotun Hein

BMC Bioinformatics volume 16, Article number: 108 (2015)

A standard procedure in many areas of bioinformatics is to use a single multiple sequence alignment (MSA) as the basis for various types of analysis. However, downstream results may be highly sensitive to the alignment used, and neglecting the uncertainty in the alignment can lead to significant bias in the resulting inference. In recent years, a number of approaches have been developed for probabilistic sampling of alignments, rather than simply generating a single optimum. However, this type of probabilistic information is currently not widely used in the context of downstream inference, since most existing algorithms are set up to make use of a single alignment.

In this work we present a framework for representing a set of sampled alignments as a directed acyclic graph (DAG) whose nodes are alignment columns; each path through this DAG then represents a valid alignment. Since the probabilities of individual columns can be estimated from empirical frequencies, this approach enables sample-based estimation of posterior alignment probabilities. Moreover, due to conditional independencies between columns, the graph structure encodes a much larger set of alignments than the original set of sampled MSAs, such that the effective sample size is greatly increased.

The alignment DAG provides a natural way to represent a distribution in the space of MSAs, and allows for existing algorithms to be efficiently scaled up to operate on large sets of alignments. As an example, we show how this can be used to compute marginal probabilities for tree topologies, averaging over a very large number of MSAs. This framework can also be used to generate a statistically meaningful summary alignment; example applications show that this summary alignment is consistently more accurate than the majority of the alignment samples, leading to improvements in downstream tree inference. Implementations of the methods described in this article are available at http://statalign.github.io/WeaveAlign.

Background

Sequence alignment is one of the most intensely studied problems in bioinformatics, and is an important step in a wide range of different analyses, including identification of conserved motifs [1], analysis of molecular coevolution [2-4], estimation of phylogenies [5], and homology-based protein structure prediction [6,7]. Many of the most popular alignment methods seek to compute a single optimal alignment, using dynamic programming algorithms [8,9] as well as a variety of heuristic procedures [10-15]. Similar approaches can be used to find maximum likelihood alignments under certain probabilistic models of insertion, deletion and substitution events [16-20].

Effect of alignment on downstream inference

It has become increasingly clear in recent years that downstream analyses are often highly sensitive to the specific choice of alignment. There may be many plausible but suboptimal alignments within the vicinity of the optimum, containing additional—often complementary—information regarding the evolutionary relationships between the sequences [21]; selecting a single point estimate results in the loss of this additional information, and fails to account for the statistical uncertainty associated with different regions of the alignment [22].
A number of studies have highlighted the impact of the choice of alignment on subsequent phylogenetic inference [23-31]; in many cases different alignment methods, or different guide trees, can give rise to very different phylogenies [23,32-36]. Sensitivity to the alignment is also observed in the context of many other types of downstream analysis, including homology modelling of protein structures [37-39], detection of correlated evolution [40,41], prediction of RNA secondary structure [42], and inference of positive selection [36,43-45].

Filtering methods

A common approach to tackling the issue of alignment uncertainty has been to attempt to annotate particular regions of the alignment as unreliable, and to remove these before carrying out subsequent analysis. Filtering methods have in some cases been observed to yield improved inference for phylogenies [46-48] and positive selection [44,45]. However, the specific choice of filtering method may have a strong influence on the results [49], and uncertain regions of the alignment may also contain important information that is lost through the use of such methods. For example, tree accuracy is not related in a straightforward fashion to alignment uncertainty [27], and seemingly unreliable regions may be important for accurately resolving phylogenies [50,51]. Regions of high alignment uncertainty can also correspond to sites with higher indel rates [22,52], as well as regions of structural variability [53] or intrinsic disorder [54] in protein structures, and filtering these out may lead to unpredictable biases in subsequent analysis.

Joint sampling approaches

Within the Bayesian paradigm, alignment uncertainty can be addressed in a more methodical fashion by considering alignments, along with other parameters of interest, as samples from an unknown posterior distribution. In this framework, regions of high alignment variability then correspond to regions of high variance in the posterior. The last decade has seen the development of several fully Bayesian approaches for performing joint inference on alignments along with other objects of interest, such as mutation rates [55], phylogenetic trees [56-58], information about the evolution of protein structure [59-62], and the locations of putative regulatory elements [63-65]; inference on these quantities after accounting for alignment uncertainty can then be obtained by averaging over alignments according to their posterior probability under the joint model. However, although such approaches may be analytically tractable for comparison of a small number of sequences [63,64,66], the computational complexity involved in analysing these hierarchical joint models typically does not scale well with the number of sequences; procedures such as Markov chain Monte Carlo can only increase the range of tractability to a limited extent [56,57,65]. Moreover, adding in another level of annotation or information may require a new model to be formulated, such that in many cases this fully Bayesian approach may be impractical for problems of interest.

Alternatives to joint sampling

In this work we focus on a tractable alternative that can be used when joint sampling approaches are impractical. This approach takes a collection of alignments sampled according to a particular model, and uses an efficient graph-based representation to generate a much larger set of possible alignments from the initial collection.
The acyclic structure of the graph allows many types of analysis to be easily carried out on the whole ensemble of alignments rather than just a single representative, such that the alignment uncertainty quantified by the ensemble can be incorporated into downstream analysis without the need for designing computationally intensive joint sampling approaches. If a single representative of the ensemble is required, this framework also allows for the efficient computation of the single alignment that maximises the expected value of a variety of different accuracy scores. The simple and computationally efficient nature of this representation makes it practical to adopt a more principled, probabilistic approach to quantifying and making use of alignment uncertainty, and we discuss examples of cases where this may prove particularly useful.

Quantifying alignment uncertainty

A number of different approaches have been developed for quantifying the uncertainty associated with a multiple sequence alignment. Many of these methods focus on the notion of alignment reliability, i.e. the degree to which a particular alignment (or regions thereof) can be trusted as a prediction of the homology between the sequences. One set of approaches involves computing scores or summary statistics on a single alignment of interest, using these as a measure of reliability of the alignment. Some of these approaches equate reliability of a particular alignment column with a high score under the model used to generate the alignment [67], the justification being that low-scoring columns are harder to distinguish from random noise, and so are more likely to contain erroneous homology statements; others generate the alignment using one scoring scheme, and measure its 'reasonableness' based upon another set of criteria [68,69], which may involve looking at the deviation of summary statistics from their expected background distribution under the null hypothesis of no homology [70,71]. One potential issue with some of these approaches is that they introduce a bias towards highly conserved regions, since they do not distinguish between evolutionary variability and statistical uncertainty, often using the term alignment quality as a synonym for reliability. An alternative approach, first mentioned by [49], involves generating a set of plausible alignments, and assessing the alignment uncertainty by measuring the similarity between the alignments in this set. This type of consistency- or congruence-based approach has a more natural statistical interpretation, but requires a method of generating alternative alignments, as well as a measure of alignment similarity or distance; the interpretation of the resulting measures of uncertainty may depend heavily on these two factors.

Generating sets of alignments

A variety of heuristic methods have been developed in order to generate sets of alignments for the purposes of measuring uncertainty. Perhaps the simplest of these is to align the same sequences with the residue order reversed [72], although the efficacy of this technique is questionable [73,74]. Another class of methods generates alternative alignments by perturbing parameters such as the guide tree [75,76], gap opening and extension penalties [77,78], and substitution matrices [79,80], and recomputing the optimal alignment with these alternative parameters. However, in all these cases the types of perturbations applied to the parameters will affect the resulting estimates of uncertainty in an unpredictable fashion [70].
Another approach is to look at a set of suboptimal alignments under a particular scoring scheme, given fixed parameters [81-83], using these to search for regions of consistency [84-86]. The variability among these suboptimal alignments can then be converted into a measure of statistical uncertainty, using an approximation to the distribution of scores, for example using an extreme value distribution [87].

A Bayesian approach

Within a Bayesian framework, the collection of plausible alignments can be identified with the posterior distribution of the alignment given the sequences and other model parameters; this leads to a probabilistic interpretation of alignment uncertainty, whereby the fraction of alignments containing a particular homology statement is a measure of the posterior probability of that homology statement. For the pairwise case, alignments can often be sampled exactly from their posterior distribution under a particular evolutionary model using a dynamic programming approach [88-90]. However, for multiple sequences such approaches rapidly become computationally infeasible, and other types of procedures must be used. A popular option is to use Markov chain Monte Carlo (MCMC) in order to sample from the posterior distribution of alignments [55-58,60,61,65,91-94]. The main advantage of the MCMC approach is that it is guaranteed to sample alignments from the correct probability distribution, provided that the simulation is run for long enough to ensure convergence, although this may require significant amounts of runtime.

Representing the distribution of sampled alignments

Once a set of plausible alignments has been generated, a common issue that arises is how to represent and/or summarise this set in a useful fashion. In a Bayesian context this entails representing the approximation to the posterior distribution over alignments, given a collection of samples. We shall present here a graph-based formulation that allows for a compact representation of this distribution, permitting algorithms to be designed for efficient inference on exponentially large sets of alignments derived from a collection of samples.

Mapping columns to dynamic programming tables

A multiple sequence alignment can be represented as a path through a multidimensional matrix; an edge from one cell of the matrix to an adjacent cell represents a particular set of homology statements, synonymous with a column in the alignment. It is a straightforward extension to consider a set of alignments as a set of paths in such a matrix [95]. To formalise this intuition, we introduce a bijection between the set of alignment columns and the set of edges connecting cells in the multidimensional dynamic programming matrix, based on the coding scheme described in the supplementary section of Satija et al. [65]. More specifically, a column $X$ containing $N$ rows can be mapped to an $N$-tuple $C(X)=(c(X_{1}),\dots,c(X_{N}))$, where $c(X_{i})$ is defined as

$$ c(X_{i}) = \left\{ \begin{array}{cl} 2j-1 & \text{if}\,\, X_{i} = s^{(i)}_{j}\\ 2j & \text{if}\,\, X_{i} = \text{gap between}\,\, s^{(i)}_{j} \,\text{and}\,\, s^{(i)}_{j+1} \end{array} \right. $$ ((1))

where $s^{(i)}_{j}$ is the $j$th character of the $i$th sequence, such that $C(X)$ corresponds to the coordinates of the midpoint of an edge connecting two cells in the matrix. We will also introduce initial and terminal columns, $X^{(0)}$ and $X^{(T)}$, which can be thought of as all-gap columns preceding the first characters and following the last characters of the sequences, respectively.
These will therefore be encoded as $C(X^{(0)})=(0,\dots,0)$ and $C(X^{(T)})=(2L_{1},\dots,2L_{N})$, where $L_{i}$ is the length of the $i$th sequence. It is then possible to map any global alignment, $A$, to a path, $C(A)=(X^{(0)},C(A^{(1)}),\dots,C(A^{(L)}),X^{(T)})$, through the dynamic programming matrix (see Figure 1).

Figure 1: Correspondence between alignment columns and edges connecting cells in a dynamic programming matrix, illustrated for pairwise alignment. In order to permit a directed acyclic graph representation of the space of possible alignments, each column is given a code that distinguishes between gaps based upon where they occur in the alignment. The coding for each column for the two alignments shown in panel a) represents a bijection to the midpoints of edges connecting cells in the dynamic programming table in panel b). Cell boundaries are indicated by thicker gridlines, and the finer gridlines indicate the column coding corresponding to each position, as labelled on the top and right axes. These codings are derived from the characters shown on the bottom and left axes. The midpoint of each cell is labelled with a circle, and each edge is annotated with a rectangle denoting the corresponding column. Each path from $X^{(0)}$ to $X^{(T)}$ (shown as dashed columns at (0,0) and (2,2), respectively) represents a valid alignment.

Intersections between alignments

The paths corresponding to a particular set of alignments may intersect at one or more points in the matrix; as first discussed by Bucka-Lassen et al. [95], subpaths can be 'spliced' at these points in order to generate new alignments. This approach was originally used to create an augmented search space for locating an optimal alignment [95,96], and more recently has been used as part of a progressive alignment algorithm that keeps track of suboptimal alignments [97]. The types of intersections fall into two categories, as illustrated in Figures 2 and 3. The first of these, which we term an interchange, results when two or more sampled alignments contain the same column, but with a different predecessor and successor, as shown in Figure 2. The second type of intersection is termed a crossover, whereby two or more sampled alignments contain pairs of equivalent columns, as shown in Figure 3. Each interchange or crossover can result in a multiplication of the number of possible ways of recombining the sampled alignments, such that the total number of alignments is greatly increased.

Figure 2: Interchanges between alignments can result in a multiplication of the number of possible paths through the DAG. a) Two alignments coded under the map C, as described in Equation (1). b) The resulting alignment DAG contains an interchange column, such that there are four paths through the DAG, arising from only two alignments. c) Correspondence between alignment columns and edges connecting cells in a dynamic programming matrix.

Figure 3: Crossovers between two alignments containing no interchange columns. a) Two alignments coded under the map C, as described in Equation (1). b) The resulting alignment DAG allows for crossovers between these alignments, such that there are four possible paths through the DAG, two of which include pairs of columns that are not observed in the input alignments (dashed lines). c) Correspondence between alignment columns and edges connecting cells in a dynamic programming matrix.
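To make the coding in Equation (1) concrete, the following minimal sketch implements the map C for gapped sequences. The data representation (with None marking a gap) and all function names are our own illustrative choices, not the published WeaveAlign implementation:

```python
def code_column(column, positions):
    """Code one alignment column under the map C of Equation (1).

    `column` holds one entry per sequence (a character, or None for a gap);
    `positions` tracks, per sequence, how many characters have been emitted
    so far (0 before the first character).  Updates `positions` in place."""
    code = []
    for i, ch in enumerate(column):
        if ch is None:                      # gap between s_j and s_{j+1}: code 2j
            code.append(2 * positions[i])
        else:                               # next character s_j: code 2j - 1
            positions[i] += 1
            code.append(2 * positions[i] - 1)
    return tuple(code)

def code_alignment(columns, num_seqs):
    """Map a global alignment (a list of columns) to its path of coded columns."""
    positions = [0] * num_seqs
    return [code_column(col, positions) for col in columns]

# A hypothetical two-sequence example in the spirit of Figure 1:
aln1 = [['A', None], [None, 'G']]           # A over a gap, then a gap over G
aln2 = [[None, 'G'], ['A', None]]           # the same residues, gaps swapped
print(code_alignment(aln1, 2))              # [(1, 0), (2, 1)]
print(code_alignment(aln2, 2))              # [(0, 1), (1, 2)]
```

Note that the two gap placements yield distinct column tuples, which is exactly the gap distinction discussed later in this section.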
As a result of this, an initial set of alignments sampled according to a particular model can be used to generate a much larger set of alignments sampled according to the same distribution, as we shall examine in further detail in the subsequent section.

Equivalence classes of columns

In order to delineate the ways in which a set of columns can be recombined to form new alignments, we introduce the predecessor and successor functions, $f_P$ and $f_S$ respectively. The functions $f_P$ and $f_S$ take the coordinates of a column $X$ as input, and return the coordinates of an equivalence class of columns, corresponding to the midpoint of the predecessor (respectively successor) cell in the multidimensional matrix. Each column mapping to a particular $f_P$- or $f_S$-equivalence class can follow the same set of predecessor or successor columns, respectively (see Figure 4).

Figure 4: Predecessor and successor functions, and equivalence classes of columns. The predecessor and successor functions ($f_P$ and $f_S$ respectively) map from columns (edges) to nodes (circles) in the dynamic programming matrix. All columns mapping to a particular node under $f_P$ share the same set of possible predecessor columns, and are grouped together in an equivalence class, denoted by $E_P$ (shown in red). An analogous definition holds for $E_S$ (blue).

Denoting the $i$th coordinate of the output by $f_P(X)_i$ and $f_S(X)_i$, the functions are defined such that

$$ f_{P}(X)_{i} = c(X_{i}) - c(X_{i}) \bmod 2 $$ ((2))

$$ f_{S}(X)_{i} = c(X_{i}) + c(X_{i}) \bmod 2 $$ ((3))

The original column coding is then uniquely recovered by the backwards mapping

$$ C(X) = (f_{P}(X) + f_{S}(X))/2 $$

The equivalence class $E_P(X)$ is then defined as the set of columns $\{X^{\prime} \mid f_P(X^{\prime})=f_P(X)\}$, with $E_S(X)$ similarly defined. Using the definitions above, a column $X^{\prime}$ is a predecessor of $X$ if and only if $f_S(X^{\prime})=f_P(X)$, since any path connecting them must pass through the separating equivalence class $E_S(X^{\prime})\equiv E_P(X)$. We will use the notation $\mathcal{P}(X) \equiv \{ X^{\prime} \mid f_{S}(X^{\prime}) = f_{P}(X) \}$ to denote the set of predecessors of $X$.

The alignment column graph

We can then define the alignment column graph, $\mathcal{D}(\Xi)$, of a set of columns, $\Xi$, as a graph whose nodes are the columns in $\Xi$, with a directed edge from column $X$ to column $X^{\prime}$ if and only if $f_S(X)=f_P(X^{\prime})$, which we write as $X \ltimes X^{\prime}$. From the definitions in Equations (2) and (3), we have $f_P(X)<f_S(X)$ for all $X$, in the sense that $f_P(X)_i \leq f_S(X)_i$ for all $i$, with no column having $f_S(X)=f_P(X)$ unless it consists of all gaps. This ensures that the alignment column graph is acyclic, since it is never possible to return to the same equivalence class by following a set of directed edges in the graph. Each directed path through the column graph generates a valid alignment; a global alignment is a valid alignment that begins at $X^{(0)}$ and ends at $X^{(T)}$, such that the number of possible global alignments is equal to the number of distinct paths in $\mathcal{D}(\Xi)$ that lead from $X^{(0)}$ to $X^{(T)}$. This is typically very large, growing rapidly with the number of intersection points between the alignments used to generate the graph (see Figure 5).

Figure 5: The number of paths through the alignment column graph as a function of the number of alignments used to generate the graph. Shown for a set of 10 sequences simulated using DAWG (simulation procedure described in the main text). When crossovers are allowed (corresponding to a mean-field approximation for the conditional marginal for each column), the number of paths increases super-exponentially, resulting in a much higher coverage of the space of possible alignments, and hence more accurate approximations to the posterior probability for each path (see Figure 8).
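Equations (2) and (3), together with the edge criterion $f_S(X)=f_P(X^{\prime})$, translate directly into a few lines of code. The sketch below is our own (the container choices are not from the published implementation); it builds the edge set by first grouping columns into their $f_P$ equivalence classes, so that each column's successors can be looked up via $f_S$ without an all-pairs test:

```python
from collections import defaultdict

def f_P(code):
    """Predecessor equivalence class of a coded column (Equation (2))."""
    return tuple(c - c % 2 for c in code)

def f_S(code):
    """Successor equivalence class of a coded column (Equation (3))."""
    return tuple(c + c % 2 for c in code)

def build_dag(columns):
    """Alignment column graph D(Xi): a directed edge X -> Y iff f_S(X) == f_P(Y)."""
    by_pred_class = defaultdict(list)       # f_P class -> columns in that E_P
    for Y in columns:
        by_pred_class[f_P(Y)].append(Y)
    edges = {}
    for X in columns:
        edges[X] = by_pred_class.get(f_S(X), [])
    return edges

# Columns from the two single-residue-pair alignments coded earlier:
cols = {(1, 0), (2, 1), (0, 1), (1, 2)}
dag = build_dag(cols)
print(dag[(1, 0)])   # [(2, 1)]  -- so (1, 0) is a predecessor of (2, 1)

# The midpoint identity above: the coding is recovered from the two classes.
assert all(tuple((p + s) // 2 for p, s in zip(f_P(X), f_S(X))) == X for X in cols)
```

Acyclicity is automatic here, since every edge strictly increases the equivalence-class coordinates, mirroring the argument in the text.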
Implicit in the definition of the mapping in Equation (1) is a distinction between gaps based on their position in the alignment, such that the two situations shown in Figure 1 represent distinct alignments, each yielding two different pairs of columns. This assumption is necessary in order to generate a sparse graph; treating all gaps as equivalent is tantamount to replicating each gap-containing column onto all parallels, such that the graph in general becomes maximally dense, making efficient algorithms difficult to implement (see Additional file 1: Figure S2).

Probability distributions on alignment DAGs

Due to the high-dimensional nature of the alignment space, in any particular set each alignment will typically occur with a very low frequency; even the most likely alignment may only be sampled once, if at all [93,98]. As such, the relative probabilities of entire alignments are difficult—if not impossible—to estimate directly by their observed frequencies. However, a particular column may occur in many different alignments, allowing the marginal probability of each column, averaged over all alignments, to be estimated much more efficiently [93,99]. As we shall discuss, these marginals also represent useful summary statistics of the full distribution.

Alignment probabilities in terms of pair marginals

For general evolutionary models, the DAG can be used to construct a factored approximation to the full distribution over alignments; this factored distribution corresponds to a graphical model with dependencies between neighbouring columns defined by the edges in the DAG. Under this factored approximation, the probability of an alignment (corresponding to a path through the DAG) can be written in the form

$$ p(A) = p\left(A^{(1)}\right) \prod_{i=2}^{L} p\left(A^{(i)} \mid A^{(i-1)}\right) $$ ((5))

$$ p\left(A^{(i)} \mid A^{(i-1)}\right) = p\left(A^{(i)},A^{(i-1)}\right) / p\left(A^{(i-1)}\right). $$ ((6))

For evolutionary models based on first-order hidden Markov models (HMMs) (such as the one shown in Additional file 1: Figure S4), the pair-marginal representation is exact, since the dependencies in the model are equivalent to those in the DAG. For models with non-local dependencies between columns, simply setting the pair marginals to be equal to the observed pair marginals minimises the Kullback-Leibler divergence from the full empirical distribution to the pair-marginal approximation (see Additional file 1: Section S4).

Motivations for using factored approximations

There are three main reasons for making use of factored approximations of this type:

1. The number of possible column pairs is many orders of magnitude lower than the number of alignments, such that pair marginals can be estimated much more reliably from observed frequencies. These can then be used to construct more accurate estimates of the overall joint probability.

2. Expression of the joint in terms of pair-marginals allows for interchanges in the alignment DAG (cf. Figure 2), allowing many alternative alignments to be generated from an initial collection of samples.
3. Factorisation of the probability into a product of local terms allows for efficient algorithms to be implemented on the DAG structure.

We discuss these factors in further detail below.

Mean-field approximation

As well as distributions involving pair terms, we will also consider a mean-field type approximation, whereby the conditional distribution of each column given a specific predecessor [cf. Equation (6)] is replaced by an average over all predecessors:

$$ p(X \mid \mathcal{P}(X)) = p(X,\mathcal{P}(X)) / p(\mathcal{P}(X)) $$ ((7))

$$ \phantom{p(X \mid \mathcal{P}(X))} = p(X) \Big/ \sum_{X^{\prime} \ltimes X} p(X^{\prime}) $$ ((8))

where $p(X \mid \mathcal{P}(X))$ is the probability of column $X$ given that any one of its possible predecessors is in the alignment. The second line uses the identities $p(X, \mathcal{P}(X)) \equiv p(X)$ (since a column can only be present if one of its predecessors is present), and $p(\mathcal{P}(X)) \equiv \sum_{X^{\prime}\ltimes X} p(X^{\prime})$ (since only one member of an equivalence class can be present in any particular alignment). An important corollary of the expression in Equation (8) is that single-column marginals are sufficient to reconstruct the mean-field approximation to the joint probability; this has several important consequences, as we shall discuss below.

Motivations for using the mean-field approximation

The mean-field approximation described above is exact for fully independent-sites models, for example pair HMMs with non-affine models for indels. For more general HMMs, there are three major advantages associated with using this approximation rather than the pair-marginal formulation:

1. Since the number of possible columns is substantially less than the number of possible column pairs, it is easier to obtain reliable estimates of single-column marginals from a collection of alignment samples. Hence, the mean-field approximation is likely to be more accurate for lower sample sizes.

2. The use of single-column marginals allows for crossovers in the alignment DAG (cf. Figure 3), whereas the pair-marginal expression will assign a weight of zero to any pairs that are not observed, hence only permitting interchanges of the form shown in Figure 2. This allows for a higher effective sample size for the alignments under the mean-field approximation, with more alternative alignments generated from the same collection of samples.

3. Restricting to single-column marginals allows more efficient algorithms to be constructed, involving one-step rather than two-step recursions.

In the rest of this section, we examine these points in further detail.

Estimating marginal probabilities

For a pairwise alignment, column marginals can be easily represented using a matrix in which the $(i,j)$ entry contains the marginal probability $p(s^{(1)}_{i} \diamond s^{(2)}_{j})$, where $s^{(1)}_{i}$ and $s^{(2)}_{j}$ are the $i$th and $j$th characters in two sequences $s^{(1)}$ and $s^{(2)}$, and the symbol $\diamond$ denotes homology. When only two sequences are under comparison, dynamic programming recursions allow for the exact computation of these marginal probabilities under certain types of evolutionary models [55,100,101]. In the multiple sequence case, such exact computations are typically infeasible.
However, if we are provided with a set, $\mathcal{A}$, of sampled alignments, an estimate of the marginal probability of each column (after coding) can be computed as the proportion of the alignments in $\mathcal{A}$ that contain the column, weighted according to the alignment probability. This can be written using the following indicator function notation:

$$ \hat{p}_{C}(X) = \sum_{A \in \mathcal{A}} p(A)\, \mathbb{1}\left[\, \exists\, X^{\prime} \in A : C(X^{\prime}) = C(X) \,\right] $$

If we consider a multiset, $\mathcal{A}^{+}$, containing global alignments sampled one or more times according to their probability, then the factor $p(A)$ can be replaced by the relative frequencies of the sampled alignments. The estimator for the marginal probability $\hat{p}_{C}(X)$ is then proportional to the fraction of sampled alignments containing a column $X^{\prime}$ for which $C(X^{\prime})=C(X)$:

$$ \hat{p}_{C}(X) = n_{C}(X,\mathcal{A}^{+}) / |\mathcal{A}^{+}| $$

with $n_{C}(X,\mathcal{A}^{+})$ denoting the number of occurrences of $C(X)$ across all the alignments contained in the multiset $\mathcal{A}^{+}$. If enough alignments are sampled from the correct distribution, the above estimator will converge to the true value $p_{C}(X)$. Although conditional marginals can also be computed from local alignments (see Additional file 1: Section S1), in this work we will consider only global alignments, in the interests of simplicity. Since in most cases each sampled alignment will be unique, due to the high-dimensional nature of the state space, in the rest of this manuscript we will refer only to the set $\mathcal{A}$ rather than the multiset $\mathcal{A}^{+}$. However, for cases where uncertainty is low, and the same alignment may be sampled more than once, it is important to treat each replica as an independent sample when computing marginal probabilities.

Marginal probabilities can also be estimated for pairs of columns using observed pair frequencies. However, the space of possible pairs of columns can be much larger than the space of columns; in the worst case this will be by a factor of $\mathcal{O}(2^{N})$, where $N$ is the number of sequences, since this is the maximum size of an equivalence class. Hence, a larger number of alignment samples will be needed to obtain accurate estimates for pair marginals. As we shall see, this means that pair-based reconstructions of joint probabilities are typically less accurate unless a very large number of samples is used.

Reconstructing alignment probabilities from marginals

Generally, with sampling-based procedures such as MCMC, posterior probabilities are estimated via sampled frequencies. However, in the case of a very high-dimensional parameter such as a multiple sequence alignment, each point in the space may only be visited once, such that it is not possible to estimate posterior probabilities based on these frequencies. As discussed above, the set of marginal probabilities for each column (or pair of neighbouring columns) can be used to reconstruct the posterior probability for any particular alignment, via Equation (5). Although the likelihood for each sampled alignment will often be known as a by-product of the sampling procedure, the marginal posterior probability of each alignment after integrating over other unknown parameters (for example indel rates) will typically not be known. Hence, the DAG-based approach presented here represents a useful way to calculate posterior probabilities in such cases.
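As an illustration of how these estimators can be realised on the DAG, the following sketch (our own, not the published code; it assumes each sampled alignment is given as a path of coded columns with equal weight per sample, as for MCMC output) estimates single-column marginals and then evaluates the mean-field log posterior of Equations (5) and (8) for any path:

```python
import math
from collections import Counter, defaultdict

f_P = lambda code: tuple(c - c % 2 for c in code)   # as defined earlier
f_S = lambda code: tuple(c + c % 2 for c in code)

def column_marginals(sampled_paths):
    """Estimate p_C(X) as the fraction of sampled alignments containing C(X)."""
    counts = Counter()
    for path in sampled_paths:
        counts.update(set(path))            # each column counted once per sample
    n = len(sampled_paths)
    return {X: c / n for X, c in counts.items()}

def mean_field_log_prob(path, marginals):
    """Mean-field log posterior of a path of coded columns:
    log p(A) = log p(A1) + sum_i [ log p(Ai) - log p(P(Ai)) ], where
    p(P(X)) is the summed marginal mass of all X' with f_S(X') = f_P(X)."""
    pred_mass = defaultdict(float)          # keyed by the f_S class of X'
    for X, p in marginals.items():
        pred_mass[f_S(X)] += p
    logp = math.log(marginals[path[0]])     # the predecessor of the first
                                            # column is X(0), with probability 1
    for X in path[1:]:
        logp += math.log(marginals[X]) - math.log(pred_mass[f_P(X)])
    return logp
```

Where per-sample posterior weights are available, the counts could equally be weighted by them, matching the indicator-function estimator above.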
A similar approach has been used recently to compute the posterior probabilities of phylogenetic trees based on the probabilities of each of the constituent clades, under the assumption of conditional independence between clades [102]. As an illustration of this procedure, a set of pairwise alignments were sampled from the pair-HMM in Additional file 1: Figure S4, combined with the Dayhoff amino acid rate matrix [103], for two globin sequences (sampled alignments illustrated in Additional file 1: Figure S3). As shown in Figures 6 and 7, the DAG-based estimates of the posterior probability converge towards the true probability as the number of samples is increased, reaching a good agreement after just 200 samples, as measured by the mean-squared error of the logarithm:

$$ MSE(\hat{p}\, ||\, p) = \frac{1}{|\mathcal{A}|}\sum_{A \in \mathcal{A}} (\log \hat{p}(A) - \log p(A))^{2} $$

Figure 6: Mean squared error in the approximation to the true posterior, as a function of the number of alignment samples. Shown for the pairwise globin example. Although the pair-HMM involves neighbour-dependent terms (leading to an affine gap penalty), the mean-field approximation leads to a better estimate of the true posterior until around 1000-2000 samples are taken. This is due to the presence of intersections between paths in the alignment DAG, which allows for a higher effective sample size to be obtained from the same number of alignments.

Figure 7: As more alignment samples are taken, the DAG-based estimate of the log posterior probability for each alignment converges towards the true value. The DAG-based probabilities already yield a good estimate when the number of alignments, N, is just 100. Shown on the top row are the reconstructed probabilities derived using pair marginals, and on the bottom those using the mean-field approximation, with the line y=x overlaid in red. Since each sampled alignment is generally observed only once, the posterior probability estimated directly from alignment frequency would be 1/N in each case above.

The DAG methodology therefore offers a clear advantage for the purposes of computing posterior alignment probabilities. The mean-field approximation results in a lower mean-squared error (MSE), due to the higher effective sample size (see Figure 6). For lower numbers of samples, the estimates are more accurate for the more probable alignments, since the more extreme regions of the space are sampled with lower probability, and hence converge more slowly. Although both pair-marginal and mean-field estimates converge in this case at a similar rate, closer analysis shows that the mean squared error in the approximation to the true posterior is considerably less for the mean-field approximation. This suggests that the improvement obtained by summing over a larger number of paths (see Figure 5) outweighs the approximation introduced by averaging over predecessor states, although eventually, at around 2000 samples, the pair-marginal estimates begin to dominate the mean-field approximation (see Figure 6), since the true pair-HMM involves neighbour-dependent terms. The precise location of this crossover point will depend on the degree of neighbour dependency; for a completely site-independent model (e.g. the pair-HMM in Additional file 1: Figure S4 with δ=ε=σ), the single-column marginal estimate always dominates (see Additional file 1: Figure S7). This same pattern is observed in a more striking fashion for a larger, 10-sequence alignment, as shown in Figure 8.
Moreover, since the space of possible alignments increases very rapidly with the number of sequences, the benefit of using the mean-field approach to boost the effective sample size is greater in the multiple-sequence case, resulting in much faster convergence of the posterior estimates (see Figure 8).
Figure 8. For a larger multiple sequence alignment, the mean-field approximation to the log posterior (bottom row) converges much more quickly than the pair-marginal estimate, despite the fact that the indel model used includes neighbour-dependent terms. This is due to the fact that column marginals can be estimated more reliably than pair marginals, combined with the fact that allowing crossovers in the DAG results in a higher effective sample size (see Figure 5). Results are shown for the simulated dataset described later in the main text, using the TKF92 indel model [17]. In this case the true posterior probability cannot be computed analytically, but the log likelihood (conditional on specific values of the other unknown parameters) is known. Since the log likelihood is expected to be linearly related to the log posterior, convergence can be gauged approximately by assessing the fit to a relationship of y=x+k (overlaid in red, with k, the approximate normalising constant, chosen to match the distribution to which the mean-field approximation converges, here k=−9420).
Approximate summation over all alignments
As well as computing the probability of individual paths in the DAG, it is possible to sum over all alignments contained within the DAG using a standard dynamic programming algorithm (see Additional file 1: Section S5). In the pairwise case, where the sum over all alignments can be computed analytically (by filling out the full dynamic programming table), we can examine how much of the posterior mass is contained within the DAG resulting from a particular set of samples. While the probability mass contained within the individual samples increases relatively slowly, and encapsulates only a very small fraction of the total, the proportion of the posterior mass encapsulated in the set of paths through the alignment DAG increases much more rapidly; the DAG contains on the order of 10–15% of the total posterior mass over the entire set of possible alignments with just 100 samples, increasing to around 80% after including 2000 samples (see Figure 9 and Additional file 1: Figure S1).
Figure 9. The proportion of the posterior mass contained in paths through the DAG increases rapidly with the number of samples. For the pairwise example discussed in the text, the proportion reaches on the order of 10–15% of the total posterior mass with just 100 samples, increasing to over 80% after including 2000 samples (left panel). In contrast, the proportion of posterior mass contained within the individual samples is very small (right panel).
A similar dynamic programming algorithm can be used to calculate the total number of paths (i.e. alignments) contained within the DAG. Examining the number of paths in the DAG as a function of the number of alignment samples shows a super-exponential relationship when crossovers are allowed, whereas the count for paths restricted to observed column pairings grows only approximately exponentially (see Figure 5). In the pairwise case, the theoretical maximum can be computed analytically; for the pairwise example discussed above, the total number of paths in the DAG has an upper bound on the order of \(10^{113}\).
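The path-counting recursion just mentioned is a single sweep over a topological ordering of the columns; a minimal sketch, with a hypothetical DagColumn node type and unique string keys per column, might look as follows. BigInteger is used because, as noted above, the counts can reach the order of \(10^{113}\) even in the pairwise case.

    import java.math.BigInteger;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class PathCounter {

        // A column node together with its predecessor columns in the DAG.
        record DagColumn(String key, List<DagColumn> predecessors) {}

        // Counts the alignments (paths) ending at the final column: a source
        // column starts one path, and every other column inherits the sum of
        // the counts of its predecessors, which precede it in the ordering.
        static BigInteger countPaths(List<DagColumn> topologicalOrder, DagColumn end) {
            Map<String, BigInteger> paths = new HashMap<>();
            for (DagColumn col : topologicalOrder) {
                BigInteger n = col.predecessors().isEmpty() ? BigInteger.ONE : BigInteger.ZERO;
                for (DagColumn pred : col.predecessors()) {
                    n = n.add(paths.get(pred.key()));
                }
                paths.put(col.key(), n);
            }
            return paths.get(end.key());
        }
    }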
Summarising the alignment distribution
Although the set of alignments encoded by the DAG contains a great deal of additional information beyond that contained in any one alignment, there may be situations where a single alignment is desired as a summary of the distribution. Due to the high-dimensional and constrained nature of the state space, standard summary statistics such as the mean are not applicable in this case [104].
Finding the MAP alignment
One of the simplest summaries of the distribution is the maximum a posteriori (MAP) alignment. As mentioned earlier, estimation of this quantity directly from sample frequencies is typically very unreliable, since each alignment is typically only sampled once, such that each sample has the same empirical posterior probability. However, as discussed above, the DAG-based approach to estimating posterior probabilities can be used to obtain good estimates of the probability for each possible alignment contained in the DAG. We can then use the fact that the DAG-based log posterior is additive over the columns in the alignment
$$ \log p(A) = \log p\left(A^{(1)}\right) + \sum\limits_{i=2}^{L} \log p\left(A^{(i)} \mid A^{(i-1)}\right) $$
such that the path with the maximum posterior can be found using standard dynamic programming algorithms for DAGs (see Algorithm 1). Nevertheless, due to the large size of the space of possible alignments, there may be a large number of very similar alignments with very similar posterior probability. Hence, quantities such as the MAP can be poor summary statistics of the distribution [58,93,94]. Instead, we will consider alternative types of summary alignments that account for the uncertainty contained within the DAG.
Loss function formulation
The problem of choosing a single summary alignment can be approached within a decision-theoretic framework, whereby the choice of summary is designed to minimise the expected value of a particular loss function, also known as the posterior risk [104]. For a loss function defined in terms of alignment accuracy, minimising the posterior risk is equivalent to selecting the maximum expected accuracy alignment [98,105,106]. The loss of an alignment, A, with respect to a reference alignment, A′, will be denoted by \(\mathcal{L}(A\ ||\ A^{\prime})\), and represents a penalty associated with choosing alignment A, given that the true alignment is A′. The posterior risk associated with A can then be defined as
$$\begin{array}{@{}rcl@{}} \mathcal{R}(A) &=& \mathbb{E}\left[ \mathcal{L}(A\ ||\ A^{\prime}) \right] \\ &=& \sum\limits_{A^{\prime}} p(A^{\prime})\, \mathcal{L}(A\ ||\ A^{\prime}) \end{array}$$
where the sum over A′ includes all alignments. The minimum-risk alignment is then \(\hat{A} = \arg\min_{A} \mathcal{R}(A)\). For loss functions defined as a sum over columns (equivalent to the pointwise gain functions discussed by Hamada et al. [106]), we have
$$ \mathcal{L}(A\, ||\, A^{\prime}) = k \sum\limits_{X \in A} \mathcal{L}(X\, ||\, A^{\prime}) $$
where k is independent of A. In order to define the loss for a particular column, we will consider the following four categories of columns in the predicted alignment, A:
True positives (TP) = columns correctly present
False positives (FP) = columns incorrectly present
True negatives (TN) = columns correctly absent
False negatives (FN) = columns incorrectly absent
such that TP ∪ FP ∪ TN ∪ FN = Ξ, the set of all observed columns.
Generally we will not be interested in the number of negatives (i.e. columns not included in the alignment), since this will depend on how many alignment samples are used to generate the DAG. We will therefore focus on loss functions of the form
$$ \mathcal{L}_{f}(A\, ||\, A^{\prime}) = \lambda_{FP}\, |FP| \,-\, \rho_{TP}\, |TP| $$
where f is a bijective function operating on columns, with \(f(A)=(f(A^{(1)}),\ldots,f(A^{(L)}))\), and \(\lambda_{FP}\) and \(\rho_{TP}\) are loss/reward terms associated with false positives and true positives respectively. As shown in Additional file 1: Section S2, the posterior risk can then be written as
$$ \mathcal{R}_{f}(A) \propto \sum\limits_{j=1}^{L_{A}} \left[g - p_{f}\left(A^{(j)}\right)\right] $$
where
$$ p_{f}(X) = \sum\limits_{A^{\prime}} p(A^{\prime})\, \mathbb{1}\left[\exists\, X^{\prime} \in A^{\prime} : f(X^{\prime}) = f(X)\right] $$
is the marginal probability of column X being present according to the mapping specified by f, and \(g=\lambda_{FP}/(\rho_{TP}+\lambda_{FP})\) is a penalty term that penalises longer alignments by a factor proportional to the penalty on false positives. In contrast to an arbitrarily chosen gap penalty, the penalty, g, has a direct interpretation in this case. It is also a straightforward extension to allow \(\lambda_{FP}\) and \(\rho_{TP}\), and hence g, to depend on the specific column, X, for example penalising a false positive proportionally to the number of non-gap characters contained in the column.
Loss functions corresponding to common accuracy measures
The simplest choice in Equation (16) is to set f(X)=C(X) as defined in Equation (1), such that \(p_{f}(X)\) is equal to the marginal probability as defined in Equation (9). The loss function formulation can also be used to represent commonly used measures of alignment accuracy. Perhaps the simplest of these is the so-called column score; this measures the proportion of correct columns, but without differentiating between the positions of the gaps. This can be defined more formally by first introducing an alternative column mapping, \(C^{+}(X)=(c^{+}(X_{1}),\ldots,c^{+}(X_{N}))\), which groups together all columns that contain the same non-gap characters:
$$ c^{+}(X_{i}) = \left\{ \begin{array}{cl} 2j-1 & \text{if}\,\, X_{i} = s^{(i)}_{j}\\ 0 & \text{if}\,\, X_{i} = \text{gap} \end{array} \right. $$
The column score for an alignment, A, with respect to a reference, A′, can then be defined as \(-\mathcal{L}_{C^{+}}(A\ ||\ A^{\prime})\), with \(\lambda_{FP}\) set to zero. Since each C-equivalence class is contained within a C^+-equivalence class, we have
$$ n_{C^{+}}(X,\mathcal{A}^{+}) \geq n_{C}(X,\mathcal{A}^{+}) $$
and hence \(p_{C^{+}}(X) \geq p_{C}(X)\) and \(\hat{p}_{C^{+}}(X) \geq \hat{p}_{C}(X)\), such that the C^+-risk, i.e. \(\mathcal{R}_{C^{+}}\), represents an upper bound to the C-risk, \(\mathcal{R}_{C}\). As shown in Figure 10, the alignment minimising the C^+-risk will not in general be the same as the alignment minimising the C-risk, although there may be considerable overlap.
Figure 10. The minimum-risk path under the C-based loss function (blue) may not be the same as that under the C^+-based loss function (red). Column frequencies are shown in blue below each column, and the \(p_{C^{+}}\) marginals shown in red above (as frequencies from a total of 20 samples). In this case, there are two equivalent paths with the same C^+-score.
As discussed in Additional file 1: Section S3, the above approach can easily be extended to make use of a function, f, which splits a column up into a set of pairwise homology statements. This allows various pairwise accuracy scores to be expressed in terms of similar types of loss functions.
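Because the risk is additive over columns, evaluating \(\mathcal{R}_{f}\) (up to the constant of proportionality) for any candidate path in the DAG reduces to a simple sum; a sketch, with the marginals \(p_{f}(X)\) supplied as a precomputed map and all names hypothetical:

    import java.util.List;
    import java.util.Map;

    final class AdditiveRisk {

        // Returns a quantity proportional to R_f(A): the sum over the
        // columns of A of [g - p_f(X)]. Minimising this over paths in the
        // DAG is equivalent to maximising the sum of [p_f(X) - g].
        static double risk(List<String> columns, Map<String, Double> pf, double g) {
            double total = 0.0;
            for (String column : columns) {
                total += g - pf.getOrDefault(column, 0.0);
            }
            return total;
        }
    }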
Modeller scores
One other class of loss function worth mentioning here is the so-called modeller version of each of the aforementioned scores, \(\mathcal{L}^{m}_{f}(A\, ||\, A^{\prime})\), which involves normalising \(\mathcal{L}_{f}(A\, ||\, A^{\prime})\) by the length of the predicted alignment, A. For example, the modeller C-score, corresponding to \(\mathcal{L}^{m}_{C}(A\, ||\, A^{\prime})\), was considered by Collingridge and Kelly [79]; as we shall see, the dependence on the length of the predicted alignment precludes the use of exact optimisation algorithms for loss functions such as this.
Efficient algorithms
In general, minimising the expectation of any of the aforementioned loss functions over the space of all possible multiple alignments is a problem whose complexity grows exponentially with the number of sequences [107]. For the pairwise case, the minimum-risk/maximum expected accuracy problem can be solved efficiently using standard dynamic programming algorithms [22,60,61,88,94,98,108–110]; for multiple sequences, approximate techniques have generally been used, including simulated annealing [20,111,112], and greedy [113] or progressive alignment algorithms [105,114–116]. However, if the solution set is restricted to the (still very large) space of alignments encoded in the DAG, any risk function that is additive over columns [in the sense of Equation (15)] can be minimised in time linear in the number of columns in the DAG, by making use of efficient maximum-weight path algorithms (see Algorithm 2; Figure 11). This type of approach was first mentioned by Lunter et al. [93], and an implementation was described by Satija et al. [65], although these previous studies did not examine the algorithm in terms of loss functions.
Figure 11. A collection of alignment samples can be combined into a DAG structure, and a summary alignment generated using efficient algorithms. The graph can be visualised by vertically ordering columns based on the longest path length to the end of the DAG (as shown above). Each path represents a valid combination of the columns in the input alignments, with valid recombinations shown as grey lines in the above figure. The maximum a posteriori or minimal-risk path can then be found efficiently using linear-time algorithms, yielding a single summary alignment (shown in blue) that accounts for the uncertainty in the alignment set, and can be annotated with posterior probabilities for each column (shown in orange).
The same approach cannot be applied to minimise the risk under the modeller variants, however, since the contribution of each column to the partial sum at each step in the dynamic programming algorithm depends on the unknown final alignment length. Collingridge and Kelly recently presented an algorithm, entitled MergeAlign, that proposed to optimise a score of this type, but as shown in Additional file 1: Figure S5, it is possible to construct counter-examples for which the algorithm does not compute the optimal solution. As we shall illustrate, this lack of optimality can result in significant losses when summarising a set of alignments. Moreover, the same objective, i.e. penalising longer alignments, can be achieved through the use of a non-zero g parameter as described above, such that the use of modeller variant loss functions is unnecessary.
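As a rough illustration of the maximum-weight path computation described above (a sketch in the spirit of Algorithm 2, not the WeaveAlign implementation itself), the minimum-risk path can be found in one pass over a topological ordering, carrying a best score and a back-pointer per column:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class MinRiskPath {

        record DagColumn(String key, List<DagColumn> predecessors) {}

        // Finds the path maximising the sum of per-column weights
        // w(X) = p_f(X) - g, i.e. minimising the additive risk. Predecessors
        // always appear earlier in the topological ordering.
        static List<String> bestPath(List<DagColumn> topologicalOrder, DagColumn end,
                                     Map<String, Double> pf, double g) {
            Map<String, Double> best = new HashMap<>();
            Map<String, DagColumn> back = new HashMap<>();
            for (DagColumn col : topologicalOrder) {
                double incoming = 0.0; // a source column starts a new path
                if (!col.predecessors().isEmpty()) {
                    incoming = Double.NEGATIVE_INFINITY;
                    for (DagColumn pred : col.predecessors()) {
                        double s = best.get(pred.key());
                        if (s > incoming) {
                            incoming = s;
                            back.put(col.key(), pred); // argmax predecessor
                        }
                    }
                }
                best.put(col.key(), incoming + pf.getOrDefault(col.key(), 0.0) - g);
            }
            // Recover the path by following back-pointers from the end column.
            Deque<String> path = new ArrayDeque<>();
            for (DagColumn col = end; col != null; col = back.get(col.key())) {
                path.addFirst(col.key());
            }
            return List.copyOf(path);
        }
    }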
Efficient data structures
In representing the alignment DAG, it is essential to ensure that the space complexity of the data structure is less than the total number of paths through the graph, which increases very rapidly with the number of columns. The obvious way to represent a graph is via a list of neighbours for each node, which requires \(\mathcal{O}(\bar{d}\, |\Xi|)\) storage, where |Ξ| is the number of observed columns and \(\bar{d}\) is the average node in-degree. However, within the mean-field setting, we can use the predecessor and successor equivalence classes to significantly increase the space efficiency, since each column need only record its predecessor and successor equivalence class. Given the definitions of the predecessor and successor equivalence classes, we can see that each equivalence class is of size at most \(2^{N}-1\), where N is the number of sequences, since each row can take one of two possible values (gap/character) in each equivalence class, with the restriction that the column cannot be all gaps. In general, the number of equivalence classes is therefore somewhat less than the number of columns, with \(|\Xi| = \bar{d}\, |\mathcal{E}|\), where \(1 \leq \bar{d} \leq 2^{N} - 1\). Using an equivalence-class representation of the DAG structure therefore results in \(\mathcal{O}(\bar{d}\, |\mathcal{E}|) = \mathcal{O}(|\Xi|)\) space requirements, saving a factor of \(\bar{d}\).
Similar gains can be made in time complexity. Since any column in a particular f_P-equivalence class will have the same set of possible predecessors, and similarly for successors, the partial sums required in dynamic programming algorithms can be stored per equivalence class rather than per node, which results in algorithms of \(\mathcal{O}(|\Xi|)\) time complexity rather than \(\mathcal{O}(\bar{d}\, |\Xi|)\) (see Algorithms 1 and 2 for examples). In the limit of a large number of short sequences with high uncertainty, this results in going from approximately quadratic time to time linear in the number of columns.
Example application: summary alignments for simulated and benchmark datasets
In order to illustrate the utility of the aforementioned procedure, we first simulated sequence data using the program DAWG [117], yielding sets of sequences for which the true alignment is known. Details of the simulation are provided in Additional file 1: Section S7. Data were simulated under three parameter regimes, with indel rates set to low, medium and high (see Additional file 1: Section S7 for further details); 50 datasets were generated for each regime, yielding 150 datasets overall, each containing 10 sequences, with average sequence length equal to 905 nucleotides.
As a biologically relevant example, we also considered a set of 78 alignments taken from the BAliBASE database, comprising the full-length alignments from the Reference 1 set [118]. This set further comprises two subsets, consisting of low sequence identity (Ref 1a, ID < 25%; short: 14, medium: 12, long: 12; average 6.8 sequences per alignment; average sequence length 309) and medium sequence identity (Ref 1b, ID = 20–40%; short: 14, medium: 16, long: 10; average 9.0 sequences per alignment; average sequence length 351). The simulated and BAliBASE datasets can be found in Additional file 2. For each of these datasets, we ran the statistical alignment software StatAlign [56], which jointly samples alignments and trees under a stochastic model of substitution, insertion and deletion [93].
A total of 1000 alignment samples were generated from the posterior distribution, and a Java-based implementation of Algorithm 2 was used to compute a summary alignment minimising the risk under the C- and C^+-based loss functions.
It is also of interest to consider how the minimum-risk summary approach scales to alignments containing larger numbers of sequences. As a test dataset containing larger alignments, we selected one of the largest alignments from the OXBench suite [119], consisting of 122 immunoglobulin sequences, with average length 113. To assess how the method scaled with the number of sequences after controlling for other factors (such as amino acid content and sequence length), we subsampled smaller datasets from this alignment, yielding datasets with 15, 33, 60 and 122 sequences. These subsets were sampled so as to maximise dissimilarity within the subset, since the original alignment contained several well-defined subgroups that would otherwise skew the analysis. Since full posterior sampling of alignments is only feasible for around 20–30 sequences, we made use of an approximate method for sampling alignments for these datasets [80], generating 2000 alignment samples for each dataset (see Additional file 1: Section S7 for further details).
Comparison to other methods
For comparison, we also generated summary alignments for each dataset using the MergeAlign method of Collingridge and Kelly [79], and a consistency-based approach whereby the alignment samples are used as a library for input to the program T-Coffee [114], using the -aln option [120]. We call the latter approach S-Coffee, with the 'S' signifying that the T-Coffee method is being used on a library derived from a set of sampled alignments. As shown in Table 1, our DAG-based implementation is substantially faster than the other methods. Increasing the indel rate results in higher alignment uncertainty and longer alignments, resulting in an increase in runtime for all methods, although the increase is small for the minimum-risk algorithm (henceforth referred to as MinRisk). Minimising the risk under the C^+-based loss function incurs an additional overhead due to the time needed to compute the weighted marginal probabilities, \(p_{C^{+}}(X)\), but this takes less than half a second in all the examples we considered here.
Table 1. Average time (in seconds) taken to generate a summary alignment from 1000 samples, for the three simulated datasets
Accuracy metrics
To assess the performance of each approach, we make use of several measures of alignment accuracy, including the AMA metric of Schwartz [112,121] (measuring the proportion of correct pairwise homology statements), and the column score (equivalent to the C^+-score, measuring the proportion of correct columns). In addition, we use the measures shown in Table 2.
Table 2. Accuracy measures used to assess the relative performance of the different summary methods
For the simulated data, accuracy is computed relative to the known true alignments, and for the BAliBASE datasets, relative to the benchmark alignment provided. Since the minimal \(\mathcal{R}_{C}\) and \(\mathcal{R}_{C^{+}}\) alignments maximise the expectation of the C- and C^+-score respectively, it would be expected that these methods perform best under the corresponding scores. The MergeAlign method seeks to maximise the modeller C-score, although as mentioned earlier, the algorithm cannot guarantee an optimal solution.
As a pairwise progressive algorithm, the S-Coffee method might be expected to perform best under a sum-of-pairs score, such as the AMA metric. Given that the absolute value of the accuracy varies substantially over the different datasets, we measure the performance of each method by computing a rank score, which indicates the rank of the accuracy of an alignment, \(\hat{A}\), relative to the set \(\mathcal{A}\) of 1000 samples used as input:
$$ r_{\alpha}(\hat{A}) = \frac{1}{|\mathcal{A}|} \sum\limits_{A \in \mathcal{A}} \mathbb{1}\left[\alpha(\hat{A}) > \alpha(A)\right] $$
A rank of 1 therefore indicates an alignment that is more accurate under measure α than each of the individual samples, whereas a rank of 0 indicates an accuracy lower than that of any of the individual samples.
Results: simulated data
As shown in Table 3, the MinRisk method generally yields summary alignments that are more accurate than the majority of the samples, resulting in a rank score close to 1. As expected, minimising the risk under the C-based loss function results in the highest accuracy under metric \(\alpha_{C}\), and similarly minimising the risk under \(\mathcal{R}_{C^{+}}\) results in the highest scores under measure \(\alpha_{C^{+}}\). Interestingly, the MinRisk C^+ method also results in the highest accuracy under the AMA sum-of-pairs metric. In all cases setting g=0 results in the best performance, since these accuracy metrics do not penalise false positives, although setting g=0.5 does not result in a large loss of performance.
Table 3. Average rank scores for the different methods on simulated datasets, using the accuracy metrics described in the main text and in Table 2
In contrast, on these datasets MergeAlign typically yields a summary alignment whose accuracy is close to the median, with a rank score close to 0.5, although performance is more reasonable under the \(\alpha_{C}\) measure. The progressive heuristic S-Coffee algorithm performs consistently badly in all cases, yielding summary alignments that are typically worse than the majority of the samples used to build the library, suggesting a conflict between the information contained in the samples and the heuristics used to construct the alignment.
When the modeller variants of the scores are considered (Table 4), the general patterns stay much the same, although there is now a benefit observed in increasing the g parameter, since the modeller scores penalise longer alignments. For alignments with more gaps (higher indel rate), the value of g yielding the highest accuracy under the modeller scores tends to decrease (see Figure 12). This reflects the fact that for cases where the true alignment contains many gaps we may wish to be more lenient with the inclusion of additional columns, allowing the alignment to increase in length. Overall, setting g=0.5 yields the best average performance under the modeller variants, corresponding to a loss function that equally penalises false positives and false negatives.
Figure 12. Accuracy as a function of the g parameter. Accuracy on the simulated datasets under the \(\alpha_{C^{+}}\) (left) and \(\alpha^{m}_{C^{+}}\) (right) measures as a function of the g parameter for low (∘), medium (△) and high (+) indel rates.
Table 4. Average rank scores for the different methods on simulated datasets, measured using the modeller scores
As might be expected, the performance of MergeAlign improves when the accuracy is measured using the modeller scores. However, better performance can still be obtained under the modeller variants by using the MinRisk method and a non-zero g parameter (see Table 4).
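The rank score defined earlier in this subsection is a simple counting statistic; a direct transcription (with the accuracy values α precomputed by the caller, and all names hypothetical) makes the definition explicit:

    final class RankScore {

        // Fraction of the input samples whose accuracy is strictly lower
        // than that of the summary alignment: 1 means the summary beats
        // every sample, 0 means it is worse than all of them.
        static double rank(double summaryAccuracy, double[] sampleAccuracies) {
            int below = 0;
            for (double a : sampleAccuracies) {
                if (summaryAccuracy > a) {
                    below++;
                }
            }
            return (double) below / sampleAccuracies.length;
        }
    }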
As discussed earlier, the g parameter accomplishes the key aim of the modeller score (i.e. penalising longer alignments) while maintaining computational tractability and a meaningful statistical interpretation.
Given the heterogeneity of the different datasets, it is also useful to visualise the results for the individual datasets. As shown in Figure 13 and Additional file 1: Figure S8, the results are consistent across all datasets, with the MinRisk method yielding alignments that are significantly better than the majority of samples, especially as the indel rate is increased. Conversely, the MergeAlign method consistently yields summary alignments that are close to the median accuracy of the sampled alignments, and the S-Coffee method performs consistently worse than the majority of samples.
Figure 13. Accuracy of summary alignments for simulated data. Results for the MinRisk, MergeAlign and S-Coffee methods shown in red, black and blue respectively, for low (top panel), medium (middle panel) and high (bottom panel) indel rates, with accuracy measured by \(\alpha_{C^{+}}\). The range of values covered by the 1000 samples is shown in grey, with lighter shading indicating greater distance from the median.
Results: BAliBASE
For the BAliBASE datasets, the MinRisk method also consistently yields summaries that are better than the majority of samples, and outperforms the other methods examined here in all cases (see Tables 5 and 6). Nevertheless, although still ranking behind most of the MinRisk combinations, MergeAlign performs somewhat better on the BAliBASE datasets than on the simulated data, with rank scores consistently much higher than the median. This suggests that these particular BAliBASE alignments contain fewer of the types of features (for example large numbers of indels) that are likely to lead to suboptimal solutions under the MergeAlign algorithm. Similarly, the S-Coffee method, although still often worse than the median accuracy of the samples, performs better than on the simulated data, suggesting that the heuristics employed by T-Coffee are tailored more towards aligning these types of datasets. These heuristics may to some extent be overriding the information input via the library, which may explain the poor performance on the simulated datasets.
Table 5. Average rank scores for the different methods on BAliBASE datasets, using the accuracy metrics described in the main text and in Table 2
Table 6. Average rank scores for the different methods on BAliBASE datasets, measured using the modeller scores
We can also see that, in general, the optimal value of g for the MinRisk method is higher for the Ref 1b dataset, reflecting the fact that these sequences are less diverged, and hence likely to contain fewer indels. However, as with the simulated data, a value of g=0.5 gives results that are close to optimal in all scenarios with the BAliBASE datasets.
Results: approximate sampling on larger OXBench alignments
Using the OXBench datasets, we can examine how the above conclusions scale to alignments with larger numbers of sequences. As discussed by Bucka-Lassen et al. [95], the number of intersections between sampled alignments may be expected to decrease as the number of sequences is increased, due to the increased size of the state space.
Similarly, since the number of possible columns increases exponentially with the number of sequences, it might be expected that the marginal probabilities of each column would decrease as the number of sequences is increased, thereby making the minimum-risk alignment less reliable. However, in the examples considered here, this effect does not appear to be significant, since the alignment uncertainty also decreases as more sequences are added to the alignment, and this appears to more than compensate for the increase in the size of the potential state space (see Table 7). This is also highlighted by the fact that the average number of columns per equivalence class (a measure of the uncertainty surrounding the minimum-risk alignment) does not increase as the number of sequences is increased.
Table 7. Results on OXBench datasets
As shown in Figure 14, although the marginal probabilities derived by the approximate sampling procedure may be less accurate than those from alignments obtained using StatAlign, the minimum-risk alignment for these datasets is still always better than the majority of samples, with a rank score often above 0.8 (see Table 7).
Figure 14. Distribution of alignment accuracy scores for the OXBench datasets. Minimum-risk summary alignments are shown in red, for g=0 and g=0.5. The summary alignments are generally more accurate than the majority of samples, and this remains the case as the number of sequences is increased.
Since the alignments are of length around 150, and the DAGs contain in the region of 30,000 unique columns, 2000 samples corresponds to approximately 10 observations per column. While this appears to be sufficient for estimating the minimum-risk alignment, more samples will be needed in order to accurately estimate the probabilities of the less likely alignments, since these tend to converge more slowly (cf. Figures 7 and 8). Overall the rank scores are of comparable magnitude to those observed with the BAliBASE datasets. Moreover, the performance does not appear to degrade as the number of sequences is increased, although the optimal value of g does switch from 0.5 to 0 as the number of sequences is increased to 60 and 122. This is likely due to the fact that the benchmark alignment increases in length as the number of sequences is increased, and a lower value of g favours longer alignments.
Computational considerations
While the runtime does increase with the number of sequences, a breakdown of the contributions to these timings shows that the majority of the time is spent reading in the alignments, which scales linearly with the number of alignments multiplied by the number of sequences (cf. Additional file 1: Figure S9). As discussed earlier, the minimum-risk algorithm scales linearly with the number of columns in the DAG, but this step contributes a very small proportion of the total runtime in the examples shown in Table 7. On our test systems the overall time taken to process and summarise 2000 alignments is only 3 seconds for the 122-sequence dataset (see Table 7), and around 10 seconds for 10,000 alignments (data not shown). For a 20-sequence dataset, analysing 500,000 alignments takes 150 seconds (see Additional file 1: Figure S9). Memory usage is also generally low, requiring less than 2 GB in all the cases we have tested, even for 500,000 alignments. In all cases we have examined, the time taken to actually generate the alignment samples is significantly larger than the time required to analyse the samples.
As such, large gains in efficiency can be obtained by generating one set of alignment samples and carrying out multiple downstream analyses on this same set, compared to carrying out a full joint sampling analysis.
Effect of alignment accuracy on tree estimation
As discussed in the introduction, a number of studies have highlighted how biases in alignments may lead to misleading conclusions in the context of downstream tree inference. As such, any methodology that has the potential to improve alignment accuracy, particularly in the presence of high uncertainty, has the potential to improve subsequent phylogenetic inference. Here we will provide a brief example to reiterate this point. For each of the simulated datasets discussed earlier, we performed tree inference using the program DNAML from version 3.69 of the PHYLIP package [122], using alignments generated by four commonly used programs, as well as the summary alignments generated using the minimum-risk procedure presented here. DNAML was run with the default settings in each case, and the distance to the known true tree was computed using the Robinson-Foulds distance, equal to the number of bipartitions that differ from the true tree, with a maximum value of 2(n−3), where n is the number of leaves in the tree [123].
As shown in Table 8 and Figure 15, the alignment accuracy under these different methods correlates strongly with the accuracy of the resulting trees, with the most accurate alignment methods giving rise to the fewest tree errors. In all cases, the C^+ version of the minimum-risk algorithm, applied to alignments generated by StatAlign, yields the highest tree accuracy. This example illustrates the types of improvements that can be obtained by using more robust methods to generate alignments before carrying out tree inference.
Figure 15. Alignment accuracy is strongly correlated with the number of errors in trees estimated by DNAML. Tree accuracy was measured using the Robinson-Foulds distance [123]. Results are shown for low (∘), medium (△) and high (+) indel rates, for the different methods presented in Table 8. In each case, the MinRisk results are highlighted in red (MinRisk C) and blue (MinRisk C^+), and tend to give the most accurate alignments and trees.
Table 8. Results for tree inference on alignments generated using different methods, on the simulated datasets, as shown in Figure 15
Predictive power of column marginals
As well as providing a way to approximate full alignment probabilities, posterior column marginal probabilities can also be good predictors of the presence or absence of a column in the true alignment [22]. In all cases examined here, the column marginals are excellent predictors of the presence or absence of the column in the true alignment, with an AUC close to 1, especially for the BAliBASE datasets (see Table 9). The C^+-weighted marginals (the marginal probability of a column after grouping with all other columns containing the same characters, regardless of position in the alignment) are less accurate in predicting the presence/absence of a column under the C^+ definition, which may be due to the fact that the estimates of \(p_{C^{+}}\) make stronger assumptions about the exchangeability of columns, averaging over a larger set of possible predecessors. In all cases, predictive power is higher for alignments containing fewer indels, although the predictive power of the marginals will depend largely on the suitability of the evolutionary model for analysing the dataset.
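The AUC values reported here have a simple rank-based interpretation: the probability that a randomly chosen column present in the true alignment receives a higher marginal than a randomly chosen absent one. A sketch of this computation via the Mann-Whitney statistic (names hypothetical) is:

    import java.util.List;

    final class MarginalAuc {

        // Computes the AUC of marginal probabilities as predictors of
        // column presence: the fraction of (present, absent) pairs ranked
        // correctly, counting ties as one half.
        static double auc(List<Double> presentMarginals, List<Double> absentMarginals) {
            double wins = 0.0;
            for (double p : presentMarginals) {
                for (double q : absentMarginals) {
                    if (p > q) wins += 1.0;
                    else if (p == q) wins += 0.5;
                }
            }
            return wins / (presentMarginals.size() * (double) absentMarginals.size());
        }
    }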
Table 9. Accuracy of marginal probabilities in predicting column presence/absence, as measured by the area under a ROC curve (AUC), including a comparison to results generated using the program GUIDANCE [76] (indicated by the p_G row in the table)
Comparison to results generated by the widely used program GUIDANCE [76] indicates that column marginals are typically a more reliable predictor of column presence/absence. However, it is important to note that the predictive power of these column marginals is dependent on the quality of the alignments used to construct the DAG.
Propagating alignment uncertainty into downstream inference
So far we have examined how the DAG facilitates the efficient generation of accurate summary alignments, which can then be used for subsequent analyses. However, for many types of analyses it may be advantageous to jointly sample alignments and other parameters of interest, such as trees [56,57] or sequence annotations [65], in order to account for the interdependence of these different quantities. Since joint sampling approaches are typically computationally intensive, it is also desirable to explore alternative ways in which alignment uncertainty can be incorporated into downstream inference in cases where joint analysis is not feasible [29,124].
Sequential approach
One way of accomplishing this is to carry out the downstream analyses separately on each of the sampled alignments, averaging or summarising the results as appropriate. This type of sequential approach has been used to assess the sensitivity of phylogenetic inference to the starting alignment [26,29,33], as well as to examine the effect of alignment uncertainty on estimates of positive selection [36] and RNA secondary structure prediction [125]. However, as discussed earlier, a set of alignment samples will typically contain only a small portion of the total probability mass, even for pairwise alignments with relatively low uncertainty (cf. Additional file 1: Figure S3). Hence, the uncertainty quantified in the individual samples will be a significant underestimate of the true alignment uncertainty. Moreover, since the relative frequencies of whole alignments are a very poor estimator of posterior probabilities, simply carrying out an independent analysis on each sampled alignment and then averaging is likely to yield unreliable results. Reweighting procedures such as those discussed by Blackburne and Whelan [36] are only feasible when the posterior probability of each alignment can be computed exactly, which is not the case for many models of interest.
DAG-based approach
In order to address these issues, we can make use of the alignment DAG, exploiting intersections between alignments to increase the effective sample size. Due to the acyclic structure of the graph, it is possible to adapt many standard algorithms, such as forward-backward algorithms for HMMs, to operate on the DAG structure rather than on an individual alignment. This allows downstream inference to be averaged over a very large number of alignments, weighted according to a more reliable estimate of the posterior probability for each alignment, rather than analysing only a small collection of individual samples. As a specific example, we can consider the case of tree inference under an independent-sites model.
On a single alignment, the posterior probability of a tree, Υ, can be written as a product of contributions from each column:
$$ p(\Upsilon \mid A, \Theta) \propto p(\Upsilon) \prod\limits_{i=1}^{L_{A}} p\left(A^{(i)} \mid \Upsilon, \Theta\right) $$
where Θ represents the parameters of the evolutionary model, and the proportionality involves the quantity \(\int p(A, \Upsilon)\, d\Upsilon\). It is then a straightforward extension to compute the posterior averaged over all alignments in the DAG, using a dynamic programming approach similar to the algorithms discussed earlier. We first introduce the following partial sum for a column X:
$$ z(X \mid \Upsilon, \Theta) \propto p(X \mid \Upsilon,\Theta) \sum\limits_{X^{\prime} \ltimes X} z(X^{\prime} \mid \Upsilon,\Theta)\, p(X \mid X^{\prime}) $$
such that the marginal posterior for the tree, Υ, summing over all alignments in a DAG \(\mathcal{D}(\mathcal{A})\), can be written as
$$\begin{array}{@{}rcl@{}} p(\Upsilon \mid \mathcal{D}(\mathcal{A}), \Theta) &\propto& p(\Upsilon) \sum\limits_{A \in \mathcal{D}(\mathcal{A})} p(A)\, p(\Upsilon \mid A, \Theta) \\ &\propto& p(\Upsilon)\, z\big(X^{(T)}_{\mathcal{A}} \mid \Upsilon, \Theta\big) \end{array}$$
Example application: marginal probabilities for topologies
As an illustration of the utility of this approach, we consider here a 4-sequence example, for which there are three possible unrooted topologies relating the sequences. The specific example we consider consists of three human globin sequences, α-haemoglobin (HbA), myoglobin (Mb), and cytoglobin (Cygb), as well as a plant leghaemoglobin (LegHb) (datasets can be found in Additional file 2). Previous studies have shown significant uncertainty as to the phylogenetic relationship between these different types of globins [62], hence this represents a good test case for analysing the effect of alignment uncertainty on topology inference. Here we restrict our analysis to four sequences for the purposes of simplifying the example.
For these sequences, a set of alignment samples, \(\mathcal{A}\), and tree samples, \(\mathcal{T}\), was generated using StatAlign (see Additional file 1: Section S7 for further details), and the marginal likelihood for each tree in the set \(\mathcal{T}\) was then computed as a sum over all the alignments by evaluating the quantity \(z(X^{(T)}_{\mathcal{A}} \mid \Upsilon, \Theta)\) for all \(\Upsilon \in \mathcal{T}\). The parameters, Θ, were set using the Dayhoff substitution matrix [103], with gaps treated as missing data. Assuming a uniform prior, the marginal posterior probability for each topology, τ, was then computed by averaging the marginal likelihoods for all trees in \(\mathcal{T}\) conforming to the particular topology:
$$ p(\tau \mid \mathcal{D}(\mathcal{A}), \Theta) \propto \sum\limits_{\Upsilon \in \mathcal{T}\,:\, \Upsilon \sim \tau} p(\Upsilon \mid \mathcal{D}(\mathcal{A}), \Theta) $$
where \(\Upsilon \sim \tau\) indicates that tree Υ conforms to topology τ. These marginal posteriors can then be compared to the topology posterior computed on each alignment individually, replacing \(\mathcal{D}(\mathcal{A})\) with A in Equation (26) above. Although the true tree is not known in this case, the trees sampled by StatAlign place the majority of the posterior mass on the left-most topology shown in the top panel of Figure 16, placing a posterior probability of 0.12 on the centre tree, and 0.09 on the right-most topology.
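The partial-sum recursion for z above translates directly into the same topological sweep used for the earlier dynamic programs; a sketch for a single tree follows, with the column likelihoods p(X | Υ, Θ) and transition probabilities p(X | X′) supplied by the caller and all names hypothetical. In a real implementation the products would be carried in log space to avoid underflow.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class TreeMarginal {

        record DagColumn(String key, List<DagColumn> predecessors) {}

        // Evaluates z(X | tree, theta) in topological order:
        // z(X) = p(X | tree, theta) * sum over predecessors X' of
        //        z(X') * p(X | X').
        // The value at the final column is proportional to the marginal
        // posterior of the tree, summed over all alignments in the DAG.
        static double treeMarginal(List<DagColumn> topologicalOrder, DagColumn end,
                                   Map<String, Double> columnLikelihood,
                                   Map<String, Double> transition) {
            Map<String, Double> z = new HashMap<>();
            for (DagColumn col : topologicalOrder) {
                double inner = col.predecessors().isEmpty() ? 1.0 : 0.0;
                for (DagColumn pred : col.predecessors()) {
                    inner += z.get(pred.key()) * transition.get(pred.key() + "|" + col.key());
                }
                z.put(col.key(), columnLikelihood.get(col.key()) * inner);
            }
            return z.get(end.key());
        }
    }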
Figure 16. Posterior probabilities for three possible topologies, computed on individual alignment samples (bottom left), as well as by marginalising over the alignments within the DAG (bottom right). Top panel: the three unrooted topologies for the four globin sequences discussed in the main text, ordered according to the posterior probability according to StatAlign (left to right, descending in probability). The leghaemoglobin sequence is taken from L. luteus, and all others from H. sapiens. Bottom panel: posterior probabilities computed on individual alignment samples (left), and by marginalising over all alignments contained within the DAG (right). Bars in the lower panel are colour-coded according to the shading of the tree topologies in the top panel, and ordered according to the probability of the first topology. Also shown is the mean of the probability vectors computed on the individual alignment samples (right).
The bottom panel of Figure 16 shows posterior probabilities computed using Equation (26), indicating significant variability depending on which alignment is used. While some alignments result in a posterior probability of more than 0.9 for the most favourable topology, others result in a probability of less than 0.2 for this topology. Simply taking the mean posterior over all the individual alignments in this case results in a posterior probability of only 0.56 for the most favourable topology. However, combining all the alignment samples into the DAG leads to a posterior probability of 0.94. This illustrates the fact that combining the alignments into a DAG may allow additional information to be extracted from the same set of alignments, due to the increased effective sample size arising from intersections in the DAG.
Since the same DAG is used to compute the likelihood for all trees in the set \(\mathcal{T}\), the majority of the runtime for this procedure is not spent reading in the alignments from disk (as it was for the minimum-risk summary procedure). As such, the runtime scales linearly with the number of columns in the DAG, as expected (see Additional file 1: Figure S10).
Conclusions
The approaches illustrated here provide a general framework for dealing with alignment uncertainty in a statistically meaningful fashion. Encoding a set of sampled alignments in a DAG structure allows for more accurate estimation of posterior probabilities based on column or pair marginals. Due to interchanges and crossovers in the DAG, the number of alignments encoded in the graph is typically many orders of magnitude greater than the number of samples used to generate the DAG, such that the effective sample size is greatly increased by this representation. Since the graph is acyclic, efficient algorithms can be developed for summation over this very large number of alignments, each weighted according to its probability.
As a specific example, we have considered algorithms for generating summary alignments that minimise the expected value of various types of loss functions, observing that this type of algorithm is generally very successful at minimising the loss on a set of test cases. This approach provides a way to conduct many types of sequence analysis on the very large set of alignments encoded in the DAG structure, allowing alignment uncertainty to be propagated into downstream inference in cases where computationally expensive joint sampling procedures are infeasible.
In addition to the tree inference example illustrated here, we are currently working on adapting several other common algorithms to the alignment DAG structure.
Combining the output of other alignment programs
The approaches detailed here are in theory applicable to a set of alignments generated by any type of method, although the quality of the probability estimates generated by the DAG will depend on the quality of the underlying model used to generate the alignments. Although this type of method can be used to combine the output of several different alignment programs, in a similar fashion to the M-Coffee procedure [120], such an approach does not have a probabilistic interpretation, and will depend heavily on the choice of programs used to generate the input. We have observed that this type of procedure usually yields summary alignments that are similar in accuracy to the program that typically generates the most accurate alignments (data not shown); however, since the most accurate alignment method is usually known from the outset, based on benchmarking results, there is not much to be gained by employing such a procedure. Moreover, the reliability of such an approach as a heuristic will depend strongly on the degree of similarity between the different alignment programs, hence we would recommend against using alignment DAGs as a way of combining the output of non-probabilistic alignment programs.
Alignment DAGs as generators of alignment samples
One other obvious application of the alignment DAG is as a way of generating additional alignment samples, which can be drawn using a DAG-based version of the traditional stochastic traceback algorithm (cf. Additional file 1: Section S6). One potential use for these alignment samples could be as a source of proposals within an MCMC alignment sampler, allowing a new state to be generated efficiently, along with a known proposal probability for use in a Metropolis-Hastings accept/reject step. Although this type of approach does not allow for the exploration of previously unobserved columns, it could be useful as a way to improve mixing, particularly once the key regions of the space have already been explored.
Software availability
Java software implementing the minimum-risk alignment summary algorithm and the computation of marginal topology probabilities is available for download at http://statalign.github.io/WeaveAlign. A platform-independent jar archive containing version 1.2.1 of WeaveAlign is included in Additional file 2, along with datasets and example results.
References
Siepel A, Bejerano G, Pedersen JS, Hinrichs AS, Hou M, Rosenbloom K, et al. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Res. 2005; 15(8):1034–50.
Altschuh D, Vernet T, Berti P, Moras D, Nagai K. Coordinated amino acid changes in homologous protein families. Protein Eng. 1988; 2(3):193–9.
Hopf TA, Colwell LJ, Sheridan R, Rost B, Sander C, Marks DS. Three-dimensional structures of membrane proteins from genomic sequencing. Cell. 2012; 149(7):1607–21.
Knudsen B, Hein J. RNA secondary structure prediction using stochastic context-free grammars and evolutionary history. Bioinformatics. 1999; 15(6):446–54.
Höhl M, Ragan MA. Is multiple-sequence alignment required for accurate inference of phylogeny? Syst Biol. 2007; 56(2):206–21.
Blundell TL, Sibanda BL, Sternberg MJE, Thornton JM. Knowledge-based prediction of protein structures and the design of novel molecules. Nature. 1987; 326(6111):347–52.
Sali A, Blundell T. Comparative protein modelling by satisfaction of spatial restraints. J Mol Biol. 1993; 234(3):779–815.
Needleman S, Wunsch C. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970; 48(3):443–53.
Gotoh O. An improved algorithm for matching biological sequences. J Mol Biol. 1982; 162(3):705–8.
Edgar RC. MUSCLE: A multiple sequence alignment method with reduced time and space complexity. BMC Bioinformatics. 2004; 5:113.
Lupyan D, Leo-Macias A, Ortiz AR. A new progressive-iterative algorithm for multiple structure alignment. Bioinformatics. 2005; 21(15):3255–63.
Notredame C, Higgins DG. SAGA: sequence alignment by genetic algorithm. Nucleic Acids Res. 1996; 24(8):1515–24.
Kim J, Pramanik S, Chung MJ. Multiple sequence alignment using simulated annealing. Comput Appl Biosci CABIOS. 1994; 10(4):419–26.
Feng DF, Doolittle RF. Progressive sequence alignment as a prerequisite to correct phylogenetic trees. J Mol Evol. 1987; 25(4):351–60.
Löytynoja A, Goldman N. Phylogeny-aware gap placement prevents errors in sequence alignment and evolutionary analysis. Science. 2008; 320(5883):1632–5.
Thorne JL, Kishino H, Felsenstein J. An evolutionary model for maximum likelihood alignment of DNA sequences. J Mol Evol. 1991; 33(2):114–24.
Thorne JL, Kishino H, Felsenstein J. Inching toward reality: An improved likelihood model of sequence evolution. J Mol Evol. 1992; 34:3–16.
Hein J, Wiuf C, Knudsen B, Møller MB, Wibling G. Statistical alignment: computational properties, homology testing and goodness-of-fit. J Mol Biol. 2000; 302:265–79.
Miklós I, Lunter GA, Holmes I. A "long indel" model for evolutionary sequence alignment. Mol Biol Evol. 2004; 21(3):529–40.
Bradley RK, Roberts A, Smoot M, Juvekar S, Do J, Dewey C, et al. Fast statistical alignment. PLoS Comput Biol. 2009; 5(5):e1000392.
Godzik A. The structural alignment between two proteins: is there a unique answer? Protein Sci. 1996; 5(7):1325–38.
Lunter G, Rocco A, Mimouni N, Heger A, Caldeira A, Hein J. Uncertainty in homology inferences: Assessing and improving genomic sequence alignment. Genome Res. 2008; 18(2):298–309.
Lake JA. The order of sequence alignment can bias the selection of tree topology. Mol Biol Evol. 1991; 8(3):378–85.
Morrison DA, Ellis JT. Effects of nucleotide sequence alignment on phylogeny estimation: a case study of 18S rDNAs of apicomplexa. Mol Biol Evol. 1997; 14(4):428–41.
Ogden TH, Rosenberg MS. Multiple sequence alignment accuracy and phylogenetic inference. Syst Biol. 2006; 55(2):314–28.
Liu K, Raghavan S, Nelesen S, Linder CR, Warnow T. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 2009; 324(5934):1561–4.
Dessimoz C, Gil M. Phylogenetic assessment of alignments reveals neglected tree signal in gaps. Genome Biol. 2010; 11(4):1–9.
Wang LS, Leebens-Mack J, Wall PK, Beckmann K, de Pamphilis CW, Warnow T. The impact of multiple protein sequence alignment on phylogenetic estimation. IEEE/ACM Trans Comput Biol Bioinformatics. 2011; 8(4):1108–19.
Liu K, Warnow TJ, Holder MT, Nelesen SM, Yu J, Stamatakis AP, Linder CR. SATé-II: Very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees. Syst Biol. 2012; 61:90–106.
Simmons MP, Müller KF, Norton AP. Alignment of, and phylogenetic inference from, random sequences: The susceptibility of alternative alignment methods to creating artifactual resolution and support. Mol Phylogenet Evol. 2010; 57(3):1004–16.
Levy Karin E, Susko E, Pupko T. Alignment errors strongly impact likelihood-based tests for comparing topologies. Mol Biol Evol. 2014; 31(11):3057–67.
Thorne JL, Kishino H. Freeing phylogenies from artifacts of alignment. Mol Biol Evol. 1992; 9(6):1148–62.
Wong KM, Suchard MA, Huelsenbeck JP. Alignment uncertainty and genomic analysis. Science. 2008; 319(5862):473–6.
Dwivedi B, Gadagkar S. Phylogenetic inference under varying proportions of indel-induced alignment gaps. BMC Evol Biol. 2009; 9:211.
Capella-Gutiérrez S, Gabaldón T. Measuring guide-tree dependency of inferred gaps in progressive aligners. Bioinformatics. 2013; 29(8):1011–7.
Blackburne BP, Whelan S. Class of multiple sequence alignment algorithm affects genomic analysis. Mol Biol Evol. 2013; 30(3):642–53.
Tramontano A, Leplae R, Morea V. Analysis and assessment of comparative modeling predictions in CASP4. Proteins: Struct Funct Bioinformatics. 2001; 45(S5):22–38.
Schwarzenbacher R, Godzik A, Grzechnik SK, Jaroszewski L. The importance of alignment accuracy for molecular replacement. Acta Crystallographica Section D. 2004; 60(7):1229–36.
Chivian D, Baker D. Homology modeling using parametric alignment ensemble generation with consensus and energy-based model selection. Nucleic Acids Res. 2006; 34(17):e112.
Dickson RJ, Wahl LM, Fernandes AD, Gloor GB. Identifying and seeing beyond multiple sequence alignment errors using intra-molecular protein covariation. PLoS ONE. 2010; 5(6):e11082.
Dickson RJ, Gloor GB. Protein sequence alignment analysis by local covariation: Coevolution statistics detect benchmark alignment errors. PLoS ONE. 2012; 7(6):e37645.
Gardner PP, Wilm A, Washietl S. A benchmark of multiple sequence alignment programs upon structural RNAs. Nucleic Acids Res. 2005; 33(8):2433–9.
Fletcher W, Yang Z. The effect of insertions, deletions, and alignment errors on the branch-site test of positive selection. Mol Biol Evol. 2010; 27(10):2257–67.
Privman E, Penn O, Pupko T. Improving the performance of positive selection inference by filtering unreliable alignment regions. Mol Biol Evol. 2012; 29:1–5.
Jordan G, Goldman N. The effects of alignment error and alignment filtering on the sitewise detection of positive selection. Mol Biol Evol. 2012; 29(4):1125–39.
Castresana J. Selection of conserved blocks from multiple alignments for their use in phylogenetic analysis. Mol Biol Evol. 2000; 17(4):540–52.
Talavera G, Castresana J. Improvement of phylogenies after removing divergent and ambiguously aligned blocks from protein sequence alignments. Syst Biol. 2007; 56(4):564–77.
Wu M, Chatterji S, Eisen JA. Accounting for alignment uncertainty in phylogenomics. PLoS ONE. 2012; 7:e30288.
Gatesy J, DeSalle R, Wheeler W. Alignment-ambiguous nucleotide sites and the exclusion of systematic data. Mol Phylogenet Evol. 1993; 2(2):152–7.
Lee MSY. Unalignable sequences and molecular evolution. Trends Ecol Evol. 2001; 16(12):681–5.
Ajawatanawong P, Atkinson GC, Watson-Haigh NS, MacKenzie B, Baldauf SL. SeqFIRE: A web application for automated extraction of indel regions and conserved blocks from protein multiple sequence alignments. Nucleic Acids Res. 2012; 40(W1):W340–7.
Lunter G. Probabilistic whole-genome alignments reveal high indel rates in the human and mouse genomes. Bioinformatics. 2007; 23(13):289–96.
Miklós I, Novák A, Dombai B, Hein J. How reliably can we predict the reliability of protein structure predictions? BMC Bioinformatics. 2008; 9:137.
Thompson JD, Linard B, Lecompte O, Poch O. A comprehensive benchmark study of multiple sequence alignment methods: Current challenges and future perspectives. PLoS ONE. 2011; 6(3):e18093.
Metzler D, Fleissner R, Wakolbinger A, von Haeseler A. Assessing variability by joint sampling of alignments and mutation rates. J Mol Evol. 2001; 53(6):660–9.
Novák A, Miklós I, Lyngsø R, Hein J. StatAlign: an extendable software package for joint Bayesian estimation of alignments and evolutionary trees. Bioinformatics. 2008; 24(20):2403–4.
Suchard MA, Redelings BD. BAli-Phy: simultaneous Bayesian inference of alignment and phylogeny. Bioinformatics. 2006; 22(16):2047–8.
Redelings BD, Suchard MA. Joint Bayesian estimation of alignment and phylogeny. Syst Biol. 2005; 54(3):401–18.
Dryden IL, Hirst JD, Melville JL. Statistical analysis of unlabeled point sets: Comparing molecules in chemoinformatics. Biometrics. 2007; 63:237–51.
Green PJ, Mardia KV, Nyirongo VB, Ruffieux Y. Bayesian modelling for matching and alignment of biomolecules. In: The Oxford Handbook of Applied Bayesian Analysis. Oxford: Oxford University Press; 2010. p. 27–50.
Ruffieux Y, Green PJ. Alignment of multiple configurations using hierarchical models. J Comput Graphical Stat. 2009; 18(3):756–73.
Herman JL, Challis CJ, Novák A, Hein J, Schmidler SC. Simultaneous Bayesian estimation of alignment and phylogeny under a joint model of protein sequence and structure. Mol Biol Evol. 2014; 31(9):2251–66.
Sinha S, He X. MORPH: Probabilistic alignment combined with hidden Markov models of cis-regulatory modules. PLoS Comput Biol. 2007; 3(11):e216.
Satija R, Pachter L, Hein J. Combining statistical alignment and phylogenetic footprinting to detect regulatory elements. Bioinformatics. 2008; 24(10):1236–42.
Satija R, Novák A, Miklós I, Lyngsø R, Hein J. BigFoot: Bayesian alignment and phylogenetic footprinting with MCMC. BMC Evol Biol. 2009; 9:217.
Hamada M, Sato K, Kiryu H, Mituyama T, Asai K. CentroidAlign: fast and accurate aligner for structured RNAs by maximizing expected sum-of-pairs score. Bioinformatics. 2009; 25(24):3236–43.
Capella-Gutiérrez S, Silla-Martínez JM, Gabaldón T. trimAl: a tool for automated alignment trimming in large-scale phylogenetic analyses. Bioinformatics. 2009; 25(15):1972–3.
Ahola V, Aittokallio T, Vihinen M, Uusipaikka E. Model-based prediction of sequence alignment quality. Bioinformatics. 2008; 24(19):2165–71.
DeBlasio D, Wheeler T, Kececioglu J. Estimating the accuracy of multiple alignments and its use in parameter advising. In: Chor B, editor. Research in Computational Molecular Biology, Volume 7262 of Lecture Notes in Computer Science. Berlin Heidelberg: Springer; 2012. p. 45–59.
Misof B, Misof K. A Monte Carlo approach successfully identifies randomness in multiple sequence alignments: A more objective means of data exclusion. Syst Biol. 2009; 58(1):21–34.
Dress A, Flamm C, Fritzsch G, Grunewald S, Kruspe M, Prohaska S, Stadler P. Noisy: Identification of problematic columns in multiple sequence alignments. Algorithms Mol Biol. 2008; 3:7.
Landan G, Graur D. Heads or Tails: A simple reliability check for multiple sequence alignments. Mol Biol Evol. 2007; 24(6):1380–3.
Hall BG. How well does the HoT score reflect sequence alignment accuracy? Mol Biol Evol. 2008; 25(8):1576–80.
Wise MJ. Not so HoT? Heads or tails is not able to reliably compare multiple sequence alignments. Cladistics. 2010; 26(4):438–43.
Penn O, Privman E, Landan G, Graur D, Pupko T. An alignment confidence score capturing robustness to guide tree uncertainty. Mol Biol Evol. 2010; 27(8):1759–67.
Penn O, Privman E, Ashkenazy H, Landan G, Graur D, Pupko T. GUIDANCE a web server for assessing alignment confidence scores. Nucleic Acids Res. 2010; 38(suppl 2):W23–8. Löytynoja A, Milinkovitch M C. SOAP: cleaning multiple alignments from unstable blocks. Bioinformatics. 2001; 17(6):573–4. Wheeler WC. Sequence alignment, parameter sensitivity, and the phylogenetic analysis of molecular data. Syst Biol. 1995; 44(3):321–31. Collingridge P, Kelly S. MergeAlign: Improving multiple sequence alignment performance by dynamic reconstruction of consensus multiple sequence alignments. BMC Bioinformatics. 2012; 13:117. Herman JL, Szabó A, Miklós I, Hein J. Approximate posterior sampling of multiple sequence alignments by iterative perturbation of substitution matrices. 2015. arXiv: arXiv:1501.04986. Waterman MS, Byers TH. A dynamic programming algorithm to find all solutions in a neighborhood of the optimum. Math Biosci. 1985; 77(1-2):179–88. Zuker M. Suboptimal sequence alignment in molecular biology: Alignment with error analysis. J Mol Biol. 1991; 221(2):403–20. Vingron M. Near-optimal sequence alignment. Curr Opinion Struct Biol. 1996; 6(3):346–52. Vingron M, Argos P. Determination of reliable regions in protein sequence alignments. Protein Eng. 1990; 3(7):565–9. Mevissen HT, Vingron M. Quantifying the local reliability of a sequence alignment. Protein Eng. 1996; 9(2):127–32. Landan G, Graur D. Local reliability measures from sets of co-optimal multiple sequence alignments. In: Pacific Symposium on Biocomputing., Volume 13. Kohala Coast, HI, USA: 2008. p. 15–24. Karlin S, Altschul SF. Applications and statistics for multiple high-scoring segments in molecular sequences. Proc Nat Acad Sci. 1993; 90(12):5873–7. Durbin R, Eddy SR, Krogh A, Mitchison G. Biological Sequence Analysis Probabilistic Models of Proteins and Nucleic Acids. Cambridge, UK: Cambridge University Press; 1998. Zhu J, Liu JS, Lawrence CE. Bayesian adaptive sequence alignment algorithms. Bioinformatics. 1998; 14:25–39. Webb BJM, Liu JS, Lawrence CE. BALSA: Bayesian algorithm for local sequence alignment. Nucleic Acids Res. 2002; 30(5):1268–77. Churchill GA. Monte Carlo sequence alignment. In: Proceedings of the First Annual International Conference on Computational Molecular Biology. Santa Fe, NM, USA: ACM: 1997. p. 93–97. Metzler D. Statistical alignment based on fragment insertion and deletion models. Bioinformatics. 2003; 19(4):490–99. Lunter GA, Miklós I, Drummond A, Jensen JL, Hein J. Bayesian coestimation of phylogeny and sequence alignment. BMC Bioinformatics. 2005; 6:83. Green PJ, Mardia KV. Bayesian alignment using hierarchical models, with applications in protein bioinformatics. Biometrika. 2006; 93(2):235–54. Bucka-Lassen K, Caprani O, Hein J. Combining many multiple alignments in one improved alignment. Bioinformatics. 1999; 15(2):122–30. Schwikowski B, Vingron M. Weighted sequence graphs: boosting iterated dynamic programming using locally suboptimal solutions. Discrete Appl Math. 2003; 127:95–117. Szabó A, Novák A, Miklós I, Hein J. Reticular alignment: A progressive corner-cutting method for multiple sequence alignment. BMC Bioinformatics. 2010; 11:570. Hamada M, Asai K. A classification of bioinformatics algorithms from the viewpoint of maximizing expected accuracy (MEA). J Comput Biol. 2012; 19(5):532–49. Redelings BD, Suchard MA. Robust inferences from ambiguous alignments, Sequence, Alignment: Methods, Models, Concepts and Strategies. Oakland, CA: University of California Press; 2011, pp. 209–271. 
Thorne JL, Churchill GA. Estimation and reliability of molecular sequence alignments. Biometrics. 1995; 51:100–13. Yu L, Smith T. Positional statistical significance in sequence alignment. J Comput Biol. 1999; 6(2):253–9. Larget B. The estimation of tree posterior probabilities using conditional clade probability distributions. Syst Biol. 2013; 62(4):501–11. Dayhoff MO, Schwartz RM, Orcutt BC. A model of evolutionary change in proteins. Atlas Protein Seq Struct. 1978; 5(suppl 3):345–51. Carvalho LE, Lawrence CE. Centroid estimation in discrete high-dimensional spaces with applications in biology. Proc Nat Acad Sci. 2008; 105(9):3209–14. Roshan U, Livesay DR. Probalign: multiple sequence alignment using partition function posterior probabilities. Bioinformatics. 2006; 22(22):2715–21. Hamada M, Kiryu H, Iwasaki W, Asai K. Generalized centroid estimators in bioinformatics. PLoS ONE. 2011; 6(2):e16450. Wang L, Jiang T. On the complexity of multiple sequence alignment. J Comput Biol. 1994; 1(4):337–48. Miyazawa S. A reliable sequence alignment method based on probabilities of residue correspondences. Protein Eng. 1995; 8(10):999–1009. Holmes I, Durbin R. Dynamic programming alignment accuracy. J Comput Biol. 1998; 5(3):493–504. Wolfsheimer S, Hartmann A, Rabus R, Nuel G. Computing posterior probabilities for score-based alignments using ppALIGN. Stat Appl Genet Mol Biol. 2012; 11(4). Article 1. Schwartz AS, Pachter L. Multiple alignment by sequence annealing. Bioinformatics. 2007; 23(2):e24–9. Schwartz AS. Posterior decoding methods for optimization and accuracy control of multiple alignments. PhD thesis. Berkeley: University of California; 2007. Sahraeian SME, Yoon BJ. PicXAA: greedy probabilistic construction of maximum expected accuracy alignment of multiple sequences. Nucleic Acids Res. 2010; 38(15):4917–28. Notredame C, Higgins DG, Heringa J. T-Coffee: A novel method for fast and accurate multiple sequence alignment. J Mol Biol. 2000; 302:205–17. Do CB, Mahabhashyam MSP, Brudno M, Batzoglou S. ProbCons: Probabilistic consistency-based multiple sequence alignment. Genome Res. 2005; 15(2):330–40. Liu Y, Schmidt B, Maskell DL. MSAProbs: multiple sequence alignment based on pair hidden Markov models and partition function posterior probabilities. Bioinformatics. 2010; 26(16):1958–64. Cartwright RA. DNA assembly with gaps (DAWG): Simulating sequence evolution. Bioinformatics. 2005; 21(Suppl 3):31–8. Thompson JD, Koehl P, Ripp R, Poch O. BAliBASE 3.0: Latest developments of the multiple sequence alignment benchmark. Proteins: Struct Funct Bioinformatics. 2005; 61:127–36. Raghava G, Searle S, Audley P, Barber J, Barton G. OXBench: A benchmark for evaluation of protein multiple sequence alignment accuracy. BMC Bioinformatics. 2003; 4:47. Wallace IM, O'Sullivan O, Higgins DG, Notredame C. M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34(6):1692–9. Schwartz AS, Myers EW, Pachter L. Alignment metric accuracy. arXiv:q-bio/0510052. 2005. Felsenstein J. Evolutionary trees from DNA sequences: A maximum likelihood approach. J Mol Evol. 1981; 17(6):368–376. Robinson D, Foulds L. Comparison of phylogenetic trees. Math Biosci. 1981; 53(1-2):131–47. Lunter G, Drummond AJ, Miklós I, Hein J. Statistical Alignment Recent progress, new applications, and challenges. In: Statistical Methods in, Molecular Evolution, Statistics for Biology and Health. New York: Springer: 2005. p. 375–405. Arunapuram P, Edvardsson I, Golden M, Anderson JWJ, Novák A, Sükösd Z, et al. 
StatAlign 2.0: combining statistical alignment with RNA secondary structure prediction. Bioinformatics. 2013; 29(5):654–5. This work was supported by grants from the EPSRC (JLH) and BBSRC (ÁN). The authors thank Ian Holmes and Benjamin Redelings for productive discussions. Department of Statistics, University of Oxford, 1 South Parks Road, Oxford, OX1 3TG, UK Joseph L Herman , Ádám Novák , Rune Lyngsø & Jotun Hein Division of Mathematical Biology, National Institute of Medical Research,, The Ridgeway, London, NW7 1AA, UK Institute of Computer Science and Control, Hungarian Academy of Sciences, Lagymanyosi u. 11., Budapest, 1111, Hungary Adrienn Szabó & István Miklós Department of Stochastics, Rényi Institute, Reáltanoda u. 13-15, Budapest, 1053, Hungary István Miklós Search for Joseph L Herman in: Search for Ádám Novák in: Search for Rune Lyngsø in: Search for Adrienn Szabó in: Search for István Miklós in: Search for Jotun Hein in: Correspondence to Joseph L Herman. JLH wrote the manuscript, developed the statistical formulation of the DAG structure, developed software, and conducted the data analyses; ÁN developed the software implementing the minimum-risk decoding algorithm, generated simulated datasets and scripts for analysing alignment accuracy; RL proved the NP-hardness of constructing efficient algorithms under the C + mapping; AS worked on an initial implementation of the DAG representation, assisted with software development, generated the OXBench datasets and assisted with the analysis thereof; IM developed the alignment coding scheme as a bijection to the DP matrix; JH supervised the project. All authors read and approved the final manuscript. Additional file 1 Supplementary methods and figures. Software and datasets. Herman, J.L., Novák, Á., Lyngsø, R. et al. Efficient representation of uncertainty in multiple sequence alignments using directed acyclic graphs. BMC Bioinformatics 16, 108 (2015) doi:10.1186/s12859-015-0516-1 Alignment graphs Statistical alignment Alignment uncertainty
CommonCrawl
Modeling framework for isotopic labeling of heteronuclear moieties

Mark I. Borkum, Patrick N. Reardon, Ronald C. Taylor and Nancy G. Isern

Isotopic labeling is an analytic technique that is used to track the movement of isotopes through reaction networks. In general, the applicability of isotopic labeling techniques is limited to the investigation of reaction networks that consider homonuclear moieties, whose atoms are of one tracer element with two isotopes, distinguished by the presence of one additional neutron. This article presents a reformulation of the modeling framework for isotopic labeling, generalized to arbitrarily large, heteronuclear moieties, arbitrary numbers of isotopic tracer elements, and arbitrary numbers of isotopes per element, distinguished by arbitrary numbers of additional neutrons. With this work, it is now possible to simulate the isotopic labeling states of metabolites in completely arbitrary biochemical reaction networks.

Keywords: Isotopic labeling; Isotopomers; Cumomers; Elementary metabolite units

Isotopic labeling is an analytic technique that is used to track the movement of isotopes through reaction networks. First, specific atoms of the reagent moieties are replaced with detectable, "labeled" isotopic variants. Then, after the reactions have been allowed to proceed, the position and relative abundance of labeled isotopic atoms are determined by experiment. Subsequent analysis of the measured quantities elucidates the characteristics of the reaction network, e.g., the rate constants of the reactions.

In general, the applicability of isotopic labeling techniques is limited to the investigation of reaction networks that consider homonuclear moieties, whose atoms are of one isotopic tracer element with two isotopes, distinguished by the presence of one additional neutron. The contribution of this article is a reformulation of the modeling framework for isotopic labeling, generalized to arbitrarily large, heteronuclear moieties, arbitrary numbers of isotopic tracer elements, and arbitrary numbers of isotopes per element, distinguished by arbitrary numbers of additional neutrons.

The first group to give a partial solution to the problem of the representation of isotopic labeling states of arbitrary moieties was Malloy et al. [1]. Representing isotopic labeling states of carbon atoms in backbones of metabolites as vectors of Boolean truth values, which they referred to as "isotopomers" (a contraction of the term "isotopic isomers"), using 0 and 1 to denote, respectively, \(^{12}{\textsf{C}}\)-unlabeled and \(^{13}{\textsf{C}}\)-labeled atoms, they showed that relative abundances of metabolites in biological systems could be calculated using nonlinear functions of relative abundances of isotopomers, which they referred to as "isotopomer fractions." For example, a metabolite of two carbon atoms has \(2^{2} = 4\) isotopomers, 00, 01, 10 and 11; and hence, the isotopic labeling state of the metabolite is given by 4 isotopomer fractions. Aside from being nonlinear, Malloy et al.'s construction suffers from the fact that an exponential number of isotopomer fractions are required in order to determine the isotopic labeling state of any given metabolite.

Wiechert et al. [2] noted that, by the nature of the problem, isotopic labeling states of biochemical reaction network substrates are wholly determined by specific subsets of carbon atoms; and therefore, that isotopic labeling states of the complement of each subset can be omitted, incurring no loss of information.
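To make the counting argument concrete, here is a minimal Python sketch (ours, not the authors'; the helper name `isotopomers` is hypothetical) that enumerates Malloy et al.'s Boolean-vector isotopomers:

```python
from itertools import product

def isotopomers(n):
    """All Boolean isotopomers of a homonuclear moiety of n atoms:
    0 denotes an unlabeled atom, 1 a labeled atom."""
    return list(product((0, 1), repeat=n))

# A metabolite of two carbon atoms has 2**2 = 4 isotopomers.
print(isotopomers(2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The exponential growth noted above is immediate: `isotopomers(n)` has 2**n elements.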
Representing isotopic labeling states of carbon atoms in backbones of metabolites as vectors of placeholder variables, which they referred to as "cumomers" (a contraction of the term "cumulative isotopomers"), using 1 and x to denote, respectively, determinate (\(^{13}{\textsf{C}}\)-labeled) and indeterminate (\(^{12}{\textsf{C}}\)-unlabeled or \(^{13}{\textsf{C}}\)-labeled) isotopic labeling states, with a total ordering given by \(x < 1\), they showed that Malloy et al.'s construction could be reformulated as a cascade system of linear functions of relative abundances of cumomers, which they referred to as "cumomer fractions," with the original construction being recovered via a "suitable variable transformation" [2, Eq. 7] in the form of an invertible square matrix. For example, a metabolite of two carbon atoms has \(2^{2} = 4\) cumomers, xx, x1, 1x and 11; and hence, the isotopic labeling state of the metabolite is given by 4 cumomer fractions.

Antoniewicz et al. [3] shed new light on this subject. Manipulating isotopic labeling states of specific subsets of carbon atoms as aggregations, rather than as singletons, they showed that, under certain conditions, Wiechert et al.'s cascade system could be optimized using graph-theoretic methods, e.g., vertex reachability analysis, edge smoothing and Dulmage–Mendelsohn decomposition, thereby significantly reducing the total number of system variables. Representing isotopic labeling states of carbon atoms in backbones of metabolites as mass distributions, which they referred to as "Elementary Metabolite Units (EMU)," they showed that every EMU has a unique factorization, and that the mass distribution of a given EMU is equal to the vector convolution of the mass distributions of its proper factors. Moreover, they showed that, in a given mass distribution, the mass fraction corresponding to the greatest number of mass shifts is equivalent to a specific cumomer fraction.

Antoniewicz et al.'s work provided the foundation for a whole host of important biological investigations [4–6], and inspired many software implementations [7–10]. Even so, the cases not solved by Antoniewicz et al. merit attention for four reasons. First, the subject is very closely tied to the concept of isotopic labeling, and can thus serve to bring greater clarity and determinacy to its mathematical formulation. In this respect, treatment of the subject possesses an immediate interest. Second, the EMU method suffers from the same system variable explosion issue as the isotopomer and cumomer methods that preceded it. Is this issue intrinsic to the problem, i.e., unavoidable, or simply difficult to mitigate? Third, Antoniewicz et al. give neither a rigorous proof of the correctness of the EMU method, nor a derivation of its construction from prior research. Why does the EMU method appear to work? Is it the most effective decomposition that can be "done" to a model of a given biological system? Fourth, and finally, the EMU method has thus far been demonstrated for homonuclear systems only; specifically, those containing carbon atoms, where unlabeled and labeled variants of isotopic tracer elements are distinguished by the presence of one additional neutron. Can the EMU method be generalized to arbitrary, heteronuclear systems? At the conclusion of their essay, Antoniewicz et al. promise to return to these cases later; but, thus far, this promise has gone unfulfilled. Nor do the works of Srour et al. [11] fill this gap.
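As a minimal numerical sketch of the factorization property just described (assumed fraction values; the variable names are ours, and NumPy is assumed to be available): the mass distribution of a two-atom EMU is the convolution of the mass distributions of its uni-atomic factors.

```python
import numpy as np

# Mass distributions of two uni-atomic EMU factors (assumed values):
# index k holds the probability of k mass shifts, i.e., P(M+k).
factor_a = np.array([0.9, 0.1])
factor_b = np.array([0.6, 0.4])

# Mass distribution of the combined two-atom EMU: vector convolution.
combined = np.convolve(factor_a, factor_b)
print(combined)  # [0.54 0.42 0.04] -> P(M+0), P(M+1), P(M+2)
```

Note that the last entry, P(M+2) = 0.1 × 0.4, is the product of the two labeled fractions, i.e., exactly the cumomer fraction referred to in the preceding paragraph.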
Their essays, which are contemporaneous with those of Antoniewicz et al., develop a counterpart to the EMU method, where system variables are combinations of flux variables and cumomer fractions, which they refer to as "fluxomers." While the fluxomer method does enable a reformulation of the flux balance equation that, it is claimed, has greater numerical precision and does not require matrix inversion for its solution, it is, in actuality, a method for constructing a specific matrix in a single pass, which, with some algebraic manipulation, can be shown to be equivalent to the EMU method for certain biochemical reaction networks. Specifically, the fluxomer method involves the construction of the directed line graph for each EMU reaction network (the new graph that represents the adjacencies between the edges of the original graph). Furthermore, the software implementation of the fluxomer method is, in general, far more complex than that of the EMU method, limiting its widespread adoption.

The most recent work in this area, by Nilsson and Jain [12], gives a construction for the representation and manipulation of "multivariate mass isotopomer distributions" of heteronuclear moieties, which they demonstrate for moieties of carbon and nitrogen atoms. The key advantage of their approach is that the probabilities associated with each combination of the contribution of each isotopic tracer element are uniquely identifiable. The disadvantage of their approach, however, is that it requires the use of k-dimensional vectors, where k is the number of isotopic tracer elements, increasing the complexity of the software implementation. In the discussion of their essay, the authors claim that "the cumomer framework can be easily extended along the same lines;" however, they give neither a construction of multivariate cumomers nor a method of conversion to multivariate mass distributions. Moreover, their method is only demonstrated for isotopic tracer elements with two isotopic labeling states per atom: unlabeled or labeled.

In summary: Malloy et al. represented isotopic labeling states of carbon atoms in backbones of metabolites as probability distributions of configurations, obtaining a nonlinear formulation of the flux balance equation. Wiechert et al. omitted specific isotopic labeling states, obtaining a cascade system. Antoniewicz et al. manipulated aggregations of isotopic labeling states, represented as mass distributions, yielding an optimization; and Srour et al. gave an algebraically equivalent reformulation. Nilsson and Jain used multidimensional vectors to represent mass distributions of heteronuclear moieties, but did not give a construction for isotopomers or cumomers. As things stand, however, representation of and conversion between isotopic labeling states of metabolites in completely arbitrary biological systems is not possible.

The indeterminacy which still prevails on a number of fundamental points in the theory of isotopic labeling compels us to make some prefatory remarks about the concept of an isotopic labeling state and the scope of its validity. For this investigation, unless otherwise stated, a moiety is any set of atoms, not necessarily connected via chemical bonds, nor of the same metabolite. Furthermore, we take as an axiom that the number of nucleons in an atomic nucleus is quantized, i.e., is a non-negative integer. First and foremost, what do we understand by the concept of an isotopic labeling state?
Representations of states of moieties of isotopic atoms are isomorphic to logical conjunctions of assertions of both the proton and neutron numbers of each atom, e.g., "the first atom has 1 proton and 0 neutrons, the second atom has 6 protons and 7 neutrons, etc." In this way, the characteristics of each isotopic atom are completely specified; and hence, the representation is an unambiguous description of the isotopic labeling state of the given moiety. However, if there exists a mapping \(f\,{:}\,{\mathbb {Z}} \rightarrow {\mathbb {Z}}\) from proton to least neutron number, e.g., \(f\left( {1} \right) = 0, f\left( {6} \right) = 6\), etc., where \({\mathbb {Z}}\) denotes the set of integers, then representations of states of moieties are isomorphic to logical conjunctions of assertions of both the proton and additional neutron numbers, i.e., mass shifts, of each atom, e.g., "the first atom has 1 proton and 0 additional neutrons, the second atom has 6 protons and 1 additional neutron, etc." In this way, the characteristics of each isotopic atom are still completely specified; but, the total number of system variables per moiety, required to represent the mass distribution, is reduced to the successor of the greatest additional neutron number.

Let m be a non-negative integer that denotes the greatest neutron number (either directly or, assuming the existence of a mapping from proton to least neutron number, indirectly), then the set of determinate isotopic labeling states of a given isotopic atom of a given moiety, denoted \({\text {St}}_{m}\), is the set of integral members of the closed interval \(\left[ {0,\,m} \right]\) $$\begin{aligned} {\text {St}}_{m} \overset{\text {def}}{=} \left\{ {0,\,1,\,\ldots ,\,m } \right\} . \end{aligned}$$

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and greatest neutron number, then the set of isotopomers of a given moiety, denoted \({\text {Isotopomer}}_{n,\,m}\), is the set of length-n vectors of members of the set \({\text {St}}_{m}\) $$\begin{aligned} {\text {Isotopomer}}_{n,\,m} \overset{\text {def}}{=} \left\{ {\left( { a_{1},\,a_{2},\,\dots ,\,a_{n} } \right) \vert \ \forall \ i : \left( {i \in \left[ {1,\,n} \right] } \right) \wedge \left( {a_{i} \in {\text {St}}_{m}} \right) } \right\} , \end{aligned}$$ and the set of cumomers of a given moiety, denoted \({\text {Cumomer}}_{n,\,m}\), is the set of sequences of members of the set \({\text {St}}_{m} \cup \left\{ {\top } \right\}\) $$\begin{aligned} {\text {Cumomer}}_{n,\,m}\overset{\text {def}}{=} \left\{ { \left( {a_{1},\,a_{2},\,\dots ,\,a_{n},\,a_{n + 1},\,\ldots } \right) \vert \ \forall \ i : \left( {i \in {\mathbb {Z}}_{> 0}} \right) \wedge \left( { a_{i} \in \left\{ { \begin{array}{lll} {\text {St}}_{m} \cup \left\{ {\top } \right\} &\quad {\text {if}} & i \le n\\ \left\{ {\top } \right\} &\quad {\text {if}} & i > n\\ \end{array} } \right. } \right) } \right\} , \end{aligned}$$ where \(\top\) denotes the indeterminate isotopic labeling state, the logical disjunction of every determinate isotopic labeling state (viz., logical true).
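The three definitions above translate directly into code. The following Python sketch is ours (hypothetical naming; `TOP` stands for \(\top\)), and it materializes only the length-n local prefix of each cumomer, since all non-local atoms are indeterminate by definition:

```python
from itertools import product

TOP = "T"  # the indeterminate isotopic labeling state (logical top)

def states(m):
    """Determinate isotopic labeling states St_m = {0, 1, ..., m}."""
    return list(range(m + 1))

def isotopomers(n, m):
    """Isotopomer_{n,m}: length-n vectors over St_m; (m + 1)**n in total."""
    return list(product(states(m), repeat=n))

def cumomers(n, m):
    """Cumomer_{n,m}, restricted to the n local atoms: vectors over
    St_m plus the indeterminate state; (m + 2)**n in total."""
    return list(product(states(m) + [TOP], repeat=n))

print(len(isotopomers(2, 1)), len(cumomers(2, 1)))  # 4 9
```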
Importantly, while an isotopomer is a representation of the isotopic labeling state of a given moiety in the context of itself, a cumomer is a representation of the isotopic labeling state of a given moiety in the context of a larger, virtual moiety of a countably infinite number of atoms, whose isotopic labeling states are mutually independent, where the isotopic labeling states of non-local atoms are, by definition, indeterminate (as displayed by the quantification of atom indices over the nonzero, positive members of \({\mathbb {Z}}\)). It is this essential difference that enables the formulation of the flux balance equation as a cascade system.

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, then the set of cumomers of a given moiety can be exhaustively partitioned into a set of \(2^{n}\) mutually disjoint subsets, denoted \({\text {EMU}}_{n,\,m}\), where the isotopic labeling states of local atoms in each subset of cumomers, denoted \({\text {EMU}}_{n,\,m,\,N}\), are determinate for the members of a member N of the power-set of the set of atoms of the given moiety $$\begin{aligned}&{\text {EMU}}_{n,\,m}\overset{\text {def}}{=}\left\{ {{\text {EMU}}_{n,\,m,\,N} \vert \ \forall \ N : \left( {N \in {\mathbb {P}}\left( {\left[ {1,\,n} \right] } \right) } \right) } \right\} \nonumber \\&{\text {EMU}}_{n,\,m,\,N} \overset{\text {def}}{=} \left\{ { a\ \vert \left( {a \in {\text {Cumomer}}_{n,\,m}} \right) \wedge \left( { \forall \ i : \left( {i \in \left[ {1,\,n} \right] } \right) \wedge \left( { a_{i} \in \left\{ { \begin{array}{ccc} {\text {St}}_{m} & {\text {if}} & i \in N\\ \left\{ {\top } \right\} & {\text {if}} & i \not \in N\\ \end{array} } \right. } \right) } \right) } \right\} , \end{aligned}$$ where \({\mathbb {P}}\) denotes the power-set function.

Representation as Boolean functions

Isotopomer and cumomer fraction vectors for uni- and multi-atomic EMUs are, by definition, discrete probability distributions of the isotopic labeling state of the given moiety. Interpreted as Boolean functions, isotopomer and cumomer fraction vectors are "multiplied" using the Cartesian product under multiplication.

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, then the set of isotopomer fractions of a given moiety, corresponding to the set \({\text {Isotopomer}}_{n,\,m}\), represented as a vector of \(\left( {m + 1} \right) ^{n}\) entries, each giving the functional value of the corresponding minterm, is a completely specified, Boolean function of n variables, with \(m + 1\) values per variable. The sum of the set of isotopomer fractions of a given moiety is, therefore, =100%.

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, and \(N \in {\mathbb {P}}\left( {\left[ {1,\,n} \right] } \right)\) be a subset of atoms with determinate isotopic labeling states, then any (mutually disjoint) subset of cumomer fractions of a given moiety, corresponding to the set \({\text {EMU}}_{n,\,m,\,N}\), represented as a vector of \(\left( {m + 1} \right) ^{\left| {N} \right| }\) entries, each giving the functional value of the corresponding minterm, is a completely specified, Boolean function of \(\left| {N} \right|\) variables, with \(m + 1\) values per variable. The sum of a (mutually disjoint) subset of cumomer fractions of a given moiety is, therefore, =100%.
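A sketch of the partition just defined (again with our hypothetical naming): for each subset N of atom positions, `emu_block(n, m, N)` yields the block of cumomers that are determinate exactly on N; the block sizes sum to \((m + 2)^{n}\), confirming that the partition is exhaustive.

```python
from itertools import chain, combinations, product

TOP = "T"  # the indeterminate isotopic labeling state

def emu_block(n, m, N):
    """EMU_{n,m,N}: cumomers determinate on the 1-based atom subset N,
    indeterminate everywhere else."""
    choices = [range(m + 1) if i in N else [TOP] for i in range(1, n + 1)]
    return list(product(*choices))

n, m = 2, 1
subsets = chain.from_iterable(
    combinations(range(1, n + 1), r) for r in range(n + 1))
blocks = [emu_block(n, m, set(N)) for N in subsets]
print(sum(len(b) for b in blocks))  # 9 == (m + 2)**n
```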
Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, then the set of cumomer fractions of a given moiety, corresponding to the set \({\text {Cumomer}}_{n,\,m}\), represented as a vector of \(\left( {m + 2} \right) ^{n}\) entries, each giving the functional value of the corresponding minterm, is a completely specified, Boolean function of n variables, with \(m + 2\) values per variable (the extra value being the indeterminate isotopic labeling state). The sum of the set of cumomer fractions of a given moiety is, therefore, \({=}2^{n} \times 100\%\).

For the above to hold in situations where isotopic tracer elements have different numbers of isotopes, distinguished by different numbers of additional neutrons, it is necessary to assume that surplus isotopic labeling states are undetectable, i.e., that the probability of their detection is =0%. Given this assumption, isotopomers and cumomers that include surplus isotopic labeling states are also undetectable, i.e., the corresponding isotopomer and cumomer fractions are also =0%.

Representation as mass distributions

Cumomer fraction vectors for uni-atomic EMUs are, by definition, probability mass functions that characterize discrete probability distributions of the mass of the given atom, i.e., mass distributions. Interpreted as mass distributions (instead of Boolean functions), cumomer fraction vectors are "multiplied" using vector convolution (instead of the Cartesian product under multiplication).

Representation as arbitrary monoids

Both interpretations induce a commutative monoid structure on the set of EMUs, where the left- and right-identity of the monoidal product is the interpretation of the trivial EMU, corresponding to the "empty moiety" of zero atoms, denoted \({\text {EMU}}_{n,\,m,\,\emptyset }\), where \(\emptyset\) denotes the empty set. What other interpretations are possible for uni-atomic EMUs? Under what circumstances are they valid?

A homomorphism \(f\,{:}\,A \rightarrow B\) is a function between two sets A and B, such that $$\begin{aligned} f\left( { \mu _{A}\left( { a_{1},\,a_{2},\,\dots ,\,a_{n} } \right) } \right) = \mu _{B}\left( { f\left( {a_{1}} \right) ,\,f\left( {a_{2}} \right) ,\,\ldots ,\,f\left( {a_{n}} \right) } \right) \end{aligned}$$ holds, for the variadic functions \(\mu _{A}\) and \(\mu _{B}\), and for all elements \(a_{1},\,a_{2},\,\dots ,\,a_{n} \in A\).

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, and \(N \in {\mathbb {P}}\left( {\left\{ {1,\,2,\,\ldots ,\,n} \right\} } \right)\) be a subset of atoms with determinate isotopic labeling states, then the function \(f\,{:}\,\left\{ {{\mathbb {Z}}} \right\} \rightarrow S\) is a homomorphism between the domain, the set of (mutually disjoint) subsets of atoms, and the codomain S, where \(\mu _{\left\{ {{\mathbb {Z}}} \right\} }\) and \(\mu _{S}\) are, respectively, set union and the monoidal product. Interpretations of uni-atomic EMUs are valid, therefore, if and only if:

1. The codomain S is isomorphic to a set of vectors, whose lengths, non-negative integers, are given by a well-defined function of \(\left| {N} \right|\) and m; and,
2. The codomain S is a commutative monoid.

Hence, given n and m, the function \(f\left( {N} \right) = {\text {EMU}}_{n,\,m,\,N}\) is a valid homomorphism, with the codomain being interpreted as either the set of mass distributions or the set of Boolean functions.
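The two interpretations can be compared side by side. A minimal sketch with assumed uni-atomic fractions, where the Boolean-function product is realized as a Kronecker (outer) product and the mass-distribution product as vector convolution:

```python
import numpy as np

# Assumed fraction vectors of two uni-atomic EMUs (two-isotope tracer).
p = np.array([0.9, 0.1])
q = np.array([0.5, 0.5])

# Interpretation 1: Boolean functions, combined by the Cartesian
# product under multiplication (realized here as a Kronecker product).
print(np.kron(p, q))      # [0.45 0.45 0.05 0.05]

# Interpretation 2: mass distributions, combined by vector convolution.
print(np.convolve(p, q))  # [0.45 0.5  0.05]
```

Both products are associative and commutative up to reindexing, with the trivial EMU interpreted as `[1.0]` serving as the identity; this is the monoid structure referred to above.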
Moreover, the function \(f\left( {N} \right) = \left| N \right|\) is also a valid homomorphism, with the codomain being the set of non-negative integers under addition. Notice that, if we follow Wiechert et al.'s advice and partition the members of the codomain S by the number of isotopic atoms, then, by definition, the vector representations of the members of each partition have the same length. Therefore, any valid interpretation can be used in flux balance equations if the corresponding vector representations are transposed (from column to row vectors).

Conversion of isotopomer and cumomer fractions

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, then conversion from isotopomer fraction vectors to cumomer fraction vectors is "done" using the rectangular matrix $$\begin{aligned} \overline{\mathbf{IC }}_{n,\,m} \overset{\text {def}}{=} \bigotimes _{i = 1}^{n}{ \begin{bmatrix} {\mathbb {I}}_{m + 1}\\ \mathbf {1}_{1 \times \left( {m + 1} \right) }\\ \end{bmatrix} } \end{aligned}$$ where \(\otimes\) denotes the Kronecker product, \({\mathbb {I}}_{k}\) denotes the identity matrix of dimension \(k \times k\), and \(\mathbf {1}_{1 \times k}\) denotes the all-ones row vector of length k. Exemplar matrices for small values of n and m are given in Table 1.

Table 1 Matrices for conversion of isotopomer and cumomer fraction vectors \(\overline{\mathbf{IC }}_{n,\,m}\)

\(\overline{\mathbf{IC }}_{0,\,m} = \begin{bmatrix} 1 \end{bmatrix}\)

\(\overline{\mathbf{IC }}_{1,\,1} = \begin{bmatrix} 1&0\\ 0&1\\ 1&1\\ \end{bmatrix}\), \(\overline{\mathbf{IC }}_{2,\,1} = \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 1&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&0&1&1\\ 1&0&1&0\\ 0&1&0&1\\ 1&1&1&1\\ \end{bmatrix}\)

\(\overline{\mathbf{IC }}_{1,\,2} = \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ 1&1&1\\ \end{bmatrix}\), \(\overline{\mathbf{IC }}_{2,\,2} = \begin{bmatrix} 1&0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0&0\\ 1&1&1&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&1&0&0&0\\ 0&0&0&1&1&1&0&0&0\\ 0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0&1\\ 0&0&0&0&0&0&1&1&1\\ 1&0&0&1&0&0&1&0&0\\ 0&1&0&0&1&0&0&1&0\\ 0&0&1&0&0&1&0&0&1\\ 1&1&1&1&1&1&1&1&1\\ \end{bmatrix}\)

Non-square, rectangular matrices are, of course, non-invertible; however, isomorphisms between the set of isotopomers and certain subsets of cumomers can be defined, witnessed by a family of square matrices, as we will now show.

Let m be a non-negative integer that denotes the greatest neutron number, and \(s \in {\text {St}}_{m}\) be a determinate isotopic labeling state, then the set of punctured, determinate isotopic labeling states of an isotopic atom of a given moiety with respect to s is the set \({\text {St}}_{m} \setminus \left\{ {s} \right\}\). Removal of the \(\left( {s + 1} \right)\)th row of the rectangular matrix \(\overline{\mathbf{IC }}_{1,\,m}\), followed by the repeated Kronecker product, always yields an invertible matrix. In fact, for a given n and m, the product of any two of these invertible matrices is an injective matrix (the utility of which we have not yet established). Why is this the case?

For every determinate isotopic labeling state s, there exists a set of punctured, determinate isotopic labeling states, and a corresponding set of punctured cumomers (the term "punctured" referring to the exclusion of any one member of the set). The set of punctured cumomers is, therefore, the logical conjunction-exclusive disjunction expansion of the set of isotopomers with respect to s; or, in coding theory parlance, the Reed–Muller spectral domain [13, 14] of the set of isotopomers with respect to s. The square matrices are, therefore, the Reed–Muller transform matrices. Note that the set of "cumomers" described by Wiechert et al. is the set of punctured cumomers with respect to s = 0.
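The Kronecker construction and the puncturing operation can be checked numerically. A sketch (our function names) that rebuilds the matrices of Table 1 and verifies that puncturing with respect to \(s = 0\) yields an invertible matrix:

```python
import numpy as np

def ic(n, m):
    """Isotopomer-to-cumomer conversion matrix: n-fold Kronecker product
    of the uni-atomic block [I_{m+1} stacked on an all-ones row]."""
    block = np.vstack([np.eye(m + 1), np.ones((1, m + 1))])
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, block)
    return out

print(ic(2, 1).astype(int))                 # the 9x4 matrix of Table 1

s = 0                                        # puncture with respect to s = 0
punctured = np.delete(ic(1, 1), s, axis=0)   # drop the (s + 1)-th row
reed_muller = np.kron(punctured, punctured)  # n = 2 transform matrix
print(np.round(np.linalg.inv(reed_muller)).astype(int))  # invertible, as claimed
```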
Conversion of isotopomer and mass fractions

Let n and m be non-negative integers that denote, respectively, the number of isotopic atoms and the greatest neutron number, then conversion from isotopomer fraction vectors to mass fraction vectors is "done" using the rectangular matrix $$\begin{aligned} \overline{\mathbf{IM }}_{n,\,m} \overset{\text {def}}{=} \prod _{i = 1}^{n}{ {\text {interspersecols}}\left( {{\mathbb {I}}_{m + 1},\,\left( {m + 1} \right) ^{i - 1} - 1,\,0} \right) } \end{aligned}$$ where \(\prod\) denotes matrix convolution, \({\mathbb {I}}_{k}\) denotes the identity matrix of dimension \(k \times k\), and \({\text {interspersecols}}\) is a function that takes as input an arbitrary matrix \(\mathbf A\), a spacing k and an element x, and returns as output a new matrix, where the columns of \(\mathbf A\) are interspersed with k columns of the specified element x. Exemplar matrices for small values of n and m are given in Table 2.

Table 2 Matrices for conversion of isotopomer and mass fraction vectors \(\overline{\mathbf{IM }}_{n,\,m}\)

\(\overline{\mathbf{IM }}_{1,\,1} = \begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}\), \(\overline{\mathbf{IM }}_{2,\,1} = \begin{bmatrix} 1&0&0&0\\ 0&1&1&0\\ 0&0&0&1 \end{bmatrix}\)

\(\overline{\mathbf{IM }}_{1,\,2} = \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}\), \(\overline{\mathbf{IM }}_{2,\,2} = \begin{bmatrix} 1&0&0&0&0&0&0&0&0\\ 0&1&0&1&0&0&0&0&0\\ 0&0&1&0&1&0&1&0&0\\ 0&0&0&0&0&1&0&1&0\\ 0&0&0&0&0&0&0&0&1 \end{bmatrix}\)

Non-square, rectangular matrices are non-invertible. While every rectangular matrix has a Moore–Penrose pseudoinverse, their use, in this context, is incorrect, as we will now show with a pathological example.

Let A be a homonuclear moiety of two isotopic atoms, considering one isotopic tracer element, with two isotopes, distinguished by one mass shift, then conversion of isotopomer and mass fraction vectors is given by the linear equation $$\begin{aligned} \begin{bmatrix} \begin{array}{c} {\text {A}}_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 0}\\ {\text {A}}_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 1}\\ {\text {A}}_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 2}\\ \end{array} \end{bmatrix} = \overline{\mathbf{IM }}_{2,\,1} \cdot \begin{bmatrix} \begin{array}{c} {\text {A}}_{\left( {0,\,0} \right) }\\ {\text {A}}_{\left( {0,\,1} \right) }\\ {\text {A}}_{\left( {1,\,0} \right) }\\ {\text {A}}_{\left( {1,\,1} \right) }\\ \end{array} \end{bmatrix}; \end{aligned}$$ (8a) and, to aid the example, assuming different values for each of the four isotopomer fractions of A, then $$\begin{aligned} \begin{bmatrix} \begin{array}{c} 0.4\\ 0.5\\ 0.1\\ \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 1\\ \end{array} \end{bmatrix} \cdot \begin{bmatrix} \begin{array}{c} 0.4\\ 0.3\\ 0.2\\ 0.1\\ \end{array} \end{bmatrix}, \end{aligned}$$ (8b) which is correct, given the specified isotopomer fractions.
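A sketch of this construction (using the i-dependent spacing \(\left( {m + 1} \right) ^{i - 1} - 1\) of the formula above; `interspersecols` and `im` are our names, and NumPy and SciPy are assumed to be available), reproducing \(\overline{\mathbf{IM }}_{2,\,1}\) of Table 2 and the worked example (8b):

```python
import numpy as np
from scipy.signal import convolve2d

def interspersecols(a, k, x):
    """Intersperse k columns of the element x between adjacent columns of a."""
    rows, cols = a.shape
    out = np.full((rows, cols + k * (cols - 1)), float(x))
    out[:, :: k + 1] = a
    return out

def im(n, m):
    """Isotopomer-to-mass-fraction conversion matrix via matrix convolution."""
    out = np.array([[1.0]])
    for i in range(1, n + 1):
        factor = interspersecols(np.eye(m + 1), (m + 1) ** (i - 1) - 1, 0)
        out = convolve2d(out, factor)
    return out

print(im(2, 1).astype(int))   # [[1 0 0 0] [0 1 1 0] [0 0 0 1]] -- Table 2
print(im(2, 1) @ np.array([0.4, 0.3, 0.2, 0.1]))  # [0.4 0.5 0.1], as in (8b)
```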
However, conversion between mass and isotopomer fraction vectors, using the Moore–Penrose pseudoinverse, is incorrect $$\begin{aligned} \begin{bmatrix} \begin{array}{c} {\text {A}}_{\left( {0,\,0} \right) }\\ {\text {A}}_{\left( {0,\,1} \right) }\\ {\text {A}}_{\left( {1,\,0} \right) }\\ {\text {A}}_{\left( {1,\,1} \right) }\\ \end{array} \end{bmatrix} \ne \left( { \overline{\mathbf{IM }}_{2,\,1} } \right) ^{+} \cdot \begin{bmatrix} \begin{array}{c} {\text {A}}_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 0}\\ {\text {A}}_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 1}\\ {\text {A}}_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 2}\\ \end{array} \end{bmatrix}, \end{aligned}$$ (8c) the pseudoinverse instead yielding $$\begin{aligned} \begin{bmatrix} \begin{array}{c} 0.4\\ 0.25\\ 0.25\\ 0.1\\ \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{ccc} 1 & 0 & 0\\ 0 & 0.5 & 0\\ 0 & 0.5 & 0\\ 0 & 0 & 1\\ \end{array} \end{bmatrix} \cdot \begin{bmatrix} \begin{array}{c} 0.4\\ 0.5\\ 0.1\\ \end{array} \end{bmatrix}. \end{aligned}$$ (8d)

Logical conjunction and disjunction of isotopic labeling states

Let m be a non-negative integer that denotes the greatest neutron number, then logical conjunction is defined for pairs of members of the set \({\text {St}}_{m} \cup \left\{ \top \right\}\) that are either: equal, determinate and indeterminate, or indeterminate and determinate. The Cayley table for the set \({{\text {St}}_{m} \cup {\left\{ {\bot , \top } \right\} }}\) under logical conjunction, with rows and columns permuted to facilitate this exposition, is $$\begin{aligned} \begin{array}{c|cc|cccc} \wedge & \bot & \top & 0 & 1 & \cdots & m\\ \hline \bot & \bot & \bot & \bot & \bot & \cdots & \bot \\ \top & \bot & \top & 0 & 1 & \cdots & m \\ \hline 0 & \bot & 0 & 0 & \bot & \cdots & \bot \\ 1 & \bot & 1 & \bot & 1 & \cdots & \bot \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ m & \bot & m & \bot & \bot & \cdots & m \\ \end{array} \end{aligned}$$ showing that the set \({\text {St}}_{m} \cup \left\{ {\bot , \top } \right\}\) is closed under logical conjunction, as are four of its cosets: \(\left\{ {\bot } \right\} ,\left\{ {\top } \right\} ,\left\{ {\bot , \top } \right\}\), and \({\text {St}}_{m} \cup \left\{ {\bot } \right\}\); however, the set \({\text {St}}_{m}\), and by extension, the set \({\text {St}}_{m} \cup \left\{ \top \right\}\), is neither closed nor well-defined under logical conjunction; since, by definition, it does not contain \(\bot\). The set \({{\text {St}}_{m} \cup {\left\{ {\bot , \top } \right\} }}\) is, therefore, a multi-valued logic, with \(m + 1\) distinguished, mutually contradictory elements that are identified with neither logical true \(\top\) nor logical false \(\bot\).

Logical conjunction is, therefore, defined only for pairs of isotopomers that are pairwise equal; and is equivalent to multiplication of the corresponding isotopomer fractions; however, since logical conjunction is not defined for every pair of isotopomers, we cannot take the Cartesian product under logical conjunction of pairs of sets of isotopomers. Logical conjunction is, therefore, defined only for pairs of cumomers that are pairwise: equal, determinate and indeterminate, or indeterminate and determinate; and it is equivalent to multiplication of the corresponding cumomer fractions. Since logical conjunction is defined for pairs of cumomers that are determinate for disjoint subsets of atoms, we can take the Cartesian product under logical conjunction of pairs of sets of cumomers that are determinate for disjoint subsets of atoms. Logical disjunction is, of course, defined, for all isotopomers and cumomers, and it is equivalent to addition of the corresponding isotopomer and cumomer fractions.

Decomposition of arbitrary biochemical reactions

A biochemical reaction is an identity of power-sets of isotopic atoms of virtual moieties, corresponding to the sets of all isotopic atoms on the left- and right-hand sides of the "arrow."
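The multi-valued conjunction just described is easy to state operationally. A sketch (our encoding: `BOT` for \(\bot\), `TOP` for \(\top\), integers for determinate states):

```python
BOT, TOP = "F", "T"  # logical false (contradiction) and logical true

def conj(a, b):
    """Logical conjunction on St_m extended with bottom and top."""
    if BOT in (a, b):
        return BOT               # bottom annihilates
    if a == TOP:
        return b                 # top is the identity
    if b == TOP:
        return a
    return a if a == b else BOT  # distinct determinate states contradict

print(conj(1, TOP), conj(1, 1), conj(0, 1))  # 1 1 F
```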
For example, the expression $$\begin{aligned} {\text {A}}_{\left\{ {a, b} \right\} } + {\text {B}}_{\left\{ {c} \right\} } \overset{v}{\longrightarrow } {\text {C}}_{\left\{ {a} \right\} } + {\text {D}}_{\left\{ {b, c} \right\} } \end{aligned}$$ (10a) denotes a biochemical reaction with two reagents, A and B, two products, C and D, and a flux variable v. It is a representation of the identity $$\begin{aligned} f\left( { \left( { {\mathbb {P}}\left( {\left\{ {a_{{\text {right}}}} \right\} } \right) \cup {\mathbb {P}}\left( {\left\{ {b_{{\text {right}}}, c_{{\text {right}}}} \right\} } \right) } \right) \setminus \left\{ {\emptyset } \right\} } \right) = v \times f\left( { \left( { {\mathbb {P}}\left( {\left\{ {a_{{\text {left}}}, b_{{\text {left}}}} \right\} } \right) \cup {\mathbb {P}}\left( {\left\{ {c_{{\text {left}}}} \right\} } \right) } \right) \setminus \left\{ {\emptyset } \right\} } \right) , \end{aligned}$$ (10b) where \({\mathbb {P}}\) denotes the power-set function, \(\emptyset\) denotes the empty set, and the function f is a valid homomorphism, given the conditions that we have established. Note the empty set is deliberately excluded from both power-sets. Doing otherwise would introduce a contradiction: \(f\left( {\emptyset } \right) = v \times f\left( {\emptyset } \right)\).

Application of the power-set and set difference functions yields $$\begin{aligned} f\left( { \left\{ {\left\{ {a_{{\text {right}}}} \right\} , \left\{ {b_{{\text {right}}}} \right\} , \left\{ {c_{{\text {right}}}}\right\} , \left\{ {b_{{\text {right}}}, c_{{\text {right}}}} \right\} } \right\} } \right) = v \times f\left( { \left\{ {\left\{ {a_{{\text {left}}}} \right\} , \left\{ {b_{{\text {left}}}} \right\} , \left\{ {a_{{\text {left}}}, b_{{\text {left}}}} \right\} , \left\{ {c_{{\text {left}}}} \right\} } \right\} } \right) ; \end{aligned}$$ (10c) and hence, the decomposition of the original identity is the system of identities $$\begin{aligned} f\left( {\left\{ {a_{{\text {right}}}} \right\} } \right)&= v \times f\left( {\left\{ {a_{{\text {left}}}} \right\} } \right) \nonumber \\ f\left( {\left\{ {b_{{\text {right}}}} \right\} } \right)&= v \times f\left( {\left\{ {b_{{\text {left}}}} \right\} } \right) \nonumber \\ f\left( {\left\{ {c_{{\text {right}}}} \right\} } \right)&= v \times f\left( {\left\{ {c_{{\text {left}}}} \right\} } \right) \nonumber \\ f\left( {\left\{ {b_{{\text {right}}}, c_{{\text {right}}}} \right\} } \right)&= v \times \left( {f\left( {\left\{ {b_{{\text {left}}}} \right\} } \right) \star f\left( {\left\{ {c_{{\text {left}}}}\right\} } \right) } \right) \end{aligned}$$ (10d) where \(\star\) denotes the monoidal combinator (for the codomain of the function f).
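The decomposition of the example reaction can be mechanized directly from the power-set identity. A sketch (assumed atom names and print format; ours, not the authors' implementation) that reproduces the system (10d)/(10e):

```python
from itertools import combinations

def powerset(atoms):
    """All non-empty subsets, the empty set being deliberately excluded."""
    atoms = sorted(atoms)
    return [set(c) for r in range(1, len(atoms) + 1)
            for c in combinations(atoms, r)]

reagents = [("A", {"a", "b"}), ("B", {"c"})]   # A{a,b} + B{c} ->
products = [("C", {"a"}), ("D", {"b", "c"})]   # -> C{a} + D{b,c}

for pname, patoms in products:
    for subset in powerset(patoms):
        # Each product-side subset is produced from the reagent atoms
        # it inherits; disjoint contributions combine via the monoid.
        parts = [f"{rname}{sorted(subset & ratoms)}"
                 for rname, ratoms in reagents if subset & ratoms]
        print(f"{pname}{sorted(subset)} = v * " + " . ".join(parts))
```

Only the last emitted identity (for D on the atom subset {b, c}) needs the monoidal combinator \(\star\), exactly as in (10e).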
Replacing each moiety with its isotopic labeling state, i.e., applying the function f, we obtain the system of identities $$\begin{aligned} {\text {C}}_{\left\{ {1} \right\} }&= v \times {\text {A}}_{\left\{ {1} \right\} }\nonumber \\ {\text {D}}_{\left\{ {1} \right\} }&= v \times {\text {A}}_{\left\{ {2} \right\} }\nonumber \\ {\text {D}}_{\left\{ {2} \right\} }&= v \times {\text {B}}_{\left\{ {1} \right\} }\nonumber \\ {\text {D}}_{\left\{ {1,2} \right\} }&= v \times \left( {{\text {A}}_{\left\{ {2} \right\} } \star {\text {B}}_{\left\{ {1}\right\} }} \right) \end{aligned}$$ (10e)

Thus, the problem of decomposition of representations of "biochemical reactions" is reduced to the decomposition of identities of power-sets of "isotopic atoms," making no reference to the underlying chemistry, where "isotopic atoms" are, in practice, members of any countably infinite set for which equality can be established, e.g., the set of natural numbers. Note that, unlike the "EMU decomposition" algorithm of Antoniewicz et al., our decomposition algorithm is exhaustive, requires only constant auxiliary memory, has a worst-case computational complexity of \({\mathcal {O}}\left( {n^{2}} \right)\) with respect to the number of "isotopic atoms" n, and does not require backtracking.

Formulation of flux balance equations

A biochemical reaction network is a set of biochemical reactions; and therefore, the decomposition of a biochemical reaction network is a set of systems of identities of isotopic labeling states. From this point, formulation of the flux balance equations is purely mechanical:

1. Take the union of the set of systems of identities of isotopic labeling states;
2. Partition the resulting set by the number of isotopic atoms per identity;
3. Represent each partition as an adjacency matrix;
4. Optimize the adjacency matrices using any valid graph-theoretic method (optional, but highly recommended);
5. Formulate the flux balance equation for each adjacency matrix.

Adjacency matrices are representations of labeled, directed graphs. After permutation of the rows and columns, vectors of vertices (of the directed graphs) are of the form $$\begin{aligned} a = \begin{bmatrix} a_{1}\\ a_{2}\\ a_{3}\\ \end{bmatrix}, \end{aligned}$$ where \(a_{1}, a_{2}\) and \(a_{3}\) denote, respectively, sub-vectors of source, intermediate and sink vertices, corresponding to substrates, intermediates and products; and, adjacency matrices are of the form $$\begin{aligned} A = \begin{bmatrix} 0 & 0 & 0\\ A_{2,1} & A_{2,2} & 0\\ A_{3,1} & A_{3,2} & 0\\ \end{bmatrix}, \end{aligned}$$ where the element \(A_{i,j} = \lambda\) denotes the identity \(a_{i} = \lambda \times a_{j}\) for the coefficient \(\lambda\).
Assuming a variational formulation of fluid dynamics for a bounded domain with a piecewise-smooth boundary and a general "mass" conservation law [15], flux balance equations are of the form $$\begin{aligned} f\left( {\ldots } \right) = {\text {diag}}\left( {{\text {rowsum}}\left( { \begin{bmatrix} A_{2,1} &\quad A_{2,2}\\ A_{3,1} &\quad A_{3,2}\\ \end{bmatrix} } \right) } \right) \cdot \begin{bmatrix} a_{2}\\ a_{3}\\ \end{bmatrix} - \begin{bmatrix} A_{2,1} &\quad A_{2,2}\\ A_{3,1} &\quad A_{3,2}\\ \end{bmatrix} \cdot \begin{bmatrix} a_{1}\\ a_{2}\\ \end{bmatrix} \end{aligned}$$ (13) where f is an arbitrary function, diag denotes a function that constructs a square matrix with the specified leading diagonal, and rowsum denotes a function that yields a column vector of the sums of the elements of each row of the specified matrix. (The steady-state hypothesis being \(f\left( {\ldots } \right) \approx 0\)).

On the right-hand side of (13), the first matrix is the mass lumping matrix for intermediate and sink vertices using the row sum technique; and, the second matrix is the discrete transport operator for source and intermediate vertices. However, (13) is nonlinear, referring to the set of intermediates in the expressions of both influx and efflux; and therefore, is not in the required form. Rearranging the right-hand side of (13) using elementary algebra, we obtain $$\begin{aligned} f\left( {\ldots } \right) = \left( { {\text {diag}}\left( {{\text {rowsum}}\left( { \begin{bmatrix} A_{2,1} &\quad A_{2,2}\\ A_{3,1} &\quad A_{3,2}\\ \end{bmatrix} } \right) } \right) - \begin{bmatrix} A_{2,2} &\quad 0\\ A_{3,2} &\quad 0\\ \end{bmatrix} } \right) \cdot \begin{bmatrix} a_{2}\\ a_{3}\\ \end{bmatrix} - \begin{bmatrix} A_{2,1}\\ A_{3,1}\\ \end{bmatrix} \cdot a_{1} \end{aligned}$$ (14) which is of the required form.

On the right-hand side of (14), the first matrix is the consistent mass matrix for intermediate and sink vertices; and, the second matrix is the discrete transport operator for source vertices only, i.e., the required form. Notice that the elements of the leading diagonal of the consistent mass matrix are positive. This is to ensure that the eigenvalues of the consistent mass matrix are non-negative [16]. Assuming the steady-state hypothesis, for example, we obtain $$\begin{aligned} \left( { {\text {diag}}\left( {{\text {rowsum}}\left( { \begin{bmatrix} A_{2,1} &\quad A_{2,2}\\ A_{3,1} &\quad A_{3,2}\\ \end{bmatrix} } \right) } \right) - \begin{bmatrix} A_{2,2} &\quad 0\\ A_{3,2} &\quad 0\\ \end{bmatrix} } \right) \cdot \begin{bmatrix} a_{2}\\ a_{3}\\ \end{bmatrix} \approx \begin{bmatrix} A_{2,1}\\ A_{3,1}\\ \end{bmatrix} \cdot a_{1} \end{aligned}$$ (15) which is readily solved by inversion of the consistent mass matrix.

Thus, the problem of formulating cascade systems of flux balance equations for "biochemical reaction networks" is reduced to the construction and optimization of dependency networks of adjacency matrices, followed by the extraction of block matrices of predetermined dimensions, making no reference to the underlying chemistry.
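A numerical sketch of the steady-state solve (15) for a toy chain \(a_{1} \rightarrow a_{2} \rightarrow a_{3}\) with one source, one intermediate and one sink (all flux and labeling values are assumed, and NumPy is assumed to be available):

```python
import numpy as np

v1, v2 = 3.0, 3.0                        # fluxes a1 -> a2 and a2 -> a3
A21, A22 = np.array([[v1]]), np.array([[0.0]])
A31, A32 = np.array([[0.0]]), np.array([[v2]])

lower = np.block([[A21, A22], [A31, A32]])
consistent_mass = (np.diag(lower.sum(axis=1))        # row-sum lumping
                   - np.block([[A22, np.zeros((1, 1))],
                               [A32, np.zeros((1, 1))]]))

a1 = np.array([0.25])                    # assumed labeling state of the substrate
solution = np.linalg.solve(consistent_mass, np.vstack([A21, A31]) @ a1)
print(solution)                          # [0.25 0.25]: a2 and a3 inherit a1
```

Inverting the consistent mass matrix propagates the substrate's labeling state through the chain, as expected at steady state.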
Practical usage of representations of detectable isotopic labeling states

In this manuscript, we have given constructive definitions of representations of isotopic labeling states of moieties. Our mathematical framework, independent of Chemistry, is based on the premise that every conceivable isotopic labeling state is (mathematically) representable. As such, we make no attempt to distinguish between (mathematically) conceivable and (experimentally) detectable isotopic labeling states. Instead, we assert that representations of undetectable isotopic labeling states be identified by logical false, i.e., that the probability of detection is =0%. It remains to give examples of the practical usage of representations of detectable isotopic labeling states in the context of different experimental techniques: mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectroscopy.

Mass spectrometry

Following an MS experiment, a post-processing function is applied to the experimentally obtained data in order to yield a representation of the isotopic labeling state of the detected fragments; typically, a mass distribution. For example, in the case of electrospray ionization (ESI) and high-resolution MS experiments, where fragmentation is optional, the post-processing function may be the identity function. By contrast, in the case of gas chromatography mass spectrometry (GC-MS), experimentally obtained data is corrected with respect to the representation of the isotopic labeling state of the chemical derivatization agent, yielding an "underivatized" representation of the isotopic labeling state for each detected fragment. Irrespective of the underlying MS experiment, each fragment is identified with an EMU.

For example, consider the cyano radical; molecular formula: \({\cdot}CN\). In a heteronuclear model, where the two isotopic atoms are, respectively, carbon and nitrogen, if an isotopically labeled cyano radical is detected using high-resolution MS, then the result is, by definition, a matrix of the probability of occurrence of each of the four possible combinations of detectable isotopes (viz., a mass fraction matrix) $$\begin{aligned} \begin{array}{c|cc} & {}^{\textsf {14}}{\textsf {N}} & {}^{\textsf {15}}{\textsf {N}}\\ \hline {}^{\textsf {12}}{\textsf {C}} & \mathrm {P}\left( {\left\{ {{\mathrm {C}} = 0} \right\} \cap \left\{ {\mathrm {N} = 0} \right\} } \right) & \mathrm {P}\left( {\left\{ {{\mathrm {C}} = 0} \right\} \cap \left\{ {\mathrm {N} = 1} \right\} } \right) \\ {}^{\textsf {13}}{\textsf {C}} & \mathrm {P}\left( {\left\{ {{\mathrm {C}} = 1} \right\} \cap \left\{ {\mathrm {N} = 0} \right\} } \right) & \mathrm {P}\left( {\left\{ {{\mathrm {C}} = 1} \right\} \cap \left\{ {\mathrm {N} = 1} \right\} } \right) \\ \end{array} \end{aligned}$$

Accordingly, the representation of each event corresponds to a specific isotopomer of the cyano radical $$\begin{aligned} \begin{array}{c|cc} & {}^{\textsf {14}}{\textsf {N}} & {}^{\textsf {15}}{\textsf {N}}\\ \hline {}^{\textsf {12}}{\textsf {C}} & {\cdot}CN_{\left( {0,\,0} \right) } & {\cdot}CN_{\left( {0,\,1} \right) } \\ {}^{\textsf {13}}{\textsf {C}} & {\cdot}CN_{\left( {1,\,0} \right) } & {\cdot}CN_{\left( {1,\,1} \right) } \\ \end{array} \end{aligned}$$ which, in turn, are equivalent to isotopomers of EMUs of the cyano radical (precisely, the EMU that corresponds to the moiety of all atoms of the cyano radical) $$\begin{aligned} \begin{array}{c|cc} & {}^{\textsf {14}}{\textsf {N}} & {}^{\textsf {15}}{\textsf {N}}\\ \hline {}^{\textsf {12}}{\textsf {C}} & {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,\left( {0,\,0} \right) } & {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,\left( {0,\,1} \right) } \\ {}^{\textsf {13}}{\textsf {C}} & {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,\left( {1,\,0} \right) } & {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,\left( {1,\,1} \right) } \\ \end{array} \end{aligned}$$

Summing the entries along each anti-diagonal of the matrix, which corresponds to taking the logical disjunction of the subset of (heteronuclear) mass fractions that correspond to isotopically labeled moieties with the same number of additional neutrons, yields a vector; specifically, the mass fraction vector that is obtained by low-resolution MS of the same sample.
$$\begin{aligned} \begin{bmatrix} \begin{array}{c} {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 0}\\ {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 1}\\ {\cdot}CN_{\left\{ {1,\,2} \right\} ,\,{\text {M}} + 2}\\ \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{c} \mathrm {P}\left( {\left\{ {{\mathrm {C}} = 0} \right\} \cap \left\{ {\mathrm {N} = 0} \right\} } \right) \\ \mathrm {P}\left( { \left( { \left\{ {{\mathrm {C}} = 0} \right\} \cap \left\{ {\mathrm {N} = 1} \right\} } \right) \cup \left( { \left\{ {{\mathrm {C}} = 1} \right\} \cap \left\{ {\mathrm {N} = 0} \right\} }\right) } \right) \\ \mathrm {P}\left( {\left\{ {{\mathrm {C}} = 1} \right\} \cap \left\{ {\mathrm {N} = 1} \right\} } \right) \\ \end{array} \end{bmatrix} \end{aligned}$$

In this way, both low- and high-resolution MS data can be leveraged within the same mathematical framework.

Nuclear magnetic resonance

After Fourier transformation, the result of an NMR experiment (of arbitrary dimensionality) is a frequency-domain spectrum. If we assume, without loss of generality, that the integral for the resonance of a given "observed" nucleus of a given metabolite is equal to unity, then the proportional integral of a given NMR splitting pattern of a moiety of said metabolite is equal to the conditional probability of measuring a given isotopic labeling state of the "detected" nucleus whilst fixing the isotopic labeling state of the "observed" nucleus. Note that, in the context of a given moiety, the "detected" and "observed" nuclei need not be directly connected via a single chemical bond.

Consider Leucine (abbreviated: "Leu"), an amino acid of 6 Carbon atoms. The second Carbon atom is denoted \({\mathrm {C}}_{\alpha }\); whereas, the fifth and sixth Carbon atoms, indistinguishable by NMR, are both denoted \({\mathrm {C}}_{\delta }\). The integral of the NMR splitting pattern $$\begin{aligned} {\mathrm {C}}_{\alpha } - {\mathrm {C}}_{\delta } \end{aligned}$$ is equal to the conditional probability of the detected atom \({\mathrm {C}}_{\delta }\) being \(^{13}{\textsf{C}}\)-labeled when the observed atom \({\mathrm {C}}_{\alpha }\) is also \(^{13}{\textsf{C}}\)-labeled $$\begin{aligned} \mathrm {P}\left( { \left\{ { {\mathrm {C}}_{\delta } = 1 } \right\} \vert \left\{ { {\mathrm {C}}_{\alpha } = 1 } \right\} } \right) , \end{aligned}$$ which, by definition, is equal to the quotient $$\begin{aligned} \frac{ \mathrm {P}\left( { \left\{ { {\mathrm {C}}_{\delta } = 1 } \right\} \cap \left\{ { {\mathrm {C}}_{\alpha } = 1 } \right\} } \right) }{ \mathrm {P}\left( { \left\{ { {\mathrm {C}}_{\alpha } = 1 } \right\} } \right) }. \end{aligned}$$

Since there are two indistinguishable Carbon atoms, the probability of either or both of \({\mathrm {C}}_{\delta }\) being \(^{13}{\textsf{C}}\)-labeled is equal to $$\begin{aligned} \mathrm {P}\left( { \left( { \left( { \left\{ {{\mathrm {C}}_{\delta _{1}} = 0 } \right\} \cap \left\{ { {\mathrm {C}}_{\delta _{2}} = 1 } \right\} } \right) \cup \left( {\left\{ { {\mathrm {C}}_{\delta _{1}} = 1 } \right\} \cap \left\{ {{\mathrm {C}}_{\delta _{2}} = 0 } \right\} } \right) \cup \left( {\left\{ { {\mathrm {C}}_{\delta _{1}} = 1 } \right\} \cap \left\{ {{\mathrm {C}}_{\delta _{2}} = 1 } \right\} } \right) } \right) }\right) . \end{aligned}$$

Therefore, the result is equal to $$\begin{aligned} \frac{ \left( { \mathrm {Leu}_{\left( {\top ,\,\top ,\,\top ,\,\top ,\,0,\,1} \right) } + \mathrm {Leu}_{\left( {\top ,\,\top ,\,\top ,\,\top ,\,1,\,0} \right) } + \mathrm {Leu}_{\left( {\top ,\,\top ,\,\top ,\,\top ,\,1,\,1} \right) } } \right) \times \mathrm {Leu}_{\left( {\top ,\,1,\,\top ,\,\top ,\,\top ,\,\top } \right) } }{ \mathrm {Leu}_{\left( {\top ,\,1,\,\top ,\,\top ,\,\top ,\,\top }\right) } }, \end{aligned}$$ which, given logical conjunction, is equal to $$\begin{aligned} \frac{ \mathrm {Leu}_{\left( {\top ,\,1,\,\top ,\,\top ,\,0,\,1}\right) } + \mathrm {Leu}_{\left( {\top ,\,1,\,\top ,\,\top ,\,1,\,0}\right) } + \mathrm {Leu}_{\left( {\top ,\,1,\,\top ,\,\top ,\,1,\,1}\right) } }{ \mathrm {Leu}_{\left( {\top ,\,1,\,\top ,\,\top ,\,\top ,\,\top } \right) } }. \end{aligned}$$ (17f)

Since cumomers of a metabolite are equivalent to isotopomers of EMUs of said metabolite, the final result is equal to $$\begin{aligned} \frac{ \mathrm {Leu}_{\left\{ {2,\,5,\,6} \right\} ,\,\left( {1,\,0,\,1} \right) } + \mathrm {Leu}_{\left\{ {2,\,5,\,6}\right\} ,\,\left( {1,\,1,\,0} \right) } + \mathrm {Leu}_{\left\{ {2,\,5,\,6} \right\} ,\,\left( {1,\,1,\,1} \right) } }{\mathrm {Leu}_{\left\{ {2} \right\} ,\,\left( {1} \right) } }. \end{aligned}$$ (17g)

Beginning with a single assumption (the quantization of nucleons in an atomic nucleus), we have formulated a mathematical framework for isotopic labeling that is independent of Chemistry. Reclothing the theory of isotopic labeling with familiar concepts from Chemistry only serves to obscure its generality and rigor. Furthermore, imposing artificial constraints on the theory of isotopic labeling, motivated by Chemistry, rather than by Mathematics, is, obviously, incorrect. (As a corollary, imposing artificial constraints on the practice of isotopic labeling, such as limiting oneself to chemically and financially feasible experiments, is, obviously, correct).

Identifying the true name of the mathematical objects of the problem has allowed us to leverage the comparatively vast corpora of a diverse range of disciplines, including: Computer Science, Mathematics, Information Theory and Coding Theory. For example, we have shown that the set of punctured cumomers of a given moiety is the Reed–Muller spectral domain of the set of isotopomers of said moiety; and hence, that we can use the Reed–Muller transform matrices to convert isotopomer and punctured cumomer fraction vectors.

The isotopomer method, the most naïve representation of isotopic labeling state, is, clearly, flawed. Malloy et al. assume, incorrectly, that the "labeled" and "unlabeled" variants of isotopic tracer elements can be identified with the Boolean truth values. Instead, as we have shown, determinate isotopic labeling states are distinguished elements of a multi-valued logic, whose undistinguished elements are identified with contradiction \(\bot\) and the indeterminate isotopic labeling state \(\top\).

The cumomer method fares slightly better. Wiechert et al. do not recognize the complete set of cumomers of a given moiety. Instead, what they refer to as "cumomers" is the set of punctured cumomers with respect to \(s = 0\).
The isotopomer method, the most naïve representation of isotopic labeling state, is, clearly, flawed. Malloy et al. assume, incorrectly, that the "labeled" and "unlabeled" variants of isotopic tracer elements can be identified with the Boolean truth values. Instead, as we have shown, determinate isotopic labeling states are distinguished elements of a multi-valued logic, whose undistinguished elements are identified with contradiction \(\bot\) and the indeterminate isotopic labeling state \(\top\). The cumomer method fares slightly better. Wiechert et al. do not recognize the complete set of cumomers of a given moiety. Instead, what they refer to as "cumomers" is the set of punctured cumomers with respect to \(s = 0\). In contrast to the advice of the Mathematics community, they assume that the indeterminate isotopic labeling state is less than every determinate isotopic labeling state; and hence, the elements of Wiechert et al.'s cumomer fraction vectors are out of order, making conversion between isotopomer and cumomer fraction vectors cumbersome in the general case. Furthermore, contrary to the advice of the Fluid Dynamics community, Wiechert et al. assume that, in the flux balance equation, influx is positive and efflux is negative. Hence, in cumomer-based flux balance equations, the consistent mass matrix has non-positive eigenvalues. We have shown that the EMU method can be "done" because the function that interprets a set of isotopic atoms (with determinate isotopic labeling states) as a mass distribution is a homomorphism, and because the set of mass distributions is a commutative monoid (when sets of isotopic atoms are pairwise disjoint). From this new perspective, the cumomer method can be "done" for two reasons: because cumomer fraction vectors are a valid codomain of the homomorphism, and because EMU-based flux balance equations can be transformed into cumomer-based flux balance equations by elementary algebraic manipulations: replacement of flux variable coefficients by coefficient matrices; transposition of row vectors and subsequent construction of a block column vector; and permutation of rows and columns. Cumomer and mass fraction vectors are not the only "valid" interpretations. Zero-dimensional, i.e., scalar, interpretations are possible, the simplest being the number of isotopic atoms per moiety. One-dimensional, i.e., vector, interpretations include: cumomer fraction vectors, punctured cumomer fraction vectors and mass distributions. Two-dimensional, i.e., matrix, interpretations, considering both the number of protons and neutrons of each moiety, are also admissible. (In the flux balance equation, coefficient matrices are replaced by tensors.) We have shown that the "flux balance equation" is the "mass balance equation" of Fluid Dynamics, under the following assumptions:

- A bounded domain with a piecewise-smooth boundary;
- A unit volume;
- An incompressible fluid;
- Mass lumping approximated by the row-sum technique; and,
- A distance metric that is constant for all pairs, i.e., that all "stuff" in a compartment is equally, infinitely close.

This suggests that the flux balance equation could be derived from Fluid Dynamics under much more general assumptions. In the forwards direction, this technology transfer could enable the translation of Systems Biology models into Fluid Dynamics models, and subsequently, the simulation of Systems Biology models using Fluid Dynamics tools. In the backwards direction, establishing a dictionary could enable Fluid Dynamics concepts, theorems and results to be utilized by Systems Biology researchers. For example, what are the "translations" of laminar flow and friction? Moreover, can a biological reaction network be embedded in an arbitrary manifold, thereby allowing the model to account for the spatial characteristics of the compartment?

The contributions of this article are as follows:

- We have rigorously defined the concept of an isotopic labeling state of an arbitrary moiety, consisting of an arbitrary number of isotopic atoms, with an arbitrary number of isotopes per isotopic tracer element, distinguished by an arbitrary number of mass shifts.
- We have reformulated the isotopomer, cumomer and EMU methods in terms of our new formulation of isotopic labeling states, demonstrating both the decomposition of arbitrary biochemical reaction networks and the construction of the corresponding flux balance equations.
- We have given matrices for the conversion of arbitrary isotopomer, cumomer and mass fraction vectors, recognizing that the set of punctured cumomers of a given moiety (with respect to any determinate isotopic labeling state) is the Reed–Muller spectral domain of the corresponding set of isotopomers.

An immediate application of this work is the development of software systems that are informed of all mathematical relationships between the mathematical objects. We expect that this work will be incorporated into future implementations of Systems Biology software systems. The following research questions are unanswered by this study and are left for future work:

- How should the new capability, determination of isotopic labeling states of heteronuclear moieties, be leveraged? For example, is it more computationally efficient and/or numerically precise to fit metabolic flux variables to experimentally obtained measurements of hetero- rather than homonuclear moieties?
- This study assumes that isotopic labeling states of atoms in a given moiety are mutually independent. Is it possible to provide a formulation where isotopic labeling states are conditionally dependent?
- In the context of Radiochemistry, can the new formulation be used to determine isotopic labeling states of moieties in the presence of radioactive decay and/or neutron sources?

The manuscript was written through contributions of all authors. All authors read and approved the final manuscript. The authors would like to thank Nathan Baker, Juan Brandi-Lozano, Luke Gosink, Costas Maranas, Panos Stinis and Xiu Yang for their comments and feedback. The authors would also like to thank the anonymous reviewers for their suggestions and comments. The research was performed using EMSL, a DOE Office of Science User Facility sponsored by the Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. Funding for this work was provided in part by the William Wiley Postdoctoral Fellowship from EMSL to P.N.R. Additional funding was provided by the Development of an Integrated EMSL MS and NMR Metabolic Flux Analysis Capability In Support of Systems Biology: Test Application for Biofuels Production intramural research project from EMSL to N.G.I.

Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, 3335 Innovation Boulevard, Richland, WA 99354, USA
Biological Sciences Division, Pacific Northwest National Laboratory, 3335 Innovation Boulevard, Richland, WA 99354, USA

Malloy CR, Sherry AD, Jeffrey FM (1988) Evaluation of carbon flux and substrate selection through alternate pathways involving the citric acid cycle of the heart by \(^{13}{\textsf{C}}\) NMR spectroscopy. J Biol Chem 263(15):6964–6971
Wiechert W, Möllney M, Isermann N, Wurzel M, de Graaf AA (1999) Bidirectional reaction steps in metabolic networks: III. Explicit solution and analysis of isotopomer labeling systems. Biotechnol Bioeng 66(2):69–85
Antoniewicz MR, Kelleher JK, Stephanopoulos G (2007) Elementary metabolite units (EMU): a novel framework for modeling isotopic distributions. Metab Eng 9(1):68–86
Metallo CM, Walther JL, Stephanopoulos G (2009) Evaluation of \(^{13}{\textsf{C}}\) isotopic tracers for metabolic flux analysis in mammalian cells. J Biotechnol 144(3):167–174
Walther JL, Metallo CM, Zhang J, Stephanopoulos G (2012) Optimization of \(^{13}{\textsf{C}}\) isotopic tracers for metabolic flux analysis in mammalian cells. Metab Eng 14(2):162–171
Martín HG, Kumar VS, Weaver D, Ghosh A, Chubukov V, Mukhopadhyay A, Arkin A, Keasling JD (2015) A method to constrain genome-scale models with \(^{13}{\textsf{C}}\) labeling data. PLoS Comput Biol 11(9):1004363
Quek L-E, Wittmann C, Nielsen LK, Krömer JO (2009) OpenFLUX: efficient modelling software for \(^{13}{\textsf{C}}\)-based metabolic flux analysis. Microb Cell Fact 8(1):1–15
Shupletsov MS, Golubeva LI, Rubina SS, Podvyaznikov DA, Iwatani S, Mashko SV (2014) OpenFLUX2: \(^{13}{\textsf{C}}\)-MFA modeling software package adjusted for the comprehensive analysis of single and parallel labeling experiments. Microb Cell Fact 13(1):152
Young JD (2014) INCA: a computational platform for isotopically non-stationary metabolic flux analysis. Bioinformatics 30(9):1333–1335
Kajihata S, Furusawa C, Matsuda F, Shimizu H (2014) OpenMebius: an open source software for isotopically nonstationary \(^{13}{\textsf{C}}\)-based metabolic flux analysis. BioMed Res Int 2014. https://www.hindawi.com/journals/bmri/2014/627014/
Srour O, Young JD, Eldar YC (2011) Fluxomers: a new approach for \(^{13}{\textsf{C}}\) metabolic flux analysis. BMC Syst Biol 5(1):129
Nilsson R, Jain M (2016) Simultaneous tracing of carbon and nitrogen isotopes in human cells. Mol BioSyst 12(6):1929–1937
Reed I (1954) A class of multiple-error-correcting codes and the decoding scheme. Trans IRE Prof Group Inf Theory 4(4):38–49
Muller DE (1954) Application of Boolean algebra to switching circuit design and to error detection. Trans IRE Prof Group Electron Comput 3(3):6–12
Kuzmin D, Hämäläinen J (2015) Finite element methods for computational fluid dynamics: a practical guide. SIAM Rev 57(4):642
Felippa C (2013) Matrix finite element methods in dynamics (course in preparation). http://www.colorado.edu/engineering/CAS/courses.d/MFEMD.d/. Accessed 1 Apr 2016
Nano Express | Open | Published: 16 April 2019
The Potential Application of BAs for a Gas Sensor for Detecting SO2 Gas Molecule: a DFT Study
Jian Ren1, Weijia Kong2 & Jiaming Ni3 (ORCID: 0000-0002-2781-1984)
Nanoscale Research Letters, volume 14, Article number: 133 (2019)

The adsorption of different atmospheric gas molecules (e.g., N2, O2, CO2, H2O, CO, NO, NO2, NH3, and SO2) on pristine hexagonal boron arsenide (BAs) was studied through density functional theory calculations. For each gas molecule, various adsorption positions were considered. The most stable adsorption configuration was identified based on adsorption position, adsorption energy, charge transfer, and work function. Among the atmospheric gas molecules, SO2 had the best adsorption energy, the shortest distance to the BAs surface, and a certain amount of charge transfer. The calculation of the work function was important for exploring the possibilities of adjusting the electronic and optical properties. Our results suggest that BAs can serve as a potential SO2 gas sensor with high sensitivity and selectivity.

Background
BAs (hexagonal boron arsenide) is composed of group III and group V elements. Group III–V materials have excellent properties, such as good photoelectric and mechanical properties and a large band gap [1]. The promising potential applications of 2D materials [2–5] were well documented in recent studies [6–20]; these materials have been used to recognize various biomolecules [21, 22], pollutants [23, 24], and gas molecules [25, 26] to develop suitable sensing devices. More and more group III–V materials have been found, for example, BN, AlN, GaN, GaAs, and BP, and their interactions with gas molecules have been increasingly studied by theoretical calculations. Strak et al. [27] discovered that AlN(0001) is a powerful catalyst for high-pressure-high-temperature synthesis of ammonia, and their work also confirmed the possibility of the efficient synthesis of ammonia at the AlN(0001) surface. Diao et al. [28] studied the adsorption of H2O, CO2, CO, H2, and N2 on (10–10) surfaces of pristine and Zn-doped GaAs nanowires; the adsorption of CO2 and N2 had the largest effect on the absorption coefficients. Cheng et al. [29] studied the adsorption of most gas molecules on pure and doped BP by first-principles calculations and concluded that N-BP is more suitable as a gas sensor for SO2, NO, and NO2 due to the existence of the desorption process. Kamaraj and Venkatesan [30] studied the structure and electronic properties of BAs by DFT with the LDA; although considerable progress had been made in the experimental synthesis and theoretical study of BAs, their results showed that BAs nanosheets have promising applications in nanoelectronics and photovoltaics. In this work, we investigate for the first time the gas-sensing properties of BAs by density functional theory (DFT) calculations, to fully exploit its possibilities as a gas sensor. We predicted the adsorption properties of atmospheric gases (e.g., CO2, O2, N2, H2O, NO, NO2, NH3, CO, and SO2) on BAs based on first-principles calculations. Our work demonstrated the apparent adsorption behavior, moderate charge transfer, and unique transmission characteristics of SO2 adsorption on BAs. The results suggested that monolayer BAs possesses great potential for SO2 sensing applications.
Theory and Method of Simulations
The system was modeled as a 4 × 4 supercell of BAs with atmospheric gas molecules adsorbed onto it. In the DMol3 [31] calculations, the exchange-correlation functional was treated within the generalized gradient approximation (GGA) with the Perdew–Burke–Ernzerhof (PBE) parametrization [32]. The Brillouin zone was sampled using a 5 × 5 × 1 Monkhorst–Pack k-point grid and Methfessel–Paxton smearing of 0.01 Ry. All the atomic structures were relaxed until the total energy and the Hellmann–Feynman force converged to \(1.0 \times 10^{-5}\) eV and 0.06 eV/Å, respectively [33]. To evaluate the interaction between the gas molecules and the adsorbing sheet surface, we calculated the adsorption energy (Ead) of the adsorbed systems, defined as:
$$ E_{\mathrm{ad}} = E_{\mathrm{BAs+gas\ molecule}} - \left( E_{\mathrm{BAs}} + E_{\mathrm{gas\ molecule}} \right) $$
where EBAs + gas molecule is the total energy of the BAs-adsorbed system, EBAs is the energy of BAs, and Egas molecule is the energy of a gas molecule. All energies were calculated for optimized atomic structures. The charge transfer was investigated by Mulliken population analysis.

Results and Discussion
Three adsorption sites were considered for BAs in this work, namely the top of a boron atom (B), the top of an arsenic atom (As), and the center of a hexagonal B–As ring (center), as indicated in Fig. 1a. We studied the adsorption of the atmospheric gases listed above to identify the most promising sensing target.

Fig. 1 a Schematic view of top sites and center site on BAs. b The DOS of the BAs

First, the geometric structure of the pristine BAs monolayer was optimized; as shown in Fig. 1b, the B–As bond length was 1.967 Å. The band structure of the BAs sheet exhibited an indirect band gap of 1.381 eV, smaller than that of the bulk structure. These values are in good agreement with previously reported values (Fig. 2) [34, 35].

Fig. 2 The most energetically favorable adsorption configurations of the gas molecules N2 (a), O2 (b), CO2 (c), H2O (d), CO (e), NO (f), NO2 (g), NH3 (h), and SO2 (i) on monolayer BAs

We then analyzed the adsorption energy, the charge transfer, and the distance between the molecules and the BAs surface. The final results are shown in Table 1.

Table 1 Adsorption energy (Ead), Mulliken charge (Q) from the molecule to monolayer BAs, and distance (gas molecule/BAs) of the equilibrium nearest atom of the gas molecule to an atom of the BAs monolayer

N2 adsorption: Adsorption of the N2 gas molecule on BAs was studied for three configurations of N2/BAs, viz. the top of the B atom, the top of the As atom, and the center of a hexagonal ring above the BAs surface; the nearest distances were 3.764 Å, 3.549 Å, and 3.65 Å, with corresponding adsorption energies of −0.24 eV, −0.27 eV, and −0.24 eV, respectively. The center had the best adsorption energy and the most stable structure. The adsorption energy of N2/BAs was −0.24 eV, the charge transfer from BAs to the N2 gas molecule was 0.014e, and the N2–BAs distance was 3.65 Å. Fig. 3a shows many lines below the Fermi level, and the corresponding density of states has several peaks below the Fermi level. As shown in the figure, the N2 gas molecule gave rise to four peaks, mainly from −5 to 0 eV, which contributed substantially to the DOS. Overall, the adsorption of the N2 gas molecule on BAs was weak.
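As an illustration of the site comparison above, the adsorption-energy bookkeeping can be sketched as a small post-processing routine. This is a minimal sketch only, not part of DMol3; the total energies below are hypothetical placeholders, chosen solely to reproduce the N2 site energies quoted in the text.

```java
/**
 * Minimal sketch of the adsorption-energy comparison described above:
 * E_ad = E(BAs + gas molecule) - (E(BAs) + E(gas molecule)).
 * All total energies below are hypothetical placeholders (in eV); in
 * practice they would be read from the relaxed DMol3 output per site.
 */
public class AdsorptionEnergySketch {

    static double adsorptionEnergy(double eComplex, double eSheet, double eGas) {
        return eComplex - (eSheet + eGas);
    }

    public static void main(String[] args) {
        double eBAs = -1000.00; // hypothetical total energy of the pristine 4x4 BAs supercell
        double eN2 = -50.00;    // hypothetical total energy of the isolated N2 molecule

        String[] sites = {"top-B", "top-As", "center"};
        double[] eComplex = {-1050.24, -1050.27, -1050.24}; // hypothetical relaxed N2/BAs complexes

        for (int i = 0; i < sites.length; i++) {
            // the most negative E_ad marks the most stable adsorption site
            System.out.printf("site=%-7s E_ad=%.2f eV%n",
                    sites[i], adsorptionEnergy(eComplex[i], eBAs, eN2));
        }
    }
}
```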
Fig. 3 Density of states (DOS) of N2/BAs (a), O2/BAs (b), CO2/BAs (c), H2O/BAs (d), CO/BAs (e), NO/BAs (f), NO2/BAs (g), NH3/BAs (h) and SO2/BAs (i)

O2 adsorption: The O2 gas molecule tended to adsorb at the central site. The adsorption energy of O2/BAs was −0.35 eV, and the O2–BAs distance was 2.90 Å. The total band structure and DOS for O2/BAs are plotted in Fig. 3. An extra line clearly crossed the zero point and reduced the band gap; the O2 gas molecule had a peak at −1 to 0 eV and affected the density of states above the Fermi level. The Mulliken population analysis showed that −0.172e was transferred from the BAs surface to the O2 gas molecule, suggesting that the O2 gas molecule acted as an acceptor. In general, the adsorption of the O2 gas molecule on BAs was stronger than that of N2.

CO2 adsorption: The CO2 gas molecule tended to adsorb on the top of the As atom. The adsorption energy of CO2/BAs was −0.28 eV, the charge transfer from BAs to the CO2 gas molecule was −0.018e, and the CO2–BAs distance was 3.55 Å. As shown in Fig. 3, the structure showed no apparent change compared to pristine BAs, but there were some obvious peaks at an energy of −9 eV that contributed strongly to the DOS; this also highlights the adsorption of the CO2 gas molecule by BAs. The results showed that the adsorption strength and sensitivity of BAs towards the CO2 gas molecule were moderate.

H2O adsorption: The H2O gas molecule tended to adsorb on the top of the As atom. The adsorption energy of H2O/BAs was −0.38 eV, the charge transfer from BAs to the H2O gas molecule was −0.03e, and the H2O–BAs distance was 3.63 Å. As shown in Fig. 3, there were no great changes in the structure compared to pristine BAs, although the Fermi level increased noticeably and moved towards the valence band. In general, the adsorption of the H2O gas molecule on BAs was negligible.

CO adsorption: The CO gas molecule tended to adsorb on the top of the As atom. The adsorption energy of CO/BAs was −0.27 eV, the charge transfer from BAs to the CO gas molecule was −0.024e, and the CO–BAs distance was 3.50 Å. The total density of states (DOS) and band structure for CO/BAs are plotted in Fig. 3. The CO gas molecule and the As atom contributed strongly to a peak at 3 to 4 eV in the DOS. However, there was little deviation in the DOS in the −7 to 4 eV range, which suggests that CO was weakly adsorbed on BAs. There were some obvious peaks at energies of −3 to 1 eV and 3 eV, which contributed strongly to the DOS. The Mulliken population analysis showed that −0.024e of charge was transferred from the BAs surface to the CO gas molecule, suggesting that the CO gas molecule acted as an acceptor. Overall, the adsorption of the CO gas molecule on BAs was unremarkable.

NO adsorption: The NO gas molecule tended to adsorb on the top of the B atom. The adsorption energy of NO/BAs was −0.18 eV, the charge transfer was −0.01e from the NO gas molecule to BAs, and the NO–BAs distance was 2.86 Å. There were many lines above the Fermi level, and the gap in the middle of the band structure reduced the band gap value. In the density of states, there was an extra peak above the Fermi level, but little change below the Fermi level, which remained relatively stable (Fig. 3). The mixing of orbitals caused a small charge transfer and redistribution over the interacting region.
The Mulliken population analysis showed that 0.01e of charge was transferred from the BAs surface to the NO molecule, suggesting that NO acted as a donor. There was no deviation in the DOS in the −7 to 4 eV range, which suggests that NO was weakly adsorbed on BAs.

NO2 adsorption: The NO2 gas molecule tended to adsorb on the top of the As atom. The adsorption energy of NO2/BAs was −0.43 eV, and the NO2–BAs distance was 2.47 Å. Interestingly, after the adsorption of the NO2 gas molecule a band crossed the zero point directly, which means that BAs, a semiconductor, acquired a metallic character; the band gap was 0 eV. There was no great change overall, and a peak was generated at about −3 eV due to the adsorption of the NO2 gas molecule. There were some obvious peaks at energies of −7 eV and 2 eV, which contributed strongly to the DOS. In general, the adsorption of NO2 by BAs was better than that of the molecules discussed above.

NH3 adsorption: The NH3 gas molecule tended to adsorb on the top of the As atom. The adsorption energy of NH3/BAs was −0.34 eV, the charge transfer from the NH3 gas molecule to BAs was 0.007e, and the NH3–BAs distance was 3.27 Å. There was no clear change in the energy band or the density of states, except for an obvious peak below the Fermi level upon adsorption of the NH3 gas molecule. The NH3 gas molecule had a small impact on BAs at −8 to −4 eV, forming a 15 eV peak. The adsorption strength and sensitivity of BAs towards the NH3 gas molecule were moderate.

SO2 adsorption: The SO2 gas molecule tended to adsorb at the central site; the adsorption energy of SO2/BAs was −0.92 eV, and the Mulliken population analysis showed that −0.179e of charge was transferred from the BAs surface to the SO2 gas molecule, suggesting that the SO2 gas molecule acted as an acceptor. The SO2–BAs distance was 2.46 Å. Compared to the other gas molecules, SO2/BAs had the largest adsorption energy, the second largest electron transfer, and the shortest gas-surface distance. As shown in Fig. 3, the valence band of BAs shifted up noticeably and the band gap decreased; due to the adsorbed SO2 gas molecule, the density of states showed one more peak at −7.5 eV and a certain transfer at the Fermi level. The adsorption of SO2 by BAs had an excellent effect. Fig. 4i shows the electron density diagram of SO2/BAs and the local electron overlap between BAs and the SO2 gas molecule. On this basis, we drew the conclusion that the adsorption of SO2 by BAs was physical adsorption.

The calculation of the WF, shown in Fig. 5, is of great significance in exploring the possibility of regulating the electronic and optical properties (such as absorption spectra and energy loss functions) by adsorbing small molecules. The work function is defined in solid-state physics as the minimum energy required to move an electron from the interior of a solid to its surface. The work function of pristine BAs was 4.84 eV. The NO and NH3 gas molecules were donors in the charge transfer, and their work functions decreased to 4.80 eV and 4.68 eV, respectively. The work functions of N2/BAs, CO2/BAs, and CO/BAs were the same as that of BAs. The work functions of O2/BAs, NO2/BAs, and SO2/BAs were higher than that of BAs. Combining the above adsorption energies, gas-surface distances, charge transfers, and work functions, we found that the SO2 gas molecule is the most suitable sensing target for BAs materials.
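To make the work-function comparison explicit, the adsorption-induced shift can be written as \(\Delta \Phi = \Phi_{\mathrm{gas/BAs}} - \Phi_{\mathrm{BAs}}\). A simple worked example, using only the values quoted above:
$$\begin{aligned} \Delta \Phi_{\mathrm{NO}} = 4.80 - 4.84 = -0.04\ \mathrm{eV}, \qquad \Delta \Phi_{\mathrm{NH_3}} = 4.68 - 4.84 = -0.16\ \mathrm{eV}, \end{aligned}$$
so the donors (NO, NH3) lower the work function, N2, CO2 and CO leave it unchanged (\(\Delta \Phi = 0\)), and the acceptors (O2, NO2, SO2) raise it (\(\Delta \Phi > 0\)).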
Fig. 4 Electron density for pristine N2/BAs (a), O2/BAs (b), CO2/BAs (c), H2O/BAs (d), CO/BAs (e), NO/BAs (f), NO2/BAs (g), NH3/BAs (h), and SO2/BAs (i)

Fig. 5 Work function of BAs, N2/BAs, O2/BAs, CO2/BAs, H2O/BAs, CO/BAs, NO/BAs, NO2/BAs, NH3/BAs, and SO2/BAs

Conclusions
We have presented the structural and electronic properties of BAs with the adsorbents N2, O2, CO2, H2O, CO, NO, NO2, NH3, and SO2, using the density functional theory method. In terms of adsorption energy, SO2 > NO2 > H2O > O2 > NH3 > CO2 > CO > N2 > NO; in terms of adsorption distance, SO2 < NO2 < NO < O2 < NH3 < CO < CO2 < H2O < N2. NO2 had the largest Q and work function; it might also be detectable by the proposed material owing to its good electrical response. The SO2 gas molecule had the best adsorption energy, the shortest distance between gas molecule and BAs surface, and a certain amount of charge transfer. Combining the above adsorption energies, gas-surface distances, charge transfers, and work functions, the current and the adsorption-induced current change of BAs exhibit strong anisotropic characteristics. Such sensitivity and selectivity to SO2 gas molecule adsorption make BAs a desirable candidate for a superior gas sensor.

BAs: Hexagonal boron arsenide; DOS: Density of states; WF: Work function

Lindsay L, Broido DA, Reinecke TL (2013) First-principles determination of ultrahigh thermal conductivity of boron arsenide: a competitor for diamond. Phys Rev Lett 111(2):025901
Xu M, Liang T, Shi M et al (2013) Graphene-like two-dimensional materials. Chem Rev 113(5):3766–3798
Miró P, Audiffred M, Heine T (2014) An atlas of two-dimensional materials. Chem Soc Rev 43(18):6537
Zhang S, Yan Z, Li Y et al (2015) Atomically thin arsenene and antimonene: semimetal–semiconductor and indirect–direct band-gap transitions. Angewandte Chemie 127(10):3155–3158
Zhang S, Zhou W, Ma Y et al (2017) Antimonene oxides: emerging tunable direct bandgap semiconductor and novel topological insulator. Nano Lett 17(6):3434–3440
Perreault F, Fonseca De Faria A, Elimelech M (2015) Environmental applications of graphene-based nanomaterials. Chem Soc Rev 46(39):5861–5896
Zhang S, Guo S, Chen Z et al (2017) Recent progress in 2D group-VA semiconductors: from theory to experiment. Chem Soc Rev 468:209–222
Zhou W, Guo S, Zhang S et al (2018) DFT coupled with NEGF study of a promising two-dimensional channel material: black phosphorene-type GaTeCl. Nanoscale 10(7):3350–3355
Bhimanapati GR, Lin Z, Meunier V et al (2015) Recent advances in two-dimensional materials beyond graphene. ACS Nano 9(12):11509–11539
Lin Y, Connell JW (2012) Advances in 2D boron nitride nanostructures: nanosheets, nanoribbons, nanomeshes, and hybrids with graphene. Nanoscale 4(22):6908–6939
Hinnemann B, Moses PG, Bonde J et al (2005) Biomimetic hydrogen evolution: MoS2 nanoparticles as catalyst for hydrogen evolution. J Am Chem Soc 36(25):5308–5309
Umadevi D, Panigrahi S, Sastry GN (2014) Noncovalent interaction of carbon nanostructures. Acc Chem Res 47(8):2574–2581
Weng Q, Wang X, Wang X et al (2016) Functionalized hexagonal boron nitride nanomaterials: emerging properties and applications. Chem Soc Rev 45(14):3989–4012
Novoselov KS, Fal'ko VI, Colombo L et al (2012) A roadmap for graphene. Nature 490(7419):192–200
Li J, Fan H, Jia X (2010) Multilayered ZnO nanosheets with 3D porous architectures: synthesis and gas sensing application. J Phys Chem C 114(35):14684–14691
Tian H, Fan H, Li M et al (2015) Zeolitic imidazolate framework coated ZnO nanorods as molecular sieving to improve selectivity of formaldehyde gas sensor. ACS Sensors 1(3):243–250
Ma L, Fan H, Tian H et al (2016) The n-ZnO/n-In2O3 heterojunction formed by a surface-modification and their potential barrier-control in methanal gas sensing. Sensors Actuators B Chem 222:508–516
Wang W, Fan H, Ye Y (2010) Effect of electric field on the structure and piezoelectric properties of poly (vinylidene fluoride) studied by density functional theory. Polymer 51(15):3575–3581
Liu K, Fan H, Ren P et al (2011) Structural, electronic and optical properties of BiFeO3 studied by first-principles. J Alloys Compounds 509(5):1901–1905
Liu X, Fan HQ (2018) Electronic structure, elasticity, Debye temperature and anisotropy of cubic WO3 from first-principles calculation. Royal Soc Open Sci 5(6):171921
He S, Song B, Li D et al (2010) A graphene nanoprobe for rapid, sensitive, and multicolor fluorescent DNA analysis. Adv Func Mat 20(3):453–459
Mudedla SK, Balamurugan K, Kamaraj M et al (2015) Interaction of nucleobases with silicon doped and defective silicon doped graphene and optical properties. Phys Chem Chem Phys 18(1):295–309
Setiadi A, Shafiul Alam M, Muttaqien F et al (2013) Hydrogen adsorption in capped armchair edge (5,5) carbon nanotubes. Jap J Appl Phys 52(12):599–602
Shautsova V, Gilbertson AM, Black NCG et al (2016) Hexagonal boron nitride assisted transfer and encapsulation of large area CVD graphene. Sci Rep 6:30210
Schedin F, Geim AK, Morozov SV et al (2006) Detection of individual gas molecules adsorbed on graphene. Nat Mat 6(9):652–655
Varghese SS, Lonkar S, Singh KK et al (2015) Recent advances in graphene based gas sensors. Sensors Actuators B Chem 218:160–183
Strak P, Sakowski K, Kempisty P et al (2018) Adsorption of N2 and H2 at AlN (0001) surface: ab initio assessment of the initial stage of ammonia catalytic synthesis. J Phys Chem C 122(35):20301–20311
Diao Y, Liu L, Xia S (2018) Adsorption of residual gas molecules on (10–10) surfaces of pristine and Zn-doped GaAs nanowires. J Mat Sci 53(20):14435–14446
Cheng Y, Meng R, Tan C et al (2018) Selective gas adsorption and I–V response of monolayer boron phosphide introduced by dopants: a first-principle study. Appl Surf Sci 427:176–188
Manoharan K, Subramanian V (2018) Exploring multifunctional applications of hexagonal boron arsenide sheet: a DFT study. ACS Omega 3
O.-T W J (1984) Molecular mechanics: by U. Burkert and N. L. Allinger, American Chemical Society, Washington. J Mol Struc Theochem 109(3–4):401
Delley B (1990) An all-electron numerical method for solving the local density functional for polyatomic molecules. J Chem Phys 92(1):508–517
Becke AD (1993) A new mixing of Hartree–Fock and local density-functional theories. J Chem Phys 98(2):1372–1377
Tong CJ, Zhang H, Zhang YN et al (2014) New manifold two-dimensional single-layer structures of zinc-blende compounds. J Mat Chem A 2(42):17971–17978
Zhuang HL, Hennig RG (2012) Electronic structures of single-layer boron pnictides. Appl Phys Lett 101(15):153109

We thank the College of Materials Science and Engineering, Anhui University of Science and Technology, for its assistance with the MD simulations. The financial support for this work was from the Natural Science Fund for Colleges and Universities in Jiangsu Province (17KJB510007). All data are fully available without restriction.
Jian Ren: School of Computer Science and Technology, Huaiyin Normal University, Chang Jiang West Road 111, Huaian, 223300, Jiangsu, China
Weijia Kong: Department of Chemistry, Beijing Normal University, No. 19, Waidajie, Xinjiekou, Haidian District, Beijing, 100875, China
Jiaming Ni: School of Mechanical and Electrical Engineering, Guilin University of Electronic Technology, Jinji Road No. 1, Guilin, 541004, China

JR and WK designed and carried out the experiments and drafted the manuscript. JMN participated in the work to analyze the data and provided the materials and supporting software. All authors read and approved the final manuscript. Correspondence to Jian Ren.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Keywords: Adsorption energy; Gas molecule
Diffany: an ontology-driven framework to infer, visualise and analyse differential molecular networks
Sofie Van Landeghem1,2, Thomas Van Parys1,2, Marieke Dubois1,2, Dirk Inzé1,2 & Yves Van de Peer1,2,3

Differential networks have recently been introduced as a powerful way to study the dynamic rewiring capabilities of an interactome in response to changing environmental conditions or stimuli. Currently, such differential networks are generated and visualised using ad hoc methods, and are often limited to the analysis of only one condition-specific response or one interaction type at a time. In this work, we present a generic, ontology-driven framework to infer, visualise and analyse an arbitrary set of condition-specific responses against one reference network. To this end, we have implemented novel ontology-based algorithms that can process highly heterogeneous networks, accounting for both physical interactions and regulatory associations, symmetric and directed edges, edge weights and negation. We propose this integrative framework as a standardised methodology that allows a unified view on differential networks and promotes comparability between differential network studies. As an illustrative application, we demonstrate its usefulness on a plant abiotic stress study and we experimentally confirmed a predicted regulator. Diffany is freely available as an open-source Java library and Cytoscape plugin from http://bioinformatics.psb.ugent.be/supplementary_data/solan/diffany/.

Background
In the early days of Systems Biology, when molecular interaction data was still relatively sparse, all interactions known for a model organism were typically added to a single large interaction network. Such an integrated view would combine data from the proteome, transcriptome and metabolome [1–4]. While such studies certainly proved valuable to gain insights into the general characteristics of molecular networks, they lack the level of detail required to analyse specific response mechanisms of the interactome to changing conditions or stimuli. Consequently, differential networks have been introduced to model the dynamic rewiring of the interactome under specific conditions [5, 6]. Differential networks only depict the set of interactions that changed after the introduction of a stimulus. Most current research in this field has focused on a single interaction type such as expression data [7, 8], genetic interactions [9] or protein complexes [10]. Further, the analysis is usually limited to the comparison of only two networks [11–13]. At the same time, several promising studies have constructed multiple condition-specific networks such as time-course data [14, 15], tissue-specific networks [16, 17] or stress-induced co-expression networks [18]. These studies analyse general network statistics such as connectivity scores or employ machine-learning techniques to identify significantly rewired genes. However, due to the black-box behaviour of the methods and because these studies do not actually generate and visualise differential networks, the resulting prioritised gene lists cannot be easily interpreted by domain experts. By contrast, we believe it to be crucial that researchers can visualise and further explore the rewiring events in their network context. Unfortunately, there is currently no standardised methodology that would allow heterogeneous condition-specific networks to be integrated on the one hand, and intercomparable differential networks to be produced on the other.
Here, we introduce a novel ontology-based framework to standardise condition-specific input networks and to allow an arbitrary number of such networks to be used in the inference of a differential network. The network algorithms are designed to cope with a high variety of heterogeneous input data, including physical interactions and regulatory associations, symmetric and directed edges, explicitly negated interactions and edge weights. Depending on the application, these weights may be used to model the strength of an interaction, determined for instance by the expression levels of the interacting genes, or they may represent the probability that an interaction occurs when dealing with computationally inferred networks such as regulatory associations derived from co-expression analysis. To the best of our knowledge, our integrative framework named 'Diffany' (Differential network analysis tool) is unique in the emerging field of differential network biology, and we hope its open-source release will facilitate and enhance differential network studies. As one such example, we will present how the reanalysis, with Diffany, of a previously published experimental dataset has unveiled a novel candidate regulator for plant responses to mannitol. Experimental validation confirmed that this regulator, HY5, might indeed be involved in the mannitol-responsive network in growing Arabidopsis leaves.

Methods
In this section, we detail the various parts of the Diffany framework (Additional file 1).

Network terminology
To perform a differential network analysis, two types of input data sources are required. First, a reference network R models an untreated/unperturbed interactome, serving as the point of reference to compare other networks to. Second, one or more condition-specific networks each represent the interactome after a certain treatment, perturbation or stimulus. We denote them as \(N_i\) with i between 1 and c, and c the number of distinct conditions that are being compared to the reference state. Both types of input networks may have edges with a certain weight associated to them. Such weights in the networks may be interpreted differently according to the application for which the framework is used. For instance, they may model the strength of physical interactions as determined by expression levels of the interacting genes. In other cases, when dealing with network data inferred through computational methods, such as regulatory associations derived from co-expression data, these weights may instead model the probability/confidence that an interaction really does occur. Whichever the case, the Diffany framework assumes the weights assigned to the edges are sensible and comparable to each other. The two input sources are used to generate a differential network D (Fig. 1) that depicts the rewiring events from the reference state to the perturbed interactome. Further, an inferred consensus network C models the interactions that are common to the reference and condition-specific networks, sometimes also called 'housekeeping' interactions. We do not adopt the latter terminology, because while some unchanged interactions may indeed provide information about the cell's standard machinery (i.e. housekeeping functions), others may simply refer to interactions that change under some other condition than the one tested in the experimental setup.

Fig. 1 Differential edges. Artificial example of the inference of differential edges (c) from a reference network (a) and a condition-specific network (b).
Edge thickness refers to the weight of an edge. In Subfigure (c), the top connection (A-B) shows a negative differential edge ('decreases_regulation') occurring because of a switched polarity from positive (green) to negative (red) regulation, while the second and third links (M-N and X-Y) show a negative differential edge because the original positive edge was decreased or even entirely removed in the condition-specific network. The thickness of the differential edge represents the difference in weight between the reference and condition edge. Column (d) depicts the corresponding 'consensus' edges: both input networks are found to have a regulatory edge between nodes A and B and a positive regulation edge between M and N, but there is no consensus edge between X and Y

Interaction ontology
The interaction ontology is a crucial component that assigns meaning to heterogeneous input data types. Analogous to the Systems Biology Graphical Notation (SBGN) [19], this structured vocabulary provides a distinction between 'Activity Flow' interactions and 'Process' interactions, modelling regulatory associations and physical interactions separately. However, in contrast to SBGN, these complementary interaction classes can be freely mixed within one network, allowing for a varying level of modelling detail combined into one visualisation. In the Diffany framework, a default interaction ontology is available, covering genetic interactions, regulatory associations, co-expression, protein-protein interactions, and post-translational modifications (Fig. 2). This ontology was composed specifically to support a wide range of use-cases, and is used throughout this paper. However, the ontology structure itself, as well as the mapping of spelling variants, can be extended or modified based on specific user demands. Additionally, when unknown interaction types are encountered in the input data, they are transparently added as unconnected root categories.

Fig. 2 Interaction ontology. Default edge ontology structure, with activity flow interaction types on the left, and process types on the right. Root categories are shown with black borders, and have a default symmetry state: directed (→) or symmetrical (-). Because of space constraints, not all PTM (post-translational modification) subclasses are shown
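As a concrete illustration of how such an ontology can assign meaning to raw input types, the sketch below implements a toy edge-type hierarchy with synonym mapping. The class and method names are ours for illustration only and do not correspond to the actual Diffany API.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of an edge-type ontology: each category points to its
 * super-category, spelling variants are mapped onto canonical categories,
 * and unknown input types become new root categories.
 */
public class EdgeOntologySketch {

    private final Map<String, String> parent = new HashMap<>();   // category -> super-category (null = root)
    private final Map<String, String> synonyms = new HashMap<>(); // variant -> canonical category

    public void addCategory(String category, String superCategory) {
        parent.put(category, superCategory);
    }

    public void addSynonym(String variant, String category) {
        synonyms.put(variant.toLowerCase(), category);
    }

    /** Resolve an input edge type; unknown types are added as root categories. */
    public String resolve(String inputType) {
        String canonical = synonyms.getOrDefault(inputType.toLowerCase(), inputType);
        parent.putIfAbsent(canonical, null);
        return canonical;
    }

    /** The semantic root category of a given canonical edge type. */
    public String rootOf(String category) {
        String current = category;
        while (parent.get(current) != null) {
            current = parent.get(current);
        }
        return current;
    }

    public static void main(String[] args) {
        EdgeOntologySketch ont = new EdgeOntologySketch();
        ont.addCategory("regulation", null);
        ont.addCategory("positive regulation", "regulation");
        ont.addCategory("negative regulation", "regulation");
        ont.addCategory("inhibition", "negative regulation");
        ont.addSynonym("represses", "inhibition");

        System.out.println(ont.rootOf(ont.resolve("represses"))); // prints: regulation
    }
}
```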
Network inference
The interaction ontology defines the root categories for which consensus and differential edges can be inferred. For the sake of simplification of the formulae in the following, we define \(R = N_0\), and we thus have a set \(\mathcal{N}\) of c+1 input networks. The union of all nodes in these c+1 input networks is represented by \(\mathcal{G}\), and an edge of semantic root category S between two nodes X and Y in an input network \(N_i\) is denoted \(I_{sxyi}\). Notice that \(I_{sxyi}\) may also refer to a non-existing or 'void' edge when the two nodes X and Y are not connected by any edge of that semantic category S in the network \(N_i\). A differential network is then inferred by considering each possible node pair (X,Y) in (\(\mathcal{G} \times \mathcal{G}\)) and, for each such pair, constructing the set of input edges \(\mathcal{I}_{sxy}\) for each semantic category S. The calculation of differential and consensus edges E from that set of input edges \(\mathcal{I}_{sxy}\) involves the determination of the following edge parameters:

- edge negation: \(neg(E)\) is a boolean value
- edge symmetry: \(symm(E)\) is a boolean value
- edge weight: \(weight(E)\) is a positive real number
- edge type: \(type(E)\) is a String value

Differential networks
The hierarchical structure of the interaction ontology forms the backbone for the inference of differential networks. First, all (affirmative) condition-specific edges in \(\mathcal{I}_{sxy}\) for a specific category S are processed to construct a support tree (Fig. 3). Such an edge provides support not only for the category it belongs to (e.g. 'inhibition'), but also for all super-categories in the tree (in casu, 'negative regulation' and 'regulation', cf. left tree in Fig. 3). From the support tree that is thus generated, it becomes possible to synthesize the number of condition-specific networks that support a certain category, and by which weights they do so (cf. right tree in Fig. 3).

Fig. 3 Evidence summarisation. Example of how the evidence from four different condition-specific networks ((a): C1-C4 from top to bottom) is summarised using the default edge ontology as backbone (shown only partially). Each condition-specific edge provides support not only for the category it belongs to (e.g. inhibition), but also for all super-categories in the tree (e.g. regulation (b)). After processing all condition-specific edges (c), the support tree summarises the number of condition-specific networks that support a certain category, and with which weights they do so

Negated edges in \(\mathcal{I}_{sxy}\) are interpreted as explicit recordings of links that are not present in the interactome, but otherwise do not influence the support tree. A differential edge \(D_{sxy}\) is always affirmative (Formula (1)), and is only symmetrical when all input edges in \(\mathcal{I}_{sxy}\) are symmetrical (Formula (2)). When only some of the edges in \(\mathcal{I}_{sxy}\) are symmetrical while others are directed, the symmetrical ones are unmerged into two opposite directed edges of equal type and weight. To further determine the type and weight of a differential edge \(D_{sxy}\), the reference edge \(R_{sxy}\) is compared to the produced support tree of the condition-specific networks. If the set of values in the support tree (e.g. {0.6, 0.7, 0.8} for 'regulation') contains values both below as well as above the weight of \(R_{sxy}\), no meaningful differential edge \(D_{sxy}\) can be deduced, as the response varies in directionality between the different conditions. This is also the case when the edges in \(\mathcal{C}_{sxy}\) all appear to be equal to \(R_{sxy}\). Otherwise, when all conditions support a higher weight than the weight of \(R_{sxy}\), the minimal difference to those supporting edges determines the increase value shared among all conditions and is thus used as the weight of \(D_{sxy}\) (Formula (3)). Similarly, when all conditions support a lower weight, the minimal difference determines the decrease value shared among all conditions. For example, if \(R_{sxy}\) were a regulation edge of weight 0.9, \(D_{sxy}\) would be of type decrease_regulation and weight 0.1 according to the support tree of Fig. 3. If \(R_{sxy}\) had weight 0.4 instead, \(D_{sxy}\) would be of type increase_regulation and weight 0.2.
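The evidence summarisation of Fig. 3 can be sketched as follows. This is an illustrative reimplementation, not the Diffany code itself, using example weights similar to those in the figure.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of the support-tree construction: each condition-specific edge
 * supports its own category and all super-categories; per category we
 * collect the supporting weights across conditions.
 */
public class SupportTreeSketch {

    static final Map<String, String> PARENT = new HashMap<>();
    static {
        PARENT.put("regulation", null);
        PARENT.put("positive regulation", "regulation");
        PARENT.put("negative regulation", "regulation");
        PARENT.put("inhibition", "negative regulation");
    }

    public static void main(String[] args) {
        // Hypothetical condition-specific edges between one node pair (type, weight)
        String[] types = {"inhibition", "negative regulation", "inhibition", "positive regulation"};
        double[] weights = {0.8, 0.6, 0.7, 0.9};

        Map<String, List<Double>> support = new HashMap<>();
        for (int i = 0; i < types.length; i++) {
            // propagate support from the edge's own category up to the root
            for (String cat = types[i]; cat != null; cat = PARENT.get(cat)) {
                support.computeIfAbsent(cat, k -> new ArrayList<>()).add(weights[i]);
            }
        }
        // 'regulation' is supported by all 4 conditions, 'negative regulation'
        // by 3, 'positive regulation' by 1, and 'inhibition' by 2
        support.forEach((cat, ws) -> System.out.println(cat + " <- " + ws));
    }
}
```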
While a Process edge expresses a physical interaction and has no polarity, an Activity flow edge can be determined to have a general 'positive' or 'negative' effect. This means that for an edge in the Activity flow category (e.g. 'positive regulation') also edges of the opposite category can be compared (in casu 'negative regulation'). While in principle edge weights are positive, in this case the weights of the opposite category will be converted to negative values only for calculation purposes. As such, the differential edge between 'negative regulation' of 0.2 (interpreted as −0.2 for calculation purposes) and 'positive regulation' of 0.3 would be of weight 0.5.

$$ neg(D_{sxy}) = \mathrm{false} \qquad (1) $$
$$ symm(D_{sxy}) = \bigwedge\limits_{i=0}^{c} symm(I_{sxyi}) \qquad (2) $$
$$ weight(D_{sxy}) = \min\limits_{i=1}^{c} \left( \left| weight(I_{sxy0}) - weight(I_{sxyi}) \right| \right) \qquad (3) $$

Consensus networks
The inference of consensus networks follows a similar procedure. To calculate a consensus edge \(C_{sxy}\) from a set of affirmative input edges \(\mathcal{I}_{sxy}\), the reference edge \(R_{sxy}\) is first added to the support tree in a similar fashion as done previously for the condition-specific edges. The most-specific edge type with the highest weight that is supported by all input networks is then chosen to define the consensus edge. When all edges in \(\mathcal{I}_{sxy}\) are negated, we construct a similar support tree, but one where the support travels downwards to sub-categories instead of upwards (e.g. 'no regulation' also implies 'no inhibition'). In this case, the least-specific edge type with the highest weight that is supported by all will represent the consensus edge, which will also be negated (Formula (4)). When \(\mathcal{I}_{sxy}\) contains both affirmative and negated edges, no consensus edge will be deduced between nodes X and Y. As described above, consensus edges are defined by retrieving a weight value that is supported by all input, thus effectively applying a 'minimum' operator to the input weights (Formula (6)). However, it is also possible to apply the maximum operator, which will identify the highest weight that is supported by at least one input network, thus simulating a 'union' operation rather than an 'intersection' between the given input edges. More sophisticated weighting mechanisms will be implemented in the future, depending on the applications in which the framework will be used.

$$ neg(C_{sxy}) = \bigwedge\limits_{i=0}^{c} neg(I_{sxyi}) \qquad (4) $$
$$ symm(C_{sxy}) = \bigwedge\limits_{i=0}^{c} symm(I_{sxyi}) \qquad (5) $$
$$ weight(C_{sxy}) = \min\limits_{i=0}^{c} weight(I_{sxyi}) \qquad (6) $$

An optional post-processing step is to automatically remove all inferred edges in the differential and/or consensus networks below a user-defined weight threshold. The exact value of this threshold should be chosen based on the input data and the edge weight normalisations of the original resources. For example, the differential weights could be indexed against the null distribution of values expected when the reference and condition-specific networks would represent equal replicates [6].
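A minimal sketch of the weight calculations of Formulae (3) and (6) is given below. This is illustrative only; in particular, the directionality check described above, which decides between an increase and a decrease, is omitted for brevity.

```java
/**
 * Sketch of the weight calculations in Formulae (3) and (6). Index 0
 * holds the reference weight; indices 1..c hold the condition-specific
 * weights for one node pair and one semantic category (0 = void edge).
 * Assumes all conditions deviate in the same direction (see text).
 */
public class EdgeWeightSketch {

    /** Formula (3): minimal difference between reference and conditions. */
    static double differentialWeight(double[] w) {
        double min = Double.MAX_VALUE;
        for (int i = 1; i < w.length; i++) {
            min = Math.min(min, Math.abs(w[0] - w[i]));
        }
        return min;
    }

    /** Formula (6), plus the optional 'union' (maximum) variant discussed in the text. */
    static double consensusWeight(double[] w, boolean intersection) {
        double result = w[0];
        for (int i = 1; i < w.length; i++) {
            result = intersection ? Math.min(result, w[i]) : Math.max(result, w[i]);
        }
        return result;
    }

    public static void main(String[] args) {
        double[] weights = {0.9, 0.8, 0.6, 0.7}; // reference followed by three conditions
        System.out.printf("%.2f%n", differentialWeight(weights));     // 0.10 (shared decrease)
        System.out.printf("%.2f%n", consensusWeight(weights, true));  // 0.60 (intersection)
        System.out.printf("%.2f%n", consensusWeight(weights, false)); // 0.90 (union)
    }
}
```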
Fuzzy inference
The differential inference methods as described above can identify a rewiring event that is common to all conditions, as compared to one reference network. However, in some cases it might be beneficial to allow for one or more mismatches. Such a relaxed constraint enables for instance the retrieval of rewiring events that occur in three out of four conditions, thus allowing a more 'fuzzy' or less stringent mode of comparison; a sketch of this relaxed matching is given below. For the calculation of consensus networks, similar relaxed criteria can be applied. In this case, it can be specified whether or not the reference network always needs to 'match'. If this is set to 'true', a consensus edge will always need support from the reference network specifically. Otherwise, all input networks are treated as equals.
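The relaxed matching described above can be sketched as a simple support count. The exact bookkeeping in Diffany may differ; this is our interpretation, for illustration only.

```java
import java.util.Arrays;

/**
 * Sketch of relaxed ('fuzzy') matching: a differential edge is kept when
 * at least (c - allowedMismatches) of the c condition-specific networks
 * agree on the direction of change with respect to the reference weight.
 */
public class FuzzyMatchingSketch {

    static boolean supported(double refWeight, double[] condWeights, int allowedMismatches) {
        long increases = Arrays.stream(condWeights).filter(w -> w > refWeight).count();
        long decreases = Arrays.stream(condWeights).filter(w -> w < refWeight).count();
        int required = condWeights.length - allowedMismatches;
        // a consistent increase or decrease in 'required' conditions suffices
        return increases >= required || decreases >= required;
    }

    public static void main(String[] args) {
        double[] cond = {0.0, 0.0, 0.0, 1.0}; // edge lost in 3 of 4 time points
        System.out.println(supported(1.0, cond, 0)); // false: strict mode
        System.out.println(supported(1.0, cond, 1)); // true: one mismatch allowed
    }
}
```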
Implementation
Diffany is implemented in Java 1.6 and the code, released under an open-source license, contains extensive in-line documentation as well as detailed javadoc annotations. JUnit tests ensure proper behaviour of the algorithms also after code refactoring. A GitHub repository provides version control, public issue tracking and a wiki with documentation. For instance, the framework could be extended by adapting more complex statistical scoring strategies [7, 12] into the ontology-based backbone. As this is a non-trivial task, we encourage others to contribute to this effort through the online GitHub repository. The code base is structured in a modular fashion, with various methods for network cleaning, building and refining the ontology structure, applying custom edge filters, and so on. It is straightforward to extend the available functionality with additional network algorithms or filtering steps. By keeping semantics separate from functionality throughout the code, it becomes straightforward to create a custom ontology for any given project. On top of this core library, we have also implemented a Cytoscape plugin ('app') for the new Cytoscape 3 framework [20], providing an intuitive user interface and allowing straightforward integration with other network inference/analysis tools such as ClueGO [21], BINGO [22] or GeneMANIA [23]. Finally, a commandline interface supports large-scale bioinformatics studies through the generation of differential networks in straightforward tab-delimited file formats.

Results
By design, the framework presented here can deal with any mixed input networks of negated edges, different edge weights, directed as well as symmetrical edges and a variety of edge types. Herein lies the main strength of our framework, which is thus applicable to a wide range of comparative network studies.

Genetic networks
To evaluate the implementation of our novel framework, we have applied it first to a small, artificial network available in previous literature (Fig. 4). Using the original inference as inspiration (Fig. 4a) to model the input networks (Fig. 4b-c), Diffany produced differential and consensus networks (Fig. 4d-e). Remarkably, compared to the inference of [6], the consensus network generated by Diffany contains one additional edge: the (weak) unspecified genetic interaction (gi) between A and B. Indeed, because our framework is ontology-driven, it can recognise the fact that 'positive gi' and 'negative gi' are both subclasses of the more general category 'genetic interaction'. As a result, there is an edge of type 'unspecified genetic interaction' between nodes A and B in the consensus network.

Fig. 4 Artificial differential network of genetic interactions. A comparison of Diffany results with a previously published (artificial) differential network involving positive (alleviating) and negative (aggravating) genetic interactions. a: The original picture by [6]. The reference network is denoted as 'Condition 1' and the condition-specific network as 'Condition 2'. The differential network is displayed at the right, and the consensus network at the bottom ('Housekeeping interactions'). b-e: The differential (d) and consensus (e) networks produced by Diffany from the same input data. Because they do not contribute to an enhanced understanding of the molecular rewiring, unconnected nodes are not included in the networks.

In cases where such general, unspecified edges without polarity are unwanted, it is trivial to remove them from the network in a post-processing filtering step. However, we believe this additional information can be valuable when combined with the information in the differential networks themselves, as the presence or absence of such a generic consensus edge helps distinguish between the three different cases as depicted in Fig. 1. Specifically, this generic regulatory edge provides evidence for the fact that both the reference and condition-specific network contain a regulatory edge between nodes A and B, but with opposite polarity, as is the case in the top example in Fig. 1. Given that the differential edge presents an increase in regulation, this means that the reference network contained a negative (down-) regulation, and the condition-specific network a positive (up-) regulation. If instead the consensus network did not contain this general, unspecified edge, as in the case of the bottom example in Fig. 1, this would mean that the condition-specific network simply did not have any link between the two nodes.

Heterogeneous data
The second example presents the application of the Diffany inference tool to heterogeneous input networks, further illustrating the power of the Interaction Ontology. Here, a differential and a consensus network are generated from reference and condition-specific networks obtained through integrating various interaction and regulation types (Fig. 5). Notice how directionality, different edge types and weights can all be mixed freely in the networks.

Fig. 5 Artificial differential network of heterogeneous data. More complex calculation of differential (c) and consensus (d) networks from the reference (a) and condition-specific (b) networks. Notice how directionality, different edge types and weights can all be mixed freely in the networks

Mannitol-stress in plants
To demonstrate the practical utility of our framework, we have used Diffany to reanalyse a previously published experimental dataset measuring mannitol-induced stress responses in the model plant Arabidopsis thaliana [24]. In this study, nine-day-old seedlings were transferred to either control medium, or medium supplemented with 25 mM mannitol. At this developmental stage, the third true leaf is very small and its cells are actively proliferating. RNA from these young leaves was extracted at 1.5, 3, 12 and 24 h after transfer. The expression data were processed with robust multichip average (RMA) as implemented in BioConductor [25, 26]. Further, the Limma package [27] was applied to identify differentially expressed (DE) genes at two FDR-corrected P-values: 0.05 and 0.1, giving rise to two sets of DE genes for each time-point (Table 1 and Additional file 2).

Table 1 Number of differentially expressed genes per dataset

Input networks
To determine the set of genes (nodes) relevant to this study, we first took all differentially expressed genes across all time-points, using the strict 0.05 FDR threshold.
Next, all the PPI neighbours of these genes were extracted from CORNET [28, 29] and added, with the exception of non-DE PPI hubs, as the inclusion of such hubs would extend our networks to irrelevant nodes. Analysis showed that for instance 10 % of all nodes account for 70 % of all PPI edges, and we have removed the bias towards such generic hubs by automatically excluding proteins with at least 10 PPI partners. Note that such hubs will still appear in the networks when they are differentially expressed themselves. Subsequently, all regulatory neighbours of the extended node set were added, using both the AGRIS TF-target data [30] and the kinase-target relations from PhosPhAt [31]. From the kinase-target relations, hubs with at least 30 partners were excluded, removing mainly MAP kinase phosphatases (MKPs), which are involved in a large number of physiological processes during development and growth [32]. Finally, we also added DE genes from the second, less stringent result set (FDR cut-off 0.1), if they could be directly connected to at least one of the genes found up until that point. This approach allows us to explore also those genes that are only slightly above the strict 0.05 FDR cut-off, while reducing noise by excluding those that are not connected to our pathways of interest. In general, this two-step methodology as well as the hub filtering was found to produce more meaningful results. However, both steps are optional and can be removed from the pipeline when using the Diffany library in other studies.

The reference network was then defined by generating all PPI and regulatory edges between the node set as determined in the previous steps. All edges in the reference network were given weight one, a default value used when no overexpression is measured (yet). This resulted in a reference network of 1393 nodes and 2354 non-redundant edges, of which 56 % were protein-protein interactions, 24 % TF regulatory interactions and 20 % kinase-target interactions. Subsequently, each time-specific network was constructed by altering the edge weights according to the expression levels of the corresponding nodes/genes measured at that time point. Every interaction with at least one significantly differentially expressed gene as interaction partner is thus down- or upweighted. To define differential expression, the less stringent criterion (0.1 FDR) is used here. For instance, the activation of a non-DE gene by a gene that is differentially expressed at that specific time point would get a weight proportional to the fold change of that differentially expressed activator. By contrast, an edge would be removed (weight zero) when the edge does not fit the expression values at this time point, for instance when an activator is overexpressed but the target is underexpressed. This allows us to remove the interactions that, even though reported in the public interaction data, are probably not occurring in this specific context. As a final result, the information on differentially expressed genes has now been encoded in the edge weights of the time-specific networks. By comparing them to the generic reference network, the Diffany algorithms will now be able to produce differential and consensus networks which depict the changes in expression values across the time measurements. In the following, we describe these results and provide interpretations that showcase how this type of analysis may lead to novel insights.
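The edge-weighting rule described in the previous paragraph can be sketched as follows. This is a schematic reconstruction of the prose above, not the published pipeline code; the contradiction rules and the fold-change scaling are illustrative assumptions.

```java
/**
 * Schematic sketch of the condition-specific edge weighting described
 * above: edges touching a differentially expressed (DE) regulator are
 * re-weighted by its fold change, and edges contradicting the expression
 * data are removed (weight 0). Rules and scaling are illustrative.
 */
public class ConditionWeightSketch {

    /**
     * @param refWeight    reference weight (1.0 in this study)
     * @param regulatorFC  fold change of the regulator at this time point (1.0 if not DE)
     * @param targetUp     true if the target is significantly overexpressed
     * @param targetDown   true if the target is significantly underexpressed
     * @param positiveEdge true for activation, false for inhibition
     */
    static double conditionWeight(double refWeight, double regulatorFC,
                                  boolean targetUp, boolean targetDown, boolean positiveEdge) {
        // remove edges that do not fit the expression values, e.g. an
        // overexpressed activator with an underexpressed target
        if (positiveEdge && regulatorFC > 1.0 && targetDown) return 0.0;
        // analogous illustrative rule for an underexpressed activator
        if (positiveEdge && regulatorFC < 1.0 && targetUp) return 0.0;
        // otherwise scale the edge weight by the regulator's fold change
        return refWeight * regulatorFC;
    }

    public static void main(String[] args) {
        System.out.println(conditionWeight(1.0, 2.5, true, false, true)); // upweighted: 2.5
        System.out.println(conditionWeight(1.0, 2.5, false, true, true)); // removed: 0.0
    }
}
```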
Differential network for one condition With the statistically significant DE values translated into input networks, the differential networks can then be generated either by comparing the reference network to each time-specific network individually, or by comparing all time-specific networks against the reference network simultaneously. As an example of the first mode of comparison, Fig. 6 depicts the differential network after 1.5 hours, illustrating the rewiring events occurring in this short time frame after the induction of mannitol stress. At this early time point, it is rather unlikely that the expression of the DE genes was affected by subsequent transcriptional cascades. By including transcription factors upstream of the DE genes in the network, even if they are not DE themselves, it is possible to identify new putative regulators, in contrast to previous analysis methods. For example, HY5 and PIL5 might be suitable candidates, as they contain a putative phosphorylation site and are thus likely to be posttranslationally regulated. Mannitol-induced stress response at 1.5 h. Analysis of the mannitol-induced stress response, depicting the generated differential network at the 1.5 h time point: increase/decrease of regulation in dark green and red respectively, increase/decrease of PPI in light green and orange, increase/decrease in phosphorylation in blue and purple. It is important to note that in these differential networks the arrows point to rewiring events: a decrease of regulation, for instance (red arrows), does not necessarily point to an inhibition, but may also indicate a discontinued activation. Diamond nodes represent proteins with a known phosphorylation site, and proteins with a kinase function are shown with a black border. Blue and yellow nodes identify underexpressed and overexpressed genes, respectively. To further investigate the possibility that HY5 acts as a transcriptional regulator under mannitol stress, we validated the Diffany results by measuring the expression levels of the proposed HY5-target genes in the growing leaves of WT and HY5 loss-of-function mutants. These genes, except ARL, were all underexpressed in hy5 mutants as compared to WT, confirming that HY5 is indeed involved in the regulation of MYB51, EXO, RAV2 and TCH3 expression in growing Arabidopsis leaves (Additional files 3 and 4). To further explore whether HY5 is involved in leaf growth regulation under mannitol stress, phenotypic analysis was performed on hy5 mutants under both long-term and short-term mannitol treatment. The hy5 seedlings were clearly hypersensitive to stress, with decreased leaf size under long-term and short-term stress, and showed complete bleaching upon long-term mannitol stress (Fig. 7, Additional file 4). These biological results demonstrate that HY5, which was identified with Diffany as a putative regulator of mannitol stress, might indeed be involved in the mannitol-responsive network in growing Arabidopsis leaves. Phenotype of the hy5 mutant under mannitol stress. Rosettes of WT (left) and hy5 mutants (right) on control medium (top panel) and mannitol-supplemented medium (bottom panel). Plants are 22 days old. Scale bar = 1 cm. Besides the identification of new putative regulatory links, the differential PPI edges make it possible to understand complex formation under specific conditions. For example, the EBF2 sub-complex presents a nice example of how the induction of one protein is sufficient to increase the activity of a whole complex.
EBF2 is a stress-responsive E3 ligase involved in the posttranslational regulation of the ethylene-responsive factors EIN3 and EIL1 [33, 34]. In this differential network, EBF2 forms a complex with these two targets, which are induced by mannitol as well. However, some of the other members of the SCF complex, such as CUL1, SKP1, ASK1 and ASK2, are missing from the differential network. As these SCF complexes are involved in many cellular processes, with their specificity being defined by the E3 ligase, we can speculate that the other members of the complex are highly abundant and not specific to mannitol stress. Their automatic removal from the differential network thus allows the user to focus on the truly interesting genes for this specific stress condition. Differential network for all conditions The second mode of comparison allows all condition-specific networks to be compared simultaneously to one reference network. In this specific case, such an analysis models the stress-specific, but time-independent, response. Fig. 8 shows these rewiring interactions. Strikingly, mainly the overexpressed genes (yellow nodes) remain differentially expressed throughout the time-course experiment, while this is only the case for a few of the underexpressed genes (blue nodes). This implies that, in this context, the upregulation of genes is a more stable and long-term process. Mannitol-induced stress response across all time points (strict). Analysis of the mannitol-induced stress response, showing the differential network generated by comparing the reference network to all four time points simultaneously, and calculating the overall differential rewiring. Color coding as in Fig. 6. For instance, the upregulation of TCH3 by HY5 is present because TCH3 is overexpressed at all time points, and its upregulation by HY5 may thus play a significant role in the overall stress response. To validate this biologically, the expression levels of TCH3 and the other previously mentioned HY5 target genes were measured in WT and hy5 mutants, 24 h after transfer to control or mannitol-supplemented medium (Additional file 4). While the induction of TCH3, MYB51 and ARL could be clearly observed in WT plants, a more variable but less pronounced upregulation was observed in hy5 mutants. Thus, HY5 might be involved in the regulation of TCH3, MYB51 and ARL under mannitol, although it is probably not the sole regulator of these targets, but instead acts in parallel with other regulators previously identified in the early mannitol response of growing Arabidopsis leaves [24, 35]. Finally, we can apply a less stringent criterion to the inference of differential networks by requiring that only three out of four time points match for a rewiring event to be included in the differential network (Fig. 9). This results in more robust network inference, as the differential network would remain the same if some noise were introduced at one of the time points. Additionally, this method provides a more complete view of the rewiring pathway occurring in response to osmotic stress in plants. All these settings and options are also available when generating differential networks through the Cytoscape plugin. Mannitol-induced stress response across all time points, allowing for one mismatch per edge.
Analysis of the mannitol-induced stress response, showing the differential network generated by comparing the reference network to all four time points, but allowing a match when only three out of four time points share the same response. Color coding as in Fig. 6; pink arrows depict an increase in dephosphorylation. In this figure, only regulatory interactions are shown, as the addition of PPI data would obscure the visualisation. Discussion and conclusion We have developed an open-source framework, called Diffany, for the inference of differential networks from an arbitrary set of input networks. This input set always contains one reference network, which represents the interactome of an untreated/unperturbed organism, while all other networks are condition-specific, each modelling the interactome of the same organism subjected to a specific environmental condition or stimulus. Differential networks allow one to focus specifically on the rewiring of the network as a response to such stimuli, by modelling only the changed interactions. At the same time, interactions that remain (largely) the same are summarised in a 'consensus' network that provides insight into the basic interactions that are not influenced by changes of internal or external conditions. The analysis of these differential and consensus networks provides a unique opportunity to enhance our understanding of rewiring events occurring, for instance, when plants undergo environmental stress, or when a disease manifests in the human body. Further, the fact that the framework can compare an arbitrary number of condition-specific networks to one reference network at the same time makes it a powerful tool to analyse distinct but related conditions, such as different human diseases that may share a defective pathway, or various abiotic stresses influencing a plant in a similar fashion. In comparison to previous work in the emerging field of differential network biology, Diffany is the first generic framework that provides data integration functionality in the context of differential networks. To this end, we have implemented an Interaction Ontology which enables seamless integration of different interaction types, provides semantic interpretation, and deals with heterogeneous input networks containing both directed and symmetrical relations. This ontology forms the backbone for the implementation of the network inference methods that produce differential networks. As in any systems biology study or application, a known challenge is the issue of non-existent edges: an interaction may be missing from the network because it was experimentally determined that no association occurred, or simply because there is a lack of evidence for the interaction, which does not actually exclude its existence. To deal with these cases, Diffany allows the definition of negated edges, which are explicit recordings of interactions that were determined not to happen under a specific condition. To provide easy access to the basic functionality of inference and visualisation of differential and consensus networks, we have developed a command-line interface and a Cytoscape plugin. The Cytoscape plugin allows users to generate custom differential networks as well as to reproduce the use-cases described in this paper. All relevant code is released under an open-source license. Finally, we have illustrated the practical utility of Diffany on a study involving osmotic stress responses in Arabidopsis thaliana.
The resulting differential networks were found to be concise and coherent, modelling the response to mannitol-induced stress adequately. The analysis of these differential networks and a preliminary experimental validation have led to the identification of new candidate regulators for the early mannitol response, such as PIL5 and HY5, which likely contribute to the fast transcriptional induction of mannitol-responsive genes. Further detailed biological validation, including for instance ChIP experiments and experimental systems biology approaches, is necessary to confirm the role of HY5 in this context and fully unravel the early stress-induced rewiring events of this complex regulatory network. (a) API at http://bioinformatics.psb.ugent.be/supplementary_data/solan/diffany/. Srinivasan BS, Shah NH, Flannick JA, Abeliuk E, Novak AF, Batzoglou S. Current progress in network research: toward reference networks for key model organisms. Brief Bioinform. 2007; 8(5):318–32. doi:10.1093/bib/bbm038. Balaji S, Babu MM, Aravind L. Interplay between network structures, regulatory modes and sensing mechanisms of transcription factors in the transcriptional regulatory network of E. coli. J Mol Biol. 2007; 372(4):1108–22. Fiedler D, Braberg H, Mehta M, Chechik G, Cagney G, Mukherjee P, et al. Functional organization of the S. cerevisiae phosphorylation network. Cell. 2009; 136(5):952–63. Friedel S, Usadel B, Von Wirén N, Sreenivasulu N. Reverse engineering: a key component of systems biology to unravel global abiotic stress cross-talk. Front Plant Sci. 2012; 3(294):1–16. doi:10.3389/fpls.2012.00294. Przytycka TM, Singh M, Slonim DK. Toward the dynamic interactome: it's about time. Brief Bioinform. 2010; 11(1):15–29. doi:10.1093/bib/bbp057. Ideker T, Krogan NJ. Differential network biology. Mol Syst Biol. 2012; 8(565):1–9. doi:10.1038/msb.2011.99. Gill R, Datta S, Datta S. A statistical framework for differential network analysis from microarray data. BMC Bioinformatics. 2010; 11(1):95. doi:10.1186/1471-2105-11-95. Tesson B, Breitling R, Jansen R. DiffCoEx: a simple and sensitive method to find differentially coexpressed gene modules. BMC Bioinformatics. 2010; 11(1):497. doi:10.1186/1471-2105-11-497. Bandyopadhyay S, Mehta M, Kuo D, Sung MK, Chuang R, Jaehnig EJ, et al. Rewiring of genetic networks in response to DNA damage. Science. 2010; 330(6009):1385–89. doi:10.1126/science.1195618. Bisson N, James DA, Ivosev G, Tate SA, Bonner R, Taylor L, et al. Selected reaction monitoring mass spectrometry reveals the dynamics of signaling through the GRB2 adaptor. Nat Biotechnol. 2011; 29:653–8. doi:10.1038/nbt.1905. Zhang B, Li H, Riggins RB, Zhan M, Xuan J, Zhang Z, et al. Differential dependency network analysis to identify condition-specific topological changes in biological networks. Bioinformatics. 2009; 25(4):526–32. doi:10.1093/bioinformatics/btn660. Bean G, Ideker T. Differential analysis of high-throughput quantitative genetic interaction data. Genome Biol. 2012; 13(12):R123. doi:10.1186/gb-2012-13-12-r123. Amar D, Shamir R. Constructing module maps for integrated analysis of heterogeneous biological networks. Nucleic Acids Res. 2014; 42(7):4208–19. doi:10.1093/nar/gku102. Hudson NJ, Reverter A, Dalrymple BP. A differential wiring analysis of expression data correctly identifies the gene containing the causal mutation. PLoS Comput Biol. 2009; 5(5):e1000382. doi:10.1371/journal.pcbi.1000382. Krouk G, Mirowski P, LeCun Y, Shasha D, Coruzzi G.
Predictive network modeling of the high-resolution dynamic plant transcriptome in response to nitrate. Genome Biol. 2010; 11(12):R123. doi:10.1186/gb-2010-11-12-r123. Guan Y, Gorenshteyn D, Burmeister M, Wong AK, Schimenti JC, Handel MA, et al. Tissue-specific functional networks for prioritizing phenotype and disease genes. PLoS Comput Biol. 2012; 8(9):e1002694. doi:10.1371/journal.pcbi.1002694. Magger O, Waldman YY, Ruppin E, Sharan R. Enhancing the prioritization of disease-causing genes through tissue specific protein interaction networks. PLoS Comput Biol. 2012; 8(9):e1002690. doi:10.1371/journal.pcbi.1002690. Ma C, Xin M, Feldmann KA, Wang X. Machine learning-based differential network analysis: a study of stress-responsive transcriptomes in Arabidopsis. Plant Cell. 2014; 26(2):520–37. doi:10.1105/tpc.113.121913. Le Novère N, Hucka M, Mi H, Moodie S, Schreiber F, Sorokin A, et al. The Systems Biology Graphical Notation. Nat Biotechnol. 2009; 27:735–41. Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003; 13(11):2498–504. doi:10.1101/gr.1239303. Bindea G, Mlecnik B, Hackl H, Charoentong P, Tosolini M, Kirilovsky A, et al. ClueGO: a Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009; 25(8):1091–93. doi:10.1093/bioinformatics/btp101. Maere S, Heymans K, Kuiper M. BiNGO: a Cytoscape plugin to assess overrepresentation of Gene Ontology categories in biological networks. Bioinformatics. 2005; 21(16):3448–49. doi:10.1093/bioinformatics/bti551. Montojo J, Zuberi K, Rodriguez H, Kazi F, Wright G, Donaldson SL, et al. GeneMANIA Cytoscape plugin: fast gene function predictions on the desktop. Bioinformatics. 2010; 26(22):2927–28. doi:10.1093/bioinformatics/btq562. Skirycz A, Claeys H, De Bodt S, Oikawa A, Shinoda S, Andriankaja M, et al. Pause-and-stop: the effects of osmotic stress on cell proliferation during early leaf development in Arabidopsis and a role for ethylene signaling in cell cycle arrest. Plant Cell. 2011; 23(5):1876–88. doi:10.1105/tpc.111.084160. Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, et al. Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003; 4(2):249–64. doi:10.1093/biostatistics/4.2.249. Gentleman R, Carey V, Bates D, Bolstad B, Dettling M, Dudoit S, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10):R80. doi:10.1186/gb-2004-5-10-r80. Smyth GK. Limma: linear models for microarray data. In: Gentleman R, Carey V, Dudoit S, Irizarry R, Huber W, editors. Bioinformatics and Computational Biology Solutions Using R and Bioconductor. New York: Springer; 2005. p. 397–420. De Bodt S, Carvajal D, Hollunder J, Van den Cruyce J, Movahedi S, Inzé D. CORNET: a user-friendly tool for data mining and integration. Plant Physiol. 2010; 152(3):1167–79. De Bodt S, Hollunder J, Nelissen H, Meulemeester N, Inzé D. CORNET 2.0: integrating plant coexpression, protein-protein interactions, regulatory interactions, gene associations and functional annotations. New Phytol. 2012; 195(3):707–20. Yilmaz A, Mejia-Guerra MK, Kurz K, Liang X, Welch L, Grotewold E. AGRIS: the Arabidopsis gene regulatory information server, an update. Nucleic Acids Res. 2011; 39(suppl 1):D1118–22. doi:10.1093/nar/gkq1120. Zulawski M, Braginets R, Schulze WX.
PhosPhAt goes kinases - searchable protein kinase target information in the plant phosphorylation site database PhosPhAt. Nucleic Acids Res. 2013; 41(D1):D1176–84. doi:10.1093/nar/gks1081. González Besteiro MA, Ulm R. Phosphorylation and stabilization of Arabidopsis MAP kinase phosphatase 1 in response to UV-B stress. J Biol Chem. 2013; 288(1):480–6. doi:10.1074/jbc.M112.434654. Guo H, Ecker JR. Plant responses to ethylene gas are mediated by SCF(EBF1/EBF2)-dependent proteolysis of EIN3 transcription factor. Cell. 2003; 115(6):667–77. Potuschak T, Lechner E, Parmentier Y, Yanagisawa S, Grava S, Koncz C, et al. EIN3-dependent regulation of plant ethylene hormone signaling by two Arabidopsis F-box proteins: EBF1 and EBF2. Cell. 2003; 115(6):679–89. Dubois M, Skirycz A, Claeys H, Maleux K, Dhondt S, De Bodt S, et al. ETHYLENE RESPONSE FACTOR6 acts as a central regulator of leaf growth under water-limiting conditions in Arabidopsis. Plant Physiol. 2013; 162(1):319–32. doi:10.1104/pp.113.216341. We want to thank Nathalie Gonzalez and Jasmien Vercruysse for fruitful discussions and feedback during the development of the framework. We also want to thank the reviewers and editor for their constructive input and ideas on rendering this a more comprehensible manuscript. This work was supported by Ghent University (Multidisciplinary Research Partnership Bioinformatics: from nucleotides to networks) [to SVL, TVP, YVdP], the Research Foundation Flanders (FWO) [to SVL], the Interuniversity Attraction Poles Program (grant no. P7/29 'MARS') initiated by the Belgian Science Policy Office, and Ghent University (Bijzonder Onderzoeksfonds Methusalem project no. BOF08/01M00408, Multidisciplinary Research Partnership Biotechnology for a Sustainable Economy project no. 01MRB510W) [to MD, DI]. Department of Plant Systems Biology, VIB, Technologiepark 927, Ghent, 9052, Belgium Sofie Van Landeghem, Thomas Van Parys, Marieke Dubois, Dirk Inzé & Yves Van de Peer Department of Plant Biotechnology and Bioinformatics, Ghent University, Technologiepark 927, Ghent, 9052, Belgium Genomics Research Institute, University of Pretoria, Private bag X200028, Pretoria, South Africa Yves Van de Peer Sofie Van Landeghem Thomas Van Parys Marieke Dubois Dirk Inzé Correspondence to Yves Van de Peer. SVL and TVP designed and implemented the Diffany framework. SVL drafted the manuscript and performed the differential analysis for the osmotic stress study. MD interpreted the results of this study and performed the experimental validation. DI and YVDP helped coordinate the study, provided feedback, and helped to draft the manuscript. All authors read and approved the final manuscript. Overview of the Diffany framework. Overview of the Diffany framework and its typical usage in a specific experiment involving the perturbation of an interactome under one or more conditions. (DOCX 183 KB) List of differentially expressed genes. Dataset of differentially expressed genes, as originally published by [24]. Listed here are the genes that are differentially expressed in at least one of the four time points, in either the more (FDR < 0.05) or less (FDR < 0.1) stringent dataset. This file also depicts the overlap of genes at the different time points. (XLSX 514 KB) Experimental methodology. Methodological details of the experiments performed on the putative HY5 regulator. (DOCX 22 KB) Figure showing the experimental validation of the putative HY5 regulator.
Detailed analysis of hy5 mutants and WT lines when exposed to mannitol-induced stress, comparing both leaf area and expression levels of putative HY5-target genes such as TCH3 and MYB51. (DOCX 472 KB) Van Landeghem, S., Van Parys, T., Dubois, M. et al. Diffany: an ontology-driven framework to infer, visualise and analyse differential molecular networks. BMC Bioinformatics 17, 18 (2016). https://doi.org/10.1186/s12859-015-0863-y Keywords: Osmotic stress response; Network analysis
Fixed point of a monotone function on [0,1]. Prove or disprove: let $f:[0,1]\rightarrow[0,1]$ be a monotone (not necessarily strict) function; then $f$ has a fixed point. Can I have a hint? real-analysis Selvakumar A
But no continuity is given – Selvakumar A Mar 20 '18 at 11:44
I tried it but did not succeed. – Selvakumar A Mar 20 '18 at 11:46
Well, now you changed the question, yes it can. Take $f(x)=x$. But fixed points do not always exist for monotone functions in general. – The Phenotype Mar 20 '18 at 11:47
But in this case, I think it will; that's why I asked for a proof. – Selvakumar A Mar 20 '18 at 11:49
I'll give you a way to visualize the problem: draw the plane $[0,1]\times [0,1]$ and $y=x$. Now construct any (discontinuous) function that does not intersect $y=x$. – The Phenotype Mar 20 '18 at 11:50
This is a special case of the Knaster-Tarski fixed point theorem. Suppose $f:[0,1] \to [0,1]$ is any monotone function, i.e., whenever we have $x \le y$ in $[0,1]$ we have $f(x) \le f(y)$ (no continuity assumptions). Define $A = \{x \in [0,1]: x \le f(x)\}$. The set $A$ is non-empty, as $0 \in A$. So by completeness of $\mathbb{R}$ (every non-empty set bounded above has a supremum, and $1$ surely is an upper bound) $s:= \sup(A) \le 1$ exists. I claim that $f(s) = s$. To see this: if $x \in A$ then $x \le s$, so $x\le f(x) \le f(s)$, and as $x \in A$ is arbitrary, $f(s)$ is an upper bound for $A$ as well; since $s$ is the least upper bound for $A$, this gives $s \le f(s)$. Now $s \le f(s)$ shows that in fact $s \in A$ itself, and also implies that $f(s) \le f(f(s))$, which shows that $f(s) \in A$ as well. This implies $f(s) \le s$ (as $s$ is an upper bound for $A$), and so $f(s) = s$ from both inequalities. So $f$ has a fixed point. If $f$ is monotone the other way round ($x \le y \rightarrow f(x) \ge f(y)$), adapt the argument using $\inf$, e.g. (Or compose with an order-reversing bijection of $[0,1]$, like $h(x) = 1-x$, and apply the above to the composed map first.) Henno Brandsma
You said you assume any monotone function, but in fact you assumed a non-decreasing one. And a decreasing function may miss a fixed point, e.g. $f(x)=1-x/2$ for $0\le x\le 0.5$ and $f(x)=(1-x)/2$ for $0.5 < x\le 1$. – CiaPan Mar 20 '18 at 12:31
Actually, he assumes increasing – Selvakumar A Mar 20 '18 at 15:12
@SelvakumarA: the function $f(x)=\frac12$ meets the constraints of the answer, so $f$ is non-decreasing, not necessarily increasing. – robjohn♦ Mar 20 '18 at 22:31
Your last paragraph is obscure, because you neglected to state a conclusion. It sounds as if you are saying that a function $f$ such that $x\le y\to f(x)\ge f(y)$ has a fixed point, which is of course false. – bof Jan 24 at 10:25
This might be a somewhat simpler argument. Suppose that $f:[0,1]\to[0,1]$ is non-decreasing and assume that $f$ has no fixed point. Define $$ s=\sup\{x:f(x)\gt x\} $$ Note that $0\in\{x:f(x)\gt x\}$, so $s\in[0,1]$. If $f(s)\gt s$, then for all $x\in(s,f(s))$, we must have $f(x)\lt x$ by the definition of $s$. But then $f(x)\lt x\lt f(s)$ contradicts that $f$ is non-decreasing. If $f(s)\lt s$, then, by the definition of $s$, there must be some $x\in(f(s),s)$ so that $f(x)\gt x$. But then $f(s)\lt x\lt f(x)$ contradicts that $f$ is non-decreasing.
Thus, the assumption that $f$ has no fixed point must be false. robjohn♦
Let $g:[0,1]\to[0,1]$ be a monotone non-decreasing function. Suppose that $g$ has no fixed points. Then $g(x)\neq x$ for all $x\in [0,1]$, so we can decompose $[0,1]$ into two non-empty disjoint sets: $$ [0,1] = \{x: g(x)>x\} \cup \{x:g(x)<x\} = G^+ \cup G^-, \quad G^+\cap G^- = \emptyset, \quad 0 \in G^+,\ 1\in G^-. $$ Now let $\delta = \sup G^+$. I claim that $\delta \in G^+$. Indeed, by definition of the sup, there is a sequence $x_n \uparrow \delta$, $x_n\in G^+$ for all $n\in \mathbb{N}$. Then $$ g(\delta)\geq g(x_n) >x_n, \quad g(\delta)\geq \lim_n x_n = \delta. $$ Since $g(\delta)\neq \delta$ we conclude that $g(\delta) > \delta$ and $\delta \in G^+$. Now, since $\delta = \sup G^+$ and $\delta<1$, the interval $(\delta, 1]\subset G^-$. Now let $x_n \in G^-$, $x_n \downarrow \delta$. We obtain $$ g(\delta)\leq g(x_n)<x_n, \quad g(\delta)\leq \lim_n x_n = \delta, $$ and again, since $g(\delta)\neq \delta$, $g(\delta)<\delta$. So $\delta \in G^-$, but this is a contradiction since $G^-\cap G^+=\emptyset$. This shows that $g$ has a fixed point. In particular, we showed that if $g(0)>0$ and $g(1)<1$, then $g(\delta)=\delta$. Tommaso Seneci
If you allow decreasing functions, counterexamples are easy . . . For a counterexample where $f$ is non-strictly decreasing, let $f:[0,1]\to [0,1]$ be defined by $$ f(x)= \begin{cases} 1&\text{if}\;x=0 \\[4pt] 0&\text{otherwise} \end{cases} $$ For a counterexample where $f$ is strictly decreasing, let $f:[0,1]\to [0,1]$ be defined by $$ f(x)= \begin{cases} 1-\frac{x}{2}&\text{if}\;x < \frac{1}{2}\\[4pt] \frac{1}{2}-\frac{x}{2}&\text{otherwise} \end{cases} $$ On the other hand . . . Claim: If $f:[0,1]\to [0,1]$ is monotonically (not necessarily strictly) increasing, then $f$ has a fixed point. Suppose $f:[0,1]\to [0,1]$ is a monotonically (not necessarily strictly) increasing function such that $f$ does not have a fixed point. Our goal is to derive a contradiction. Let $A=\{x\in [0,1]\mid f(x) > x\}$, and let $B=\{x\in [0,1]\mid f(x) < x\}$. Since $f$ has no fixed point, we get $f(0) > 0$ and $f(1) < 1$, so we have $0 \in A$ and $1\in B$. Then $A,B$ are nonempty, disjoint, and $A \cup B = [0,1]$. Let $c=\text{glb}(B)$, and let $d=f(c)$. Consider two cases . . . Case $(1)$: $c\in A$. Then $f(c) > c$, hence $f(f(c)) \ge f(c)$, so $f(d) \ge d$. Since $f$ has no fixed point, we get $f(d) > d$. Thus, $c < f(c) = d < f(d)$. Since $c=\text{glb}(B)$ and $c \notin B$, there exists $b\in B$ with $c < b < d$. \begin{align*} \text{Then}\;\;&c < b\\[4pt] \implies\;&f(c) \le f(b)&&\text{[by monotonicity of $f$]}\\[4pt] \implies\;&d \le f(b)&&\text{[since $d=f(c)$]}\\[4pt] \implies\;&b < f(b)&&\text{[since $b < d$]}\\[4pt] \end{align*} contrary to $b\in B$. Case $(2)$: $c\in B$. \begin{align*} \text{Then}\;\;&d = f(c)\\[4pt] \implies\;&d < c&&\text{[since $f(c) < c$]}\\[4pt] \implies\;&f(d) \le f(c)&&\text{[by monotonicity of $f$]}\\[4pt] \implies\;&f(d) \le d&&\text{[since $d=f(c)$]}\\[4pt] \implies\;&f(d) < d&&\text{[since $f$ has no fixed points]}\\[4pt] \implies\;&d \in B\\[4pt] \end{align*} contradiction, since $c=\text{glb}(B)$ and $d < c$. Thus, both cases yield a contradiction, which completes the proof. quasi
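None of the answers above uses code, but the interval-shrinking idea behind these proofs is easy to check numerically. The following is my own illustrative sketch, not from the thread: a bisection that maintains the invariant $f(lo) \ge lo$ and $f(hi) \le hi$, so that $f$ maps $[lo, hi]$ into itself and the sup argument above guarantees a fixed point inside the bracket at every step, even for a discontinuous non-decreasing $f$.

def fixed_point(f, tol=1e-12):
    """Approximate a fixed point of a non-decreasing f: [0,1] -> [0,1].

    Invariant: f(lo) >= lo and f(hi) <= hi.  Then f maps [lo, hi] into
    itself, so the Knaster-Tarski argument gives a fixed point in [lo, hi].
    """
    lo, hi = 0.0, 1.0  # f(0) >= 0 and f(1) <= 1 hold trivially
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) >= mid:
            lo = mid  # keeps f(lo) >= lo
        else:
            hi = mid  # f(mid) < mid, so f(hi) <= hi still holds
    return lo

# A discontinuous non-decreasing example: the jump at 0.5 does not
# destroy the fixed point, since f(0.9) = 0.9.
f = lambda x: 0.8 if x < 0.5 else 0.9
print(fixed_point(f))  # prints approximately 0.9

Note that this locates some fixed point, not necessarily $\sup A$; the decreasing counterexamples above show why no such bracket can be maintained for order-reversing maps.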
Article ID 0045 October 2016 Regular Various properties of the 0.6BaTiO$_3$–0.4Ni$_{0.5}$Zn$_{0.5}$Fe$_2$O$_4$ multiferroic nanocomposite RENUKA CHAUHAN R C SRIVASTAVA Structural, magnetic and ferroelectric properties of the 0.6BaTiO$_3$–0.4(Ni$_{0.5}$Zn$_{0.5}$Fe$_2$O$_4$) multiferroic nanocomposite are presented here. The structural properties of the samples were studied by XRD and Raman spectroscopy, which confirm the formation of the BaTiO$_3$ (BTO) phase with a tetragonal perovskite structure and a small secondary spinel phase due to the ferrite content. The magnetic and electric orderings were investigated by vibrating sample magnetometer (VSM) and ferroelectric ($P–E$) loop tracer at room temperature. The inception of ferroelectric properties is due to barium titanate. The remnant polarization increases ∼5 times for the composite with Ni$_{0.5}$Zn$_{0.5}$Fe$_2$O$_4$ (NZFO) substitution compared to BTO. The remnant polarization is conducive for switching applications of the multiferroic composite. The effect of concentration of H$_2$ physisorption on the current–voltage characteristic of armchair BN nanotubes in a CNT–BNNT–CNT set R AZIMIRAD A H BAYANI S SAFA In this research, we have studied the physisorption of hydrogen molecules on an armchair boron nitride (BN) nanotube (3,3) using density functional methods, and its effect on the current–voltage ($I–V$) characteristic of the nanotube as a function of concentration using Green's function techniques. The adsorption geometries and energies, charge transfer and electron transport are calculated. It is found that H$_2$ physisorption can suppress the $I–V$ characteristic of the BN nanotube, but it has no effect on the band gap of the nanotube. As the H$_2$ concentration increases, under the same applied bias voltage, the current through the BN nanotube first increases and then begins to decline. The current–voltage characteristic indicates that H$_2$ molecules can be detected by a BN-based sensor. Cylindrically symmetric cosmological model in the presence of bulk stress with varying $\Lambda$ V G METE A S NIMKAR V D ELKAR Cylindrically symmetric non-static space–time is investigated in the presence of bulk stress given by Landau and Lifshitz. To get a solution, a supplementary condition between metric potentials is used. The viscosity coefficient of the bulk viscous fluid is assumed to be a power function of mass density, whereas the coefficient of shear viscosity is considered as proportional to the scale of expansion in the model. Also, some physical and geometrical properties of the model are discussed. Statistical model of stress corrosion cracking based on extended form of Dirichlet energy: Part 2 HARRY YOSH In the previous paper (Pramana – J. Phys. 81(6), 1009 (2013)), the mechanism of stress corrosion cracking (SCC) based on a non-quadratic form of the Dirichlet energy was proposed and its statistical features were discussed. Following those results, we discuss here how SCC propagates on a pipe wall statistically. It reveals that the SCC growth distribution is described by a Cauchy problem for a time-dependent first-order partial differential equation, characterized by the convolution of the initial distribution of SCC over time. We also discuss the extension of the above results to SCC in two-dimensional space and its statistical features with a simple example.
Chaos in discrete fractional difference equations AMEY DESHPANDE VARSHA DAFTARDAR-GEJJI Recently, discrete fractional calculus (DFC) has been receiving attention due to its potential applications in the mathematical modelling of real-world phenomena with memory effects. In the present paper, the chaotic behaviour of fractional difference equations for the tent map, Gauss map and $2x$ (mod 1) map is studied numerically. We analyse the chaotic behaviour of these fractional difference equations and compare them with their integer counterparts. It is observed that the fractional difference equations for the Gauss and tent maps are more stable compared to their integer-order versions. Optimal control of vibrational transitions of HCl KRISHNA REDDY NANDIPATI ARUN KUMAR KANAKATI Control of fundamental and overtone transitions of a vibration is studied for the diatomic molecule HCl. Specifically, the results of the effect of variation of the penalty factor on the physical attributes of the system (i.e., probabilities) and the pulse (i.e., amplitudes), considering three different pulse durations for each value of the penalty factor, are shown and discussed. We have employed optimal control theory to obtain infrared pulses for selective vibrational transitions. The optimization of the initial guess field with Gaussian envelope, phrased as maximization of a cost functional, is done using the conjugate gradient method. The interaction of the field with the molecule is treated within the semiclassical dipole approximation. The potential and the dipole moment functions used in the calculations of control dynamics are obtained from high-level ab-initio calculations. A note on analytical solutions of nonlinear fractional 2D heat equation with non-local integral terms O S IYIOLA F D ZAMAN In this paper, we consider the (2+1) nonlinear fractional heat equation with non-local integral terms and investigate two different cases of such non-local integral terms. The first has to do with a time-dependent non-local integral term and the second with a space-dependent non-local integral term. Apart from the nonlinear nature of these formulations, the complexity due to the presence of the non-local integral terms impelled us to use a relatively new analytical technique called the q-homotopy analysis method to obtain analytical solutions to both cases in the form of convergent series with easily computable components. Our numerical analysis enables us to show the effects of the non-local terms and the fractional-order derivative on the solutions obtained by this method. Root mean square radii of heavy flavoured mesons in a quantum chromodynamics potential model TAPASHI DAS D K CHOUDHURY We report the results of root mean square (r.m.s.) radii of heavy flavoured mesons in a QCD model with the potential $V(r) = -(4\alpha_{s}/3r) + br + c$. As the potential is not analytically solvable, we first obtain the results in the absence of the confinement and Coulomb terms, respectively. Confinement and Coulomb effects are then introduced successively in the approach using Dalgarno's method of perturbation. We explicitly consider the following two quantum mechanical aspects in the analysis: (a) the scale factor $c$ in the potential should not affect the wave function of the system even while applying the perturbation theory; (b) the choice of the perturbative piece of the Hamiltonian (confinement or linear) should determine the effective radial separation between the quarks and antiquarks.
The results are then compared with the available theoretical values of r.m.s. radii. A generic travelling wave solution in dissipative laser cavity BALDEEP KAUR SOUMENDU JANA A large family of cosh-Gaussian travelling wave solutions of a complex Ginzburg–Landau equation (CGLE), which describes a dissipative semiconductor laser cavity, is derived. Using a perturbation method, the stability region is identified. Bifurcation analysis is done by smoothly varying the cavity loss coefficient to provide insight into the system dynamics. He's variational method is adopted to obtain the standard sech-type and the not-so-explored but promising cosh-Gaussian-type travelling wave solutions. For a given set of system parameters, only one sech solution is obtained, whereas several distinct solution points are derived for the cosh-Gaussian case. These solutions yield a wide variety of travelling wave profiles, namely Gaussian, near-sech, flat-top and a cosh-Gaussian with variable central dip. A split-step Fourier method and a pseudospectral method have been used for direct numerical solution of the CGLE, and travelling wave profiles identical to the analytical profiles have been obtained. We also identified the parametric zone that promises an extremely large family of cosh-Gaussian travelling wave solutions with tunable shape. This suggests that the cosh-Gaussian profile is quite generic and would be helpful for further theoretical as well as experimental investigations on pattern formation, pulse dynamics and localization in semiconductor laser cavities. Calculation of energy spectrum of $^{12}$C isotope with modified Yukawa potential by cluster models MOHAMMAD REZA SHOJAE NAFISEH ROSHAN BAKHT In this paper, we have calculated the energy spectrum of the $^{12}$C isotope in two cluster models, the $3\alpha$ cluster model and the $^8$Be + $\alpha$ cluster model. We use the modified Yukawa potential for the interaction between the clusters and solve the Schrödinger equation using the Nikiforov–Uvarov method to calculate the energy spectrum. Then, we increase the accuracy by adding spin-orbit coupling and tensor force and solve them by perturbation theory in both models. Finally, the calculated results for both models are compared with each other and with the experimental data. The results show that the isotope $^{12}$C should be considered as a three-$\alpha$ cluster and the modified Yukawa potential is adaptable for cluster interactions. Periodic solutions of Wick-type stochastic Korteweg–de Vries equations JIN HYUK CHOI DAEHO LEE HYUNSOO KIM Nonlinear stochastic partial differential equations have a wide range of applications in science and engineering. Finding exact solutions of the Wick-type stochastic equation will be helpful in the theories and numerical studies of such equations. In this paper, the Kudryashov method together with the Hermite transform is implemented to obtain exact solutions of the Wick-type stochastic Korteweg–de Vries equation. Further, graphical illustrations in two- and three-dimensional plots of the obtained solutions depending on time and space are also given with white noise functionals.
Synthesis, characterization and third-order nonlinear optical properties of polydiacetylene nanostructures, silver nanoparticles and polydiacetylene–silver nanocomposites B BHUSHAN S S TALWAR T KUNDU B P SINGH We have synthesized, characterized and studied the third-order nonlinear optical properties of two different nanostructures of polydiacetylene (PDA), PDA nanocrystals and PDA nanovesicles, along with silver nanoparticle-decorated PDA nanovesicles. The second molecular hyperpolarizability $\gamma(-\omega; \omega, -\omega, \omega)$ of the samples has been investigated by the antiresonant ring interferometric nonlinear spectroscopic (ARINS) technique using a femtosecond mode-locked Ti:sapphire laser in the spectral range of 720–820 nm. The observed spectral dispersion of $\gamma$ has been explained in the framework of the three-essential-states model and a correlation between the electronic structure and optical nonlinearity of the samples has been established. The energy of the two-photon state, transition dipole moments and linewidth of the transitions have been estimated. We have observed that the nonlinear optical properties of PDA nanocrystals and nanovesicles are different because of the influence of chain coupling effects facilitated by the chain packing geometry of the monomers. On the other hand, our investigation reveals that the spectral dispersion characteristic of $\gamma$ for silver nanoparticle-coated PDA nanovesicles is qualitatively similar to that observed for the uncoated PDA nanovesicles but bears no resemblance to that observed in silver nanoparticles. The presence of silver nanoparticles increases the $\gamma$ values of the coated nanovesicles slightly as compared to that of the uncoated nanovesicles, suggesting a definite but weak coupling between the free electrons of the metal nanoparticles and the $\pi$ electrons of the polymer in the composite system. Our comparative studies show that the arrangement of polymer chains in polydiacetylene nanocrystals is more favourable for higher nonlinearity. Structural, morphological, optical and antibacterial activity of rod-shaped zinc oxide and manganese-doped zinc oxide nanoparticles A DHANALAKSHMI B NATARAJAN V RAMADAS A PALANIMURUGAN S THANIKAIKARASAN Pure ZnO and Mn-doped ZnO nanoparticles were synthesized by the co-precipitation method. The structural characterizations of the nanoparticles were investigated by X-ray diffraction (XRD) and scanning electron microscopy (SEM) techniques. UV–Vis, FTIR and photoluminescence (PL) spectroscopy were used for analysing the optical properties of the nanoparticles. XRD results revealed the formation of ZnO and Mn-doped ZnO nanoparticles with wurtzite crystal structure having average crystallite sizes of 39 and 20 nm. From UV–Vis studies, optical band-gap energies of 3.20 and 3.25 eV were obtained for ZnO and Mn-doped ZnO nanoparticles, respectively. FTIR spectra confirm the presence of ZnO and Mn-doped ZnO nanoparticles. Photoluminescence analysis of all samples showed four main emission bands: a strong UV emission band, a weak blue band, a weak blue–green band and a weak green band, indicating their high structural and optical qualities. The antibacterial efficiency of ZnO and Mn-doped ZnO nanoparticles was studied using the disc diffusion method. The Mn-doped ZnO nanoparticles show better antibacterial activity at the higher doping level of 10 at% and over a longer duration of time.
Overlapping community detection using weighted consensus clustering LINTAO YANG ZETAI YU JING QIAN SHOUYIN LIU Many overlapping community detection algorithms have been proposed. Most of them are unstable and behave non-deterministically. In this paper, we use weighted consensus clustering for combining multiple base covers obtained by classic non-deterministic algorithms to improve the quality of the results. We first evaluate a reliability measure for each community in all base covers and assign a proportional weight to each one. Then we redefine the consensus matrix so that it takes into account not only the common membership of nodes, but also the reliability of the communities. Experimental results on both artificial and real-world networks show that our algorithm can find overlapping communities accurately. The classification of the single travelling wave solutions to the variant Boussinesq equations YUE KAI The discrimination system for the polynomial method is applied to variant Boussinesq equations to classify single travelling wave solutions. In particular, we construct corresponding solutions to the concrete parameters to show that each solution in the classification can be realized. Bose gases in one-dimensional harmonic trap JI-XUAN HOU JING YANG Thermodynamic quantities, occupation numbers and their fluctuations of a one-dimensional Bose gas confined by a harmonic potential are studied using different ensemble approaches. Combining number theory methods, a new approach is presented to calculate the occupation numbers of different energy levels in the microcanonical ensemble. The visible difference between the ground state occupation numbers in the grand-canonical ensemble and the microcanonical ensemble is found to decrease as a power law as the number of particles increases. Relativistic quantum correlations in bipartite fermionic states S KHAN N A KHAN The influences of relative motion, the size of the wave packet and the average momentum of the particles on different types of correlations present in bipartite quantum states are investigated. In particular, the dynamics of the quantum mutual information, the classical correlation and the quantum discord on the spin correlations of entangled fermions are studied. In the limit of small average momentum, regardless of the size of the wave packet and the rapidity, the classical and the quantum correlations are equally weighted. On the other hand, in the limit of large average momentum, the only correlations that exist in the system are the quantum correlations. For every value of the average momentum, the quantum correlations maximize at an optimal size of the wave packet. It is shown that, after reaching a minimum value, the revival of quantum discord occurs with increasing rapidity. Effects of thermal stratification on transient free convective flow of a nanofluid past a vertical plate NIRMAL CHAND PEDDISETTY An analysis of thermal stratification in transient free convection of nanofluids past an isothermal vertical plate is performed. Nanofluids containing nanoparticles of aluminium oxide, copper, titanium oxide and silver, having volume fraction of the nanoparticles less than or equal to 0.04, with water as the base fluid are considered. The governing boundary layer equations are solved numerically. The effects of the thermal stratification parameter and the volume fraction of the nanoparticles on the velocity and temperature are represented graphically.
It is observed that an increase in the thermal stratification parameter decreases the velocity and temperature profiles of nanofluids. An increase in the volume fraction of the nanoparticles enhances the temperature and reduces the velocity of nanofluids. Also, the influence of the thermal stratification parameter and the volume fraction of the nanoparticles on the local as well as average skin friction and the rate of heat transfer of nanofluids is discussed and represented graphically. The results are found to be in good agreement with the existing results in the literature. Homotopy deform method for reproducing kernel space for nonlinear boundary value problems MIN-QIANG XU YING-ZHEN LIN In this paper, the combination of the homotopy deform method (HDM) and the simplified reproducing kernel method (SRKM) is introduced for solving boundary value problems (BVPs) of nonlinear differential equations. The solution methodology is based on Adomian decomposition and the reproducing kernel method (RKM). By the HDM, the nonlinear equations can be converted into a series of linear BVPs. After that, the simplified reproducing kernel method, which not only simplifies the reproducing kernel but also avoids the time-consuming Schmidt orthogonalization process, is proposed to solve the linear equations. Some numerical test problems, including ordinary differential equations and partial differential equations, are analysed to illustrate the procedure and confirm the performance of the proposed method. The results faithfully reveal that our algorithm is considerably accurate and effective, as expected.
If you're suffering from blurred or distorted vision, or you've noticed a sudden and unexplained decline in the clarity of your vision, do not try to self-medicate. It is one thing to promote better eyesight from an existing and long-held baseline, but if you are noticing problems with your eyes, then you should see an optician and a doctor to rule out underlying medical conditions. The idea of a digital pill that records when it has been consumed is a sound one, but as the FDA notes, there is no evidence to say it actually increases the likelihood that patients with a history of inconsistent consumption will follow their prescribed course of treatment. There is also a very strange irony in schizophrenia being the first condition this technology is being used to target. Pharmaceutical, substance used in the diagnosis, treatment, or prevention of disease and for restoring, correcting, or modifying organic functions. (See also pharmaceutical industry.) Records of medicinal plants and minerals date to ancient Chinese, Hindu, and Mediterranean civilizations. Ancient Greek physicians such as Galen used a variety of drugs in their profession.… "Cavin, you are phenomenal! An incredulous journey of a near-death accident scripted by an incredible man who chose to share his knowledge of healing his own broken brain. I requested our public library purchase your book because everyone, those with and without brain injuries, should have access to YOUR brain and this book. Thank you for your legacy to mankind!" If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings. Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on the overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector. Low-tech methods of cognitive enhancement include many components of what has traditionally been viewed as a healthy lifestyle, such as exercise, good nutrition, adequate sleep, and stress management. These low-tech methods nevertheless belong in a discussion of brain enhancement because, in addition to benefiting cognitive performance, their effects on brain function have been demonstrated (Almeida et al., 2002; Boonstra, Stins, Daffertshofer, & Beek, 2007; Hillman, Erickson, & Kramer, 2008; Lutz, Slagter, Dunne, & Davidson, 2008; Van Dongen, Maislin, Mullington, & Dinges, 2003). Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients: Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. The product contains silica and rice bran, though, which we are not sure is necessary. I tried taking whole pills at 1 and 3 AM. I felt kind of bushed at 9 AM after all the reading, and the 50-minute nap didn't help much - I was asleep only around 10 minutes and spent most of it thinking or meditating.
Just as well the 3D driver is still broken; I doubt the scores would be reasonable. Began to perk up again past 10 AM, then felt more bushed at 1 PM, and so on throughout the day; kind of gave up and began watching & finishing anime (Amagami and Voices of a Distant Star) for the rest of the day with occasional reading breaks (e.g. to start James C. Scott's Seeing Like a State, which is as described so far). As expected from the low quality of the day, the recovery sleep was bigger than before: a full 10 hours rather than 9:40; the next day, I slept a normal 8:50, and the following day ~8:20 (woken up early); 10:20 (slept in); 8:44; 8:18 (▁▇▁▁). It will be interesting to see whether my excess sleep remains in the one-hour range for good modafinil nights and two hours for bad modafinil nights. I had tried 8 randomized days like the Adderall experiment to see whether I was one of the people whom modafinil energizes during the day. (The other way to use it is to skip sleep, which is my preferred use.) I rarely use it during the day since my initial uses did not impress me subjectively. The experiment was not my best - while it was double-blind randomized, the measurements were subjective, and not a good measure of mental functioning like dual n-back (DNB) scores, which I could statistically compare from day to day or against my many previous days of dual n-back scores. Between my high expectation of finding the null result, the poor experiment quality, and the minimal effect it had (eliminating an already rare use), the value of this information was very small. Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn't do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits. Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. Some natural nootropics include Ginkgo biloba and ginseng. One benefit to using natural nootropics is they boost brain function and support brain health. They do this by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropics contain Rhodiola rosea, Panax ginseng, and more. And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy. Nicotine's stimulant effects are general and do not come with the same tweakiness and aggression associated with the amphetamines, and subjectively are much cleaner with less of a crash. I would say that its stimulant effects are fairly strong, around that of modafinil. Another advantage is that nicotine operates through nicotinic receptors and so doesn't cross-tolerate with dopaminergic stimulants (hence one could hypothetically cycle through nicotine, modafinil, amphetamines, and caffeine, hitting different receptors each time).
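For what it's worth, blinded randomized self-experiments like the 8 modafinil days above (or guessing Adderall versus placebo) are commonly scored with a simple exact binomial test on the correct guesses. The following is my own minimal sketch of that calculation, not code from the original write-up, and the counts used below (7 correct out of 8) are made-up placeholders:

from math import comb

def binom_test_one_sided(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of guessing at least
    k of n blinded days correctly if the pill truly did nothing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Placeholder numbers: 7 correct guesses out of 8 randomized days.
print(binom_test_one_sided(7, 8))  # ~0.035, unlikely under pure chance

A result around 0.035 would support a "better than random chance" reading; with only 8 days, though, even perfect guessing caps out at 0.5^8 ≈ 0.004, which is why such short self-experiments stay underpowered.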
"How to Feed a Brain is an important book. It's the book I've been looking for since sustaining multiple concussions in the fall of 2013. I've dabbled in and out of gluten, dairy, and (processed) sugar free diets the past few years, but I have never eaten enough nutritious foods. This book has a simple-to-follow guide on daily consumption of produce, meat, and water. More photos from this reportage are featured in Quartz's new book The Objects that Power the Global Economy. You may not have seen these objects before, but they've already changed the way you live. Each chapter examines an object that is driving radical change in the global economy. This is from the chapter on the drug modafinil, which explores modifying the mind for a more productive life. But, thanks to the efforts of a number of remarkable scientists, researchers and plain-old neurohackers, we are beginning to put together a "whole systems" model of how all the different parts of the human brain work together and how they mesh with the complex regulatory structures of the body. It's going to take a lot more data and collaboration to dial this model in, but already we are empowered to design stacks that can meaningfully deliver on the promise of nootropics "to enhance the quality of subjective experience and promote cognitive health, while having extremely low toxicity and possessing very few side effects." It's a type of brain hacking that is intended to produce noticeable cognitive benefits. Another class of substances with the potential to enhance cognition in normal healthy individuals is the class of prescription stimulants used to treat attention-deficit/hyperactivity disorder (ADHD). These include methylphenidate (MPH), best known as Ritalin or Concerta, and amphetamine (AMP), most widely prescribed as mixed AMP salts consisting primarily of dextroamphetamine (d-AMP), known by the trade name Adderall. These medications have become familiar to the general public because of the growing rates of diagnosis of ADHD children and adults (Froehlich et al., 2007; Sankaranarayanan, Puumala, & Kratochvil, 2006) and the recognition that these medications are effective for treating ADHD (MTA Cooperative Group, 1999; Swanson et al., 2008). …researchers have added a new layer to the smart pill conversation. Adderall, they've found, makes you think you're doing better than you actually are….Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job….But the results of the new University of Pennsylvania study, funded by the U.S. Navy and not yet published but presented at the annual Society for Neuroscience conference last month, are consistent with much of the existing research. As a group, no overall statistically-significant improvement or impairment was seen as a result of taking Adderall. The research team tested 47 subjects, all in their 20s, all without a diagnosis of ADHD, on a variety of cognitive functions, from working memory-how much information they could keep in mind and manipulate-to raw intelligence, to memories for specific events and faces….The last question they asked their subjects was: How and how much did the pill influence your performance on today's tests? Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job on the tasks they'd been given, even though their performance did not show an improvement over that of those who had taken the placebo. 
According to Irena Ilieva… it's the first time since the 1960s that a study on the effects of amphetamine, a close cousin of Adderall, has asked how subjects perceive the effect of the drug on their performance. Another classic approach to the assessment of working memory is the span task, in which a series of items is presented to the subject for repetition, transcription, or recognition. The longest series that can be reproduced accurately is called the forward span and is a measure of working memory capacity. The ability to reproduce the series in reverse order is tested in backward span tasks and is a more stringent test of working memory capacity, and perhaps of other working memory functions as well. The digit span task from the Wechsler (1981) IQ test was used in four studies of stimulant effects on working memory. One study showed that d-AMP increased digit span (de Wit et al., 2002), and three found no effects of d-AMP or MPH (Oken, Kishiyama, & Salinsky, 1995; Schmedtje, Oman, Letz, & Baker, 1988; Silber, Croft, Papafotiou, & Stough, 2006). A spatial span task, in which subjects must retain and reproduce the order in which boxes in a scattered spatial arrangement change color, was used by Elliott et al. (1997) to assess the effects of MPH on working memory. For subjects in the group receiving placebo first, MPH increased spatial span. However, for the subjects who received MPH first, there was a nonsignificant opposite trend. The group difference in drug effect is not easily explained. The authors noted that the subjects in the first group performed at an overall lower level, and so this may be another manifestation of the trend for a larger enhancement effect for less able subjects. Want to try a nootropic stack for yourself? Your best bet is to buy smart drugs online. You can get good prices and have the supplements delivered to your home. This means no hassle for you. And after you get them in the mail, you can start to see the benefits for yourself. If you're going to order smart drugs on the internet, it's important to go with one of the top manufacturers so that you get the best product possible. My predictions were substantially better than random chance, so my default belief - that Adderall does affect me and (mostly) for the better - is borne out. I usually sleep very well, and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn't keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don't think Adderall is personally worthwhile. "Cavin has done an amazing job in all aspects of his life. Overcoming the horrific life-threatening accident, and then going on to do whatever he can to help others with his contagious wonderful attitude. This book is an easy-to-understand, fact-filled manual for anyone, but especially those who are or are caregivers for a loved one with TBI. I also highly recommend his podcast series." The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give away) their medication.
Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell, trade, or give away their pills over their lifetime. Another moral concern is that these drugs — especially when used by Ivy League students or anyone in an already privileged position — may widen the gap between those who are advantaged and those who are not. But others have inverted the argument, saying these drugs can help those who are disadvantaged to reduce the gap. In an interview with the New York Times, Dr. Michael Anderson explains that he uses ADHD (a diagnosis he calls "made up") as an excuse to prescribe Adderall to the children who really need it — children from impoverished backgrounds suffering from poor academic performance. Nevertheless, a drug that improved your memory could be said to have made you smarter. We tend to view rote memory, the ability to memorize facts and repeat them, as a dumber kind of intelligence than creativity, strategy, or interpersonal skills. "But it is also true that certain abilities that we view as intelligence turn out to be in fact a very good memory being put to work," Farah says. "Where can you draw the line between Red Bull, six cups of coffee and a prescription drug that keeps you more alert," says Michael Schrage of the MIT Center for Digital Business, who has studied the phenomenon. "You can't draw the line meaningfully - some organizations have cultures where it is expected that employees go the extra mile to finish an all-nighter." Modafinil is a eugeroic, or 'wakefulness-promoting agent', intended to help people with narcolepsy. It was invented in the 1970s, but was first approved by the American FDA in 1998 for medical use. Recent years have seen its off-label use as a 'smart drug' grow. It's not known exactly how modafinil works, but scientists believe it may increase levels of histamines in the brain, which can keep you awake. It might also inhibit the dissipation of dopamine, again helping wakefulness, and it may help alertness by boosting norepinephrine levels, contributing to its reputation as a drug to help focus and concentration. Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis. It's basic economics: the price of a good must be greater than the cost of producing said good, but only under perfect competition will price = cost. Otherwise, the price is simply whatever maximizes profit for the seller. (Bottled water doesn't really cost $2 to produce.)
This can lead to apparently counter-intuitive consequences involving price discrimination & market segmentation - such as damaged goods, which are the premium product deliberately degraded and sold for less (some Intel CPUs, some headphones, etc.). The most famous examples were railroads; one notable passage by the French engineer-economist Jules Dupuit describes the motivation for the conditions in 1849: Four of the studies focused on middle and high school students, with varied results. Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007). Either way, if more and more people use these types of stimulants, there may be a risk that we will find ourselves in an ever-expanding neurological arms race, argues philosophy professor Nicole Vincent. But is this necessarily a bad thing? No, says Farahany, who sees the improvement in cognitive functioning as a social good that we should pursue. Better brain functioning would result in societal benefits, she argues, "like economic gains or even reducing dangerous errors." Googling, you sometimes see correlational studies like Intake of Flavonoid-Rich Wine, Tea, and Chocolate by Elderly Men and Women Is Associated with Better Cognitive Test Performance; in this one, the correlated performance increase from eating chocolate was generally fairly modest (say, <10%), and the maximum effects were at 10g/day of what was probably milk chocolate, which generally has 10-40% chocolate liquor in it, suggesting any experiment use 1-4g. More interesting is the blind RCT experiment Consumption of cocoa flavanols results in acute improvements in mood and cognitive performance during sustained mental effort, which found improvements at ~1g; the most dramatic improvement of the 4 tasks (on the Threes correct) saw a difference of 2 to 6 at the end of the hour of testing, while several of the other tests converged by the end or saw the controls winning (Sevens correct). Crews et al 2008 found no cognitive benefit, and an fMRI experiment found the change in brain oxygen levels it wanted but no improvement to reaction times. Capsule Connection sells 1,000 size-00 pills (the largest size) for $9. I already have a pill machine, so that doesn't count (a sunk cost). If we sum the grams-per-day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1,000 days is 13,000 pills, and 1,000 pills is $9, so we need 13 units, and 13 times 9 is $117 (the arithmetic is spelled out in the code sketch after this passage). "Such an informative and inspiring read! Insight into how optimal nutrients improved Cavin's own brain recovery make this knowledge-filled read compelling and relatable. The recommendations are easy to understand as well as scientifically founded – it's not another fad diet manual. The additional tools and resources provided throughout make it possible for anyone to integrate these enhancements into their nutritional repertoire. Looking forward to more from Cavin and Feed a Brain!!!!!!"
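For readers who want to check the capsule arithmetic above, here is a minimal sketch; all figures (grams per day, capsule capacity, lot size, and price) are taken straight from the quoted text rather than verified against the vendor.

# Reproducing the creatine capsule arithmetic quoted above.
# All numbers come from the text itself, not from the vendor.
grams_per_day = 9.75        # sum of the grams-per-day column
capsule_capacity = 0.75     # grams per size-00 capsule
days = 1000
price_per_lot = 9           # dollars per lot of 1,000 capsules

pills_per_day = grams_per_day / capsule_capacity      # 13 pills
pills_needed = pills_per_day * days                   # 13,000 pills
lots = pills_needed / 1000                            # 13 lots
total_cost = lots * price_per_lot                     # $117

print(pills_per_day, pills_needed, lots, total_cost)  # 13.0 13000.0 13.0 117.0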
See Melatonin for information on effects & cost; I regularly use melatonin to sleep (more to induce sleep than to prolong or deepen it), and investigating with my Zeo, it does seem to improve & shorten my sleep. Some research suggests that higher doses are not necessarily better and may be overkill, so each time I've run out, I've been steadily decreasing the dose from 3mg to 1.5mg to 1mg, without apparently compromising the usefulness. One of the most obscure -racetams around, coluracetam (Smarter Nootropics, Ceretropic, Isochroma) acts in a different way from piracetam - piracetam apparently attacks the breakdown of acetylcholine, while coluracetam instead increases how much choline can be turned into useful acetylcholine. This apparently is a unique mechanism. A crazy Longecity user, ScienceGuy, ponied up $16,000 (!) for a custom synthesis of 500g; he was experimenting with 10-80mg sublingual doses (the ranges in the original anti-depressive trials) and reported a laundry list of effects (as does Isochroma): primarily that it was anxiolytic and increased work stamina. Unfortunately for my stack, he claims it combines poorly with piracetam. He offered free 2g samples for regulars to test his claims. I asked & received some. Smart pills have huge potential and several important applications, particularly in diagnosis. Smart pills are growing as a highly effective method of endoscopy, particularly for gastrointestinal diseases. Urbanization and rapid lifestyle changes leaning toward unhealthy diets and poor eating habits have led to a distinctive increase in lifestyle disorders such as gastroesophageal reflux disease (GERD), obesity, and gastric ulcers. It's been widely reported that Silicon Valley entrepreneurs and college students turn to Adderall (without a prescription) to work late through the night. In fact, a 2012 study published in the Journal of American College Health showed that roughly two-thirds of undergraduate students were offered prescription stimulants for non-medical purposes by senior year. This calculation - reaping only 7/9 of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study suggesting that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit. Most of the most solid fish oil results seem to meliorate the effects of age; in my 20s, I'm not sure they are worth the cost. But I would probably resume fish oil in my 30s or 40s when aging really becomes a concern. So the experiment at most will result in discontinuing for a decade. At $70 a year, that's a net present value of sum $ map (\n -> 70 / (1 + 0.05)^n) [1..10] = $540.5 (a Python rendering of this one-liner follows this passage). But when aficionados talk about nootropics, they usually refer to substances that have supposedly few side effects and low toxicity. Most often they mean piracetam, which Giurgea first synthesized in 1964 and which is approved for therapeutic use in dozens of countries for use in adults and the elderly. Not so in the United States, however, where officially it can be sold only for research purposes. These are quite abstract concepts, though. There is a large gap, a grey area, in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate.
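The inline Haskell one-liner above can be checked with a few lines of Python; the $70 annual cost, 5% discount rate, and 10-year horizon are the figures used in the expression itself.

# Net present value of a $70/year expense over 10 years at a 5%
# discount rate - the same sum as the Haskell one-liner above.
npv = sum(70 / (1 + 0.05) ** n for n in range(1, 11))
print(round(npv, 1))  # 540.5

# The same calculation in general form, for any annual cost, rate, and horizon:
def present_value(annual_cost, rate=0.05, years=10):
    return sum(annual_cost / (1 + rate) ** n for n in range(1, years + 1))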
Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds." The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and it is unclear how successful it was, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment that I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks and a temporal exogenous factor in the last quarter which was responsible for the improvement. The reviews on this site are a demonstration of what someone who uses the advertised products may experience. Results and experience may vary from user to user. All recommendations on this site are based solely on opinion. These products are not for use by children under the age of 18 and women who are pregnant or nursing. If you are under the care of a physician, have a known medical condition or are taking prescription medication, seek medical advice from your health care provider before taking any new supplements. All product reviews and user testimonials on this page are for reference and educational purposes only. You must draw your own conclusions as to the efficacy of any nutrient. Consumer Advisor Online makes no guarantee or representations as to the quality of any of the products represented on this website. The information on this page, while accurate at the time of publishing, may be subject to change or alterations. All logos and trademarks used in this site are owned by the trademark holders and respective companies. I ultimately mixed it in with the 3kg of piracetam and included it in that batch of pills. I mixed it very thoroughly, one ingredient at a time, so I'm not very worried about hot spots. But if you are, one clever way to get accurate caffeine measurements is to measure out a large quantity & dissolve it, since it's easier to measure water than powder, and dissolving guarantees even distribution. This can be important because caffeine is, like nicotine, an alkaloid poison which - the dose makes the poison - can kill in high doses, and concentrated powder makes it easy to take too much, as one inept Englishman discovered the hard way. (This dissolving trick is applicable to anything else that dissolves nicely.) The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU, so it would take at least 14 pills, and it's unclear where problems start with chronic use.
Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140 nmol/l, and reading anecdotes online suggests that 5k IU daily doses tend to put people well below that (around 70-100 nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia. As opposed to what it might lead you to believe, Ginkgo Smart is not simply a Ginkgo biloba supplement. In all actuality, it's much more than that – a nootropic (well duh, we wouldn't be reviewing it otherwise). Ginkgo Smart has actually been seeing quite some popularity lately, possibly riding on the popularity of Ginkgo biloba itself, which has been storming through the US lately and becoming one of the highest-selling supplements there. We were pleasantly surprised that it wasn't too hard to find Ginkgo Smart's ingredients… Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 × 48 × $7.25 ≈ $70), it's still a clear profit to run a convincing experiment. For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular. Smart pill technologies are primarily utilized for dairy products, soft drinks, and water, catering in diverse shapes and sizes to various consumers.
The rising preference for easy-to-carry liquid foods is expected to boost the demand for these packaging cartons, thereby fueling market growth. The changing lifestyle of people, coupled with the convenience of carton packaging, is projected to propel the market. In addition, smart pill technologies have an edge over glass and plastic packaging in terms of environmental friendliness and recyclability of the material, which mitigates wastage and reduces product cost. Thus, the aforementioned factors are expected to drive smart pill technology market growth over the projected period. From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects is enough for beginners and old-timers in nootropic use alike to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry. Potassium citrate powder is neither expensive nor cheap: I purchased 453g for $21. The powder is crystalline white, dissolves instantly in water, and is largely tasteless (sort of saline & slightly unpleasant). The powder is 37% potassium by weight (the formula is C6H5K3O7), so 453g is actually 167g of potassium - 80-160 days' worth depending on dose (the arithmetic is checked in the code sketch after this passage). Taken together, these considerations suggest that the cognitive effects of stimulants for any individual in any task will vary based on dosage and will not easily be predicted on the basis of data from other individuals or other tasks. Optimizing the cognitive effects of a stimulant would therefore require, in effect, a search through a high-dimensional space whose dimensions are dose; individual characteristics such as genetic, personality, and ability levels; and task characteristics. The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil runs out. The information on this website has not been evaluated by the Food & Drug Administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. You must consult your doctor before acting on any content on this website, especially if you are pregnant, nursing, taking medication, or have a medical condition. Bacopa monnieri is probably one of the safest and most effective memory and mood enhancer nootropics available today, with the fewest side effects. In some humans, greatly extended use of Bacopa monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which includes Bacopa monnieri as one of its main ingredients. This research is in contrast to the other substances I like, such as piracetam or fish oil.
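A quick check of the potassium citrate numbers above; the 453g purchase and 37% potassium fraction come from the text, while the 1-2g/day dose range is inferred here from the quoted "80-160 days' worth", not stated by the author.

# Checking the potassium citrate arithmetic quoted above.
grams_purchased = 453
potassium_fraction = 0.37                 # potassium citrate is ~37% K by weight
elemental_potassium = grams_purchased * potassium_fraction   # ~167.6 g

# Inferred dose range: 1-2 g of elemental potassium per day reproduces
# the "80-160 days' worth" figure in the text.
for daily_dose in (1.0, 2.0):
    print(daily_dose, round(elemental_potassium / daily_dose))  # ~168 and ~84 days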
I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seem to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability. If you could take a pill that would help you study and get better grades, would you? Off-label use of "smart drugs" – pharmaceuticals meant to treat disorders like ADHD, narcolepsy, and Alzheimer's – is becoming increasingly popular among college students hoping to get ahead, by helping them to stay focused and alert for longer periods of time. But is this cheating? Should their use as cognitive enhancers be approved by the FDA, the medical community, and society at large? Do the benefits outweigh the risks? Taken together, the available results are mixed, with slightly more null results than overall positive findings of enhancement and evidence of impairment in one reversal learning task. As the effect sizes listed in Table 5 show, the effects, when found, are generally substantial. When drug effects were assessed as a function of placebo performance, genotype, or self-reported impulsivity, enhancement was found to be greatest for participants who performed most poorly on placebo, had a COMT genotype associated with poorer executive function, or reported being impulsive in their everyday lives. In sum, the effects of stimulants on cognitive control are not robust, but MPH and d-AMP appear to enhance cognitive control in some tasks for some people, especially those less likely to perform well on cognitive control tasks. Some data suggest that cognitive enhancers do improve some types of learning and memory, but many other data say these substances have no effect. The strongest evidence for these substances is for the improvement of cognitive function in people with brain injury or disease (for example, Alzheimer's disease and traumatic brain injury). Although "popular" books and companies that sell smart drugs will try to convince you that these drugs work, the evidence for any significant effects of these substances in normal people is weak. There are also important side effects that must be considered. Many of these substances affect neurotransmitter systems in the central nervous system. The effects of these chemicals on neurological function and behavior are unknown. Moreover, the long-term safety of these substances has not been adequately tested. Also, some substances will interact with other substances. A substance such as the herb ma-huang may be dangerous if a person stops taking it suddenly; it can also cause heart attacks, stroke, and sudden death. Finally, it is important to remember that labeling a product "natural" does not make it "safe." QUALITY: They use pure, high-quality ingredients and are the ONLY ones we found that had a comprehensive formula including the top 5 most proven ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine, Bacopin and N-Acetyl L-Tyrosine. Thrive Natural's Super Brain Renew is fortified with just the right ingredients to help your body fully digest the active ingredients. No other brand came close to their comprehensive formula of 39 proven ingredients.
The "essential 5" are the most important elements to help improve your memory, concentration, focus, energy, and mental clarity. But, what also makes them stand out above all the rest was that they have several supporting vitamins and nutrients to help optimize brain and memory function. A critical factor for us is that this company does not use fillers, binders or synthetics in their product. We love the fact that their capsules are vegetarian, which is a nice bonus for health conscious consumers. The above information relates to studies of specific individual essential oil ingredients, some of which are used in the essential oil blends for various MONQ diffusers. Please note, however, that while individual ingredients may have been shown to exhibit certain independent effects when used alone, the specific blends of ingredients contained in MONQ diffusers have not been tested. No specific claims are being made that use of any MONQ diffusers will lead to any of the effects discussed above. Additionally, please note that MONQ diffusers have not been reviewed or approved by the U.S. Food and Drug Administration. MONQ diffusers are not intended to be used in the diagnosis, cure, mitigation, prevention, or treatment of any disease or medical condition. If you have a health condition or concern, please consult a physician or your alternative health care provider prior to using MONQ diffusers.
CommonCrawl
The energy stored in the electromagnetic field of an electron
According to Wikipedia, the total energy per unit volume stored in an electromagnetic field is $$u_{EM}=\frac{\varepsilon}{2}|\mathbb E|^2+\frac{1}{2\mu}|\mathbb B|^2$$ How does the energy stored in the electric field of the electron relate to its rest mass? How large a part of the rest mass comes from this field? And, related, how does the energy stored in the magnetic field induced by a moving electron relate to its kinetic energy? A Rigorous Derivation of Electromagnetic Self-force seems to give relevant information related to this question. electromagnetism energy electrons quantum-electrodynamics Lehs
Please avoid list-type questions. – Gert Sep 1 '16 at 21:48
@Gert: what is a list-type question? – flippiefanus Sep 3 '16 at 8:40
How does the energy stored in the electric field of the electron relate to its rest mass? It depends on whether we assume the electron has finite charge density everywhere or not. In case the charge density of the electron is finite everywhere (as it is in the Lorentz and Abraham models of the electron, where the charge is distributed on the surface or throughout the volume of a sphere), Poynting's equation is valid everywhere and implies the expression for EM energy density you wrote above. It can be shown that the net result of the mutual EM forces between parts of the sphere is an increase of the effective rest mass, together with other effects such as radiation damping. The change in the rest mass can then be related to the Poynting energy of the electron's field. However, how large these effects are depends on many details, like the size of the sphere, the distribution of charge in it, and the nature of the non-EM forces that hold the electric charge together. It is possible that the change in the mass is a very small part of the total mass, but it could also be a substantial part. In case the electron's charge is concentrated at a single point, so that the density is infinite there, the local Poynting equation is invalid at that point and thus cannot be relied upon to calculate the total EM energy. For example, if the electrons are points, one needs to use a theory of point particles to calculate their EM energy. In this kind of theory, a theorem analogous to Poynting's can be derived. It implies a different formula for EM energy density, in which a single charged point particle has an EM field but zero EM energy associated with it. Only if there are several particles can the net EM energy be non-zero. For example, in a Frenkel-type theory of electrons, the electrons are points with individual EM fields. The particles interact via EM forces, but one electron has no parts that could interact among themselves, so there is no change in its mass due to EM interactions. Also, there is no EM energy associated with the EM field of one lone electron. How large a part of the rest mass comes from this field? We do not know whether the electron is extended or point-like. Consequently, we do not know what part of its mass, if any, can be related to the EM energy stored in the space around it. At the end of the 19th century and in the first years of the 20th century there was a hypothesis that all mass of the electron is electromagnetic mass, and Kaufmann's experiments on the behaviour of fast electrons in electric and magnetic fields seemed to support it. This idea was largely abandoned when special relativity became accepted, because in special relativity electromagnetic and non-electromagnetic mass behave the same.
The past experiments got reinterpreted in such a way that no evidence of EM mass could be found in them.
J. Frenkel, Zur Elektrodynamik punktförmiger Elektronen, Zeits. f. Phys. 32 (1925), pp. 518-534. http://dx.doi.org/10.1007/BF01331692
J. A. Wheeler, R. P. Feynman, Classical Electrodynamics in Terms of Direct Interparticle Interaction, Rev. Mod. Phys. 21 (3) (1949), pp. 425-433. http://dx.doi.org/10.1103/RevModPhys.21.425
https://en.wikipedia.org/wiki/Kaufmann%E2%80%93Bucherer%E2%80%93Neumann_experiments
Ján Lalinský
Besides the idea of electromagnetic mass, which proposed that all mass was electromagnetic, how does one explain the measurable electric field around a charged body (which contains energy) if the energy of the electrons charging this body doesn't contribute to the energy of this field? – Lehs Sep 5 '16 at 2:19
@Lehs, in the above theories, electromagnetic energy is not a function of the total electromagnetic field. It is zero for one lone particle, because no work is needed to form it - it has no parts. But bringing two charged particles close to each other does take some work, and so the net electromagnetic energy of such a system is positive. It can be expressed as a function of the positions of the particles or as a functional of their individual fields. This functional is zero for one lone particle, but can be positive or negative for a system of two or more particles. – Ján Lalinský Sep 5 '16 at 10:25
I like your point of view that the question is model dependent. – Lehs Sep 5 '16 at 10:59
The energy in an electron is $E = m_e c^2$, where $m_e$ is the mass of the electron. A simple calculation shows that the energy required to bring an electron from infinity against another electron's repulsion is $\int F \, dr = \int (k e^2/r^2) \, dr = k e^2/r$, evaluated from infinity to $r$ (the negative signs cancel), where $k$ is the electrostatic coupling constant. Equating the two energies gives $m_e c^2 = k e^2/r$, which yields $r \approx 2.82 \times 10^{-15}$ m, the classical electron radius. This shows that the energy equivalent of the mass is the same as the energy in the field of the electron. (This also shows that two electrons can never collide.) Note also that a force is not the same as energy. The force from one electron can extend to infinity, yet still have finite energy. Only when a force acts over a distance do we get energy: $E$ is the integral of force times distance. Thus, as long as there is no accompanying motion, there are no energy limits on how much force there is or over what extent it acts.
Riad
This is still a good question, because we know that the energy stored in the electromagnetic field is real. When we store energy in a capacitor, that energy is $\frac{1}{2} E D V$, where $V$ is the volume of the capacitor. We can then convert this energy into mass by connecting the capacitor to a light bulb, which radiates the energy away in the form of photons. The energy stored in the field of the electron is at least $\alpha m_e c^2 / 2$, where $\alpha$ is the fine structure constant (approximately equal to 1/137). Here we have integrated the energy density around an electron from infinity down to the so-called reduced Compton wavelength of the electron (386 fm), i.e., to the localisation limit of the electron. So the answer is that the minimum contribution of classical electromagnetic energy to the electron mass is 1/274 of the electron mass. Below the distance of 386 fm we still have divergences in the calculations at the field-theory level.
jarek
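For concreteness, the two estimates above can be written out explicitly; this is a reconstruction using standard conventions ($k = 1/4\pi\varepsilon_0$), not part of the original answers. Equating the rest energy with the electrostatic self-energy gives the classical electron radius, $$m_e c^2 = \frac{k e^2}{r_e} \quad\Longrightarrow\quad r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} \approx 2.82 \times 10^{-15}\ \mathrm{m},$$ while integrating the Coulomb field energy density from the reduced Compton wavelength $\bar\lambda = \hbar/(m_e c) \approx 386\ \mathrm{fm}$ outward reproduces the $\alpha/2$ fraction: $$U = \int_{\bar\lambda}^{\infty} \frac{\varepsilon_0}{2} \left(\frac{e}{4\pi\varepsilon_0 r^2}\right)^2 4\pi r^2 \, dr = \frac{e^2}{8\pi\varepsilon_0 \bar\lambda} = \frac{\alpha}{2}\, m_e c^2 \approx \frac{m_e c^2}{274}.$$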
CommonCrawl
Honours and Memberships
Professor Mark Haskins FLSW
Faculty of Natural Sciences, Department of Mathematics
Visiting Professor
+44 (0)20 7594 8550 | m.haskins | CV | 668 Huxley Building, South Kensington Campus
Professor Mark Haskins FLSW is a Professor in Pure Mathematics at Imperial College London. His main research interests lie in differential geometry and geometric analysis; several of his research interests also lie at the intersection between geometry and theoretical physics. Since July 2016 Professor Haskins has been the Deputy Collaboration Director and a member of the Steering Committee for the Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics. The Collaboration, funded by an $8.5 million grant from the Simons Foundation, is a large-scale international collaboration, directed by Professor Robert Bryant at Duke University. The Collaboration will advance the theory and applications of spaces with special holonomy and the geometric structures—calibrated submanifolds and instantons—associated with them, particularly in the two exceptional cases: spaces with holonomy G2 or Spin(7), in 7 or 8 dimensions respectively. Haskins joined Imperial College in 2004 as a postdoctoral research fellow working with Professor Sir Simon Donaldson FRS. In 2005 he was appointed Lecturer in Pure Mathematics at Imperial, and promoted to Reader in 2007. From 2009 to 2013 Professor Haskins was a Leadership Fellow of the Engineering and Physical Sciences Research Council (EPSRC), pursuing his research project on Geometric Analysis and Special Lagrangian geometry. In 2013 Haskins was promoted to Professor of Pure Mathematics. From 2013 to 2015 he was an EPSRC Developing Leaders Fellow working on his research project Singular spaces of special and exceptional holonomy. In 2014 he was elected Fellow of the Learned Society of Wales. In 2016 he was a Research Professor at the MSRI programme in Differential Geometry. Currently his main research interests centre around the geometry of manifolds and singular spaces with special and exceptional holonomy and the geometric objects associated with these spaces — calibrated submanifolds and generalised instantons (generalised anti-self-dual connections). He has made important contributions to the theory of singular special Lagrangian n-folds, compact manifolds with G2 holonomy, associative 3-folds in manifolds with G2 holonomy, noncompact Calabi-Yau manifolds and nearly Kaehler 6-manifolds. Recently, Haskins and former PhD student Dr Lorenzo Foscolo solved a well-known, long-standing (since 1968) foundational problem in nearly Kaehler geometry: do there exist any complete inhomogeneous nearly Kaehler 6-manifolds? They proved that the 6-sphere S6 and the product of a pair of 3-spheres S3 × S3 both admit complete inhomogeneous nearly Kaehler structures. Their work will appear in Annals of Mathematics in early 2017. At present his work is focused on the phenomenon of Riemannian collapse within the context of spaces with special or exceptional holonomy, especially the construction of families of G2 holonomy metrics on 7-manifolds that collapse in the limit to 6-dimensional Calabi-Yau spaces. In theoretical physics, M theory is an 11-dimensional physical theory, while String Theories are 10-dimensional theories. To obtain real-world physics in 4 dimensions, M theory must be compactified on a 7-dimensional manifold, while in String Theory the compactification space is 6-dimensional.
In the simplest cases, supersymmetry forces these spaces to be G2 manifolds in the case of M theory, and Calabi-Yau manifolds in the case of String Theories. In various limits it is expected that M theory reduces to a String Theory. In one of these limits, the geometric underpinning of the connection between M theory in 11 dimensions and String Theory in 10 dimensions is the collapse of G2 holonomy metrics on a 7-manifold to a Calabi-Yau metric on a 6-manifold.
Postdoctoral position available
A 3-year postdoctoral position at Imperial, funded by the Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics, is currently available. The postdoc will work directly with Professor Haskins on projects in the broad areas of the Collaboration, and within Professor Haskins' research interests. Further details of the position, including instructions on how to apply, can be found here. The application deadline is 8 December 2016. (You must apply through the Imperial College job website as described in the adverts above.) For additional information please email Professor Haskins.
Research Meetings and Schools
Haskins has organised a number of recent meetings and research schools in geometry and geometric analysis, especially on the subject of the geometry of special or exceptional holonomy spaces.
Research Meetings
In January 2017 he is organising a meeting on Collapse, adiabatic limits and special holonomy at Imperial College, as part of the Simons Collaboration on Special Holonomy. In September 2016 he was the main organiser of the Inaugural Meeting of the Simons Collaboration on Special Holonomy, hosted by the Simons Center for Geometry and Physics at Stony Brook. In February 2015 he co-organised the week-long Oberwolfach miniworkshop Singularities in G2 geometry. In summer 2014 he co-organised, with Sir Simon Donaldson and Professor Dietmar Salamon, a 6-week long program G2 manifolds at the Simons Center for Geometry and Physics. The successful proposal for a Simons Collaboration on Special Holonomy grew out of this activity. Haskins has also organised research schools aimed at early career researchers in geometry. In summer 2014 he was the main organiser for the London Mathematical Society/Clay Mathematics Institute Research School, An Invitation to Geometry and Topology via G2. In summer 2013 he co-organised a 1-week summer school and a 1-week research workshop Ricci curvature: limit spaces and Kaehler geometry at ICMS Edinburgh. Speakers included: Jeff Cheeger, Sir Simon Donaldson, John Lott, Aaron Naber, Gang Tian and Burkhard Wilking.
Recent Invited Lectures and Presentations
Talks in 2016
2-part lecture: An Introduction to Ricci-flat spaces and metrics with special and exceptional holonomy, MSRI, Introductory workshop: Modern Riemannian Geometry, Jan 2016. Southern California Geometric Analysis conference, University of California Irvine, Jan 2016. Bay Area Differential Geometry seminar, Stanford University, Feb 2016. Differential Geometry in the Large, conference in honour of Wolfgang Meyer's 80th birthday, Florence, July 2016. Inaugural meeting of the Simons Collaboration on Special Holonomy in Geometry, Analysis and Physics, Simons Center for Geometry and Physics, Sept 2016. Geometric flows and the geometry of space-time, University of Hamburg, Sept 2016. First Mathematics Colloquium, Basque Centre for Applied Mathematics and Universidad del País Vasco, Bilbao, Nov 2016.
Talks in 2015
London Geometry and Topology seminar, Imperial College London, Jan 2015.
Department Colloquium, University of Warwick, Mar 2015. Séminaire d'analyse et géométrie, Institut de Mathématiques de Jussieu, Paris, Apr 2015. Geometry and Analysis seminar, Oxford Mathematical Institute, May 2015. Oberseminar Differentialgeometrie, Westfälische Wilhelms-Universität Münster, Nov 2015. Department of Mathematics and Physics Kolloquium, Leibniz Universität Hannover, Nov 2015.
Geometry-related seminars at Imperial
I organised the London Geometry and Topology seminar from 2005 to 2013. Professor Paolo Cascini now organises the seminar. In 2011 Professor Andre Neves and I started the Geometry and Analysis seminar with the assistance of several postdocs in geometry.
London Mathematical Society Journals
From January 2011 to 2016 I was the editorial adviser responsible for Differential Geometry and Geometric Analysis for the three London Mathematical Society journals: Bulletin of the LMS, Journal of the LMS and Proceedings of the LMS. Dr Felix Schulze has now taken over as editorial adviser in this area. The Proceedings, Journal and Bulletin of the London Mathematical Society are among the world's leading mathematical research journals. Although they share a common Editorial Advisory Board, a paper should be submitted directly to one of the journals.
Publications
Foscolo L, Haskins M, 2016, New G2-holonomy cones and exotic nearly Kähler structures on S6 and S3 × S3, Annals of Mathematics, Vol. 185, ISSN 0003-486X, pp. 59-130
Haskins M, Hein H-J, Nordström J, 2015, Asymptotically cylindrical Calabi-Yau manifolds, Journal of Differential Geometry, Vol. 101, ISSN 0022-040X, pp. 213-265
Corti A, Haskins M, Nordström J, Pacini T, 2015, G2-manifolds and associative submanifolds via semi-Fano 3-folds, Duke Mathematical Journal, Vol. 164, ISSN 1547-7398, pp. 1971-2092
Degeratu A, Haskins M, Weiß H, 2015, Mini-Workshop: Singularities in $\mathrm{G}_2$-geometry, Oberwolfach Reports, Vol. 12, ISSN 1660-8933, pp. 449-488
Corti A, Haskins M, Nordström J, Pacini T, 2013, Asymptotically cylindrical Calabi-Yau 3-folds from weak Fano 3-folds, Geometry & Topology, Vol. 17, ISSN 1465-3060, pp. 1955-2059
CommonCrawl
The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation
Davide Chicco (ORCID: 0000-0001-9655-7142) & Giuseppe Jurman (ORCID: 0000-0002-2705-5728)
BMC Genomics volume 21, Article number: 6 (2020)
To evaluate binary classifications and their confusion matrices, scientific researchers can employ several statistical rates, according to the goal of the experiment they are investigating. Despite being a crucial issue in machine learning, no widespread consensus has been reached on a single preferred measure yet. Accuracy and F1 score computed on confusion matrices have been (and still are) among the most popular metrics adopted in binary classification tasks. However, these statistical measures can dangerously show overoptimistic, inflated results, especially on imbalanced datasets. The Matthews correlation coefficient (MCC), instead, is a more reliable statistical rate which produces a high score only if the prediction obtained good results in all of the four confusion matrix categories (true positives, false negatives, true negatives, and false positives), proportionally both to the size of positive elements and the size of negative elements in the dataset. In this article, we show how MCC produces a more informative and truthful score in evaluating binary classifications than accuracy and F1 score, by first explaining its mathematical properties, and then illustrating the advantages of MCC in six synthetic use cases and in a real genomics scenario. We believe that the Matthews correlation coefficient should be preferred to accuracy and F1 score in evaluating binary classification tasks by all scientific communities.
Given a clinical feature dataset of patients with cancer traits [1, 2], which patients will develop the tumor, and which will not? Considering the gene expression of neuroblastoma patients [3], can we identify which patients are going to survive, and which will not? Evaluating the metagenomic profiles of patients [4], is it possible to discriminate different phenotypes of a complex disease? Answering these questions is the aim of machine learning and computational statistics, nowadays pervasive in the analysis of biological and health care datasets, and many other scientific fields. In particular, these binary classification tasks can be efficiently addressed by supervised machine learning techniques, such as artificial neural networks [5], k-nearest neighbors [6], support vector machines [7], random forest [8], gradient boosting [9], or other methods. Here the word binary means that the data element statuses and prediction outcomes (class labels) can be twofold: in the example of patients, it can mean healthy/sick, or low/high grade tumor. Usually scientists indicate the two classes as the negative and the positive class. The term classification means that the goal of the process is to attribute the correct label to each data instance (sample); the process itself is known as the classifier, or classification algorithm. Scientists have used binary classification to address several questions in genomics in the past, too. Typical cases include the application of machine learning methods to microarray gene expressions [10] or to single-nucleotide polymorphisms (SNPs) [11] to classify particular conditions of patients.
Binary classification can also be used to infer knowledge about biology: for example, computational intelligence applications to ChIP-seq can predict transcription factors [12], applications to epigenomics data can predict enhancer-promoter interactions [13], and applications to microRNA can predict genomic inverted repeats (pseudo-hairpins) [14]. A crucial issue naturally arises, concerning the outcome of a classification process: how to evaluate the classifier performance? A substantial corpus of published works has accumulated over the last decades proposing possible answers to this question, by either introducing a novel measure or comparing a subset of existing ones on a suite of benchmark tasks to highlight pros and cons [15–28], also providing off-the-shelf software packages [29, 30]. Despite the amount of literature dealing with this problem, this question is still an open issue. However, there are several consolidated and well-known facts driving the choice of evaluation measures in current practice. Accuracy, MCC, F1 score. Many researchers think the most reasonable performance metric is the ratio between the number of correctly classified samples and the overall number of samples (for example, [31]). This measure is called accuracy and, by definition, it also works when labels are more than two (multiclass case). However, when the dataset is unbalanced (the number of samples in one class is much larger than the number of samples in the other classes), accuracy cannot be considered a reliable measure anymore, because it provides an overoptimistic estimation of the classifier ability on the majority class [32–35]. An effective solution overcoming the class imbalance issue comes from the Matthews correlation coefficient (MCC), a special case of the ϕ (phi) coefficient [36]. Stemming from the definition of the phi coefficient, a number of metrics have been defined and mainly used for purposes other than classification, for instance as association measures between (discrete) variables, with Cramér's V (or Cramér's ϕ) being one of the most common rates [37]. Originally developed by Matthews in 1975 for the comparison of chemical structures [38], MCC was re-proposed by Baldi and colleagues [39] in 2000 as a standard performance metric for machine learning, with a natural extension to the multiclass case [40]. MCC soon began to establish itself as a successful indicator: for instance, the US Food and Drug Administration (FDA) employed the MCC as the main evaluation measure in the MicroArray II / Sequencing Quality Control (MAQC/SEQC) projects [41, 42]. The effectiveness of MCC has been shown in other scientific fields as well [43, 44]. Although it is widely acknowledged as a reliable metric, there are situations - albeit extreme - where either MCC cannot be defined or it displays large fluctuations [45], due to imbalanced outcomes in the classification. Even if mathematical workarounds and Bayes-based improvements [46] are available for these cases, they have not been widely adopted yet. Shifting context from machine learning to information retrieval, and thus interpreting the positive and negative classes as relevant and irrelevant samples respectively, the recall (that is, the accuracy on the positive class) can be seen as the fraction of relevant samples that are correctly retrieved. Then its dual metric, the precision, can be defined as the fraction of retrieved documents that are relevant.
In the learning setup, the pair precision/recall provides useful insights on the classifier's behaviour [47], and can be more informative than the pair specificity/sensitivity [48]. Meaningfully combining precision and recall generates alternative performance evaluation measures. In particular, their harmonic mean was originally introduced in statistical ecology by Dice [49] and Sørensen [50] independently in 1948, then rediscovered in the 1970s in information theory by van Rijsbergen [51, 52], finally adopting the current notation of F1 measure in 1992 [53]. In the 1990s, in fact, F1 gained popularity in the machine learning community, to the point that it was also re-introduced later in the literature as a novel measure [54]. Nowadays, the F1 measure is widely used in most application areas of machine learning, not only in the binary scenario, but also in multiclass cases. In multiclass cases, researchers can employ the F1 micro/macro averaging procedure [55–60], which can even be targeted for ad-hoc optimization [61]. The distinctive features of the F1 score have been discussed in the literature [62–64]. Two main properties distinguish F1 from MCC. First, F1 varies under class swapping, while MCC is invariant if the positive class is renamed negative and vice versa. This issue can be overcome by extending the macro/micro averaging procedure to the binary case itself [17]: defining the F1 score both on the positive and negative classes and then averaging the two values (macro), or using the average sensitivity and average precision values (micro). The micro/macro averaged F1 is invariant under class swapping and its behaviour is more similar to MCC. However, this procedure is biased [65], and it is still far from being accepted as a standard practice by the community. Second, the F1 score is independent of the number of samples correctly classified as negative. Recently, several scientists have highlighted drawbacks of the F1 measure [66, 67]: in fact, Hand and colleagues [68] claim that alternative measures should be used instead, due to its major conceptual flaws. Despite the criticism, F1 remains one of the most widespread metrics among researchers. For example, when Whalen and colleagues released TargetFinder, a tool to predict enhancer-promoter interactions in genomics, they showed its results measured only by F1 score [13], making it impossible to detect the actual true positive rate and true negative rate of their tests [69]. Alternative metrics. The current most popular and widespread metrics include Cohen's kappa [70–72]: originally developed to test inter-rater reliability, in the last decades Cohen's kappa entered the machine learning community for comparing classifiers' performances. Despite its popularity, in the learning context there are a number of issues causing the kappa measure to produce unreliable results (for instance, its high sensitivity to the distribution of the marginal totals [73–75]), stimulating research for more reliable alternatives [76]. Due to these issues, we chose not to include Cohen's kappa in the present comparison study. In the 2010s, several alternative novel measures have been proposed, either to tackle a particular issue such as imbalance [34, 77], or with a broader purpose. Among them, we mention the confusion entropy [78, 79], a statistical score comparable with MCC [80], and the K measure [81], a theoretically grounded measure that relies on a strong axiomatic base.
In the same period, Powers proposed informedness and markedness to evaluate binary classification confusion matrices [22]. Powers defines informedness as sensitivity + specificity − 1 (that is, true positive rate + true negative rate − 1), to express how informed the predictor is in relation to the opposite condition [22], and markedness as precision + negative predictive value − 1, meaning the probability that the predictor correctly marks a specific condition [22]. Other previously introduced rates for confusion matrix evaluations are the macro average arithmetic (MAvA) [18], the geometric mean (Gmean or G-mean) [82], and the balanced accuracy [83], which all represent classwise weighted accuracy rates. Notwithstanding their effectiveness, none of the aforementioned measures has yet achieved a level of diffusion in the literature that would make it a solid alternative to MCC and the F1 score. Regarding MCC and F1, in fact, Dubey and Tarar [84] state that these two measures "provide more realistic estimates of real-world model performance". However, there are many instances where MCC and F1 score disagree, making it difficult for researchers to draw correct conclusions about the behaviour of the investigated classifier.

MCC, F1 score, and accuracy can be computed when a specific statistical threshold τ for the confusion matrix is set. When no unique confusion matrix threshold exists, researchers can instead take advantage of the classwise rates, true positive rate (or sensitivity, or recall) and true negative rate (or specificity), computed for all the possible confusion matrix thresholds. Different combinations of these two metrics give rise to alternative measures: among them, the area under the receiver operating characteristic curve (AUROC or ROC AUC) [85–91] plays a major role, being a popular performance measure when a single threshold for the confusion matrix is unavailable. However, ROC AUC presents several flaws [92], and it is sensitive to class imbalance [93]. Hand and colleagues proposed improvements to address these issues [94], which were partially rebutted by Ferri and colleagues [95] some years later. Similar to the ROC curve, the precision-recall (PR) curve can be used to test all the possible positive predictive values and sensitivities obtained through a binary classification [96]. Even if less common than the ROC curve, several scientists consider the PR curve more informative than the ROC curve, especially on imbalanced biological and medical datasets [48, 97, 98]. If no confusion matrix threshold is applicable, we suggest that readers evaluate their binary classifiers by checking both the PR AUC and the ROC AUC, focusing on the former [48, 97]; a short sketch follows below. If a confusion matrix threshold is available, instead, we recommend the Matthews correlation coefficient over the F1 score and accuracy.

In this manuscript, we outline the advantages of the Matthews correlation coefficient by first describing its mathematical foundations and those of its competitors accuracy and F1 score ("Notation and mathematical foundations" section), and by exploring their relationships afterwards ("Relationship between measures" section). We decided to focus on accuracy and F1 score because they are the most common metrics used for binary classification in machine learning. We then show some examples to illustrate why the MCC is more robust and reliable than the F1 score, on six synthetic scenarios ("Use cases" section) and a real genomics application ("Genomics scenario: colon cancer gene expression" section).
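Before moving on, here is the threshold-free evaluation sketch promised above, a minimal Python illustration with scikit-learn calls on an invented score vector (not the authors' code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

# Hypothetical negatively imbalanced labels (10 positives, 90 negatives)
# and noisy continuous scores; no single confusion-matrix threshold is set.
rng = np.random.default_rng(0)
y_true = np.array([1] * 10 + [0] * 90)
y_score = 0.3 * y_true + 0.7 * rng.random(100)

print("ROC AUC:", roc_auc_score(y_true, y_score))
precision, recall, _ = precision_recall_curve(y_true, y_score)
print("PR AUC :", auc(recall, precision))   # focus on this one when imbalanced
```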
Finally, we conclude the manuscript with some take-home messages ("Conclusions" section).

Notation and mathematical foundations

Setup. The framework in which we set our investigation is a machine learning task requiring the solution of a binary classification problem. The dataset describing the task is composed of n+ examples in one class, labeled positive, and n− examples in the other class, labeled negative. For instance, in a biomedical case-control study, the healthy individuals are usually labeled negative, while the positive label is usually attributed to the sick patients; as a general practice, given two phenotypes, the positive class corresponds to the abnormal one. This labeling is meaningful, for example, when distinguishing different stages of a tumor. The classification model forecasts the class of each data instance, attributing to each sample its predicted label (positive or negative); thus, at the end of the classification procedure, every sample falls into one of the following four cases:

Actual positives that are correctly predicted positive are called true positives (TP);
Actual positives that are wrongly predicted negative are called false negatives (FN);
Actual negatives that are correctly predicted negative are called true negatives (TN);
Actual negatives that are wrongly predicted positive are called false positives (FP).

This partition can be presented in a 2×2 table called the confusion matrix \( \mathbf{M} = \begin{pmatrix} \text{TP} & \text{FN} \\ \text{FP} & \text{TN} \end{pmatrix} \) (expanded in Table 1), which completely describes the outcome of the classification task.

Table 1 The standard confusion matrix M

Clearly TP + FN = n+ and TN + FP = n−. When one performs a machine learning binary classification, she/he hopes to see a high number of true positives (TP) and true negatives (TN), and few false negatives (FN) and false positives (FP). When \( \mathbf{M} = \begin{pmatrix} n^+ & 0 \\ 0 & n^- \end{pmatrix} \), the classification is perfect. Since analyzing all four categories of the confusion matrix separately would be time-consuming, statisticians have introduced some useful statistical rates able to immediately describe the quality of a prediction [22], aimed at conveying the structure of M in a single figure. A set of these functions act classwise (either actual or predicted), that is, they involve only the two entries of M belonging to the same row or column (Table 2). We cannot consider such measures fully informative, because they use only two categories of the confusion matrix [39].

Table 2 Classwise performance measures

Accuracy. Moving to global metrics that take three or more entries of M as input, many researchers consider accuracy the standard way to go. Accuracy, in fact, represents the ratio between the correctly predicted instances and all the instances in the dataset:

$$ \text{accuracy} = \frac{\text{TP}+\text{TN}}{n^{+} + n^{-}} = \frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}} $$

(worst value: 0; best value: 1)

Accuracy is defined for every confusion matrix M and ranges in the real unit interval [0, 1]; the best value 1.00 corresponds to perfect classification \( \mathbf{M} = \begin{pmatrix} n^+ & 0 \\ 0 & n^- \end{pmatrix} \), and the worst value 0.00 corresponds to perfect misclassification \( \mathbf{M} = \begin{pmatrix} 0 & n^+ \\ n^- & 0 \end{pmatrix} \).
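These definitions translate directly into code. The following minimal Python sketch is our own illustration (labels are assumed encoded as 1 = positive, 0 = negative); its last lines preview the overoptimism on unbalanced data discussed next:

```python
# Confusion-matrix entries and accuracy, written out from the definitions above.
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp, fn, tn, fp

def accuracy(tp, fn, tn, fp):
    return (tp + tn) / (tp + fn + tn + fp)   # worst value 0, best value 1

# A trivial "always predict positive" classifier on an invented 95/5 split
# already scores accuracy 0.95, despite learning nothing about the data.
y_true = [1] * 95 + [0] * 5
tp, fn, tn, fp = confusion_counts(y_true, [1] * 100)
print(accuracy(tp, fn, tn, fp))              # 0.95
```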
As anticipated in the "Background" section, accuracy fails to provide a fair estimate of the classifier performance on class-unbalanced datasets. For any dataset, the proportion of samples belonging to the largest class is called the no-information error rate \( \mathrm{ni} = \frac{\max\{n^+, n^-\}}{n^+ + n^-} \); a binary dataset is (perfectly) balanced if the two classes have the same size, that is, \( \mathrm{ni} = \frac{1}{2} \), and it is unbalanced if one class is much larger than the other, that is, \( \mathrm{ni} \gg \frac{1}{2} \). Suppose now that \( \mathrm{ni} \neq \frac{1}{2} \), and apply the trivial majority classifier: this algorithm learns only which class is the largest in the training set, and attributes this label to all instances. If the largest class is the positive class, the resulting confusion matrix is \( \mathbf{M} = \begin{pmatrix} n^+ & 0 \\ n^- & 0 \end{pmatrix} \), and thus accuracy = ni. If the dataset is highly unbalanced, ni ≈ 1, and thus the accuracy measure gives an unreliable estimation of the goodness of the classifier. Note that, although we achieved this result by means of the trivial classifier, this is quite a common effect: as stated by Blagus and Lusa [99], several classifiers are biased towards the largest class in unbalanced studies. Finally, consider another trivial algorithm, the coin tossing classifier: this classifier randomly attributes to each sample the label positive or negative with probability \( \frac{1}{2} \). Applying the coin tossing classifier to any binary dataset gives an accuracy with expected value \( \frac{1}{2} \), since \( \langle \mathbf{M} \rangle = \begin{pmatrix} n^+/2 & n^+/2 \\ n^-/2 & n^-/2 \end{pmatrix} \).

Matthews correlation coefficient (MCC). As an alternative measure unaffected by the unbalanced dataset issue, the Matthews correlation coefficient is a contingency matrix method of calculating the Pearson product-moment correlation coefficient [22] between actual and predicted values. In terms of the entries of M, MCC reads as follows:

$$ \textrm{MCC} = \frac{\text{TP}\cdot\text{TN}-\text{FP}\cdot\text{FN}}{\sqrt{(\text{TP}+\text{FP})\cdot(\text{TP}+\text{FN})\cdot(\text{TN}+\text{FP})\cdot(\text{TN}+\text{FN})}} $$

(worst value: −1; best value: +1)

MCC is the only binary classification rate that generates a high score only if the binary predictor was able to correctly predict the majority of positive data instances and the majority of negative data instances [80, 97]. It ranges in the interval [−1, +1], with the extreme values −1 and +1 reached in case of perfect misclassification and perfect classification, respectively, while MCC = 0 is the expected value for the coin tossing classifier. A potential problem with MCC lies in the fact that MCC is undefined when a whole row or column of M is zero, as happens in the previously cited case of the trivial majority classifier. However, some mathematical considerations can help meaningfully fill in the gaps for these cases. If M has only one non-zero entry, this means that all samples in the dataset belong to one class, and they are either all correctly (for TP ≠ 0 or TN ≠ 0) or all incorrectly (for FP ≠ 0 or FN ≠ 0) classified. In these situations, MCC = 1 for the former case and MCC = −1 for the latter case. We are then left with the four cases where a row or a column of M is zero, while the other two entries are non-zero.
That is, when M is one of \( \begin{pmatrix} a & 0 \\ b & 0 \end{pmatrix} \), \( \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix} \), \( \begin{pmatrix} 0 & 0 \\ b & a \end{pmatrix} \), or \( \begin{pmatrix} 0 & b \\ 0 & a \end{pmatrix} \), with a, b ≥ 1: in all four cases, MCC takes the indefinite form \( \frac{0}{0} \). To assign a meaningful value of MCC to these four cases, we proceed through a simple approximation via a calculus technique: if we substitute the zero entries in the above matrices with an arbitrarily small value ε, in all four cases we obtain

$$ \begin{aligned} \textrm{MCC} &= \frac{a\epsilon - b\epsilon}{\sqrt{(a+b)(a+\epsilon)(b+\epsilon)(\epsilon+\epsilon)}} \\ &= \frac{\epsilon}{\sqrt{\epsilon}} \cdot \frac{a-b}{\sqrt{2(a+b)(a+\epsilon)(b+\epsilon)}} \\ &\approx \sqrt{\epsilon}\, \frac{a-b}{\sqrt{2ab(a+b)}} \to 0 \quad \text{for } \epsilon \to 0. \end{aligned} $$

With these conventions, MCC is now defined for all confusion matrices M. As a consequence, MCC = 0 for the trivial majority classifier, and 0 is also the expected value for the coin tossing classifier. Finally, in some cases it might be useful to consider the normalized MCC, defined as \( \textrm{nMCC} = \frac{\textrm{MCC}+1}{2} \), which linearly projects the original range into the interval [0, 1], with \( \textrm{nMCC} = \frac{1}{2} \) as the average value for the coin tossing classifier.

F1 score. This metric is the most used member of the parametric family of the F-measures, named after the parameter value β = 1. The F1 score is defined as the harmonic mean of precision and recall (Table 2) and, as a function of M, has the following shape:

$$ F_{1} \; \text{score} = \frac{2 \cdot \text{TP}}{2 \cdot \text{TP} + \text{FP} + \text{FN}} = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} $$

F1 ranges in [0, 1], where the minimum is reached for TP = 0, that is, when all the positive samples are misclassified, and the maximum for FN = FP = 0, that is, for perfect classification. Two main features differentiate F1 from MCC and accuracy: F1 is independent of TN, and it is not symmetric for class swapping. F1 is not defined for confusion matrices \( \mathbf{M} = \begin{pmatrix} 0 & 0 \\ 0 & n^- \end{pmatrix} \): we can set F1 = 1 for these cases. It is also worth mentioning that, when the F1 score is defined as the harmonic mean of precision and recall, the cases with TP = 0, FP > 0, and FN > 0 remain undefined, but using the expression \( \frac{2 \cdot \text{TP}}{2 \cdot \text{TP} + \text{FP} + \text{FN}} \), the F1 score is defined even for these confusion matrices, and its value is zero. When the trivial majority classifier is used, due to the asymmetry of the measure, there are two different cases: if n+ > n−, then \( \mathbf{M} = \begin{pmatrix} n^+ & 0 \\ n^- & 0 \end{pmatrix} \) and \( F_{1} = \frac{2n^+}{2n^+ + n^-} \), while if n− > n+, then \( \mathbf{M} = \begin{pmatrix} 0 & n^+ \\ 0 & n^- \end{pmatrix} \), so that F1 = 0. Further, for the coin tossing algorithm, the expected value is \( F_{1} = \frac{2n^+}{3n^+ + n^-} \).

Relationship between measures

After having introduced the statistical background of the Matthews correlation coefficient and of the two measures to which we compare it (accuracy and F1 score), we explore here the correlation between these three rates.
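Before doing so, it may help to collect the two definitions and the edge-case conventions just derived into code. The following is a minimal Python sketch (our own illustration); its last line previews Use case A1 from the "Use cases" section below:

```python
import math

def mcc(tp, fn, tn, fp):
    # Matthews correlation coefficient with the conventions derived above:
    # +1 / -1 when the whole dataset collapses into a single correctly /
    # wrongly classified cell, and 0 (the epsilon-limit value) when a whole
    # row or column of the confusion matrix is zero.
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if den == 0:
        nonzero = [x for x in (tp, fn, tn, fp) if x > 0]
        if len(nonzero) == 1:
            return 1.0 if (tp > 0 or tn > 0) else -1.0
        return 0.0
    return (tp * tn - fp * fn) / den

def f1(tp, fn, tn, fp):
    # The 2TP / (2TP + FP + FN) form; by convention F1 = 1 when the dataset
    # contains only correctly classified negatives.
    if tp == fp == fn == 0:
        return 1.0
    return 2 * tp / (2 * tp + fp + fn)

print(mcc(90, 0, 0, 10))                    # trivial majority classifier: 0.0
print(mcc(90, 1, 0, 9), f1(90, 1, 0, 9))    # Use case A1: ~ -0.03 vs ~0.95
```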
To explore these statistical correlations, we take advantage of the Pearson correlation coefficient (PCC) [100], a rate particularly suitable for evaluating the linear relationship between two continuous variables [101]. We avoid rank correlation coefficients (such as Spearman's ρ and Kendall's τ [102]) because we are not focusing on the ranks of the two lists. For a given positive integer N ≥ 10, we consider all the possible \( \binom{N+3}{3} \) confusion matrices for a dataset with N samples and, for each matrix, compute the accuracy, MCC, and F1 score, and then the Pearson correlation coefficient for the three sets of values. MCC and accuracy turned out to be strongly correlated, while the Pearson coefficient is less than 0.8 for the correlation of F1 with each of the other two measures (Table 3). Interestingly, the correlation grows with N, but the increments are limited.

Table 3 Correlation between MCC, accuracy, and F1 score values

Similar to what Flach and colleagues did for their isometrics strategy [66], we depict a scatterplot of the MCCs and F1 scores for all the 21 084 251 possible confusion matrices for a toy dataset with 500 samples (Fig. 1). We take advantage of this scatterplot to overview the mutual relations between MCC and F1 score.

Fig. 1 Relationship between MCC and F1 score. Scatterplot of all the 21 084 251 possible confusion matrices for a dataset with 500 samples on the MCC/F1 plane. In red, the (−0.03, 0.95) point corresponding to Use case A1

The two measures are reasonably concordant, but the scatterplot cloud is wide, implying that for each value of F1 score there is a corresponding range of values of MCC and vice versa, although with different widths. In fact, for any value F1 = ϕ, the MCC varies approximately within [ϕ − 1, ϕ], so that the width of the variability range is 1, independent of the value of ϕ. On the other hand, for a given value MCC = μ, the F1 score can range in [0, μ + 1] if μ ≤ 0 and in [μ, 1] if μ > 0, so that the width of the range is 1 − |μ|, that is, it depends on the MCC value μ. Note that a large portion of the above variability is due to the fact that F1 is independent of TN: in general, all matrices \( \mathbf{M} = \begin{pmatrix} \alpha & \beta \\ \gamma & x \end{pmatrix} \) have the same value \( F_{1} = \frac{2\alpha}{2\alpha + \beta + \gamma} \) regardless of the value of x, while the corresponding MCC values range from \( -\sqrt{\frac{\beta\gamma}{(\alpha+\beta)(\alpha+\gamma)}} \) for x = 0 to the asymptotic value \( \frac{\alpha}{\sqrt{(\alpha+\beta)(\alpha+\gamma)}} \) for x → ∞. For example, if we consider only the 63 001 confusion matrices of datasets of size 500 where TP = TN, the Pearson correlation coefficient between F1 and MCC increases to 0.9542254. Overall, accuracy, F1, and MCC show reliably concordant scores for predictions that correctly classify both positives and negatives (having therefore many TP and TN), and for predictions that incorrectly classify both positives and negatives (having therefore few TP and TN); however, these measures show discordant behaviors when the prediction performs well on just one of the two binary classes. In fact, when a prediction displays many true positives but few true negatives (or many true negatives but few true positives), we will show that F1 and accuracy can provide misleading information, while MCC always generates results that reflect the overall prediction issues.
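The exhaustive enumeration behind Table 3 and Fig. 1 is easy to reproduce. A minimal sketch, reusing the mcc and f1 helpers defined above and a smaller N than the 500 used in the text so that it runs in seconds:

```python
import numpy as np

# Enumerate all C(N+3, 3) confusion matrices with N samples and correlate
# the three rates, mirroring the experiment described above.
N = 100
acc_vals, mcc_vals, f1_vals = [], [], []
for tp in range(N + 1):
    for fn in range(N + 1 - tp):
        for tn in range(N + 1 - tp - fn):
            fp = N - tp - fn - tn
            acc_vals.append((tp + tn) / N)
            mcc_vals.append(mcc(tp, fn, tn, fp))
            f1_vals.append(f1(tp, fn, tn, fp))

print(np.corrcoef(acc_vals, mcc_vals)[0, 1])  # strong MCC/accuracy correlation
print(np.corrcoef(f1_vals, mcc_vals)[0, 1])   # weaker MCC/F1 correlation
```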
After having introduced the mathematical foundations of MCC, accuracy, and F1 score, and having explored their relationships, here we describe some synthetic, realistic scenarios in which the MCC results are more informative and truthful than those of the other two measures analyzed.

Positively imbalanced dataset — Use case A1. Consider, for a clinical example, a positively imbalanced dataset made of 9 healthy individuals (negatives = 9%) and 91 sick patients (positives = 91%) (Fig. 2c). Suppose the machine learning classifier generated the following confusion matrix: TP = 90, FN = 1, TN = 0, FP = 9 (Fig. 2b).

Fig. 2 Use case A1 — Positively imbalanced dataset. a Barplot representing accuracy, F1, and normalized Matthews correlation coefficient (normMCC = (MCC + 1) / 2), all in the [0, 1] interval, where 0 is the worst possible score and 1 is the best possible score, applied to the Use case A1 positively imbalanced dataset. b Pie chart representing the amounts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). c Pie chart representing the dataset balance, as the amounts of positive data instances and negative data instances

In this case, the algorithm showed its ability to predict the positive data instances (90 sick patients out of 91 were correctly predicted), but it also displayed its lack of talent in identifying healthy controls (none of the 9 healthy individuals was correctly recognized) (Fig. 2b). Therefore, the overall performance should be judged poor. However, accuracy and F1 showed high values in this case: accuracy = 0.90 and F1 score = 0.95, both close to the best possible value 1.00 in the [0, 1] interval (Fig. 2a). At this point, if one decided to evaluate the performance of this classifier by considering only accuracy and F1 score, he/she would overoptimistically think that the computational method generated excellent predictions. Instead, if one decided to take advantage of the Matthews correlation coefficient in Use case A1, he/she would notice the resulting MCC = −0.03 (Fig. 2a). Seeing a value close to zero in the [−1, +1] interval, he/she would be able to understand that the machine learning method has performed poorly.

Positively imbalanced dataset — Use case A2. Suppose the prediction generated this other confusion matrix: TP = 5, FN = 70, TN = 19, FP = 6 (Additional file 1b). Here the classifier was able to correctly predict negatives (19 healthy individuals out of 25), but was unable to correctly identify positives (only 5 sick patients out of 75). In this case, all three statistical rates showed a low score that emphasized the deficiency in the prediction process (accuracy = 0.24, F1 score = 0.12, and MCC = −0.24).

Balanced dataset — Use case B1. Consider now, as another example, a balanced dataset made of 50 healthy controls (negatives = 50%) and 50 sick patients (positives = 50%) (Additional file 2c). Imagine that the machine learning prediction generated the following confusion matrix: TP = 47, FN = 3, TN = 5, FP = 45 (Additional file 2b). Once again, the algorithm exhibited its ability to predict the positive data instances (47 sick patients out of 50 were correctly predicted), but it also demonstrated its lack of talent in identifying healthy individuals (only 5 healthy controls out of 50 were correctly recognized) (Additional file 2b). Again, the overall performance should be considered mediocre. Checking only F1, one would read a good value (0.66 in the [0, 1] interval), and would be overall satisfied with the prediction (Additional file 2a).
Once again, this score would hide the truth: the classification algorithm has performed poorly on the negative subset. The Matthews correlation coefficient, instead, by showing a score close to random guessing (+0.07 in the [−1, +1] interval), would correctly signal that the machine learning method has been on the wrong track. It is also worth noticing that accuracy would provide an informative result in this case (0.52 in the [0, 1] interval).

Balanced dataset — Use case B2. As another example, imagine the classifier produced the following confusion matrix: TP = 10, FN = 40, TN = 46, FP = 4 (Additional file 3b). Similar to what happened in Use case A2, the method was able to correctly predict many negative cases (46 healthy individuals out of 50), but failed in predicting most positive data instances (only 10 sick patients out of 50 were correctly predicted). As in Use case A2, accuracy, F1, and MCC show average or low scores (accuracy = 0.56, F1 score = 0.31, and MCC = +0.17), correctly signaling the non-optimal performance of the prediction method (Additional file 3a).

Negatively imbalanced dataset — Use case C1. As another example, analyze now this imbalanced dataset made of 90 healthy controls (negatives = 90%) and 10 sick patients (positives = 10%) (Additional file 4c). Assume the classifier prediction produced this confusion matrix: TP = 9, FN = 1, TN = 1, FP = 89 (Additional file 4b). In this case, the method revealed its ability to predict positive data instances (9 sick patients out of 10 were correctly predicted), but it also showed its lack of skill in identifying negative cases (only 1 healthy individual out of 90 was correctly recognized) (Additional file 4b). Again, the overall performance should be judged modest. Similar to Use cases A2 and B2, all three statistical scores generated low results that reflect the mediocre quality of the prediction: F1 score = 0.17 and accuracy = 0.10 in the [0, 1] interval, and MCC = −0.19 in the [−1, +1] interval (Additional file 4a).

Negatively imbalanced dataset — Use case C2. As a last example, suppose you obtained this alternative confusion matrix, through another prediction: TP = 2, FN = 9, TN = 88, FP = 1 (Additional file 5b). Similar to Use cases A1 and B1, the method was able to correctly identify multiple negative data instances (88 healthy individuals out of 89), but unable to correctly predict most of the sick patients (only 2 true positives out of 11 possible elements). Here, accuracy showed a high value: 0.90 in the [0, 1] interval. On the contrary, if one decided to take a look at F1 and at the Matthews correlation coefficient, by noticing their low values (F1 score = 0.29 in the [0, 1] interval and MCC = +0.31 in the [−1, +1] interval), she/he would be correctly informed about the low quality of the prediction (Additional file 5a). As we explained earlier, the key advantage of the Matthews correlation coefficient is that it generates a high score only if the prediction correctly classified a high percentage of negative data instances and a high percentage of positive data instances, for any class balance or imbalance.

Recap. We recap here the results obtained for the six use cases (Table 4). In Use case A1 (positively imbalanced dataset), the machine learning classifier was unable to correctly predict negative data instances, and it therefore produced a confusion matrix featuring few true negatives (TN).
There, accuracy and F1 generated overoptimistic and inflated results, while the Matthews correlation coefficient was the only statistical rate that identified the aforementioned prediction problem, and therefore provided a low, truthful quality score.

Table 4 Recap of the six use cases results

In Use case A2 (positively imbalanced dataset), instead, the method did not correctly predict enough positive data instances, and therefore showed few true positives; here, accuracy, F1, and MCC all correctly reflected the low quality of the prediction. In Use case B1 (balanced dataset), the machine learning method was unable to correctly predict negative data instances, and therefore produced a confusion matrix featuring few true negatives (TN). In this case, F1 generated an overoptimistic result, while accuracy and the MCC correctly produced low results that highlight an issue in the prediction. The classifier did not find enough true positives in Use case B2 (balanced dataset), either; in this case, all the analyzed rates (accuracy, F1, and MCC) produced average or low results that correctly represented the prediction issue. Also in Use case C1 (negatively imbalanced dataset), the machine learning method was unable to correctly recognize negative data instances, and therefore produced a confusion matrix with a low number of true negatives (TN); here, too, all three rates produced low results that indicated a problem in the prediction process. Finally, in the last Use case C2 (negatively imbalanced dataset), the prediction technique failed in predicting positive elements, and therefore its confusion matrix showed a low percentage of true positives. Here accuracy again generated an overoptimistic, misleading, inflated high result, while F1 and MCC produced low scores that correctly reflected the prediction issue. In summary, even if F1 and accuracy were able to reflect the prediction issue in some of the six analyzed use cases, the Matthews correlation coefficient was the only score that correctly indicated the prediction problem in all six examples (Table 4). Particularly, in Use case A1 (a prediction which generated many true positives and few true negatives on a positively imbalanced dataset), the MCC was the only statistical rate able to truthfully highlight the classification problem, while the other two rates showed misleading results (Fig. 2). These results show that, while accuracy and F1 score often generate high scores that do not inform the user about ongoing prediction issues, the MCC is a robust, useful, reliable, truthful statistical measure able to correctly reflect the deficiency of any prediction in any dataset.

Genomics scenario: colon cancer gene expression

In this section, we show a real genomics scenario in which the Matthews correlation coefficient proves more informative than accuracy and F1 score.

Dataset. We trained and applied several machine learning classifiers to gene expression data from the microarray experiments on colon tissue released by Alon et al. [103] and made publicly available within the Partial Least Squares Analyses for Genomics (plsgenomics) R package [104, 105]. The dataset contains 2,000 gene probesets for 62 patients, of which 22 are healthy controls and 40 have colon cancer (35.48% negatives and 64.52% positives) [106].

Experiment design.
We employed machine learning binary classifiers to predict patients and healthy controls in this dataset: gradient boosting [107], decision tree [108], k-nearest neighbors (k-NN) [109], support vector machine (SVM) with linear kernel [7], and support vector machine with radial Gaussian kernel [7]. For gradient boosting and decision tree, we trained the classifiers on a training set containing 80% of randomly selected data instances, and tested them on the test set containing the remaining 20% of the data instances. For k-NN and the SVMs, we split the dataset into a training set (60% of data instances, randomly selected), a validation set (20% of data instances, randomly selected), and a test set (the remaining 20% of data instances). We used the validation set for the hyper-parameter optimization grid search [97]: the number k of neighbors for k-NN, and the cost C hyper-parameter for the SVMs. We trained one model for each candidate hyper-parameter value on the training set, applied it to the validation set, and then picked the model obtaining the highest MCC as the final model to be applied to the test set. For all the classifiers, we repeated the experiment execution ten times and recorded the average results for MCC, F1 score, accuracy, true positive (TP) rate, and true negative (TN) rate. We then ranked the results obtained on the test sets or the validation sets, first based on the MCC, then based on the F1 score, and finally based on the accuracy (Table 5).

Table 5 Colon cancer prediction rankings

Results: different metric, different ranking. The three rankings we employed to report the same results (Table 5) show two interesting aspects. First, the top classifier changes when we consider the ranking based on MCC, F1 score, or accuracy. In the MCC ranking, in fact, the top performing method is gradient boosting (MCC = +0.55), while in the F1 score ranking and in the accuracy ranking the best classifier turned out to be k-NN (F1 score = 0.87 and accuracy = 0.81). The ranks of the other methods change, too: linear SVM is ranked fourth in the MCC ranking and in the accuracy ranking, but second in the F1 score ranking. Decision tree changes its position from one ranking to another, too. As mentioned earlier, for binary classifications like this one, we prefer to focus on the ranking obtained by the MCC, because this rate generates a high score only if the classifier was able to correctly predict the majority of the positive data instances and the majority of the negative data instances. In our example, in fact, the top classifier in the MCC ranking, gradient boosting, did quite well both on the recall (TP rate = 0.85) and on the specificity (TN rate = 0.69). k-NN, the top performing method in both the F1 score ranking and the accuracy ranking, instead obtained an excellent score for recall (TP rate = 0.92) but only a barely sufficient one for specificity (TN rate = 0.52). The F1 score ranking and the accuracy ranking, in conclusion, hide this important flaw of the top classifier: k-NN was unable to correctly predict a high percentage of healthy controls. The MCC ranking, instead, takes this information into account.

Results: F1 score and accuracy can mislead, but MCC does not. The second interesting aspect of the results we obtained relates to the radial SVM (Table 5). If a researcher decided to evaluate the performance of this method by observing only the F1 score and the accuracy, she/he would notice good results (F1 score = 0.75 and accuracy = 0.67) and might be satisfied with them.
These results, in fact, correspond to an F1 score of roughly three quarters and an accuracy of roughly two thirds. However, these values of F1 score and accuracy would mislead the researcher once again: with a closer look at the results, one can notice that the radial SVM performed poorly on the true negatives (TN rate = 0.40), correctly predicting fewer than half of the healthy controls. Similar to the synthetic Use case A1 previously described (Fig. 2 and Table 4), the Matthews correlation coefficient is the only aggregate rate highlighting the weak performance of the classifier here. With its low value (MCC = +0.29), the MCC informs the readers about the poor general outcome of the radial SVM, while the accuracy and F1 score show misleading values.

Conclusions

Scientists use confusion matrices to evaluate binary classification problems; therefore, the availability of a unified statistical rate that is able to correctly represent the quality of a binary prediction is essential. Accuracy and F1 score, although popular, can generate misleading results on imbalanced datasets, because they fail to consider the ratio between positive and negative elements. In this manuscript, we explained the reasons why the Matthews correlation coefficient (MCC) can solve this issue, through its mathematical properties, which incorporate the dataset imbalance, and its invariance under class swapping. The criterion of MCC is intuitive and straightforward: to get a high quality score, the classifier has to make correct predictions both on the majority of the negative cases and on the majority of the positive cases, independently of their ratios in the overall dataset. F1 and accuracy, instead, generate reliable results only when applied to balanced datasets, and produce misleading results when applied to imbalanced cases. For these reasons, we suggest that all researchers working with confusion matrices evaluate their binary classification predictions through the MCC, instead of the F1 score or accuracy. Regarding the limitations of this comparative article, we recognize that additional comparisons with other rates (such as Cohen's kappa [70], Cramér's V [37], and the K measure [81]) would have provided further information about the role of MCC in binary classification evaluation. We preferred to focus on accuracy and F1 score, instead, because they are more commonly used in machine learning studies related to biomedical applications. In the future, we plan to investigate further the relationship between MCC and Cohen's kappa, Cramér's V, the K measure, balanced accuracy, F macro average, and F micro average. The data and the R software code used in this study for the tests and the plots are publicly available at the following web URL: https://github.com/davidechicco/MCC.

AUC: Area under the curve
FDA: Food and Drug Administration
FN: False negatives
FP: False positives
k-NN: k-nearest neighbors
MAQC/SEQC: MicroArray Quality Control / Sequencing Quality Control
MCC: Matthews correlation coefficient
PCC: Pearson correlation coefficient
PLS: Partial least squares
PR: Precision-recall
ROC: Receiver operating characteristic
SVM: Support vector machine
TN: True negatives
TP: True positives

Chicco D, Rovelli C. Computational prediction of diagnosis and feature selection on mesothelioma patient health records. PLoS ONE. 2019; 14(1):0208737. Fernandes K, Chicco D, Cardoso JS, Fernandes J. Supervised deep learning embeddings for the prediction of cervical cancer diagnosis. PeerJ Comput Sci. 2018; 4:154. Maggio V, Chierici M, Jurman G, Furlanello C. Distillation of the clinical algorithm improves prognosis by multi-task deep learning in high-risk neuroblastoma. PLoS ONE.
2018; 13(12):0208924. Fioravanti D, Giarratano Y, Maggio V, Agostinelli C, Chierici M, Jurman G, Furlanello C. Phylogenetic convolutional neural networks in metagenomics. BMC Bioinformatics. 2018; 19(2):49. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015; 521(7553):436. Peterson LE. K-nearest neighbor. Scholarpedia. 2009; 4(2):1883. Hearst MA, Dumais ST, Osuna E, Platt J, Scholkopf B. Support vector machines. IEEE Intell Syst Appl. 1998; 13(4):18–28. Breiman L. Random forests. Mach Learn. 2001; 45(1):5–32. Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Proceedings of KDD 2016 – the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM: 2016. p. 785–94. https://doi.org/10.1145/2939672.2939785. Ressom HW, Varghese RS, Zhang Z, Xuan J, Clarke R. Classification algorithms for phenotype prediction in genomics and proteomics. Front Biosci. 2008; 13:691. Nicodemus KK, Malley JD. Predictor correlation impacts machine learning algorithms: implications for genomic studies. Bioinformatics. 2009; 25(15):1884–90. Karimzadeh M, Hoffman MM. Virtual ChIP-seq: predicting transcription factor binding by learning from the transcriptome. bioRxiv. 2018; 168419. Whalen S, Truty RM, Pollard KS. Enhancer–promoter interactions are encoded by complex genomic signatures on looping chromatin. Nat Genet. 2016; 48(5):488. Ng KLS, Mishra SK. De novo SVM classification of precursor microRNAs from genomic pseudo hairpins using global and intrinsic folding measures. Bioinformatics. 2007; 23(11):1321–30. Demšar J. Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006; 7:1–30. García S, Herrera F. An extension on "Statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. J Mach Learn Res. 2008; 9:2677–94. Sokolova M, Lapalme G. A systematic analysis of performance measures for classification tasks. Informa Process Manag. 2009; 45:427–37. Ferri C, Hernández-Orallo J, Modroiu R. An experimental comparison of performance measures for classification. Pattern Recogn Lett. 2009; 30:27–38. Garcia V, Mollineda RA, Sanchez JS. Theoretical analysis of a performance measure for imbalanced data. In: Proceedings of ICPR 2010 – the IAPR 20th International Conference on Pattern Recognition. IEEE: 2010. p. 617–20. https://doi.org/10.1109/icpr.2010.156. Choi S-S, Cha S-H. A survey of binary similarity and distance measures. J Syst Cybernet Informa. 2010; 8(1):43–8. Japkowicz N, Shah M. Evaluating Learning Algorithms: A Classification Perspective. Cambridge: Cambridge University Press; 2011. Powers DMW. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness & correlation. J Mach Learn Technol. 2011; 2(1):37–63. Vihinen M. How to evaluate performance of prediction methods? Measures and their interpretation in variation effect analysis. BMC Genomics. 2012; 13(4):2. Shin SJ, Kim H, Han S-T. Comparison of the performance evaluations in classification. Int J Adv Res Comput Commun Eng. 2016; 5(8):441–4. Branco P, Torgo L, Ribeiro RP. A survey of predictive modeling on imbalanced domains. ACM Comput Surv (CSUR). 2016; 49(2):31. Ballabio D, Grisoni F, Todeschini R. Multivariate comparison of classification performance measures. Chemom Intell Lab Syst. 2018; 174:33–44. Tharwat A. Classification assessment methods. Appl Comput Informa. 2018:1–13. https://doi.org/10.1016/j.aci.2018.08.003. Luque A, Carrasco A, Martín A, de las Heras A.
The impact of class imbalance in classification performance metrics based on the binary confusion matrix. Pattern Recogn. 2019; 91:216–31. Anagnostopoulos C, Hand DJ, Adams NM. Measuring Classification Performance: the hmeasure Package. Technical report, CRAN. 2019:1–17. Parker C. An analysis of performance measures for binary classifiers. In: Proceedings of IEEE ICDM 2011 – the 11th IEEE International Conference on Data Mining. IEEE: 2011. p. 517–26. https://doi.org/10.1109/icdm.2011.21. Wang L, Chu F, Xie W. Accurate cancer classification using expressions of very few genes. IEEE/ACM Trans Comput Biol Bioinforma. 2007; 4(1):40–53. Sokolova M, Japkowicz N, Szpakowicz S. Beyond accuracy, F-score and ROC: a family of discriminant measures for performance evaluation. In: Proceedings of Advances in Artificial Intelligence (AI 2006), Lecture Notes in Computer Science, vol. 4304. Heidelberg: Springer: 2006. p. 1015–21. Gu Q, Zhu L, Cai Z. Evaluation measures of the classification performance of imbalanced data sets. In: Proceedings of ISICA 2009 – the 4th International Symposium on Computational Intelligence and Intelligent Systems, Communications in Computer and Information Science, vol. 51. Heidelberg: Springer: 2009. p. 461–71. Bekkar M, Djemaa HK, Alitouche TA. Evaluation measures for models assessment over imbalanced data sets. J Informa Eng Appl. 2013; 3(10):27–38. Akosa JS. Predictive accuracy: a misleading performance measure for highly imbalanced data. In: Proceedings of the SAS Global Forum 2017 Conference. Cary, North Carolina: SAS Institute Inc.: 2017. p. 942–2017. Guilford JP. Psychometric Methods. New York City: McGraw-Hill; 1954. Cramér H. Mathematical Methods of Statistics. Princeton: Princeton University Press; 1946. Matthews BW. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta (BBA) Protein Struct. 1975; 405(2):442–51. Baldi P, Brunak S, Chauvin Y, Andersen CA, Nielsen H. Assessing the accuracy of prediction algorithms for classification: an overview. Bioinformatics. 2000; 16(5):412–24. Gorodkin J. Comparing two K-category assignments by a K-category correlation coefficient. Comput Biol Chem. 2004; 28(5–6):367–74. The MicroArray Quality Control (MAQC) Consortium. The MAQC-II Project: a comprehensive study of common practices for the development and validation of microarray-based predictive models. Nat Biotechnol. 2010; 28(8):827–38. The SEQC/MAQC-III Consortium. A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequence Quality Control consortium. Nat Biotechnol. 2014; 32:903–14. Liu Y, Cheng J, Yan C, Wu X, Chen F. Research on the Matthews correlation coefficients metrics of personalized recommendation algorithm evaluation. Int J Hybrid Informa Technol. 2015; 8(1):163–72. Naulaerts S, Dang CC, Ballester PJ. Precision and recall oncology: combining multiple gene mutations for improved identification of drug-sensitive tumours. Oncotarget. 2017; 8(57):97025. Brown JB. Classifiers and their metrics quantified. Mol Inform. 2018; 37:1700127. Boughorbel S, Jarray F, El-Anbari M. Optimal classifier for imbalanced data using Matthews correlation coefficient metric. PLoS ONE. 2017; 12(6):0177678. Buckland M, Gey F. The relationship between recall and precision. J Am Soc Inform Sci. 1994; 45(1):12–9. Saito T, Rehmsmeier M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE. 2015; 10(3):0118432. Dice LR. 
Measures of the amount of ecologic association between species. Ecology. 1945; 26(3):297–302. Sørensen T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. K Dan Vidensk Sels. 1948; 5(4):1–34. van Rijsbergen CJ. Foundations of evaluation. J Doc. 1974; 30:365–73. van Rijsbergen CJ. Information Retrieval. New York City: Butterworths; 1979. Chinchor N. MUC-4 evaluation metrics. In: Proceedings of MUC-4 – the 4th Conference on Message Understanding. McLean: Association for Computational Linguistics: 1992. p. 22–9. Zijdenbos AP, Dawant BM, Margolin RA, Palmer AC. Morphometric analysis of white matter lesions in MR images: method and validation. IEEE Trans Med Imaging. 1994; 13(4):716–24. Tague-Sutcliffe J. The pragmatics of information retrieval experimentation. In: Information Retrieval Experiment, Chap. 5. Amsterdam: Butterworths: 1981. Tague-Sutcliffe J. The pragmatics of information retrieval experimentation, revisited. Informa Process Manag. 1992; 28:467–90. Lewis DD. Evaluating text categorization. In: Proceedings of HLT 1991 – Workshop on Speech and Natural Language. p. 312–8. https://doi.org/10.3115/112405.112471. Lewis DD, Yang Y, Rose TG, Li F. RCV1: a new benchmark collection for text categorization research. J Mach Learn Res. 2004; 5:361–97. Tsoumakas G, Katakis I, Vlahavas IP. Random k-labelsets for multilabel classification. IEEE Trans Knowl Data Eng. 2011; 23(7):1079–89. Pillai I, Fumera G, Roli F. Designing multi-label classifiers that maximize F measures: state of the art. Pattern Recogn. 2017; 61:394–404. Lipton ZC, Elkan C, Narayanaswamy B. Optimal thresholding of classifiers to maximize F1 measure. In: Proceedings of ECML PKDD 2014 – the 2014 Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science, vol. 8725. Heidelberg: Springer: 2014. p. 225–39. Sasaki Y. The truth of the F-measure. Teach Tutor Mater. 2007; 1(5):1–5. Hripcsak G, Rothschild AS. Agreement, the F-measure, and reliability in information retrieval. J Am Med Inform Assoc. 2005; 12(3):296–8. Powers DMW. What the F-measure doesn't measure...: features, flaws, fallacies and fixes. arXiv:1503.06410. 2015. Van Asch V. Macro- and micro-averaged evaluation measures. Technical report. 2013:1–27. Flach PA, Kull M. Precision-Recall-Gain curves: PR analysis done right. In: Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS 2015). Cambridge: MIT Press: 2015. p. 838–46. Yedidia A. Against the F-score. 2016. Blogpost: https://adamyedidia.files.wordpress.com/2014/11/f_score.pdf. Accessed 10 Dec 2019. Hand D, Christen P. A note on using the F-measure for evaluating record linkage algorithms. Stat Comput. 2018; 28:539–47. Xi W, Beer MA. Local epigenomic state cannot discriminate interacting and non-interacting enhancer–promoter pairs with high accuracy. PLoS Comput Biol. 2018; 14(12):1006625. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960; 20(1):37–46. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977; 33(1):159–74. McHugh ML. Interrater reliability: the Kappa statistic. Biochem Med. 2012; 22(3):276–82. Flight L, Julious SA. The disagreeable behaviour of the kappa statistic. Pharm Stat. 2015; 14:74–8. Powers DMW. The problem with Kappa.
In: Proceedings of EACL 2012 – the 13th Conference of the European Chapter of the Association for Computational Linguistics. Avignon: ACL: 2012. p. 345–55. Delgado R, Tibau X-A. Why Cohen's Kappa should be avoided as performance measure in classification. PLoS ONE. 2019; 14(9):0222916. Ben-David A. Comparison of classification accuracy using Cohen's Weighted Kappa. Expert Syst Appl. 2008; 34:825–32. Barandela R, Sánchez JS, García V, Rangel E. Strategies for learning in class imbalance problems. Pattern Recogn. 2003; 36(3):849–51. Wei J-M, Yuan X-J, Hu Q-H, Wang S-Q. A novel measure for evaluating classifiers. Expert Syst Appl. 2010; 37:3799–809. Delgado R, Núñez González JD. Enhancing confusion entropy (CEN) for binary and multiclass classification. PLoS ONE. 2019; 14(1):0210264. Jurman G, Riccadonna S, Furlanello C. A comparison of MCC and CEN error measures in multi-class prediction. PLoS ONE. 2012; 7(8):41882. Sebastiani F. An axiomatically derived measure for the evaluation of classification algorithms. In: Proceedings of ICTIR 2015 – the ACM SIGIR 2015 International Conference on the Theory of Information Retrieval. New York City: ACM: 2015. p. 11–20. Espíndola R, Ebecken N. On extending F-measure and G-mean metrics to multi-class problems. WIT Trans Inf Commun Technol. 2005; 35:25–34. Brodersen KH, Ong CS, Stephan KE, Buhmann JM. The balanced accuracy and its posterior distribution. In: Proceedings of IAPR 2010 – the 20th IAPR International Conference on Pattern Recognition. IEEE: 2010. p. 3121–4. https://doi.org/10.1109/icpr.2010.764. Dubey A, Tarar S. Evaluation of approximate rank-order clustering using Matthews correlation coefficient. Int J Eng Adv Technol. 2018; 8(2):106–13. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982; 143:29–36. Bradley AP. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recogn. 1997; 30:1145–59. Flach PA. The geometry of ROC space: understanding machine learning metrics through ROC isometrics. In: Proceedings of ICML 2003 – the 20th International Conference on Machine Learning. Palo Alto: AAAI Press: 2003. p. 194–201. Huang J, Ling CX. Using AUC and accuracy in evaluating learning algorithms. IEEE Trans Knowl Data Eng. 2005; 17(3):299–310. Fawcett T. An introduction to ROC analysis. Pattern Recogn Lett. 2006; 27(8):861–74. Hand DJ. Evaluating diagnostic tests: the area under the ROC curve and the balance of errors. Stat Med. 2010; 29:1502–10. Suresh Babu N. Various performance measures in binary classification – An overview of ROC study. Int J Innov Sci Eng Technol. 2015; 2(9):596–605. Lobo JM, Jiménez-Valverde A, Real R. AUC: a misleading measure of the performance of predictive distribution models. Glob Ecol Biogeogr. 2008; 17(2):145–51. Hanczar B, Hua J, Sima C, Weinstein J, Bittner M, Dougherty ER. Small-sample precision of ROC-related estimates. Bioinformatics. 2010; 26(6):822–30. Hand DJ. Measuring classifier performance: a coherent alternative to the area under the ROC curve. Mach Learn. 2009; 77(9):103–23. Ferri C, Hernández-Orallo J, Flach PA. A coherent interpretation of AUC as a measure of aggregated classification performance. In: Proceedings of ICML 2011 – the 28th International Conference on Machine Learning. Norristown: Omnipress: 2011. p. 657–64. Keilwagen J, Grosse I, Grau J. Area under precision-recall curves for weighted and unweighted data. PLoS ONE. 2014; 9(3):92209. Chicco D.
Ten quick tips for machine learning in computational biology. BioData Min. 2017; 10(35):1–17. Ozenne B, Subtil F, Maucort-Boulch D. The precision–recall curve overcame the optimism of the receiver operating characteristic curve in rare diseases. J Clin Epidemiol. 2015; 68(8):855–9. Blagus R, Lusa L. Class prediction for high-dimensional class-imbalanced data. BMC Bioinformatics. 2010; 11:523. Sedgwick P. Pearson's correlation coefficient. Br Med J (BMJ). 2012; 345:4483. Hauke J, Kossowski T. Comparison of values of Pearson's and Spearman's correlation coefficients on the same sets of data. Quaest Geographicae. 2011; 30(2):87–93. Chicco D, Ciceri E, Masseroli M. Extended Spearman and Kendall coefficients for gene annotation list correlation. In: International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics. Springer: 2014. p. 19–32. https://doi.org/10.1007/978-3-319-24462-4_2. Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D, Levine AJ. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci (PNAS). 1999; 96(12):6745–50. Boulesteix A-L, Strimmer K. Partial least squares: a versatile tool for the analysis of high-dimensional genomic data. Brief Bioinforma. 2006; 8(1):32–44. Boulesteix A-L, Durif G, Lambert-Lacroix S, Peyre J, Strimmer K. Package 'plsgenomics'. 2018. https://cran.r-project.org/web/packages/plsgenomics/index.html. Accessed 10 Dec 2019. Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D, Levine AJ. Data pertaining to the article 'Broad patterns of gene expression revealed by clustering of tumor and normal colon tissues probed by oligonucleotide arrays'. 2000. http://genomics-pubs.princeton.edu/oncology/affydata/index.html. Accessed 10 Dec 2019. Friedman JH. Stochastic gradient boosting. Comput Stat Data Anal. 2002; 38(4):367–78. Timofeev R. Classification and regression trees (CART) theory and applications. Berlin: Humboldt University; 2004. Beyer K, Goldstein J, Ramakrishnan R, Shaft U. When is "nearest neighbor" meaningful? In: International Conference on Database Theory. Springer: 1999. p. 217–35. https://doi.org/10.1007/3-540-49257-7_15.

The authors thank Julia Lin (University of Toronto) and Samantha Lea Wilson (Princess Margaret Cancer Centre) for their help in the English proof-reading of this manuscript, and Bo Wang (Peter Munk Cardiac Centre) for his helpful suggestions.

Davide Chicco: Krembil Research Institute, Toronto, Ontario, Canada; Peter Munk Cardiac Centre, Toronto, Ontario, Canada. Giuseppe Jurman: Fondazione Bruno Kessler, Trento, Italy.

DC conceived the study, designed and wrote the "Use cases" section, designed and wrote the "Genomics scenario: colon cancer gene expression" section, and reviewed and approved the complete manuscript. GJ designed and wrote the "Background" and "Notation and mathematical foundations" sections, and reviewed and approved the complete manuscript. Both authors read and approved the final manuscript. Correspondence to Davide Chicco. The authors declare they have no competing interests.

Additional file 1 Use case A2 — Positively imbalanced dataset. (a) Barplot representing accuracy, F1 score, and normalized Matthews correlation coefficient (normMCC = (MCC + 1) / 2), all in the [0, 1] interval, where 0 is the worst possible score and 1 is the best possible score, applied to the Use case A2 positively imbalanced dataset.
(b) Pie chart representing the amounts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). (c) Pie chart representing the dataset balance, as the amounts of positive data instances and negative data instances.

Additional file 2 Use case B1 — Balanced dataset. (a) Barplot representing accuracy, F1 score, and normalized Matthews correlation coefficient (normMCC = (MCC + 1) / 2), all in the [0, 1] interval, where 0 is the worst possible score and 1 is the best possible score, applied to the Use case B1 balanced dataset. (b) Pie chart representing the amounts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). (c) Pie chart representing the dataset balance, as the amounts of positive data instances and negative data instances.

Additional file 3 Use case B2 — Balanced dataset. (a) Barplot representing accuracy, F1 score, and normalized Matthews correlation coefficient (normMCC = (MCC + 1) / 2), all in the [0, 1] interval, applied to the Use case B2 balanced dataset. (b) Pie chart representing the amounts of TP, FN, TN, and FP. (c) Pie chart representing the dataset balance.

Additional file 4 Use case C1 — Negatively imbalanced dataset. (a) Barplot representing accuracy, F1 score, and normalized Matthews correlation coefficient (normMCC = (MCC + 1) / 2), all in the [0, 1] interval, where 0 is the worst possible score and 1 is the best possible score, applied to the Use case C1 negatively imbalanced dataset. (b) Pie chart representing the amounts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP). (c) Pie chart representing the dataset balance, as the amounts of positive data instances and negative data instances.

Additional file 5 Use case C2 — Negatively imbalanced dataset. (a) Barplot representing accuracy, F1 score, and normalized Matthews correlation coefficient (normMCC = (MCC + 1) / 2), all in the [0, 1] interval, applied to the Use case C2 negatively imbalanced dataset. (b) Pie chart representing the amounts of TP, FN, TN, and FP. (c) Pie chart representing the dataset balance.

Chicco, D., Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics 21, 6 (2020). doi:10.1186/s12864-019-6413-7

Keywords: Binary classification; F1 score; Dataset imbalance; Comparative and evolutionary genomics
The Laplace distribution

In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is often called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together back-to-back, although the term is also sometimes used to refer to the Gumbel distribution. Equivalently, the Laplace distribution represents the distribution of differences between two independent variables having identical exponential distributions. Increments of Laplace motion, or of a variance gamma process evaluated over the time scale, also have a Laplace distribution, and the distribution is commonly used in signal processing and finance. Extensions such as the truncated skew Laplace probability distribution have been studied and shown to model some real-world problems better than existing models.

History. Laplace published the distribution in 1774 ("Mémoire sur la probabilité des causes par les évènements", Mémoires de l'Académie Royale des Sciences Présentés par Divers Savans, 6, 621-656), noting that the frequency of an error could be expressed as an exponential function of its magnitude once its sign was disregarded; this is often referred to as Laplace's first law of errors. In 1778 he published his second law of errors, wherein he noted that the frequency of an error was proportional to the exponential of the square of its magnitude: the normal law. Keynes (1911, "The principal averages and the laws of error which lead to them", J Roy Stat Soc, 74, 322-331), building on his earlier thesis, showed that the Laplace distribution minimises the absolute deviation from the median, revealing a link between the Laplace distribution and least absolute deviations; see also Wilson (1923, "First and second laws of error", JASA, 18, 143) and Johnson, Kotz, and Balakrishnan (1994).

The standard Laplace distribution. The standard Laplace distribution is a continuous distribution on \( \mathbb{R} \) with probability density function
\[ g(u) = \frac{1}{2} e^{-\left|u\right|}, \quad u \in \mathbb{R}. \]
It is easy to see that \( g \) is a valid PDF: by symmetry,
\[ \int_{-\infty}^\infty \frac{1}{2} e^{-\left|u\right|} \, du = \int_0^\infty e^{-u} \, du = 1. \]
Since \( g(u) = \frac{1}{2} e^{u} \) for \( u \le 0 \) and \( g(u) = \frac{1}{2} e^{-u} \) for \( u \ge 0 \), the density increases on \( (-\infty, 0] \) and decreases on \( [0, \infty) \), with mode \( u = 0 \); it is concave upward on each of these intervals, with a cusp at \( u = 0 \).

Cumulative distribution function. The standard Laplace distribution function \( G \) is given by
\[ G(u) = \begin{cases} \frac{1}{2} e^u, & u \in (-\infty, 0] \\ 1 - \frac{1}{2} e^{-u}, & u \in [0, \infty), \end{cases} \]
and solving \( p = G(u) \) for \( u \) gives the quantile function
\[ G^{-1}(p) = \begin{cases} \ln(2p), & p \in \left(0, \tfrac{1}{2}\right] \\ -\ln[2(1 - p)], & p \in \left[\tfrac{1}{2}, 1\right). \end{cases} \]
By symmetry, \( G^{-1}(1 - p) = -G^{-1}(p) \); in particular, the first quartile is \( q_1 = -\ln 2 \approx -0.6931 \) and the third quartile is \( q_3 = \ln 2 \approx 0.6931 \).

Moment generating function and moments. Moment generating functions (MGFs) have many uses in probability, and they can be defined not only for real-valued random variables but also for vector- or matrix-valued ones. For the standard Laplace distribution, with \( t \in (-1, 1) \),
\[ m(t) = \mathbb{E}\left(e^{tU}\right) = \int_{-\infty}^0 \frac{1}{2} e^{(t + 1)u} \, du + \int_0^\infty \frac{1}{2} e^{(t - 1)u} \, du = \frac{1}{2(t + 1)} - \frac{1}{2(t - 1)} = \frac{1}{1 - t^2}. \]
The odd moments vanish by symmetry, while for even \( n \), symmetry and an integration by parts (or the gamma function) give
\[ \mathbb{E}(U^n) = \frac{1}{2} \int_{-\infty}^0 u^n e^u \, du + \frac{1}{2} \int_0^\infty u^n e^{-u} \, du = \int_0^\infty u^n e^{-u} \, du = n!. \]
Thus the mean is 0, the variance is 2, and the excess kurtosis is 3, so the Laplace distribution has fatter tails than the normal distribution. Replacing the argument \( s \) in the MGF with \( -s \) turns it into a Laplace transform: the Laplace transform (Laplace-Stieltjes transform) of a non-negative random variable \( X \ge 0 \) with density \( f \) is
\[ f^*(s) = \int_0^\infty e^{-st} f(t) \, dt = \mathbb{E}\left[e^{-sX}\right], \]
i.e., it is mathematically the Laplace transform of the PDF. (For example, the inverse Laplace transform option in Maple can be used to invert the gamma MGF back into a density.)

Relations to other distributions.
- If \( V \) and \( W \) are independent and each has the standard exponential distribution, then \( U = V - W \) has the standard Laplace distribution. Indeed, the MGF of \( V \) is \( t \mapsto 1/(1 - t) \) for \( t < 1 \), so the MGF of \( V - W \) is \( 1/[(1 - t)(1 + t)] = 1/(1 - t^2) \) for \( -1 < t < 1 \), which is the standard Laplace MGF. Alternatively, by convolution the PDF of \( V - W \) is \( g(u) = \int_{\mathbb{R}} h(v) h(v - u) \, dv \), where \( h \) denotes the standard exponential PDF extended to all of \( \mathbb{R} \) (so \( h(v) = e^{-v} \) if \( v \ge 0 \) and \( h(v) = 0 \) if \( v < 0 \)). More generally, if \( X, Y \sim \mathrm{Exponential}(\lambda) \) are independent, then \( X - Y \sim \mathrm{Laplace}(0, 1/\lambda) \).
- If \( U \) has the standard Laplace distribution, then \( V = |U| \) has the standard exponential distribution: for \( v \ge 0 \), \( \mathbb{P}(V \le v) = G(v) - G(-v) = 1 - e^{-v} \).
- If \( U \) has the standard Laplace distribution, then \( V = \frac{1}{2} e^U \mathbf{1}(U < 0) + \left(1 - \frac{1}{2} e^{-U}\right) \mathbf{1}(U \ge 0) \) has the standard uniform distribution; conversely, the quantile function leads to the usual random quantile method of simulation. A standard Laplace variate can also be generated as the logarithm of the ratio of two i.i.d. uniform random variables.
- The standard Laplace distribution has a curious connection to the standard normal distribution: if \( (Z_1, Z_2, Z_3, Z_4) \) is a random sample of size 4 from the standard normal distribution, then \( U = Z_1 Z_2 + Z_3 Z_4 \) has the standard Laplace distribution. The MGF of \( Z_1 Z_2 \) is
\[ m_0(t) = \mathbb{E}\left(e^{t Z_1 Z_2}\right) = \int_{\mathbb{R}^2} e^{t x y} \frac{1}{2\pi} e^{-(x^2 + y^2)/2} \, d(x, y); \]
changing to polar coordinates and integrating out \( r \) with a simple substitution (for \( |t| < 1 \)) yields
\[ m_0(t) = \frac{1}{2\pi} \int_0^{2\pi} \frac{1}{1 - t \sin(2\theta)} \, d\theta = \frac{1}{\sqrt{1 - t^2}}, \]
so \( U \) has MGF \( m_0^2(t) = 1/(1 - t^2) \) for \( |t| < 1 \), which again is the standard Laplace MGF.

The general Laplace distribution. The standard Laplace distribution is generalized by adding location and scale parameters: if \( U \) has the standard Laplace distribution, \( a \in \mathbb{R} \), and \( b \in (0, \infty) \), then \( X = a + bU \) has the Laplace distribution with location parameter \( a \) and scale parameter \( b \), written \( \mathrm{Laplace}(a, b) \); the scale parameter is sometimes referred to as the diversity, and Kotz et al. call this parameterization the classical Laplace distribution. The PDF is
\[ f(x) = \frac{1}{2b} \exp\left(-\frac{\left|x - a\right|}{b}\right), \quad x \in \mathbb{R}, \]
the distribution function is
\[ F(x) = \begin{cases} \frac{1}{2} \exp\left(\frac{x - a}{b}\right), & x \in (-\infty, a] \\ 1 - \frac{1}{2} \exp\left(-\frac{x - a}{b}\right), & x \in [a, \infty), \end{cases} \]
and the quantile function is
\[ F^{-1}(p) = \begin{cases} a + b \ln(2p), & 0 < p \le \frac{1}{2} \\ a - b \ln[2(1 - p)], & \frac{1}{2} \le p < 1, \end{cases} \]
with the symmetry \( F^{-1}(1 - p) - a = -\left[F^{-1}(p) - a\right] \). For \( a = 0 \), the positive half-line is exactly an exponential distribution scaled by 1/2. Since \( \mathbb{E}(X) = a + b\,\mathbb{E}(U) \) and \( \operatorname{var}(X) = b^2 \operatorname{var}(U) \), the mean is \( a \) and the variance is \( 2b^2 \). The moments about the location parameter have a simple form: \( \mathbb{E}\left[(X - a)^n\right] = b^n n! \) if \( n \in \mathbb{N} \) is even, and \( \mathbb{E}\left[(X - a)^n\right] = 0 \) if \( n \in \mathbb{N} \) is odd. The MGF is \( M(t) = e^{at} m(bt) = e^{at} / (1 - b^2 t^2) \), defined only for \( |t| < 1/b \); so as the scale (and hence the variance) of a Laplace distribution increases, the interval on which its MGF exists shrinks. Skewness and kurtosis are defined in terms of the standard score and hence are unchanged by a location-scale transformation; as before, the excess kurtosis is \( \operatorname{kur}(X) - 3 = 3 \). If \( c \in \mathbb{R} \) and \( d \in (0, \infty) \), then \( Y = c + dX \) has the Laplace distribution with location parameter \( c + ad \) and scale parameter \( bd \). With known location \( a \), the family is a general exponential family in the scale parameter \( b \), with natural parameter \( -1/b \) and natural statistic \( \left|X - a\right| \). Finally, if \( X \) has the Laplace distribution with location \( a \) and scale \( b \), then
\[ V = \frac{1}{2} \exp\left(\frac{X - a}{b}\right) \mathbf{1}(X < a) + \left[1 - \frac{1}{2} \exp\left(-\frac{X - a}{b}\right)\right] \mathbf{1}(X \ge a) \]
has the standard uniform distribution.

Inference and simulation. Given \( N \) independent and identically distributed samples \( x_1, x_2, \ldots, x_N \), the maximum likelihood estimator of the location parameter \( \mu \) is the sample median, and the maximum likelihood estimator of \( b \) is the mean absolute deviation from the median, \( \hat{b} = \frac{1}{N} \sum_{i=1}^N \left|x_i - \hat{\mu}\right| \). Goodness-of-fit tests are available for the Laplace (double exponential) distribution. For simulation: given \( U \) drawn from the uniform distribution on \( (-1/2, 1/2) \), the variable \( X = \mu - b \operatorname{sgn}(U) \ln(1 - 2|U|) \) has the \( \mathrm{Laplace}(\mu, b) \) distribution. The distribution is also easy to simulate as a difference of two i.i.d. exponential variables or via the random quantile method above, since the density is easy to integrate (if one distinguishes the two symmetric cases) thanks to the absolute value function.

Applications and generalizations. The Laplacian distribution has been used in speech recognition to model priors on DFT coefficients [5] and in JPEG image compression to model AC coefficients [6] generated by a DCT. The addition of noise drawn from a Laplacian distribution, with scaling parameter appropriate to a function's sensitivity, to the output of a statistical database query is the most common means to provide differential privacy. Sargan distributions are a system of distributions of which the Laplace distribution is a core member. Among multivariate and skewed extensions, the multivariate skew Laplace (MSL) distribution can handle both heavy tails and skewness while retaining a simple form compared with other multivariate skew distributions, and its fundamental properties have been studied. The normal-Laplace family \( \mathrm{NL}(\alpha, \beta, \mu, \sigma^2) \) interpolates between the two laws: in one limit it tends to a Laplace distribution, and as \( \alpha, \beta \to \infty \) it tends to a normal distribution; if only \( \beta = \infty \), the distribution is that of the sum of independent normal and exponential components and has a fatter tail than the normal only in the upper tail. A Generalized Laplace distribution has also been defined as a location-scale mixture of normal distributions, \( r_t \sim \mathrm{ML}(\mu_t, H_t) \), with conditional mean \( \mu_t \), conditional covariance \( H_t \), and standard exponential mixing distribution. Exact MGFs have likewise been derived for linear combinations of standard Laplace order statistics, using the known expressions for the best linear unbiased estimators (BLUEs) of the location and scale parameters based on progressively Type-II right censored samples. Kotz et al., "The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance" (Birkhäuser), give a comprehensive treatment.

Exercise. (a) A random variable \( X \) has a Laplace distribution if its PDF is \( f_X(x) = \frac{\lambda}{2} e^{-\lambda |x|} \). Compute the moment generating function of \( X \). (b) Now let \( Y \) and \( Z \) be independent exponential random variables and let \( W = Y - Z \); show that \( W \) has a Laplace distribution.
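The generation methods described above are easy to check numerically. Here is a small sketch (assuming NumPy; the variable names and sample size are illustrative, not from the source) that draws standard Laplace samples three ways and verifies the mean, variance, and excess kurtosis derived earlier:

```python
# Three ways to generate standard Laplace variates, with moment checks.
import numpy as np

rng = np.random.default_rng(0)
loc, scale, n = 0.0, 1.0, 200_000

# (1) Inverse-CDF method: U ~ Uniform(-1/2, 1/2),
#     X = loc - scale * sgn(U) * ln(1 - 2|U|).
u = rng.uniform(-0.5, 0.5, n)
x1 = loc - scale * np.sign(u) * np.log(1 - 2 * np.abs(u))

# (2) Difference of two i.i.d. exponential variables.
x2 = rng.exponential(scale, n) - rng.exponential(scale, n)

# (3) Logarithm of the ratio of two i.i.d. Uniform(0, 1) variables.
x3 = np.log(rng.uniform(size=n) / rng.uniform(size=n))

for x in (x1, x2, x3):
    # Expect mean ~ 0, variance ~ 2*scale^2, excess kurtosis ~ 3.
    m, v = x.mean(), x.var()
    kurt = ((x - m) ** 4).mean() / v**2 - 3
    print(f"mean={m:+.3f}  var={v:.3f}  excess kurtosis={kurt:.3f}")
```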
Moment of Inertia of Continuous Bodies - JEE Important Topic

What is the Moment of Inertia?
The moment of inertia is a measure of an object's resistance to changes in its rotation. It is the rotational analogue of mass, which quantifies an object's resistance to changes in its linear motion. The moment of inertia depends on the shape of the object and the distribution of its mass. For example, a sphere has a lower moment of inertia than a cylinder with the same mass and radius because the sphere's mass is more evenly distributed around its centre. The moment of inertia is important in many fields, including Engineering, Physics, and Astronomy. In Engineering, it is used to calculate the strength of beams and columns. In Physics, it is used to study the dynamics of rotating objects.

Formula of Moment of Inertia
The moment of inertia (I) is the sum of the products of the mass (m) of each particle and the square of its distance (r) from the axis of rotation:
$I=\sum_i m_i r_i^{2}$
This quantity plays the role of mass in the rotational analogue of Newton's second law: just as the net force on a body equals its mass times its linear acceleration, the net torque on a body equals its moment of inertia times its angular acceleration ($\tau = I\alpha$). The formula can be used to calculate the moment of inertia for any object, provided that the masses and distances of all particles from the axis are known.

Factors on which Moment of Inertia Depends
There are three primary factors which affect the moment of inertia of an object:
- The distribution of mass within the object: how the mass is distributed determines how much torque is required to change its rotation.
- The shape of the object: a more compact shape has a lower moment of inertia than a less compact one.
- The size of the object: a larger object has a higher moment of inertia than a smaller one.

Moment of Inertia of Continuous Bodies
The moment of inertia of a continuous body is the sum of the moments of inertia of all its constituent parts. The total moment of inertia is a measure of an object's resistance to changes in its rotation: a body with a large moment of inertia requires more torque to change its rotation than a body with a small moment of inertia.

The moment of inertia of a continuous body depends, first, on the distribution of mass within that body. A symmetric body whose mass is evenly distributed about the axis has a lower moment of inertia: a sphere, for instance, has a lower moment of inertia than an ellipsoid of the same mass, because the sphere's mass is evenly distributed throughout its volume while the ellipsoid carries more of its mass far from the axis. The second factor is the shape of the body: a slender body has a lower moment of inertia about its long axis than a squat body, because it has less mass far from that axis. Finally, the third factor is the location of the axis relative to the body's centre of gravity: a body rotating about an axis far from its centre of gravity has a higher moment of inertia than one rotating about an axis through its centre of gravity, because each mass element contributes in proportion to the square of its distance from the axis.

Theorems to Calculate Moment of Inertia of Bodies
If we observe the moment of inertia of different bodies, we find that the position of the axis of rotation affects the moment of inertia, as the short numerical sketch below illustrates.
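Here is a minimal sketch of this axis dependence (the masses and geometry are made up for illustration, not from the source), applying $I=\sum m r^{2}$ to point masses about two different axes:

```python
# Four 1 kg point masses at the corners of a 2 m square; the axis is
# perpendicular to the plane of the masses and passes through axis_point.
import numpy as np

masses = np.array([1.0, 1.0, 1.0, 1.0])                  # kg
points = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # (x, y) in metres

def moment_of_inertia(masses, points, axis_point):
    """I = sum of m_i * r_i^2, with r_i the distance from the axis."""
    r2 = ((points - axis_point) ** 2).sum(axis=1)
    return (masses * r2).sum()

I_centre = moment_of_inertia(masses, points, np.array([0.0, 0.0]))  # 8 kg m^2
I_corner = moment_of_inertia(masses, points, np.array([1.0, 1.0]))  # 16 kg m^2

# Consistent with the parallel axis theorem stated next:
# I_corner = I_centre + M d^2 = 8 + 4 * (sqrt(2))^2 = 16.
print(I_centre, I_corner)
```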
To find the moment of inertia about any given axis, we have theorems which serve this purpose well.

Parallel Axis Theorem
The parallel axis theorem states that the moment of inertia of a body about any axis is equal to the moment of inertia about a parallel axis through the centre of mass plus the product of the mass and the square of the distance between the axes. In other words, this theorem allows us to calculate the moment of inertia of a body about an axis that does not pass through its centre of gravity. This can be very useful when dealing with objects that have irregular shapes or are not symmetrical. The moment of inertia given by the parallel axis theorem is
\[ I = I_{cm} + M d^{2} \]
where d is the distance between the two axes, M is the mass of the body, and $I_{cm}$ is the moment of inertia about the parallel axis through the centre of mass.

Perpendicular Axis Theorem
The perpendicular axis theorem states that the moment of inertia of a planar object about an axis perpendicular to its plane is equal to the sum of the moments of inertia about any two mutually perpendicular axes lying in that plane, each passing through the point where the perpendicular axis intersects the plane:
\[ I_{z} = I_{x} + I_{y} \]

Moment of Inertia for Different Bodies
- Uniform rod of length L: $I=\dfrac{1}{12}M{{L}^{2}}$ about an axis through its centre; $I=\dfrac{1}{3}M{{L}^{2}}$ about an axis through one end.
- Solid cylinder of radius R and length L: $I=\dfrac{1}{2}M{{R}^{2}}$ about its symmetry axis; $I=\dfrac{1}{4}M{{R}^{2}}+\dfrac{1}{12}M{{L}^{2}}$ about a central diameter.
- Thin-walled cylinder of radius R: $I=M{{R}^{2}}$ about its symmetry axis.
- Rectangular object of height h and width w: ${{I}_{c}}=\dfrac{1}{12}m({{h}^{2}}+{{w}^{2}})$ about the perpendicular axis through its centre; ${{I}_{d}}=\dfrac{1}{12}m({{h}^{2}}+{{w}^{2}})$ about the diameter.
- Circular ring of radius R: $I=M{{R}^{2}}$ about its symmetry axis; $I=\dfrac{M{{R}^{2}}}{2}$ about a diameter.

The moment of inertia is the rotational analogue of mass, which quantifies an object's resistance to changes in its linear motion. It is important in many fields, including Engineering, Physics, and Astronomy. The moment of inertia depends on the shape, size, and distribution of mass of an object: a larger or less compact object requires more torque to change its rotation than a smaller one. For a continuous body, it depends on the distribution of mass within that body: a symmetric body such as a sphere, whose mass is evenly distributed throughout its volume, has a lower moment of inertia than an ellipsoid, which carries more mass far from the axis. The shape of the body matters as well: a slender body has a lower moment of inertia about its long axis than a squat one, and the moment of inertia grows as the axis of rotation moves away from the centre of gravity.

FAQs on Moment of Inertia of Continuous Bodies - JEE Important Topic
1. Does the moment of inertia of a rigid body change with the speed of rotation?
The moment of inertia of a rigid body is determined solely by the distribution of mass about the axis of rotation and is independent of the rotational speed. As a result, the moment of inertia of a rigid body does not change with rotational speed.
2. Why do we calculate the moment of inertia?
The moment of inertia is an important tool in rotational motion. For any object or body in rotational motion, the amount of torque required to achieve a given angular acceleration depends on its moment of inertia. The moment of inertia of a rigid body therefore helps us determine quantities such as the torque, or rotational force, needed to produce a desired motion.
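A quick numeric illustration of this relation, $\tau = I\alpha$ (the values are hypothetical, not from the source):

```python
# Torque needed to spin a solid disc up at a given angular acceleration,
# using tau = I * alpha with I = (1/2) M R^2 for a disc about its axis.
M, R = 2.0, 0.1            # mass in kg, radius in m (made-up values)
alpha = 5.0                # desired angular acceleration in rad/s^2

I_disc = 0.5 * M * R**2    # 0.01 kg m^2
tau = I_disc * alpha       # 0.05 N m
print(f"I = {I_disc} kg m^2, required torque = {tau} N m")
```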
multiplicative formal group law

Euler's addition formula for a certain elliptic integral yields the power series
\[ \frac{X\sqrt{1 - Y^{4}} + Y\sqrt{1 - X^{4}}}{1 + X^{2}Y^{2}} \in \mathbb{Z}[1/2][[X, Y]]. \]

Quillen's work on formal group laws and complex cobordism theory (Doug Ravenel): Quillen's 6 page paper; enter the formal group law; show it is universal; the Brown-Peterson and K-theory spectra.

"If you're willing to look at non-algebraically-closed bases, then even \( \mathbf{G}_{\mathrm{m}} \), the multiplicative formal group, doesn't show up in the formal group of an Abelian variety over \( \mathbb{F}_p \)." (Lubin)

In characteristic zero, the additive and multiplicative formal groups are not isomorphic. To prove this, we need an invariant which can be used to tell two formal groups apart.

Examples of formal group laws over a ring \( R \):
(b) \( F(x, y) = x + y + uxy \) (where \( u \) is a unit in \( R \)), the multiplicative formal group law, so named because \( 1 + uF = (1 + ux)(1 + uy) \).
(c) \( F(x, y) = (x + y)/(1 + xy) \).
(d) \( F(x, y) = \left(x\sqrt{1 - y^{4}} + y\sqrt{1 - x^{4}}\right)/(1 + x^{2}y^{2}) \), a formal group law over \( \mathbb{Z}[1/2] \).

The additive formal group law \( \hat{\mathbb{G}}_a \) is simply \( X + Y \), while the multiplicative formal group law \( \hat{\mathbb{G}}_m \) is \( X + Y + XY \). (This expression is \( (1 + X)(1 + Y) - 1 \), and therefore represents multiplication for a parameter centered around 0 rather than 1.) Note that for any commutative formal group law, the corresponding Lie algebra is an abelian Lie algebra; in this sense, taking the Lie algebra forgets much of the structure. In particular, the Lie algebra cannot distinguish between the additive formal group law, the multiplicative formal group law, and elliptic curve group laws. In fact, all formal group laws are commutative as long as \( A \) has no elements that are both nilpotent and torsion. However, we will simply assume that a "formal group law" is commutative (and one-dimensional) from now on. A less trivial example is the construction of the group law associated to an elliptic curve, which will be given in §4. The formal group law of an elliptic curve has seen recent applications to computational algebraic geometry in the work of Cou-…

The formal group law of topological K-theory which is induced by its canonical complex orientation is the multiplicative formal group law; see, at topological K-theory, the section on complex orientation and formal group law.

With formal group laws defined, we should now discuss maps between them. First, we need a brief digression concerning endomorphisms of a formal group law. Definition. Let \( F, G \) be formal group laws over a ring \( A \).
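Where it helps to see the axioms concretely, here is a short symbolic check (an illustration, not from the source) that the multiplicative formal group law satisfies the unit, commutativity, and associativity axioms exactly, along with the identity \( 1 + F(x, y) = (1 + x)(1 + y) \) that gives it its name:

```python
# Symbolic verification of the formal-group-law axioms for F = X + Y + XY.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = lambda a, b: a + b + a * b   # multiplicative formal group law (u = 1)

assert sp.expand(F(x, 0)) == x                          # unit: F(x, 0) = x
assert sp.expand(F(x, y) - F(y, x)) == 0                # commutativity
assert sp.expand(F(F(x, y), z) - F(x, F(y, z))) == 0    # associativity
assert sp.expand(1 + F(x, y) - (1 + x) * (1 + y)) == 0  # 1 + F = (1+x)(1+y)
```

For this polynomial law the axioms hold exactly; for power-series laws such as example (d), the same checks would be carried out modulo terms of high degree.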
New answers tagged "launch"

Advantages of launching a very large rocket while submerged, buoyant, in a body of water
(Tags: launch, rockets, launch-site)

Launching from international waters. In addition to the factors mentioned in other posts, there's an additional benefit to launching from the ocean: you can launch from international waters. This could be handy if you're launching a rocket that uses some form of material or process that is illegal or heavily regulated for civilian use in your home country. …

Early in the development of the Polaris missile system, there was a lot of work on launching a missile from underwater. Polaris was a nuclear deterrent designed to rapidly launch multiple missiles from a fully submerged submarine. Staying submerged until a boat-load of launches was complete was a key goal: the boat was to be very difficult to track and destroy. (Bob Jacobsen)

Sea Dragon. The very large rocket was probably Sea Dragon, and the advantages were more about allowing a massive vehicle to be built at all than about inherent advantages in starting underwater. (Image credits in the original post.) Building the launch vehicle on a slipway and floating it to the launch site bypasses a number of size constraints in building and moving large … (GremlinWranger)

You might be thinking of the Sea Dragon project, although this never got past the conceptual / early planning stages. Some of the advantages of a sea launch are that you can be far away from habitation, and the water can provide cooling and acoustic damping during launch. But the disadvantages are also serious. You are even more at the mercy of the weather …

Why does launching east result in an orbital inclination equal to the latitude of the launch site?
(Tags: orbital-mechanics, launch, launch-site, inclination)

Another way to look at it: consider an object orbiting Earth at some inclination relative to the equator, and consider its path projected onto Earth's surface. If you plot that path as latitude vs. longitude, you get a sinusoid. You can see this in views of mission control rooms, where an orbiting spacecraft's track is displayed on an equirectangular world map. … (Anthony X)

Is there any advantage in launching spacecraft from a high latitude, or why was Plesetsk built so far north?
(Tags: launch, russia, soviet-union, spaceport)

Molniya orbits. As written by Aurovrata, Plesetsk is actually ideally situated to launch satellites into Molniya orbits, and as a result saw many more launches than Baikonur. A Molniya orbit (or better, THE Molniya orbit) has an inclination of 63.4°, and the launch pads are located at about 62.9° N. So launching nearly straight east (which gives you the maximum input from … (CallMeTom)

Reductio ad absurdum. If you could choose freely on which circle to orbit, the most convenient place to take off from would be the North Pole: that would set the circle diameter to zero. You would then climb to whichever altitude you pleased and remain there, immobile in space, for as long as you wanted. How cool would that be? An attempt at analogy: in … (kuroi neko)

The short answer is that a spacecraft is attracted to the center point of the Earth, not to the Earth's rotational axis. "[I]t would make sense to me that launching east would result in a 0° inclination with the orbital plane raised so it's parallel to the equator but above or below it." Here's one explanation of why that wouldn't happen that you might … (Tanner Swett)

The center of the Earth is, to any reasonable approximation, at one of the focus points of an elliptical orbit. For a circular orbit, there is only one focus point, so the center of the Earth is at the center of the orbit. The plane of the orbit thus intersects both the center of the Earth and the launch site. If the launch site was on the …

Earth's gravity pulls you towards the centre of the Earth, so if you're above Kennedy, that pull has a southwards component as well as the component towards the Earth's axis. So your path curves south, so that in the end the orbit spends equal amounts of time north and south of the equator, and the pulls in that direction balance out over time. All orbits … (Steve Linton)

Why can't we launch from space?
(Tags: launch, orbit, physics, mass)

An early concept for the Apollo mission relied on launch from space. Several Nova rockets with 8 F-1 engines for the first stage would have been used to lift the parts into a low Earth orbit. After assembling all parts, the whole stack would launch from orbit to the Moon. The return capsule would land on the Moon and return to Earth from the surface. …

We do "launch from space". Indeed, that's exactly what Apollo did, for instance: they got themselves into low Earth orbit, and then went from that orbit to the Moon. And the numbers behind this tell you why this is not some magic bullet: the Saturn V stack had a wet (fueled) mass of about 3,000 tonnes, and it could put about 140 tonnes into LEO. Getting to … (tfb)

Why aren't sea launches used more often?
(Tags: launch, water, launch-site, launchpad)

Tall, long, heavy things filled with explosive liquid, on a pitching base with a pitching center of gravity and extended mechanical leverage: not good. Metals can only flex so much.

What does S represent in Chris Hadfield's D = ½ ρ v² S?
(Tags: launch, astronauts, physics, atmospheric-drag)

As you correctly noted, the S he's using is a combination of effective surface area and drag coefficient. All the literature I've come across uses S for surface area and expresses drag as \( D = C_D \frac{1}{2}\rho V^2 S \), where \( C_D \) is the drag coefficient and S is the effective surface area. Combining these two into one parameter makes sense to me, however: \( C_D \) … (Alexander Vandenberghe)
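A quick numeric sketch of that combined-parameter form of the drag equation (all values below are made up for illustration):

```python
# Drag in Hadfield's form D = (1/2) * rho * v^2 * S, where S folds the
# drag coefficient into an effective area: S = C_D * A.
rho = 1.225      # air density at sea level, kg/m^3
v = 250.0        # airspeed, m/s
C_D = 0.3        # hypothetical drag coefficient
A = 10.0         # hypothetical reference area, m^2

S = C_D * A                      # effective area, m^2
D = 0.5 * rho * v**2 * S         # drag force, N
print(f"D = {D / 1000:.1f} kN")  # ~114.8 kN
```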
Reverse Synchronous Transmission of Electrical Signals Based on Parallel Injection and Series Pickup

Hong Zhang* | Xinxin Lu | Xiuye Yin
School of Network Engineering, Zhoukou Normal University, Zhoukou 466001, China
School of Computer Science and Technology, Zhoukou Normal University, Zhoukou 466001, China
[email protected]

Abstract: To eliminate interference with the transmission of electrical signals, this paper puts forward a reverse synchronous transmission (RST) control method based on parallel injection and series pickup. Firstly, the synchronous transmission mechanism of electrical signals was analyzed, followed by the design of the framework and workflow of signal transmission. Next, an RST channel model was established for electrical signals, and the transmission parameters were configured based on the transmission properties of these signals. Through alternating current (AC) impedance analysis, the Laplace transform was performed on the transmission loop to increase the voltage of the transmission channel and to raise the signal-to-noise ratio (SNR) of the voltage across the resistor. Finally, a voltage comparator was adopted to obtain the digital information of the baseband signal, and the power signal was transmitted to the RST channel, completing the RST control of electrical signals. The experimental results show that the transmission efficiency of the system was 0.7488, and that the reverse transmission of electrical signals was delayed by only 5 ms when the intensity of electromagnetic interference was 2.0 μT. Through parallel injection and series pickup, the proposed system can effectively realize the RST of electrical signals without changing the topology of the transmission system.

Keywords: parallel injection, series pickup, electrical signal, reverse synchronous transmission (RST), alternating current (AC) impedance

1. Introduction

Signal transmission plays an important role in electricity transmission in inductively coupled power transfer (ICPT) systems. The signals can feed back the working conditions or transmit control commands. Traditionally, ICPT systems synchronously transmit electrical signals by three methods: single-channel technology, dual-channel technology, and radio-frequency technology. Judging by system integration and reliability, single-channel technology has relatively high research value and good application prospects. Based on the signal transmission mode, single-channel technology falls into two categories: energy modulation and carrier modulation. In energy modulation, the electrical signals are transmitted by identifying and creating an energy envelope. This transmission mode can be further divided into voltage regulation, frequency modulation, and tuning. However, energy modulation suffers from poor electrical quality and low transmission efficiency. In carrier modulation, the signal spectrum is expanded through sine-wave carrier modulation, facilitating signal transmission in the channel; the modulated signal is then transmitted over the energy channel, which serves as the signal channel as well. This modulation method has been widely used and researched for its limited impact on power transmission. However, the electrical signals need to be transmitted through an independent channel rather than a general channel; otherwise, the power frequency will interfere with the channel operation, causing errors in signal transmission.
To solve the above problems, this paper proposes a reverse synchronous transmission (RST) method for electrical signals based on parallel injection and series pickup. Firstly, an RST channel was constructed after analyzing the RST process of electrical signals. The transmission parameters of the modulated signal were adjusted through parallel injection and series pickup. Through AC impedance analysis, the Laplace transform was performed on the transmission loop to increase the voltage of the transmission channel and to raise the SNR of the voltage across the resistor. Finally, the signal was demodulated by the voltage comparator to obtain the digital information of the baseband signal, and the power signal was transmitted to the RST channel.

2. RST Mechanism Based on Parallel Injection and Series Pickup

This paper selects a classic ICPT system [1] as the experimental system. The basic architecture of the system is given in Figure 1. The ICPT system is divided into a primary side and a secondary side. The current from the DC power supply on the primary side is converted to high-frequency AC by the inverter, transmitted to the receiving coil of the secondary side, and supplied to the electrical equipment after compensation, rectification, and filtering. Take the compensation architecture with series pickup on the primary side and parallel injection on the secondary side as an example. The decoupled equivalent circuit [2] of this architecture is shown in Figure 2. Based on the principle of reflected impedance, the signals are transmitted from the secondary side to the primary side. The voltage equations of the primary and secondary loops can be written as:

$\left\{ \begin{align} & {{Z}_{11}}{{I}_{1}}-j\omega M{{I}_{2}}={{V}_{1}} \\ & -j\omega M{{I}_{1}}+{{Z}_{22}}{{I}_{2}}=0 \\ \end{align} \right.$ (1)

where Z11 and Z22 are the self-impedances of the primary and secondary loops, respectively. The following can be derived from formula (1):

${{I}_{1}}=\frac{{{V}_{1}}}{j\left( \omega {{L}_{1}}+\omega {{L}_{2}}-\frac{1}{\omega {{C}_{1}}}-\frac{\omega {{C}_{2}}R_{L}^{2}}{1+{{\omega }^{2}}C_{2}^{2}R_{L}^{2}} \right)+\frac{{{R}_{L}}}{1+{{\omega }^{2}}C_{2}^{2}R_{L}^{2}}}$ (2)

The above analysis shows that the current is affected by the primary-side compensation capacitor C1 [3], the secondary-side compensation capacitor C2, and the load RL. To complete the RST of electrical signals under load changes, formulas (1) and (2) were combined to design a control algorithm for the RST of electrical signals in the ICPT system [4] (Figure 3), which adds a signal conditioning structure and a signal modulation structure to the primary and secondary sides, respectively.

Figure 1. The basic architecture of the ICPT system
Figure 2. The equivalent circuit of the primary and secondary sides of the ICPT system

The RST loop consists of the DC input voltage Ud, a high-frequency inverter composed of S1-S4, the primary-side series pickup capacitor C1, the secondary-side parallel injection capacitor C2, a two-way switch that cuts the signal modulation capacitor C0 in and out, a variable load RL, a current energy detection circuit, a drive module, and a signal demodulation structure. As shown in Figure 3, the DC current is converted into AC current by the high-frequency inverter and transmitted to the secondary side through inductive coupling, before powering the load via pickup and filtering. The load size is evaluated by the current energy detection circuit.
Based on the detected information, the controller [5-9] cuts the signal modulation capacitor C0 in or out via the drive module and the two-way switch, providing the primary-side current with signal features. Finally, the signals are restored and extracted through the signal conditioning structure.

Figure 3. The workflow of the RST of electrical signals

3. Modeling of RST Channel

If the architecture and parameters (except for the load) of the ICPT system are fixed, the primary-side currents with the signal modulation capacitor C0 cut into or out of the system can be calculated by the principle of reflected impedance:

(1) When the signal modulation capacitor C0 is cut out of the system, the equivalent circuit diagram of the system is shown in Figure 2. If $\omega^{2} L_{1} C_{1}=1$ and $\omega^{2} L_{2} C_{2}=1$, then the primary-side current is:

${{I}_{p1}}=\frac{{{V}_{1}}{{L}_{2}}}{{{\omega }^{2}}{{M}^{2}}C_{2}^{2}{{R}_{L}}}$ (3)

(2) When the signal modulation capacitor C0 is cut into the system, the equivalent circuit diagram of the system is shown in Figure 4.

Figure 4. The equivalent circuit diagram of the system with C0 cut in

As shown in Figure 4, the overall impedance of the system can be expressed as:

${{Z}_{2}}={{Z}_{11}}+\frac{{{\left( \omega M \right)}^{2}}}{j\omega {{L}_{2}}+j\omega \left( {{C}_{2}}+{{C}_{0}} \right)}$ (4)

If $\omega^{2} L_{1} C_{1}=1$ and $\omega^{2} L_{2} C_{2}=1$, then the primary-side current is:

${{I}_{p2}}=\frac{{{V}_{1}}}{\left| \frac{\omega {{M}^{2}}+{{\omega }^{2}}{{M}^{2}}\left( {{C}_{0}}+{{C}_{2}} \right){{R}_{L}}i}{{{L}_{2}}\left( \omega {{C}_{0}}{{R}_{L}}-i \right)} \right|}$ (5)

Formula (5) shows that cutting the signal modulation capacitor C0 in and out changes the magnitude of the primary-side current. The choice of C0 affects not only the efficiency and transmission power of the system, but also the accuracy of signal demodulation. It is therefore important to choose the size of C0 such that the secondary side works near the resonance frequency [10-12]. Taking $C_{0}=0.22\,\mu F$ as an example, this paper configures the following ICPT system parameters to analyze the control algorithm of the system and the signal modulation strategy under load changes:
- input DC voltage $U_{d}$: 10 V
- primary-side resonance capacitance $C_{1}$: 0.47 μF
- primary-side resonance inductance $L_{1}$: 134.7 μH
- secondary-side resonance capacitance $C_{2}$: 1.0 μF
- secondary-side resonance inductance $L_{2}$: 63.3 μH
- signal modulation capacitance $C_{0}$: 0.22 μF
- mutual inductance $M$: 18.5 μH

When the signal modulation capacitor is cut out of or into the circuit, the effective value of the primary-side current changes with the load resistance [13-15] (Figure 5).

Figure 5. The variation of primary-side current with load changes

As shown in Figure 5, the primary-side current curves with C0 cut in and cut out intersect at a point, whose resistance is denoted as R0 [16]. If the system architecture and parameters are fixed, R0 is a constant value.
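As a quick numeric check (a sketch assuming NumPy; only the component values listed above are from the text), the configured parameters indeed satisfy the resonance conditions $\omega^{2} L_{1} C_{1}=1$ and $\omega^{2} L_{2} C_{2}=1$ assumed in deriving Eqs. (3) and (5):

```python
# Verify the resonance conditions used in deriving Eqs. (3) and (5).
import numpy as np

C1, C2 = 0.47e-6, 1.0e-6      # F
L1, L2 = 134.7e-6, 63.3e-6    # H

w = 1.0 / np.sqrt(L1 * C1)    # angular frequency with w^2 * L1 * C1 = 1
print(f"f0 = {w / (2 * np.pi) / 1e3:.2f} kHz")    # ~20.00 kHz
print(f"w^2 * L2 * C2 = {w**2 * L2 * C2:.4f}")    # ~1.0: secondary matched
```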
4. Realization of RST of Electrical Signals

4.1 Parameter configuration

The topology of the transmission channel must be clarified before identifying the transmission features of electrical signals [17]. To analyze the transmission channel, a choke coil Lz and a choke capacitor Cz were added to the secondary-side circuit, forming a choke network that satisfies:

${{C}_{z}}=\frac{1}{\omega _{s}^{2}{{L}_{z}}}$ (6)

where $\omega_{s}$ is the angular frequency of signal transmission. Under this parameter configuration, the choke network presents a large impedance to high-frequency electrical signals, which is equivalent to an open circuit. In this way, the carrier current is incorporated into the secondary-side resonant coil to the maximum extent. To increase the intensity of signal transmission at the transmitting end and reduce the signal attenuation in the channel, the secondary-side resonant coil is included in the analysis of signal resonance. Hence, the transmission capacitance CT of the secondary-side electrical signals can be calculated by:

${{C}_{T}}=\frac{1}{\omega _{S}^{2}\left( {{L}_{S}}+{{L}_{T2}} \right)}$ (7)

To eliminate electrical interference [18] and increase the received signal strength, a resonant capacitor CR2 is injected in parallel across the secondary-side signal coupling transformer LR2:

${{C}_{R2}}=\frac{1}{\omega _{S}^{2}{{L}_{R2}}}$ (8)

4.2 Transmission process

The transmission process of electrical signals is illustrated in Figure 6, where $i_t$ is the transmitted current, $i_s$ is the secondary-side current, $i_P$ is the primary-side current, and $i_r$ is the parallel pickup current.

Figure 6. The transmission process of electrical signals

As shown in Figure 6, the power signals are transmitted across four loops. Loop 1 modulates the electrical signals and introduces the modulated voltage into the signal coupling transformer, integrating the signals into the main circuit. Loop 2 brings the electrical signals close to the fully resonant state, enhancing the strength of the signal to a certain extent. Loop 3 is comparable to a frequency-selective filter composed of the primary-side coil and capacitor, which attenuates electrical signals of megahertz frequency. Loop 4 makes the circuit resonance frequency equal to the frequency of the electrical signals, amplifying the received signal strength at the transmission frequency.

4.3 Voltage gain of transmission channel

Through AC impedance analysis, the following can be derived from Figure 6:

$\left\{ \begin{align} & {{Z}_{4}}=s{{L}_{R2}}+\frac{{{R}_{2}}}{1+s{{C}_{R2}}{{R}_{2}}} \\ & {{Z}_{43}}=-\frac{{{s}^{2}}M_{R}^{2}}{{{Z}_{4}}} \\ & {{Z}_{3}}=s{{L}_{P}}+{{R}_{p}}+\frac{1}{s{{C}_{P}}}+s{{L}_{R1}}+{{Z}_{43}} \\ & {{Z}_{32}}=-\frac{{{s}^{2}}M_{P}^{2}}{{{Z}_{3}}} \\ & {{Z}_{2}}=s{{L}_{S}}+\frac{1}{s{{C}_{T}}}+{{R}_{S}}+s{{L}_{t2}}+{{Z}_{32}} \\ & {{Z}_{21}}=-\frac{{{s}^{2}}M_{T}^{2}}{{{Z}_{2}}} \\ & {{Z}_{1}}=s{{L}_{T1}}+{{Z}_{21}} \\ \end{align} \right.$ (9)

where Zx is the Laplace transform of the equivalent impedance of loop x, and Zxy is the Laplace transform of the impedance mapped from loop x to loop y, with x, y = 1, 2, 3, 4 and $x \neq y$.
According to Kirchhoff's laws for voltage and current, we have:

$\left\{ \begin{align} & {{G}_{1}}\left( s \right)=\frac{{{i}_{s}}}{{{U}_{s}}}=\frac{1}{{{L}_{T1}}\frac{{{Z}_{2}}}{{{M}_{T}}}-s{{M}_{T}}} \\ & {{G}_{2}}\left( s \right)=\frac{{{i}_{P}}}{{{i}_{S}}}=\frac{s{{M}_{P}}}{{{Z}_{3}}} \\ & {{G}_{3}}\left( s \right)=\frac{{{i}_{r}}}{{{i}_{P}}}=\frac{s{{M}_{R}}}{{{Z}_{4}}} \\ & {{G}_{4}}\left( s \right)=\frac{{{U}_{R2}}}{{{i}_{r}}}=\frac{{{R}_{2}}}{1+s{{C}_{R2}}{{R}_{2}}} \\ \end{align} \right.$ (10)

Then, the voltage gain of the transmission channel can be obtained as:

$G\left( s \right)=\frac{{{U}_{R2}}}{{{U}_{S}}}={{G}_{1}}\left( s \right){{G}_{2}}\left( s \right){{G}_{3}}\left( s \right){{G}_{4}}\left( s \right)$ (11)

4.4 Signal conditioning

Figure 7 presents the architecture of the signal transmitting circuit. In this paper, a Hartley oscillator is used to generate the high-frequency sine wave, and a digital multiplexer is employed for signal modulation. To meet the power requirement of the transmitting end, the modulated signals are fed to Class A and Class B power amplifiers. The Hartley oscillator circuit was designed to produce a sine wave with a frequency range of 30 kHz to 30 MHz, satisfying the frequency requirements of common bandwidths. In the actual circuit design, it is necessary to avoid mutual inductance between the inductors L1 and L2; otherwise, there will be large errors between the resonance frequency and the design frequency. The working reliability, stability, and load of this circuit can be changed by adjusting C2 and R5. The oscillation frequency of this circuit is determined by the parameters of the resonant circuit composed of the inductors L1 and L2 and the capacitor C3; the resonant frequency can be calculated by $\omega=1 / \sqrt{\left(L_{1}+L_{2}\right) C_{3}}$.

The architecture of the receiving circuit is also displayed in Figure 7, where the resistor R2 is made up of R21 and R22 connected in series (the series pickup). An inductor-capacitor (LC) resonance network is injected in parallel across R22. The resonant frequency of the network equals the carrier frequency of the electrical signals; therefore, the network presents a large impedance to the signal source Us, which is equivalent to an open circuit, so the network does not reduce the voltage gain of signal transmission. For the interference caused by the voltage source Edc, whose frequency lies in the kilohertz range, the impedance of the network is very small, comparable to a short circuit; thus, most of the interference voltage is dropped across R21, eliminating the influence of the electrical energy. Meanwhile, the SNR of the voltage across R22 is strengthened. The voltage then passes through a follower composed of operational amplifiers and a non-inverting amplifier, flowing into the diode envelope detection circuit. After that, the voltage is demodulated by the voltage comparator, producing the baseband signals. In this way, the RST of electrical signals is completed.

Figure 7. The process of electrical signal conditioning
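To make the parameter-configuration rules concrete, here is a small numeric sketch of Eqs. (6)-(8); the inductance values and the carrier frequency below are assumptions chosen for illustration, since the text does not list them:

```python
# Capacitor selection per Eqs. (6)-(8) for a chosen signal carrier frequency.
import numpy as np

f_s = 1.0e6                     # hypothetical carrier frequency, 1 MHz
w_s = 2 * np.pi * f_s           # angular frequency of signal transmission

L_z = 100e-6                    # assumed choke inductance, H
L_S, L_T2 = 63.3e-6, 10e-6      # assumed secondary / coupling inductances, H
L_R2 = 10e-6                    # assumed signal coupling transformer, H

C_z = 1 / (w_s**2 * L_z)              # Eq. (6): choke network resonance
C_T = 1 / (w_s**2 * (L_S + L_T2))     # Eq. (7): transmit-side resonance
C_R2 = 1 / (w_s**2 * L_R2)            # Eq. (8): receive-side resonance

for name, c in [("C_z", C_z), ("C_T", C_T), ("C_R2", C_R2)]:
    print(f"{name} = {c * 1e12:.1f} pF")
```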
5. Experimental Verification

To prove the feasibility of our method, an energy-carrying communication experimental platform was constructed based on the ICPT system. The electrical signals were generated by an STM32 and modulated by an adjustable voltage regulator circuit. The waveforms of the electrical signals and the modulated signals were displayed on an oscilloscope. The coupling architecture contains an XL6009 voltage stabilization chip, an XKT412 inverter chip, and an LM393 voltage comparison chip. The control signal frequency and the resonant frequency of the system were set to 1 kHz and 38.165 kHz, respectively. The coils were mechanically wound with 17 turns. The electrical parameters of the coupling architecture are listed in Table 1.

Table 1. The electrical parameters of the coupling architecture (input DC voltage, input DC current, primary-side resonant capacitor, secondary-side resonant capacitor, and secondary-side resonant inductor)

Figure 8. The 1 kHz control signals
The future research will introduce real-time detection and intelligent control to the proposed method, aiming to detect and regulate the signal states in real time, and to avoid signal loss. This work was supported by Science and Technology Breakthrough Project of Henan Provincial Science and Technology Department under the program "Brain Tumor Image Segmentation Based on Depth Learning" (Grant No.: 182102310694) and "Energy Consumption Modeling and Measurement in Virtual Environment" (Grant No.: 182102310034), and Development Plan Project of Henan Provincial Science and Technology Department under the program "Research and Development of University Financial Intelligent Platform in the Context of Cloud Computing" (Grant No.: 192400410368). [1] Aysal, T.C., Barner, K.E. (2008). Blind decentralized estimation for bandwidth constrained wireless sensor networks. IEEE Transactions on Wireless Communications, 7(5): 1466-1471. https://doi.org/10.1109/TWC.2008.060687 [2] Hallez, H., De Vos, M., Vanrumste, B., Van Hese, P., Assecondi, S., Van Laere, K., Dupont, P., Van Paesschen, W., Van Huffel, S., Lemahieu, I. (2009). Removing muscle and eye artifacts using blind source separation techniques in ictal EEG source imaging. Clinical Neurophysiology, 120(7): 1262-1272. https://doi.org/10.1016/j.clinph.2009.05.010 [3] Nakajima, H., Nakadai, K., Hasegawa, Y., Tsujino, H. (2009). Blind source separation with parameter-free adaptive step-size method for robot audition. IEEE Transactions on Audio, Speech, and Language Processing, 18(6): 1476-1485. https://doi.org/10.1109/TASL.2009.2035219 [4] Rodriguez, A., Laio, A. (2014). Clustering by fast search and find of density peaks. Science, 344(6191): 1492-1496. https://doi.org/10.1126/science.1242072 [5] Hinton, G.E., Osindero, S., Teh, Y.W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7): 1527-1554. https://doi.org/10.1162/neco.2006.18.7.1527 [6] Sangeetha, J., Jayasankar, T. (2019). Emotion speech recognition based on adaptive fractional deep belief network and reinforcement learning. In Cognitive Informatics and Soft Computing, pp. 165-174. https://doi.org/10.1007/978-981-13-0617-4_16 [7] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1): 1929-1958. https://doi.org/10.5555/2627435.2670313 [8] Hinton, G.E. (2012). A practical guide to training restricted Boltzmann machines. In Neural networks: Tricks of the Trade, pp. 599-619. https://doi.org/10.1007/978-3-642-35289-8_32 [9] Corsini, G., Mossa, A., Verrazzani, L. (1996). Signal-to-noise ratio and autocorrelation function of the image intensity in coherent systems. Sub-Rayleigh and super-Rayleigh conditions. IEEE Transactions on Image Processing, 5(1): 132-141. https://doi.org/10.1109/83.481677 [10] Yatsenko, V., Kolesnik, Y., Titarenko, T. (1994). Identification of the non-Gaussian chaotic dynamics of the radioemission back scattering processes. IFAC Proceedings Volumes, 27(8): 277-281. https://doi.org/10.1016/S1474-6670(17)47728-7 [11] Leung, H., Lo, T. (1993). Chaotic radar signal processing over the sea. IEEE Journal of Oceanic Engineering, 18(3): 287-295. https://doi.org/10.1109/JOE.1993.236367 [12] Mukherjee, S., Osuna, E., Girosi, F. (1997). Nonlinear prediction of chaotic time series using support vector machines. In Neural Networks for Signal Processing VII. 
Impact of Economic Structure on the Environmental Kuznets Curve (EKC) hypothesis in India

Muhammed Ashiq Villanthenkodath (ORCID: 0000-0002-6617-6071)1, Mohini Gupta2, Seema Saini3 & Malayaranjan Sahoo4

This study aims to evaluate the impact of economic structure on the Environmental Kuznets Curve (EKC) in India. The present study deviates from the bulk of studies in the literature by incorporating both aggregated and disaggregated measures of economic development in the environmental degradation function. For the empirical analysis, the study employs the Auto-Regressive Distributed Lag (ARDL) bounds testing approach to cointegration to analyse the long-run and short-run relationships during 1971–2014. Further, the direction of causality is investigated through the Wald test approach. The results reveal that the conventional EKC hypothesis does not hold in India in either the aggregated or the disaggregated model, since economic growth and its components have a U-shaped impact on environmental quality in India. However, the effect of population on environmental quality is positive but not significant in the aggregated model, whereas in the disaggregated model it significantly affects environmental quality. Hence, it is possible to infer that as the population of the country increases, the demand for energy consumption increases tremendously, particularly the consumption of fossil fuels like coal, oil, and natural gas; this is also evident from the energy structure coefficient in both models. This increase is due to the scarcity of renewable energy for meeting the needs of people. On the contrary, urbanization reduces environmental degradation, which may be due to improved living conditions in terms of efficient infrastructure and energy efficiency in urban areas, leading to a negative relation between urbanization and environmental degradation.

The changes in human activities during the COVID-19 pandemic pushed the Indian economy into a contraction phase close to a recession. For instance, India's GDP contracted by 23.9 per cent in the first quarter compared with the same quarter of the previous year, as reported by industry body FICCI's Economic Outlook Survey (http://www.ficci.in/ficci-surveys.asp). The pandemic also had a visible effect on energy consumption and carbon dioxide emissions, as reduced human mobility and the shutdown of industries led to a decline in coal and oil consumption. This trend steepened into a continued downturn in industrial growth and in the overall economic performance of India. Hence, the slowdown in the industrial sector during COVID-19 may reduce atmospheric emissions. However, the present slowdown of carbon emissions is not sustainable over the long run, as carbon dioxide keeps accumulating in the atmosphere through economic activity over time. Therefore, the study raises the question of whether the COVID-19 epoch could sustain carbon dioxide reduction for a longer period. To answer this question, the present study analyzes the impact on carbon dioxide of gross domestic product (GDP), the industrial sector, energy structure, population, and urbanization in India. Moreover, environmental degradation is a global concern, and carbon dioxide is the prime emission affecting the worldwide natural environment (MK 2020; Villanthenkodath et al. 2021). Therefore, many nations agreed to the Kyoto Protocol in 1997 so as to shield nature from exploitation.
Nonetheless, carbon dioxide emissions have risen rapidly in developing countries like India. The COVID-19 scenario in 2020 adversely affected the entire globe, and for a developing country like India the path back to growth is a matter of particular concern. In the theoretical literature, the Environmental Kuznets Curve (EKC) posits an inverted-U relationship between environmental degradation and economic development. The EKC approach to pollution and growth was introduced by Grossman and Krueger (1991); Stern and Common (2001) likewise explain that lower industrialization contributes to less environmental damage. However, there is no consensus regarding the behaviour of carbon dioxide and its complex relation with economic growth. Hasanov et al. (2019) provide a cubic functional form of the EKC, describing a monotonic rise of carbon dioxide along with GDP in Kazakhstan, and find that the EKC does not hold there. Unlike existing studies in the literature, the prime focus of the present study is to examine the impact of the economic structure on the environmental quality of India through the Environmental Kuznets Curve (EKC) framework. Besides that, the proponents of economic growth encourage the reduction of environmental degradation if economic growth can be disengaged from its environmental effect. According to the report of the Centre for Research on Energy and Clean Air (CREA 2020), India's carbon dioxide emissions fell drastically, by 15%, in the first quarter of the 2020 pandemic period. This may be attributed to the reduction in demand for coal, oil, and gas, which made carbon dioxide emissions fall by 30%, witnessed for the first time in the last four decades. This fall in carbon dioxide emissions is mainly due to the shutdown of the industrial sector; India has also targeted a 40% reduction in emissions by shifting to non-fossil fuel consumption. However, India is undergoing rapid industrial development; hence understanding these changes and their related impact on carbon dioxide emissions is required for relevant policymaking. Moreover, the industrial sector is a nucleus of the economic system, transforming in scale and structure with the growth of an economy, specifically in a developing country like India (Fan et al. 2003). Meanwhile, industry sectors are eminent emitters of carbon dioxide, and consumers also contribute by utilizing their products. The intensity of carbon dioxide may differ across the sectors of the industrial structure in a specific region (Tian et al. 2014). Hence, the industrial structure is one of the important determinants associated with economic growth and carbon dioxide emissions. Thus, understanding the association between CO2 emissions, economic structure in terms of industrial sector value-added, and economic growth, keeping urbanization, energy structure, and population as control variables, provides the particulars needed for implementing policy. Against this background, and to the best of our knowledge, this study is the first of its kind to build a model of structural transformation in the context of environmental degradation that fosters industrial diversity and environmental sustainability.
Further, several prevailing studies consider only the aggregate component of the economy when estimating the EKC hypothesis; this study contributes to the literature by considering both aggregate and disaggregate components of the economy in the estimation of the EKC. The time-series study also reads the impact of economic structure and economic growth on carbon dioxide under the EKC hypothesis for India. The study uses time-series data spanning 1971 to 2014; this is an updated series compared with other studies, with relatively more data points to produce reliable outcomes. The findings portray both models: the aggregate model represents the long-run relation between CO2 emissions and economic growth, while the disaggregate model shows a long-run relationship between industrial value-added and CO2 emissions in the presence of other control variables. However, neither model supports the conventional EKC hypothesis for India. Thus, the government can establish a policy targeting renewable energy over and above the non-renewable energy structures.

The paper proceeds in the following sections: Sect. 2 briefs the related literature; Sect. 3 presents the theoretical model, data description, and econometric methodology; Sect. 4 delineates the empirical analysis; and Sect. 5 concludes with policy implications.

In the existing literature, the relationship between economic growth and environmental quality has been amply studied. In the book "The Limits to Growth", Meadows et al. (1972) argue that economic growth degrades environmental sustainability; hence, to protect environmental quality, there should be a limit to growth. In their seminal paper, Grossman and Krueger (1991) explored the environmental impact of the North American Free Trade Agreement (NAFTA) and observed that economic growth affects the environment through a scale effect, a composition effect, and a technical effect. They also find that two pollutants, i.e., smoke and SO2, increase with GDP at low levels of national income but decrease with GDP at higher levels of income. Similarly, Wang et al. (2016) assessed the relationship between economic growth and sulfur dioxide emissions and found that the income-sulfur dioxide relationship follows a conventional environmental Kuznets curve path. Similar results were found by Panayotou (1993), Shafik (1994), Apergis and Ozturk (2015), Bilgili et al. (2016), Shahbaz et al. (2017), and El Montasser et al. (2018) while estimating the EKC hypothesis. Likewise, Stern and Common (2001) investigated the relationship between economic growth and sulfur dioxide for 74 countries globally from 1960 to 1990 but did not find evidence for the conventional EKC hypothesis. Hence, they concluded that the EKC model is fundamentally misspecified and suffers from omitted variable bias. The same outcome was reached by Harbaugh et al. (2002) using a similar proxy for environmental quality. However, Dasgupta et al. (2002) doubt the universal acceptability of the EKC hypothesis. Pal and Mitra (2017) argue that there is still another turning point, even when there is evidence for the conventional EKC relationship. By incorporating additional variables into the CO2 emissions model,
Wang et al. (2013) examine the impact of economic growth, population, technology level, urbanization, service level, industrialization, energy consumption structure, and foreign trade on energy-related CO2 emissions in Guangdong Province, China, from 1980 to 2010 using an extended STIRPAT model. Results indicate that technology level, foreign trade degree, and energy consumption structure lead to a decline in CO2 emissions. In a different study, Wang et al. (2017) investigate the driving factors of CO2 emissions from a regional perspective in China by employing the extended STIRPAT model from 1952 to 2012. The results show that the impacts and influences of various factors on carbon emissions differ across development stages. Likewise, Ghazali and Ali (2019) studied the impact of various factors on CO2 in Newly Industrialized Countries (NICs) by utilizing the extended STIRPAT model from 1991 to 2013. Their empirical results suggest that GDP per capita, population, and CO2 emission intensity, along with energy intensity, are the main contributors to CO2 emissions in NICs, while population carrying capacity has no significant impact on the CO2 emission level. There thus seems to be mixed evidence on the EKC hypothesis. Grossman and Krueger (1991) suggest that environmental damage cannot be controlled by economic growth unless growth is supported by institutions and policies. Therefore, validation of the EKC hypothesis intuitively depends on other factors such as access to technology or technological progress, quality of institutions, and availability of natural resources (Dogan and Inglesi-Lotz 2020). Recent studies have also included other variables like energy consumption, foreign aid, corruption, foreign investment, urbanization, technology, energy intensity, and financial development (Mahalik et al. 2021; Villanthenkodath and Mahalik 2020). Hence, Carson (2010) points out that results are sensitive to the model specification, dataset, variables added, and environmental proxy. In the Indian context, a review of the literature also shows mixed evidence on the EKC hypothesis. For instance, Boutabba (2014) examines the existence and direction of the causal relationship in a multivariate framework for the Indian economy from 1970 to 2008. The results suggest a long-run relationship between per capita income and per capita carbon emissions and lend further support to the EKC hypothesis. Similarly, Sehrawat and Giri (2015), using urbanization as an additional contributor to emissions, studied the EKC hypothesis during 1971–2011 for the Indian economy and confirm its existence. Besides that, Kanjilal and Ghosh (2013), using a threshold cointegration test, found the presence of the EKC hypothesis for India. Likewise, Jayanthakumaran et al. (2012) concluded in favour of the EKC hypothesis in India. Recently, Shahbaz and Sinha (2019) estimated the EKC for emissions using the ARDL technique from 1971 to 2015 for the Indian economy; the study includes renewable energy, measured by electric power consumption, and its effect on environmental quality, and the results suggest that the EKC does exist for India. A study conducted by Dar and Asif (2017) explored energy use, financial development, and economic growth effects on emissions using the ARDL model for the Indian economy; however, the study fails to establish the presence of the EKC hypothesis.
A similar outcome was reached by Alam and Adil (2019), who conclude that there is no significant relationship between economic growth and carbon emissions. A study by Roy et al. (2017) analyzed the environmental impact of energy demand, energy mix, and fossil fuel intensity in a fast-growing economy like India from 1990 to 2016; they find that population, energy structure, and energy intensity are statistically significant factors for CO2 emissions in India. A few studies, such as Dogan and Inglesi-Lotz (2020) and Lin et al. (2016), consider various sources of economic growth when testing the EKC hypothesis, but such studies are scarce in the literature. Our research bridges this gap by studying the presence of the EKC hypothesis while considering different sources of economic growth for the Indian economy.

Theoretical model, data description, and econometric methodology

Theoretical model and data description

The IPAT identity is a framework for determining what drives environmental patterns (Chertow 2000). The framework demonstrates how environmental impact (generally calculated in terms of CO2 or other air pollutants) responds to factors such as population, affluence, and technology.

$$I = PAT$$

In Eq. 1, \(I\) stands for the environmental degradation proxy in terms of emissions, \(P\) measures population, \(A\) is the affluence of society measured in terms of GDP, and \(T\) is the technology proxy. Dietz and Rosa (1997) introduced the STIRPAT model in response to criticism of the earlier IPAT model's assumptions, such as equal elasticities for all factors, and of its simplicity (Tursun et al. 2015; Wang and Zhao 2015).

$$I_{t} = \alpha P_{t}^{\beta } A_{t}^{\gamma } T_{t}^{\delta } \mu_{t}$$

In Eq. 2, \(\alpha\) indicates the intercept, and \(P\), \(A\), and \(T\) carry the same meaning as in Eq. 1. \(\beta\), \(\gamma\), and \(\delta\) indicate the elasticities of the impacts of \(P\), \(A\), and \(T\) on the environment. Subscript \(t\) denotes the year and \(\mu_{t}\) is the stochastic error term in the model. The underpinning theoretical framework of this study was proposed by Dogan and Inglesi-Lotz (2020) and Lin et al. (2016), who extended the STIRPAT model for evaluating the determinants of CO2 emissions. Lin et al. (2016) modified the STIRPAT equation by incorporating the square of GDP, energy structure, and urbanization of the countries. Similarly, Dogan and Inglesi-Lotz (2020) extended the STIRPAT by introducing the square term of industrial value-added in the context of European countries. Hence, affluence in the STIRPAT model is here conceptualized as both the industrial value-added and the total GDP of India, so as to analyse their impacts on CO2 emissions. Moreover, in any economy, the structure of energy consumption, i.e., the share of fossil fuels in total energy consumption, is an important element that influences the level of emissions, which in turn affects the environment (You 2011). In the Indian context, earlier studies neglect the composition and pattern of GDP and their subsequent effects on the environment; instead, they focus on aggregate GDP as a measurement of economic growth. On this line, the study sets up two models for the empirical analysis, following Dogan and Inglesi-Lotz (2020).
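Before turning to the two model specifications below, note that the practical appeal of the STIRPAT form in Eq. 2 is that taking logs turns the elasticities into linear regression coefficients. The following minimal Python sketch illustrates this with simulated placeholder series (not the study's World Development Indicators data); all variable names and parameter values here are illustrative assumptions.

    # Sketch: recovering STIRPAT elasticities by OLS on logged series.
    # ln I_t = ln(a) + b*ln P_t + g*ln A_t + d*ln T_t + e_t
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 44  # 1971-2014, as in the study
    lnP = np.cumsum(rng.normal(0.02, 0.005, n))  # log population (simulated)
    lnA = np.cumsum(rng.normal(0.05, 0.020, n))  # log affluence/GDP (simulated)
    lnT = np.cumsum(rng.normal(0.01, 0.010, n))  # log technology proxy (simulated)
    lnI = 0.9 * lnP + 0.7 * lnA + 0.3 * lnT + rng.normal(0, 0.05, n)

    X = sm.add_constant(np.column_stack([lnP, lnA, lnT]))
    fit = sm.OLS(lnI, X).fit()
    print(fit.params)  # intercept plus estimates of the elasticities

The two ARDL specifications that follow are the dynamic, India-specific counterparts of this log-linear form.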
Model 1: Aggregate model

$$\ln {\text{CO}}_{2t} = \alpha_{0} + \alpha_{1} \ln {\text{CO}}_{2t - i} + \alpha_{2} \ln {\text{GDP}}_{t} + \alpha_{3} \ln {\text{GDPSQ}}_{t} + \alpha_{4} \ln {\text{POP}}_{t} + \alpha_{5} \ln {\text{URB}}_{t} + \alpha_{6} \ln {\text{ES}}_{t} + \mu_{t}$$

Model 2: Disaggregate model

$$\ln {\text{CO}}_{2t} = \alpha_{0} + \alpha_{1} \ln {\text{CO}}_{2t - i} + \alpha_{2} \ln {\text{IND}}_{t} + \alpha_{3} \ln {\text{INDSQ}}_{t} + \alpha_{4} \ln {\text{POP}}_{t} + \alpha_{5} \ln {\text{URB}}_{t} + \alpha_{6} \ln {\text{ES}}_{t} + \mu_{t}$$

In Eqs. 3 and 4, \({\text{CO}}_{2}\) is the carbon dioxide emissions, \(\ln {\text{CO}}_{2t - i}\) is the lagged carbon dioxide emissions, GDP stands for economic growth, \({\text{GDPSQ}}\) is the square term of GDP, POP represents the population, \({\text{URB}}\) is urbanization, \({\text{ES}}\) stands for the energy structure, IND is the industrial value-added, and \({\text{INDSQ}}\) is the square term of industrial value-added. The intercept is represented by \(\alpha_{0}\), while \(\alpha_{1} , \ldots \alpha_{6}\) are the coefficients of the explanatory variables in the model. The variables of the study are presented in Table 1, which gives the definition, measurement, and source of each variable for the period 1971–2014. The selection of years was dictated by the availability of data for all the variables, particularly the energy structure data, which is available only up to 2014 in the World Development Indicators. The data were converted into natural logarithms for the empirical analysis, following studies such as Pal et al. (2021), Sahoo et al. (2021), Villanthenkodath and Arakkal (2020), Villanthenkodath and Mushtaq (2021), Ansari and Villanthenkodath (2021), and Villanthenkodath and Mahalik (2021).

Table 1 Definition of variables

Econometric methodology

Stationarity test

The first phase of the empirical analysis is to determine the order of integration of the variables in order to choose appropriate econometric models. To attain this objective, we employ the augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests. The null hypothesis of non-stationarity is tested against the alternative hypothesis of stationarity. A first-difference stationary, or I(1), series is non-stationary in levels but becomes stationary at its first difference; an I(0) series is stationary in levels.

Cointegration analysis

The Autoregressive Distributed Lag (ARDL) bounds testing approach to cointegration proposed by Pesaran and Shin (1995) and Pesaran et al. (2001) is employed to establish the long-run relationship between the variables. The ARDL bounds testing approach is superior to other cointegration methods for the following reasons. Firstly, it can be applied to small sample sizes. Secondly, it can be employed irrespective of the order of integration of the variables, i.e., I(0), I(1), or a mixed order. Thirdly, the problem of endogeneity can be addressed by using the optimal lag in the model specification. Fourthly, it offers superior results over other conventional cointegration techniques. Model 1, i.e., the aggregate model, is estimated using the unrestricted error correction model underlying the ARDL bounds testing approach, as follows.
$$\Delta \ln {\text{CO}}_{2t} = \lambda_{0} + \mathop \sum \limits_{i = 1}^{p} \lambda_{1i} \Delta \ln {\text{CO}}_{2t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{2i} \Delta \ln {\text{GDP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{3i} \Delta \ln {\text{GDPSQ}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{4i} \Delta \ln {\text{POP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{5i} \Delta \ln {\text{URB}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{6i} \Delta \ln {\text{ES}}_{t - i} + \varphi_{1} \ln {\text{CO}}_{2t - 1} + \varphi_{2} \ln {\text{GDP}}_{t - 1} + \varphi_{3} \ln {\text{GDPSQ}}_{t - 1} + \varphi_{4} \ln {\text{POP}}_{t - 1} + \varphi_{5} \ln {\text{URB}}_{t - 1} + \varphi_{6} \ln {\text{ES}}_{t - 1} + \mu_{t}$$

In Eq. 5, ∆ stands for the first-difference operator, \(\lambda_{0}\) represents the constant, and \(\mu_{t}\) is the stochastic error term. The bounds test for a long-run relationship in the ARDL framework is based on the Wald or F test. The null hypothesis of no cointegration, i.e., \(H_{0} : \varphi_{1} = \varphi_{2} = \varphi_{3} = \varphi_{4} = \varphi_{5} = \varphi_{6} = 0\), is tested against the alternative hypothesis of cointegration, i.e., \(H_{1} : \varphi_{1} \ne \varphi_{2} \ne \varphi_{3} \ne \varphi_{4} \ne \varphi_{5} \ne \varphi_{6} \ne 0\), in the long run. The decision rests on the F-statistic: if it exceeds the upper critical bound, we conclude that a long-run relationship exists; if it falls below the lower bound, we conclude that there is none; and if it falls between the bounds, the test is inconclusive. The long-run elasticities can also be estimated using Eq. 5. The error correction model is represented in the following equation.

$$\Delta \ln {\text{CO}}_{2t} = \lambda_{0} + \mathop \sum \limits_{i = 1}^{p} \lambda_{1i} \Delta \ln {\text{CO}}_{2t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{2i} \Delta \ln {\text{GDP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{3i} \Delta \ln {\text{GDPSQ}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{4i} \Delta \ln {\text{POP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{5i} \Delta \ln {\text{URB}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{6i} \Delta \ln {\text{ES}}_{t - i} + \varphi {\text{ECT}}_{t - 1} + \mu_{t}$$

In Eq. 6, \({\text{ECT}}\) stands for the error correction term; its coefficient \(\varphi\) has to be negative and less than one in absolute value, and it shows the speed of adjustment towards the long-run equilibrium. In Model 2, the empirical analysis is carried out analogously by replacing GDP with IND and GDPSQ with INDSQ in Eqs. 5 and 6.

Empirical results and discussion

This section focuses on the empirical simulations carried out in this study. A preliminary analysis in terms of summary statistics is followed by a correlation matrix analysis and then a visual plot of all variables under consideration. Table 2 reports the descriptive statistics: industrial sector value added has the highest average, along with the highest minimum and maximum, while industrial value-added, economic growth, and CO2 emissions are positively skewed. The population, urbanization, and energy structure variables are negatively skewed throughout the inquiry. Table 3 presents the Pearson correlation matrix of the studied variables.
The outcome shows the linear association between the variables. There is a positive and significant relationship between CO2 emissions and industrial value-added, and a similar conclusion holds for economic growth. This result indicates that industrial value-added, economic growth, urbanization, and energy structure drive environmental degradation in India, whereas population shows a negative association with environmental degradation. Hence, further analysis is needed to substantiate the outcomes of the correlation analysis. Figure 1 depicts the trend and pattern of the studied variables; a positive correlated trend is evident for all the variables except population.

Table 2 Summary statistics

Table 3 Correlation matrix

Visual plot of variables

In time-series modelling, stationarity analysis is important for circumventing spurious effects. The current study implements the traditional ADF and PP unit root tests to analyze the stationarity properties of the variables, as seen in Table 4. The outcomes of the unit root tests reveal a mixed order of integration among the variables under review.

Table 4 ADF and PP tests of unit root

Subsequently, the study establishes the long-run relationship between the variables with the help of Pesaran's ARDL bounds test. The result shows the clear existence of a long-run relationship among the series explored in the study. The optimum parsimonious lag was chosen by the Akaike Information Criterion (AIC) (Table 5).

Table 5 ARDL bounds test

The long-run and short-run results obtained from Model 1 and Model 2 are reported in Tables 6 and 7. Model 1 uses total GDP to reflect economic growth, whereas Model 2 employs growth of the industrial sector as the affluence proxy. The obtained results show that the conventional EKC hypothesis does not hold in either model; rather, there is a U-shaped relation between emissions and the affluence proxies, i.e., GDP and IND, since the coefficients on GDP and IND are negative while those on GDPSQ and INDSQ are positive. Our findings are in line with Alam and Adil (2019) and Dar and Asif (2017), but differ from Jayanthakumaran et al. (2012) and Shahbaz and Sinha (2019).

Table 6 ARDL results Model 1

In line with the preconceived notion, other things remaining constant, the population growth coefficient has a positive effect on emission levels in both the short run and the long run across the models. In Model 1, the long-run coefficient is not significant when aggregate GDP is used, although the short-run coefficient is positive and significant. In Model 2, population has a positive and significant impact on pollution in both the short run and the long run when disaggregate GDP is employed. This may be because an increase in population contributes to a rising need for energy consumption. Similarly, population growth spurs demand for goods and services; hence the energy required to produce consumption goods also increases, which in turn enhances CO2 emissions. In the literature, Song et al. (2015) and Gertler et al. (2013) observed that population growth can also be accompanied by improvements in general economic conditions, living standards, and household income levels; as a result, there is a rise in energy consumption and CO2 emissions.
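For readers who wish to replicate the testing pipeline behind Tables 4 and 5 in outline, the following Python sketch chains an ADF unit root test with the ARDL bounds test. It assumes statsmodels version 0.13 or later (which ships the ARDL/UECM classes), and the series are simulated placeholders for lnCO2 and two regressors; the full study additionally includes GDPSQ, URB, and ES.

    # Sketch of the unit-root and bounds-testing pipeline (statsmodels >= 0.13).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.tsa.ardl import UECM, ardl_select_order

    rng = np.random.default_rng(1)
    n = 44
    x = pd.DataFrame({
        "lngdp": np.cumsum(rng.normal(0.05, 0.02, n)),   # simulated placeholder
        "lnpop": np.cumsum(rng.normal(0.02, 0.005, n)),  # simulated placeholder
    })
    y = pd.Series(0.5 * x["lngdp"] + 0.3 * x["lnpop"]
                  + np.cumsum(rng.normal(0, 0.01, n)), name="lnco2")

    # Step 1: ADF tests on levels and first differences.
    for label, s in [("level", y), ("diff", y.diff().dropna())]:
        stat, pval = adfuller(s)[:2]
        print(f"ADF ({label}): stat={stat:.3f}, p={pval:.3f}")

    # Step 2: pick the lag order by AIC, then run the bounds F-test on the
    # unrestricted error-correction model corresponding to Eq. 5.
    sel = ardl_select_order(y, 2, x, 2, ic="aic")
    res = UECM.from_ardl(sel.model).fit()
    print(res.bounds_test(case=3))  # compare the F-stat with the I(0)/I(1) bounds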
In both models, the level of urbanization is negative and statistically significant in both the short run and the long run. This indicates that although urbanization historically had a positive effect on environmental degradation, especially at its early stages, improved living conditions in terms of efficient infrastructure and energy in urban areas now lead to a negative relation between urbanization and environmental degradation. The accelerating force behind such a move may be the replacement of inefficient energy sources with more efficient ones. This finding is consistent with studies that found a negative relationship between urbanization and CO2 emissions (Pachauri 2004; Poumanyvong and Kaneko 2010; Burton 2000; Pachauri and Jiang 2008). In both models, the energy structure coefficient is positive and significant in the long run and the short run. Therefore, the study concludes that the share of fossil fuels in the energy mix is a driver of CO2 emissions. These findings support the view that fossil fuel use is the major contributor to the increase in emissions; our findings thus agree with previous studies, MK (2020) for India and Canadell et al. (2009) for Africa. The error correction term incorporated in both models shows a high speed of convergence to the long-run equilibrium. The diagnostic test results show that both models are free from heteroscedasticity, serial correlation, and ARCH problems. The ARDL models are well specified, since the Ramsey RESET test offers the desired result. The cumulative sum of recursive residuals (CUSUM) and the CUSUM of squares of recursive residuals (CUSUMsq) were employed for both models, as proposed by Brown et al. (1975); the plots are shown in Figs. 2 and 3.

CUSUM and CUSUMsq for Model 1

Table 8 delineates the causality result based on the modified Wald test and corroborates the fossil fuel-induced growth hypothesis, since there is one-way causality running from energy structure (fossil fuel composition) to economic growth in India. The finding suggests that, in the case of India, a fossil fuel conservation policy has to be enforced with caution; otherwise, it damages economic growth.

Table 8 Granger causality analysis

Conclusion and policy implications

In this study, we examine aggregate and disaggregate measures of economic growth and their effect on environmental quality in India from 1971 to 2014. We estimated two models to analyze the EKC hypothesis: an aggregated model and a disaggregated model. For the long-run and short-run analysis, we applied the Auto-Regressive Distributed Lag (ARDL) bounds testing approach, and the direction of causality was assessed through the modified Wald test. The results revealed that the EKC hypothesis does not hold in India in either the aggregated or the disaggregated model. In the aggregated model, economic growth shows a U-shaped relation with environmental degradation in India; in the disaggregated model, employing industrial sector value added instead of economic growth produces a similar outcome. However, the effect of population on environmental quality is positive but not significant in Model 1, the aggregated model, whereas in Model 2 it significantly affects environmental quality. As per the World Bank (2019), India is the second most populous country in the world after China and is forecast to surpass China's population by 2035.
So, when the population of the country increases, the demand for energy consumption increases tremendously, particularly the consumption of fossil fuels like coal, oil, and natural gas. This increase is due to the scarcity of renewable energy for meeting the needs of people. Hence, the government should increase investment in the renewable energy sector (solar and wind energy, etc.) to improve environmental quality in India. In this regard, foreign direct investment needs to be attracted to boost the performance of renewable energy in India. Moreover, renewable energy investment can be promoted by charging a higher price for fossil fuels or removing fossil-fuel subsidies; as a result, the demand for renewable energy will probably rise and attract new investment. By contrast, urbanization in both Models 1 and 2 shows a negative impact on CO2 emissions, i.e., it improves environmental quality, even though at the early stages of urbanization it historically had a positive impact on environmental degradation. "Urbanization helps more residents to gain connections at competitive rates to environment-friendly infrastructure and services". Innovation, like renewable technology, is driven by urbanization. In the long run, the future of the green economy can be decided by environmentally friendly facilities, machinery, cars, and services. From the above findings, the chemical and heavy industries, as the primary drivers of CO2 emissions, play a crucial role in India. The country ought, however, to intensify structural transformation through administrative means in these sectors and by promoting low- and light-emission industries, thereby fostering industrial diversity. Besides, people should increase their eco-friendly knowledge and waste recycling to reduce emissions. Better health conditions for urban residents can be achieved by stringent environmental policy and environmental awareness among the urban as well as the general population of the country. The result also indicates that a policy of fossil fuel conservation must be pursued with precaution in the case of India; otherwise, it hurts economic development.

References

Alam R, Adil MH (2019) Validating the environmental Kuznets curve in India: ARDL bounds testing framework. OPEC Energy Review 43(3):277–300
Ansari MA, Villanthenkodath MA (2021) Does tourism development promote ecological footprint? A nonlinear ARDL approach. Anatolia. https://doi.org/10.1080/13032917.2021.1985542
Apergis N, Ozturk I (2015) Testing environmental Kuznets curve hypothesis in Asian countries. Ecol Ind 52:16–22
Bilgili F, Koçak E, Bulut Ü (2016) The dynamic impact of renewable energy consumption on CO2 emissions: a revisited Environmental Kuznets Curve approach. Renew Sustain Energy Rev 54:838–845
Boutabba MA (2014) The impact of financial development, income, energy and trade on carbon emissions: evidence from the Indian economy. Econ Model 40:33–41
Brown RL, Durbin J, Evans JM (1975) Techniques for testing the constancy of regression relationships over time. J R Stat Soc Ser B (Methodological) 37(2):149–192
Burton E (2000) The compact city: just or just compact? A preliminary analysis. Urban Stud 37(11):1969–2006
Canadell JG, Raupach MR, Houghton RA (2009) Anthropogenic CO2 emissions in Africa. Biogeosciences 6(3):463–468. https://doi.org/10.5194/bg-6-463-2009
Carson RT (2010) The environmental Kuznets curve: seeking empirical regularity and theoretical structure. Rev Environ Econ Policy 4(1):3–23
Chertow MR (2000) The IPAT equation and its variants. J Ind Ecol 4(4):13–29. https://doi.org/10.1162/10881980052541927
CREA (2020) How air pollution worsens the COVID-19 pandemic. https://energyandcleanair.org/wp/wpcontent/uploads/2020/04/How_air_pollution_worsens_the_COVID-19_pandemic.pdf
Dar JA, Asif M (2017) Is financial development good for carbon mitigation in India? A regime shift-based cointegration analysis. Carbon Manage 8(5–6):435–443
Dasgupta S, Laplante B, Wang H, Wheeler D (2002) Confronting the environmental Kuznets curve. J Econ Perspect 16(1):147–168
Dietz T, Rosa EA (1997) Effects of population and affluence on CO2 emissions. Proc Natl Acad Sci 94(1):175–179. https://doi.org/10.1073/pnas.94.1.175
Dogan E, Inglesi-Lotz R (2020) The impact of economic structure to the environmental Kuznets curve (EKC) hypothesis: evidence from European countries. Environ Sci Pollut Res 27(11):12717–12724. https://doi.org/10.1007/s11356-020-07878-2
El Montasser G, Ajmi AN, Nguyen DK (2018) Carbon emissions–income relationships with structural breaks: the case of the Middle Eastern and North African countries. Environ Sci Pollut Res 25(3):2869–2878
Fan S, Zhang X, Robinson S (2003) Structural change and economic growth in China. Rev Dev Econ 7(3):360–377
Gertler P, Shelef O, Wolfram C, Fuchs A (2013) How pro-poor growth affects the demand for energy (No. w19092). National Bureau of Economic Research. https://doi.org/10.3386/w19092
Ghazali A, Ali G (2019) Investigation of key contributors of CO2 emissions in extended STIRPAT model for newly industrialized countries: a dynamic common correlated estimator (DCCE) approach. Energy Rep 5:242–252
Grossman GM, Krueger AB (1991) Environmental impacts of a North American free trade agreement (No. w3914). National Bureau of Economic Research
Harbaugh WT, Levinson A, Wilson DM (2002) Reexamining the empirical evidence for an environmental Kuznets curve. Rev Econ Stat 84(3):541–551
Hasanov FJ, Mikayilov JI, Mukhtarov S, Suleymanov E (2019) Does CO2 emissions–economic growth relationship reveal EKC in developing countries? Evidence from Kazakhstan. Environ Sci Pollut Res 26(29):30229–30241
Jayanthakumaran K, Verma R, Liu Y (2012) CO2 emissions, energy consumption, trade and income: a comparative analysis of China and India. Energy Policy 42:450–460
Kanjilal K, Ghosh S (2013) Environmental Kuznet's curve for India: evidence from tests for cointegration with unknown structural breaks. Energy Policy 56:509–515
Lin B, Omoju OE, Nwakeze NM, Okonkwo JU, Megbowon ET (2016) Is the environmental Kuznets curve hypothesis a sound basis for environmental policy in Africa? J Clean Prod 133:712–724. https://doi.org/10.1016/j.jclepro.2016.05.173
Mahalik MK, Villanthenkodath MA, Mallick H, Gupta M (2021) Assessing the effectiveness of total foreign aid and foreign energy aid inflows on environmental quality in India. Energy Policy 149:112015
Meadows DH, Meadows DL, Randers J, Behrens WW (1972) The limits to growth. N Y 102(1972):27
MK Ashin Nishan (2020) Role of energy use in the prediction of CO2 emissions and economic growth in India: evidence from artificial neural networks (ANN). Environ Sci Pollut Res 27(19):23631–23642
Narayan PK (2005) The saving and investment nexus for China: evidence from cointegration tests. Appl Econ 37(17):1979–1990
Pachauri S (2004) An analysis of cross-sectional variations in total household energy requirements in India using micro survey data. Energy Policy 32(15):1723–1735
Pachauri S, Jiang L (2008) The household energy transition in India and China. Energy Policy 36(11):4022–4035. https://doi.org/10.1016/j.enpol.2008.06.016
Pal D, Mitra SK (2017) The environmental Kuznets curve for carbon dioxide in India and China: growth and pollution at crossroad. J Policy Model 39(2):371–385
Pal S, Villanthenkodath MA, Patel G, Mahalik MK (2021) The impact of remittance inflows on economic growth, unemployment and income inequality: an international evidence. Int J Econ Policy Stud. https://doi.org/10.1007/s42495-021-00074-1
Panayotou T (1993) Empirical tests and policy analysis of environmental degradation at different stages of economic development (No. 992927783402676). International Labour Organization
Pesaran MH, Shin Y (1995) An autoregressive distributed lag modelling approach to cointegration analysis (No. 9514). Faculty of Economics, University of Cambridge
Pesaran MH, Shin Y, Smith RJ (2001) Bounds testing approaches to the analysis of level relationships. J Appl Economet 16(3):289–326
Poumanyvong P, Kaneko S (2010) Does urbanization lead to less energy use and lower CO2 emissions? A cross-country analysis. Ecol Econ 70(2):434–444
Rees W, Wackernagel M, Testemale P (1996) Our ecological footprint: reducing human impact on the Earth. New Society Publishers, Gabriola Island, BC, pp 3–12
Roy M, Basu S, Pal P (2017) Examining the driving forces in moving toward a low carbon society: an extended STIRPAT analysis for a fast growing vast economy. Clean Technol Environ Policy 19(9):2265–2276
Sahoo M, Saini S, Villanthenkodath MA (2021) Determinants of material footprint in BRICS countries: an empirical analysis. Environ Sci Pollut Res. https://doi.org/10.1007/s11356-021-13309-7
Sehrawat M, Giri AK (2015) Financial development and income inequality in India: an application of ARDL approach. Int J Soc Econ 42:64–81
Shafik N (1994) Economic development and environmental quality: an econometric analysis. Oxford Econ Papers 46:757–773
Shahbaz M, Sinha A (2019) Environmental Kuznets curve for CO2 emissions: a literature survey. J Econ Stud 46:106–168
Shahbaz M, Solarin SA, Hammoudeh S, Shahzad SJH (2017) Bounds testing approach to analyzing the environment Kuznets curve hypothesis with structural breaks: the role of biomass energy consumption in the United States. Energy Econ 68:548–565
Song M, Guo X, Wu K, Wang G (2015) Driving effect analysis of energy-consumption carbon emissions in the Yangtze River Delta region. J Clean Prod 103:620–628. https://doi.org/10.1016/j.jclepro.2014.05.095
Stern DI, Common MS (2001) Is there an environmental Kuznets curve for sulfur? J Environ Econ Manag 41(2):162–178
Tian X, Chang M, Shi F, Tanikawa H (2014) How does industrial structure change impact carbon dioxide emissions? A comparative analysis focusing on nine provincial regions in China. Environ Sci Policy 37:243–254
Tursun H, Li Z, Liu R, Li Y, Wang X (2015) Contribution weight of engineering technology on pollutant emission reduction based on IPAT and LMDI methods. Clean Technol Environ Policy 17(1):225–235. https://doi.org/10.1007/s10098-014-0780-1
Villanthenkodath MA, Arakkal MF (2020) Exploring the existence of environmental Kuznets curve in the midst of financial development, openness, and foreign direct investment in New Zealand: insights from ARDL bound test. Environ Sci Pollut Res 27(29):36511–36527
Villanthenkodath MA, Mahalik MK (2020) Technological innovation and environmental quality nexus in India: does inward remittance matter? J Public Aff. https://doi.org/10.1002/pa.2291
Villanthenkodath MA, Mushtaq U (2021) Modelling the nexus between foreign aid and economic growth: a case of Afghanistan and Egypt. Stud Appl Econ. https://doi.org/10.25115/eea.v39i2.3802
Villanthenkodath MA, Mahalik MK (2021) Does economic growth respond to electricity consumption asymmetrically in Bangladesh? The implication for environmental sustainability. Energy 233:121142. https://doi.org/10.1016/j.energy.2021.121142
Villanthenkodath MA, Ansari MA, Shahbaz M, Vo XV (2021) Do tourism development and structural change promote environmental quality? Evidence from India. Environ Dev Sustain. https://doi.org/10.1007/s10668-021-01654-z
Wang Y, Zhao T (2015) Impacts of energy-related CO2 emissions: evidence from under developed, developing and highly developed regions in China. Ecol Ind 50:186–195. https://doi.org/10.1016/j.ecolind.2014.11.010
Wang P, Wu W, Zhu B, Wei Y (2013) Examining the impact factors of energy-related CO2 emissions using the STIRPAT model in Guangdong Province, China. Appl Energy 106:65–71
Wang Y, Han R, Kubota J (2016) Is there an environmental Kuznets curve for SO2 emissions? A semi-parametric panel data analysis for China. Renew Sustain Energy Rev 54:1182–1188
Wang C, Wang F, Zhang X, Yang Y, Su Y, Ye Y, Zhang H (2017) Examining the driving factors of energy related carbon emissions using the extended STIRPAT model based on IPAT identity in Xinjiang. Renew Sustain Energy Rev 67:51–61
World Bank (2019) Ending poverty, investing in opportunity. The World Bank. https://doi.org/10.1596/978-1-4648-1470-9
You J (2011) China's energy consumption and sustainable development: comparative evidence from GDP and genuine savings. Renew Sustain Energy Rev 15(6):2984–2989. https://doi.org/10.1016/j.rser.2011.03.026

The authors involved in this research communication do not have any financial and personal relationships with other people or organizations that could inappropriately influence (bias) their work.

Muhammed Ashiq Villanthenkodath, Department of Humanities and Social Sciences, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal, India
Mohini Gupta, Department of Humanities and Social Sciences, Jaypee Institute of Information Technology, A-10 Sector-62, Noida, UP, 201309, India
Seema Saini, Department of Economic Sciences, Indian Institute of Technology Kanpur, Kanpur, India
Malayaranjan Sahoo, Department of Humanities and Social Sciences, National Institute of Technology (NIT) Rourkela, Rourkela, Odisha, India

Author contributions: VMA: idea proposal, data curation, investigation, writing of the original draft, revision, and estimation. GM: writing of the original draft, introduction. SS: writing of the original draft, literature review. SM: writing of the original draft, restructuring. All authors read and approved the final manuscript.

Correspondence to Muhammed Ashiq Villanthenkodath.

Ethical approval and consent to participate: The authors of the paper do not have any conflict of interest.

Villanthenkodath, M.A., Gupta, M., Saini, S. et al. Impact of Economic Structure on the Environmental Kuznets Curve (EKC) hypothesis in India. Economic Structures 10, 28 (2021). Revised: 03 December 2021. https://doi.org/10.1186/s40008-021-00259-z

Keywords: Economic structure; Energy structure
A note on the stochastic Ericksen-Leslie equations for nematic liquid crystals

Zdzisław Brzeźniak 1, Erika Hausenblas 2, and Paul André Razafimandimby 3

1 Department of Mathematics, University of York, Heslington Road, York YO10 5DD, UK
2 Department of Mathematics and Information Technology, Montanuniversität Leoben, Franz Josef Straße 18, 8700 Leoben, Austria
3 Department of Mathematics and Applied Mathematics, University of Pretoria, Lynwood Road, Pretoria 0002, South Africa; current address: Department of Mathematics, University of York, Heslington Road, York YO10 5DD, UK

* Corresponding author: Paul André Razafimandimby

This article is part of a project currently funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 791735 "SELEs". Part of this work was written while P. A. Razafimandimby was at the University of Pretoria; he is grateful for the funding he received from the National Research Foundation South Africa (Grant Numbers 109355 and 112084). He is also grateful to the European Mathematical Society for the EMS-Simons for Africa collaborative research grant, which enabled him to visit Montanuniversität Leoben, Austria. E. Hausenblas is supported by the FWF-Austrian Science Fund through the Stand-Alone grant number P28010.

Received July 2018; Published June 2019.

In this note we prove the existence and uniqueness of a local maximal smooth solution of the stochastic simplified Ericksen-Leslie system modelling the dynamics of nematic liquid crystals under stochastic perturbations.

Keywords: Ericksen-Leslie equations, nematic liquid crystals, fixed point method, smooth solution, local solution.

Mathematics Subject Classification: Primary: 60H15, 37L40; Secondary: 35R60.

Citation: Zdzisław Brzeźniak, Erika Hausenblas, Paul André Razafimandimby. A note on the stochastic Ericksen-Leslie equations for nematic liquid crystals. Discrete & Continuous Dynamical Systems - B. doi: 10.3934/dcdsb.2019106
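For orientation, since the page itself does not restate the model, a standard form of the deterministic simplified Ericksen-Leslie system (in the Lin and Liu simplification) can be recalled as a sketch; the notation and normalization below are assumptions of this summary, not a quotation from the note:

$$\begin{aligned} &v_{t} + (v \cdot \nabla) v - \mu \Delta v + \nabla p = -\lambda \, \nabla \cdot (\nabla d \odot \nabla d), \qquad \nabla \cdot v = 0,\\ &d_{t} + (v \cdot \nabla) d = \gamma \left( \Delta d + |\nabla d|^{2} d \right), \qquad |d| = 1, \end{aligned}$$

where \(v\) is the fluid velocity, \(p\) the pressure, \(d\) the unit director field describing the orientation of the liquid crystal molecules, and \((\nabla d \odot \nabla d)_{ij} = \partial_{i} d \cdot \partial_{j} d\). The stochastic version studied in the note perturbs this system by suitable noise terms, for which local maximal smooth solutions are constructed via a fixed point argument.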
San Miguel, Dynamics of Fréedericksz transition in a fluctuating magnetic field,, Phys. Rev. A., 32 (1985), 1843-1851. Google Scholar M. San Miguel, Nematic liquid crystals in a stochastic magnetic field: Spatial correlations, Phys. Rev. A, 32 (1985), 3811-3813. Google Scholar S. Shkoller, Well-posedness and global attractors for liquid crystal on Riemannian manifolds,, Communication in Partial Differential Equations, 27 (2002), 1103-1137. doi: 10.1081/PDE-120004895. Google Scholar R. Temam, Navier-Stokes Equations and Nonlinear Functional Analysis, CBMS-NSF Regional Conference Series in Applied Mathematics, 41, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1983. Google Scholar M. Wang and W. Wang, Global existence of weak solution for the 2-D Ericksen-Leslie system,, Calc. Var. Partial Differential Equations, 51 (2014), 915-962. doi: 10.1007/s00526-013-0700-y. Google Scholar W. Wang, P. Zhang and Z. Zhang, Well-posedness of the Ericksen-Leslie system,, Arch. Ration. Mech. Anal., 210 (2013), 837-855. doi: 10.1007/s00205-013-0659-z. Google Scholar M. Wang, W. Wang and Z. Zhang, On the uniqueness of weak solution for the 2-D Ericksen-Leslie system, Discrete Contin. Dyn. Syst. Ser. B, 21 (2016), 919-941. doi: 10.3934/dcdsb.2016.21.919. Google Scholar W. Wang, P. Zhang and Z. Zhang, The small Deborah number limit of the Doi-Onsager equation to the Ericksen-Leslie equation, Comm. Pure Appl. Math., 68 (2015), 1326-1398. doi: 10.1002/cpa.21549. Google Scholar Jishan Fan, Tohru Ozawa. Regularity criteria for a simplified Ericksen-Leslie system modeling the flow of liquid crystals. Discrete & Continuous Dynamical Systems - A, 2009, 25 (3) : 859-867. doi: 10.3934/dcds.2009.25.859 Stefano Bosia. Well-posedness and long term behavior of a simplified Ericksen-Leslie non-autonomous system for nematic liquid crystal flows. Communications on Pure & Applied Analysis, 2012, 11 (2) : 407-441. doi: 10.3934/cpaa.2012.11.407 Meng Wang, Wendong Wang, Zhifei Zhang. On the uniqueness of weak solution for the 2-D Ericksen--Leslie system. Discrete & Continuous Dynamical Systems - B, 2016, 21 (3) : 919-941. doi: 10.3934/dcdsb.2016.21.919 Xiaoli Li. Global strong solution for the incompressible flow of liquid crystals with vacuum in dimension two. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 4907-4922. doi: 10.3934/dcds.2017211 Shijin Ding, Changyou Wang, Huanyao Wen. Weak solution to compressible hydrodynamic flow of liquid crystals in dimension one. Discrete & Continuous Dynamical Systems - B, 2011, 15 (2) : 357-371. doi: 10.3934/dcdsb.2011.15.357 Mauro Fabrizio, Claudio Giorgi, Angelo Morro. Isotropic-nematic phase transitions in liquid crystals. Discrete & Continuous Dynamical Systems - S, 2011, 4 (3) : 565-579. doi: 10.3934/dcdss.2011.4.565 Jihoon Lee. Scaling invariant blow-up criteria for simplified versions of Ericksen-Leslie system. Discrete & Continuous Dynamical Systems - S, 2015, 8 (2) : 381-388. doi: 10.3934/dcdss.2015.8.381 Rui Qian, Rong Hu, Ya-Ping Fang. Local smooth representation of solution sets in parametric linear fractional programming problems. Numerical Algebra, Control & Optimization, 2019, 9 (1) : 45-52. doi: 10.3934/naco.2019004 Geng Chen, Ping Zhang, Yuxi Zheng. Energy conservative solutions to a nonlinear wave system of nematic liquid crystals. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1445-1468. doi: 10.3934/cpaa.2013.12.1445 Boling Guo, Yongqian Han, Guoli Zhou. Random attractor for the 2D stochastic nematic liquid crystals flows. 
Communications on Pure & Applied Analysis, 2019, 18 (5) : 2349-2376. doi: 10.3934/cpaa.2019106 Luigi C. Berselli, Jishan Fan. Logarithmic and improved regularity criteria for the 3D nematic liquid crystals models, Boussinesq system, and MHD equations in a bounded domain. Communications on Pure & Applied Analysis, 2015, 14 (2) : 637-655. doi: 10.3934/cpaa.2015.14.637 Apala Majumdar. The Landau-de Gennes theory of nematic liquid crystals: Uniaxiality versus Biaxiality. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1303-1337. doi: 10.3934/cpaa.2012.11.1303 Shu-Guang Shao, Shu Wang, Wen-Qing Xu, Yu-Li Ge. On the local C1, α solution of ideal magneto-hydrodynamical equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (4) : 2103-2113. doi: 10.3934/dcds.2017090 Saeed Ketabchi, Hossein Moosaei, M. Parandegan, Hamidreza Navidi. Computing minimum norm solution of linear systems of equations by the generalized Newton method. Numerical Algebra, Control & Optimization, 2017, 7 (2) : 113-119. doi: 10.3934/naco.2017008 Sertan Alkan. A new solution method for nonlinear fractional integro-differential equations. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1065-1077. doi: 10.3934/dcdss.2015.8.1065 Hua Qiu. Regularity criteria of smooth solution to the incompressible viscoelastic flow. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2873-2888. doi: 10.3934/cpaa.2013.12.2873 Christos V. Nikolopoulos, Georgios E. Zouraris. Numerical solution of a non-local elliptic problem modeling a thermistor with a finite element and a finite volume method. Conference Publications, 2007, 2007 (Special) : 768-778. doi: 10.3934/proc.2007.2007.768 Qi S. Zhang. An example of large global smooth solution of 3 dimensional Navier-Stokes equations without pressure. Discrete & Continuous Dynamical Systems - A, 2013, 33 (11&12) : 5521-5523. doi: 10.3934/dcds.2013.33.5521 Zhiyuan Geng, Wei Wang, Pingwen Zhang, Zhifei Zhang. Stability of half-degree point defect profiles for 2-D nematic liquid crystal. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6227-6242. doi: 10.3934/dcds.2017269 Güher Çamliyurt, Igor Kukavica. A local asymptotic expansion for a solution of the Stokes system. Evolution Equations & Control Theory, 2016, 5 (4) : 647-659. doi: 10.3934/eect.2016023 PDF downloads (7) HTML views (97) Zdzisław Brzeźniak Erika Hausenblas Paul André Razafimandimby
Regularization for ill-posed inhomogeneous evolution problems in a Hilbert space

Matthew A. Fury, Division of Science and Engineering, Penn State Abington, 1600 Woodland Road, Abington, PA 19001, United States

Conference Publications, 2013, 2013 (special): 259-272. doi: 10.3934/proc.2013.2013.259. Received July 2012; published November 2013.

We prove regularization for ill-posed evolution problems that are both inhomogeneous and nonautonomous in a Hilbert space $H$. We consider the ill-posed problem $du/dt = A(t,D)u(t)+h(t)$, $u(s)=\chi$, $0\leq s \leq t< T$, where $A(t,D)=\sum_{j=1}^k a_j(t)D^j$ with $a_j\in C([0,T]:\mathbb{R}^+)$ for each $1\leq j\leq k$ and $D$ a positive, self-adjoint operator in $H$. Assuming there exists a solution $u$ of the problem with certain stabilizing conditions, we approximate $u$ by the solution $v_{\beta}$ of the approximate well-posed problem $dv/dt = f_{\beta}(t,D)v(t)+h(t)$, $v(s)=\chi$, $0\leq s \leq t< T$, where $0<\beta<1$. Our method implies the existence of a family of regularizing operators for the given ill-posed problem, with applications to a wide class of ill-posed partial differential equations including the inhomogeneous backward heat equation in $L^2(\mathbb{R}^n)$ with a time-dependent diffusion coefficient.

Keywords: regularizing families of operators, ill-posed problems, evolution equations, backward heat equation. Mathematics Subject Classification: Primary: 47D06, 46C99; Secondary: 35K0.
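The abstract does not spell out the approximating family $f_{\beta}$. Purely as an illustration of how such a family is typically built (an assumption on our part, not a statement of the paper's actual construction), a quasi-reversibility-style choice bounds the unbounded generator while converging to it as $\beta\to 0$:
\[
f_{\beta}(t,\lambda) \;=\; \frac{A(t,\lambda)}{1+\beta\,\lambda^{k}}, \qquad A(t,\lambda)=\sum_{j=1}^{k} a_j(t)\,\lambda^{j}, \quad \lambda \in \sigma(D)\subset(0,\infty),
\]
so that $f_{\beta}(t,D)$ is a bounded operator for each fixed $\beta\in(0,1)$, making the approximate problem well posed, while $f_{\beta}(t,\lambda)\to A(t,\lambda)$ pointwise as $\beta\to 0$.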
Uniform attractors of stochastic three-component Gray-Scott system with multiplicative noise

Junwei Feng (1), Hui Liu (1), and Jie Xin (1,2). (1) School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong 273165, China; (2) College of Information Science and Engineering, Shandong Agricultural University, Taian, Shandong 271018, China. Corresponding author: Jie Xin.

Mathematical Foundations of Computing, August 2021, 4 (3): 193-208. doi: 10.3934/mfc.2021012. Received October 2020; published (early access) August 2021.

Fund Project: The second author is supported by the National Natural Science Foundation of China No. 11901342, the Natural Science Foundation of Shandong Province No. ZR2018QA002, Postdoctoral Innovation Project of Shandong Province No. 202003040, and China Postdoctoral Science Foundation No. 2019M652350. The third author is supported by the NSF of China No. 11371183.

In a bounded domain, we study the long-time behavior of solutions of the stochastic three-component Gray-Scott system with multiplicative noise. We first show that the stochastic three-component Gray-Scott system can generate a non-autonomous random dynamical system. We then establish some uniform estimates of solutions for the stochastic three-component Gray-Scott system with multiplicative noise. Finally, the existence of uniform and cocycle attractors is proved.

Keywords: Gray-Scott system, uniform attractor, cocycle attractor, multiplicative noise. Mathematics Subject Classification: Primary: 35K51, 37C70; Secondary: 35B41.
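To make "multiplicative noise" concrete, here is a rough numerical sketch only of the kind of system involved. It uses the classic two-component Gray-Scott kinetics and a scalar noise intensity, both simplifying assumptions; the paper's three-component reversible kinetics and its precise noise structure are not reproduced here.

```python
# Sketch: Euler-Maruyama time stepping for a Gray-Scott-type
# reaction-diffusion system with multiplicative noise on a periodic 1D grid.
import numpy as np

rng = np.random.default_rng(0)

N, L = 256, 2.5                  # grid points, domain length (assumed)
dx, dt = L / N, 0.5              # mesh width, time step
Du, Dv = 2e-5, 1e-5              # diffusion coefficients (assumed)
F, k, sigma = 0.04, 0.06, 0.01   # feed rate, kill rate, noise intensity

u = np.ones(N)
v = np.zeros(N)
v[N // 2 - 10:N // 2 + 10] = 0.5  # local perturbation to seed patterns

def lap(w):
    """Second-order finite-difference Laplacian with periodic boundaries."""
    return (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx**2

for _ in range(10_000):
    dWu = rng.normal(0.0, np.sqrt(dt), N)  # independent Brownian increments
    dWv = rng.normal(0.0, np.sqrt(dt), N)
    uv2 = u * v**2
    # Drift (reaction + diffusion) plus multiplicative noise sigma * state * dW.
    u = u + dt * (Du * lap(u) - uv2 + F * (1.0 - u)) + sigma * u * dWu
    v = v + dt * (Dv * lap(v) + uv2 - (F + k) * v) + sigma * v * dWv

print(f"u in [{u.min():.3f}, {u.max():.3f}], v in [{v.min():.3f}, {v.max():.3f}]")
```

Replacing the reaction terms with the three-component reversible kinetics and the scalar noise with the paper's noise structure would recover the setting studied above.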
Population Pharmacokinetics at Two Dose Levels and Pharmacodynamic Profiling of Flucloxacillin

Cornelia B. Landersdorfer (1), Carl M. J. Kirkpatrick (2), Martina Kinzig-Schippers (1), Jürgen B. Bulitta (1), Ulrike Holzgrabe (3), George L. Drusano (4), and Fritz Sörgel (1,5)

(1) Institute for Biomedical and Pharmaceutical Research, Nürnberg-Heroldsberg, Germany; (2) University of Queensland, Brisbane, Australia; (3) Institute of Pharmacy and Food Chemistry, University of Würzburg, Würzburg, Germany; (4) Ordway Research Institute, Inc., Albany, New York 12208; (5) Department of Pharmacology, University of Duisburg-Essen, Essen, Germany. For correspondence: [email protected]

Flucloxacillin is often used for the treatment of serious infections due to sensitive staphylococci. The pharmacokinetic (PK)-pharmacodynamic (PD) breakpoint of flucloxacillin has not been determined by the use of population PK. Targets based on the duration of non-protein-bound concentrations above the MIC (fT>MIC) best correlate with clinical cure rates for beta-lactams. We compared the breakpoints for flucloxacillin between several dosage regimens. In a randomized, two-way crossover study, 10 healthy volunteers received 500 mg and 1,000 mg flucloxacillin as 5-min intravenous infusions. Drug concentrations were determined by high-pressure liquid chromatography. We used the programs WinNonlin for noncompartmental analysis and statistics and NONMEM for population PK and Monte Carlo simulation. We compared the probability of target attainment (PTA) for intermittent- and continuous-dosage regimens based on the targets of fT>MICs of ≥50% and ≥30% of the dosing interval. The clearance and the volume of distribution were very similar after the administration of 500 mg and 1,000 mg flucloxacillin. We estimated renal and nonrenal clearances of 5.37 liters/h (coefficient of variation, 19%) and 2.73 liters/h (33%). For near maximal killing (target, fT>MIC of ≥50%) flucloxacillin showed a robust (≥90%) PTA up to MICs of 0.75 to 1 mg/liter (PTA of 86% at 1 mg/liter) for a continuous or a prolonged infusion of 6 g/day. Short-term infusions of 6 g/day had a lower breakpoint of 0.25 to 0.375 mg/liter. The flucloxacillin PK was linear for doses of 500 mg and 1,000 mg. Prolonged and continuous infusion at a 66% lower daily dose achieved the same PK-PD breakpoints as short-term infusions. Prolonged infusion and continuous infusion are appealing options for the treatment of serious infections caused by sensitive staphylococci.

Flucloxacillin is an isoxazolyl penicillin and is active against many gram-positive bacteria, including penicillinase-producing staphylococci and streptococci, but not against methicillin-resistant Staphylococcus aureus (20). In the United Kingdom flucloxacillin remains the predominantly prescribed antistaphylococcal oral antibiotic (32). It is typically used for the treatment of skin and soft tissue infections and respiratory tract infections. For serious infections like endocarditis or osteomyelitis caused by methicillin-susceptible S. aureus (MSSA) it is administered intravenously as a slow injection, a short-term infusion, or a continuous infusion. It has been reported that limiting the use of glycopeptides could help to control the emergence of resistance to vancomycin (10, 21).
Therefore, it might be preferable to use alternatives like flucloxacillin instead of glycopeptides against MSSA, if the use of the alternative provides a sufficient probability of a successful clinical outcome. In addition, flucloxacillin has been suggested to show more rapid killing of MSSA than vancomycin, and flucloxacillin levels usually do not need to be monitored (23).

The clinical outcome of treatment with beta-lactams is related to the proportion of the dosing interval for which non-protein-bound plasma concentrations exceed the MIC (fT>MIC) (12, 17). Attainment of the pharmacokinetic (PK)-pharmacodynamic (PD) target of an fT>MIC of ≥50% correlates best with the near maximal bactericidal activity of penicillins and is often used as a surrogate marker for successful clinical outcome (2, 11-13, 17). For drugs with short half-lives, like beta-lactams, prolonged and continuous infusions have been shown to achieve a longer fT>MIC than short-term infusions at the same daily dose (15, 18). Therefore, prolonged or continuous infusions may require lower daily doses than intermittent treatment while achieving the same probability of target attainment (PTA) as high-dose intermittent treatment.

Conditions like endocarditis and osteomyelitis usually require prolonged high-dose treatment. Continuous-infusion treatment can be managed at patients' homes better than frequent intermittent infusions (every 6 h [q6h] or q4h) and has been reported to be efficacious for completing the treatment of serious staphylococcal infections at patients' homes (22, 23). Flucloxacillin solutions have been reported to be stable in water for injection for 7 days at 20 to 25°C (26). However, the optimal doses for continuous and prolonged infusion of flucloxacillin have not yet been determined by population PK and Monte Carlo simulation (MCS). In the absence of these data, some authors have reported administering by continuous infusion the same daily doses that they used for intermittent treatment (22, 23).

We used population PK and MCS to investigate the differences in PK-PD target attainment between intermittent and continuous infusions. To study dosage regimens by MCS with various doses or other modes of administration, or both, it is important to know whether the clearance changes with the concentration in plasma at therapeutic concentrations. Therefore, our second objective was to compare the PK of flucloxacillin at two different dose levels.

Study participants. Ten healthy Caucasian subjects (five males and five females) participated in the study. Prior to entry into the study, all subjects were given a physical examination, electrocardiography, and laboratory tests, including urinalysis and screening for drugs of abuse. During the drug administration periods, the volunteers were encouraged to report any discomfort or adverse reactions and were closely observed by physicians. On each day of the study the subjects were asked to complete a questionnaire on their health status. The study was approved by the local ethics committee, and all subjects gave their written informed consent prior to starting the study.

Study design and drug administration. The study was of a randomized, controlled, two-way crossover design. In each of the two study periods, each subject received a single dose of 500 mg or 1,000 mg flucloxacillin as a 5-min intravenous infusion. Food and fluid intakes were strictly standardized on each study day.
The treatment periods were separated by a washout period of at least 4 days. The subjects were requested to abstain from consuming alcohol and caffeine-containing foods and beverages during the study periods.

Sampling schedule. All blood samples were drawn from a forearm vein via an intravenous catheter placed on the forearm contralateral to the one used for drug administration. Blood samples were drawn immediately before the start of the infusion; at the end of the infusion; and at 5, 10, 15, 20, 25, 45, 60, 75, and 90 min and 2, 2.5, 3, 3.5, 4, 5, 6, 8, and 24 h after the end of the infusion. The samples were cooled in an ice-water bath before centrifugation. After centrifugation the plasma samples were immediately frozen and stored at −80°C until analysis. Urine samples were collected immediately before the start of the infusion, from the start of the infusion until 1 h after the end of the infusion, and at the following time intervals: 1 to 2, 2 to 3, 3 to 4, 4 to 5, 5 to 6, 6 to 8, 8 to 12, and 12 to 24 h after the end of the infusion. The urine samples were stored at 4°C during the collection period. The amount and pH of the urine were measured. Urine samples were then immediately frozen and stored at −80°C before analysis.

Determination of plasma and urine concentrations. For determination of the flucloxacillin concentration in plasma, 100 μl of the sample was deproteinized with 200 μl acetonitrile containing the internal standard. After mixing of the solution and centrifugation at 15,000 rpm, 40 μl was injected onto a high-pressure liquid chromatography system. For determination of the flucloxacillin concentration in urine, 20 μl of the sample was diluted with 180 μl water. After the sample and water were mixed, 40 μl was injected onto the high-pressure liquid chromatography system. The flucloxacillin concentration was determined by using a reversed-phase column and a potassium dihydrogen phosphate (pH 6.2)-acetonitrile mobile phase at a flow rate of 2 ml/min. Flucloxacillin and the internal standard were detected at 220 nm. The concentrations in the plasma and urine samples were measured against a plasma or urine calibration curve. The calibration series in plasma was prepared with a 10:1 dilution of tested drug-free plasma with a stock solution of flucloxacillin to obtain the highest calibration level. The other calibration levels were obtained by 1:1 dilution of the highest calibration level, or of a level of higher concentration, with drug-free plasma. The calibration series for urine was prepared with a 10:1 dilution of tested drug-free urine with a stock solution to obtain the highest calibration level. The other calibration levels were obtained by 1:1 dilution of the highest calibration level, or of a level of higher concentration, with drug-free urine. For the control of inter- and intra-assay variation, spiked quality controls in plasma and urine were prepared by adding defined volumes of the stock solution of flucloxacillin, or of a spiked control of higher concentration, to defined volumes of tested drug-free plasma or urine. No interferences were observed in plasma or urine for flucloxacillin or the internal standard. Calibration was performed by linear regression. The linearity of the flucloxacillin calibration curves in plasma and urine was demonstrated from 0.500 to 250 mg/liter and from 5.00 to 400 mg/liter, respectively. The quantification limits were identical to the lowest calibration levels.
The interday precision and the analytical recovery of the spiked quality control standards of flucloxacillin in human plasma ranged from 4.1 to 7.7% and from 84.9 to 106.0%, respectively. The interday precision and the analytical recovery of the spiked quality control standards of flucloxacillin in human urine ranged from 3.3 to 5.1% and from 100.0 to 103.0%, respectively. The intraday precision and the analytical recovery of the spiked quality control standards of flucloxacillin in human plasma ranged from 3.9 to 7.0% and from 85.1 to 107.0%, respectively. The intraday precision and the analytical recovery of the spiked quality control standards of flucloxacillin in human urine ranged from 2.9 to 5.0% and from 99.0 to 103.2%, respectively.

Pharmacokinetics. (i) Noncompartmental analysis. The maximum concentration in plasma for each subject was read directly from the plasma concentration-time curves. The area under the plasma concentration-time curve for each subject was calculated by using the linear up/log down method, as implemented in WinNonlin Professional (version 4.0.1) (30). This algorithm uses the linear trapezoidal rule when concentrations are increasing or constant and the log-trapezoidal rule when concentrations are decreasing.

(ii) Population PK analysis. We considered models with one, two, or three disposition compartments and (i) first-order elimination, (ii) mixed-order (Michaelis-Menten) elimination, or (iii) parallel first-order and mixed-order elimination. The differential equations of our final model are as follows:
\[
\begin{aligned}
\frac{dX(1)}{dt} &= k_{01} - \frac{\mathrm{CL_R} + \mathrm{CL_{NR}} + \mathrm{CLic_{shallow}} + \mathrm{CLic_{deep}}}{V1}\cdot X(1) + \frac{\mathrm{CLic_{shallow}}}{V2}\cdot X(2) + \frac{\mathrm{CLic_{deep}}}{V3}\cdot X(3)\\[4pt]
\frac{dX(2)}{dt} &= \frac{\mathrm{CLic_{shallow}}}{V1}\cdot X(1) - \frac{\mathrm{CLic_{shallow}}}{V2}\cdot X(2)\\[4pt]
\frac{dX(3)}{dt} &= \frac{\mathrm{CLic_{deep}}}{V1}\cdot X(1) - \frac{\mathrm{CLic_{deep}}}{V3}\cdot X(3)\\[4pt]
\frac{dX(4)}{dt} &= \frac{\mathrm{CL_R}}{V1}\cdot X(1)
\end{aligned}
\]
where X(1), X(2), and X(3) are the amounts of flucloxacillin in the central, shallow peripheral, and deep peripheral compartments, respectively; X(4) is the cumulative amount excreted unchanged in urine; CLR is the renal clearance; CLNR is the nonrenal clearance; CLicshallow is the intercompartmental clearance between the central and the shallow peripheral compartments; and CLicdeep is the intercompartmental clearance between the central and the deep peripheral compartments. Drug input was described by a time-delimited zero-order input process with rate k01. The analytical solution to these linear differential equations, as implemented in NONMEM (ADVAN 11 and TRANS 4), was used for estimation.

Competing models were discriminated by their predictive performance, as assessed by visual predictive checks and by the NONMEM objective function and residual plots. For the visual predictive check, we simulated the plasma and urine profiles for 10,000 subjects by using the competing models.
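For orientation, the model equations above can also be integrated directly. The sketch below simulates a single subject with a generic ODE solver; the parameter values are round illustrative numbers (assumed, not the estimates of Table 2), whereas the study itself used NONMEM's analytic solution (ADVAN 11/TRANS 4).

```python
# Sketch: simulate one subject's plasma profile from the three-compartment
# model with zero-order (infusion) input, using a generic ODE solver.
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters (liters, liters/h, mg); not the estimates.
CLR, CLNR = 5.4, 2.7           # renal / nonrenal clearance
CLic_sh, CLic_dp = 3.0, 1.0    # intercompartmental clearances (assumed)
V1, V2, V3 = 8.0, 4.0, 3.0     # compartment volumes (assumed)
dose, t_inf = 1000.0, 5 / 60   # 1,000 mg over a 5-min infusion

def rhs(t, x):
    x1, x2, x3, x4 = x
    k01 = dose / t_inf if t <= t_inf else 0.0   # zero-order input rate
    dx1 = (k01
           - (CLR + CLNR + CLic_sh + CLic_dp) / V1 * x1
           + CLic_sh / V2 * x2 + CLic_dp / V3 * x3)
    dx2 = CLic_sh / V1 * x1 - CLic_sh / V2 * x2
    dx3 = CLic_dp / V1 * x1 - CLic_dp / V3 * x3
    dx4 = CLR / V1 * x1                         # cumulative amount in urine
    return [dx1, dx2, dx3, dx4]

t = np.linspace(0.0, 24.0, 2001)
sol = solve_ivp(rhs, (0.0, 24.0), [0.0, 0.0, 0.0, 0.0], t_eval=t,
                max_step=0.01)                  # small steps resolve the infusion
conc = sol.y[0] / V1                            # plasma concentration, mg/liter
print(f"Cmax ~ {conc.max():.1f} mg/liter; urine recovery ~ {sol.y[3, -1]:.0f} mg")
```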
From these simulated data we calculated the median, the nonparametric 80% prediction interval (10th to 90th percentiles), and the nonparametric 50% prediction interval (25th to 75th percentiles) for the predicted concentrations in plasma and the amounts in urine. These prediction interval lines were then overlaid on the original raw data. If the model described the data correctly, then 20% of the observed data should fall outside the 80% prediction interval at each time point and 50% of the data should fall outside the interquartile range. We compared the median predicted concentrations and the prediction intervals with the raw data and tested whether the median and the prediction intervals mirrored the central tendency and the variability of the raw data for the respective model.

(iii) Individual PK model. We estimated the between-subject variability (BSV) for all parameters except the duration of the zero-order input and the intercompartmental clearances. We assumed a log-normal distribution for the PK parameters and used a full variance-covariance matrix to describe the variability of the PK parameters as well as their pairwise correlations. The NONMEM program estimates BSV as a variance. For more convenient interpretation, we report the square root of the variance for BSV, as the square root of the variance is an approximation of the apparent coefficient of variation of a normal distribution on a log scale.

(iv) Observation model. We described the residual unidentified variability by using a combined additive and proportional error model for the concentrations in plasma and the amounts excreted in urine.

(v) Computation. We used the first-order conditional estimation method with the interaction estimation option in NONMEM (version V, release 1.1; NONMEM Project Group, University of California, San Francisco) (6) for all population PK modeling and WinNonlin Professional (version 4.0.1; Pharsight Corp., Mountain View, CA) (30) for noncompartmental analysis and equivalence statistics.

(vi) MCS. We used an fT>MIC of at least 30% or 50% of the dosing interval as the PK-PD targets for flucloxacillin. An fT>MIC of ≥50% is the target for the near maximal bactericidal activity of penicillins, and an fT>MIC of ≥30% is the target for bacteriostasis (2, 11, 12, 14, 17). We calculated the PTA within the MIC range from 0.0625 to 16 mg/liter, and as protein binding was not measured in the current study, we used the protein binding of 96% for flucloxacillin that has been reported in the literature (7, 8). We compared three dosage regimens at a daily dose of 6 g: (i) continuous infusion, (ii) prolonged (4-h) infusion of 2 g q8h, and (iii) short-term (0.5-h) infusion of 1.5 g q6h. In addition, we simulated 12 g flucloxacillin daily as 0.5-h infusions of 2 g q4h. We simulated the plasma concentration-time profiles of 10,000 virtual subjects for each dosage regimen at steady state in the absence of residual error with NONMEM. The fT>MIC values and the PTA were calculated by linear interpolation between simulated concentrations (frequent sampling) with Perl scripts, written by J. Bulitta. These Perl scripts were validated for all dosage regimens studied for this and several other studies. We used the population mean PK parameter estimates and the variance-covariance matrix representing the BSV estimated by NONMEM for MCS and assumed that the dose, duration of infusion, and timing of infusion had no variability.
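A minimal sketch of that interpolation logic follows (a hypothetical re-implementation, not the validated Perl scripts; the toy profiles stand in for the NONMEM-simulated ones):

```python
# Sketch: fT>MIC by linear interpolation between sampled concentrations,
# then PTA as the fraction of simulated subjects attaining the target.
import numpy as np

def ft_above_mic(t, conc_unbound, mic):
    """Percent of the dosing interval with unbound concentration > MIC."""
    above = 0.0
    for i in range(len(t) - 1):
        c0, c1 = conc_unbound[i], conc_unbound[i + 1]
        dt = t[i + 1] - t[i]
        if c0 > mic and c1 > mic:          # whole segment above the MIC
            above += dt
        elif c0 > mic or c1 > mic:         # crossing: interpolate linearly
            frac = abs((max(c0, c1) - mic) / (c1 - c0))
            above += frac * dt
    return 100.0 * above / (t[-1] - t[0])

# Toy example: 1,000 simulated subjects' unbound profiles (rows) on a grid.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 601)                     # one q6h interval, in h
profiles = (rng.lognormal(0.0, 0.4, size=(1000, 1))
            * 2.0 * np.exp(-0.8 * t))              # assumed toy kinetics
pta = np.mean([ft_above_mic(t, p, mic=0.25) >= 50.0 for p in profiles])
print(f"PTA for fT>MIC >= 50% at MIC 0.25 mg/liter: {pta:.0%}")
```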
We used the clearance and volume notation of our population PK model during both estimation and MCS and assumed a normal distribution of the PK parameters on a log scale, as described above. We derived the PTA for each of the two targets by calculating the fraction of subjects who attained the target at each MIC. The PK-PD breakpoint was defined as the highest MIC for which the PTA was at least 90%. To put these values into clinical perspective, the expectation value for the population PTA, i.e., the expected population PTA for a specific dosage regimen and a specific population of microorganisms, was calculated.

Statistical analysis. We tested the noncompartmental parameter estimates for differences between treatments (500 mg and 1,000 mg flucloxacillin). We used analysis-of-variance statistics on a log scale and an α level of significance of 0.05.

Demographics. All 10 subjects completed the study. The median weight was 71 kg (range, 52 to 83 kg), the median height was 178 cm (range, 165 to 190 cm), and the median age was 25 years (range, 23 to 34 years). The subjects had normal renal and hepatic functions.

Noncompartmental analysis. The flucloxacillin concentrations in plasma and the amounts in urine after infusion of 500 mg and 1,000 mg are shown in Fig. 1. Table 1 shows the results of the noncompartmental analysis. The peak concentrations and the areas under the curve were dose linear. All other PK parameters were very similar at both dose levels.

Fig. 1. Average ± standard deviation profiles of flucloxacillin in healthy volunteers after 5-min infusions of 500 mg and 1,000 mg flucloxacillin.

Table 1. Pharmacokinetic parameters for 500 mg and 1,000 mg flucloxacillin from noncompartmental analysis.

Population PK. The three-compartment model had a 200-point better objective function and a better predictive performance than the two-compartment model. A one-compartment model was inappropriate for our data set. The parameter estimates for the three-compartment model are shown in Table 2, and the variance-covariance matrix is shown in Table 3. Figure 2 shows the visual predictive checks for this model. There was no indication that clearance decreased with the increase in dose from 500 mg to 1,000 mg. The numbers of subjects with lower and higher individual estimates for the 500-mg dose than for the 1,000-mg dose were five and five for total clearance, five and five for renal clearance, and five and five for nonrenal clearance. Thus, there was no trend toward any saturation of clearance at these dose levels. Three-compartment models with mixed-order or parallel first-order and mixed-order elimination yielded no significant improvement in the objective function compared to that achieved with the model with first-order elimination, and the parameter estimates of the models with saturable elimination indicated no or negligible saturation at the dose levels studied. We selected the three-compartment model with first-order elimination as our final model, as it had excellent predictive performance and as the models with saturable elimination neither significantly improved the objective function nor improved the predictive performance.

Fig. 2. Visual predictive check for concentrations in plasma and amounts excreted unchanged in urine. The plots show the raw data, the 80% prediction interval (10th to 90th percentiles), and the interquartile range (25th to 75th percentiles). Ideally, 50% of the raw data should fall inside the interquartile range at each time point and 80% of the raw data should fall inside the 80% prediction interval.
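The band construction behind such a check is simple; a compact sketch (toy simulated profiles standing in for the model-based ones, array shapes assumed):

```python
# Sketch: visual-predictive-check bands from simulated profiles.
# Each row of `sims` is one simulated subject's concentration-time profile.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 24.0, 97)
sims = rng.lognormal(0.0, 0.3, size=(10_000, 1)) * 50.0 * np.exp(-0.9 * t)

# Percentile bands across subjects at each time point.
p10, p25, p50, p75, p90 = np.percentile(sims, [10, 25, 50, 75, 90], axis=0)
# Overlaying p10-p90 (80% interval) and p25-p75 (interquartile range) on the
# raw observations gives the check: about 80% / 50% of the observations
# should fall inside the respective band at each time point.
print(p50[:3], p90[:3])
```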
The coefficient of correlation for the observed versus the individual predicted concentrations was 0.987.

Table 2. Population parameter estimates.

Table 3. Variance-covariance matrix (on a natural log scale) for flucloxacillin.

MCSs. We compared the PTA-versus-MIC profiles for three dosage regimens with a daily dose of 6 g and for one dosage regimen with a daily dose of 12 g at two PK-PD targets (Fig. 3). The PK-PD breakpoints for the targets of an fT>MIC of ≥50% (near maximal bactericidal activity of penicillins) and an fT>MIC of ≥30% (bacteriostasis) are shown in Table 4.

Fig. 3. Probabilities of target attainment for different dosage regimens and PK-PD targets of flucloxacillin. Symbols: ▴, daily dose of 6 g flucloxacillin, continuous infusion; □, daily dose of 6 g flucloxacillin, 2 g q8h as a 4-h infusion; ⋄, daily dose of 6 g flucloxacillin, 1.5 g q6h as a 0.5-h infusion; •, daily dose of 12 g flucloxacillin, 2 g q4h as a 0.5-h infusion.

Table 4. Breakpoints of various dosage regimens for flucloxacillin.

Flucloxacillin is indicated for the treatment of infections due to susceptible gram-positive organisms, including beta-lactamase-producing staphylococci and streptococci. It is used intravenously for the treatment of serious infections caused by MSSA. In a randomized comparison trial of flucloxacillin (1 g q6h) and teicoplanin (6 mg/kg q12h for three doses and then the same dose given once daily) for the treatment of burn wound infections due to gram-positive pathogens, no significant differences in the clinical and microbiological success rates between flucloxacillin and teicoplanin were found (33). However, the rate of emergence of resistance to glycopeptides is increasing, and restriction of their use has been reported to be helpful in controlling vancomycin-resistant enterococci (10, 21). Therefore, it may be preferable to use alternatives like flucloxacillin against MSSA whenever possible in order to postpone and limit the use of glycopeptides.

The PK of flucloxacillin was studied by Nauta and Mattie (29) during a 3-h infusion of 1.2 g/h (loading dose, 1 g) in healthy volunteers. They reported average total, renal, and nonrenal clearances of 7.4, 5.3, and 2.1 liters/h, respectively. Adam et al. (1) found an average total clearance of 7.1 liters/h in healthy volunteers after 1 g flucloxacillin was given as a 5-min intravenous infusion. Our results are in good agreement with those clearances.

For beta-lactams like flucloxacillin, the clinical and microbiological outcomes correlate best with fT>MIC (12, 17). At the same daily dose, prolonged or continuous infusions achieve a longer fT>MIC than intermittent short-term infusions. This is especially evident for drugs with short half-lives, like beta-lactams. Consequently, Drusano (18) proposed the use of prolonged infusions to optimize the PTAs of carbapenems. We used population PK analysis and MCS to compare the PTAs of short-term, prolonged, and continuous infusions of flucloxacillin. MCS has been shown to be a very useful tool for rational dose selection for phase II/III clinical trials (19). Compared to healthy volunteers, patients often have lower clearances and larger volumes of distribution, which result in higher average plasma concentrations and longer elimination half-lives. Both of these alterations of the PK in patients increase the PTA. Thus, our estimates of the PTA in healthy volunteers are conservative estimates of the PTA in patients. A higher PTA for hospitalized patients than for healthy volunteers has also been shown by Lodise et al. (24) for piperacillin given in combination with tazobactam.
To determine the PTA at various dose levels and for different modes of administration, it is important to know whether the clearance changes with the plasma concentration at therapeutic concentrations. Therefore, we compared the PK of flucloxacillin at two dose levels in a crossover study with healthy volunteers. Our noncompartmental analysis showed no differences in the clearance or the volume of distribution between the low and the high dose (Table 1). A three-compartment population PK model with linear renal and nonrenal elimination showed excellent predictive performance, and there was no indication of a decrease in the total, renal, or nonrenal clearance with the increase in dose from 500 mg to 1,000 mg. Therefore, we used this model, estimated from the plasma and urine data at both dose levels, for MCS to compare the PTA-versus-MIC profiles for four dosage regimens (Fig. 3).

The target in our MCS was based on the non-protein-bound plasma concentrations. A range of 94.6 to 96.2% for the plasma protein binding of flucloxacillin has been reported (7, 9, 31). These values correspond to a rather wide range for the non-protein-bound fraction of 3.8 to 5.4%, a difference of 42% (5.4 versus 3.8%) in the non-protein-bound concentrations. To account for this situation, we chose a relatively high protein binding of 96%, which has also been reported by Bergan (7), as this choice results in conservative (low) non-protein-bound plasma concentrations.

For serious infections, doses of up to 8 g flucloxacillin per day (2 g given three to four times daily) are recommended both in the United Kingdom (4) and in Germany (5). As intravenous flucloxacillin is more likely to be given for serious infections than for uncomplicated infections, we chose for our simulations a daily dose of 6 g, which is within the dose range recommended for the treatment of serious infections.

Our MCS with the bacteriostasis target (fT>MIC ≥ 30%) showed that prolonged infusion had a higher PTA than continuous infusion and short-term infusion at the same daily dose (Table 4). For the target of near maximal killing (fT>MIC ≥ 50%), prolonged infusion and continuous infusion both had a PK-PD breakpoint three times higher than that of short-term infusion at the same daily dose of 6 g. Thus, prolonged infusion achieved a PTA that was higher than or similar to the PTAs of the other two regimens for both targets. The different ranking of the three dosage regimens at the same daily dose of 6 g for the two targets can be explained as follows: the breakpoint for continuous infusion at steady state is independent of the chosen target, as for any one patient the fT>MIC can be only 0% or 100%. The profile for short-term infusion shows a rather pronounced peak without a plateau, and the profile for prolonged infusion shows a flat peak with a rather broad plateau. Consequently, short-term infusions reach very high concentrations for a short period of time, and prolonged infusions reach moderately high concentrations, but for a longer period of time. We determined for flucloxacillin that short-term (0.5-h) infusions had the highest PTA for targets up to an fT>MIC of ≥20%, prolonged (4-h) infusions performed best for targets between 20% and 50%, and continuous infusion had the highest PTA for longer targets (55% and above). Continuous infusion was superior to short-term infusions for targets of 30% and above (Fig. 4).
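The continuous-infusion result can be sanity-checked with simple arithmetic, using the mean clearances reported above (CL = CLR + CLNR ≈ 5.37 + 2.73 ≈ 8.1 liters/h) and the assumed 96% protein binding:
\[
C_{ss,u} \;=\; f_u \cdot \frac{R_0}{\mathrm{CL}} \;=\; 0.04 \times \frac{6{,}000\ \mathrm{mg}/24\ \mathrm{h}}{8.1\ \mathrm{liters/h}} \;\approx\; 1.2\ \mathrm{mg/liter},
\]
which is consistent with the robust PTA extending to MICs of 0.75 to 1 mg/liter once between-subject variability in clearance is taken into account.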
Fig. 4. Unbound concentrations and the median and prediction interval (5th to 95th percentiles) of the fT>MIC at steady state in healthy volunteers after a 30-min infusion of 1.5 g q6h (continuous line, marker •), after a 4-h infusion of 2 g q8h (dashed line, marker ▪), and after continuous infusion of 6 g per day (dashed line, marker ♦) (bottom left panel), and after a 30-min infusion of 2 g q4h (continuous line, marker ▪) (bottom right panel). We assumed a fixed protein binding of 96% for the simulations. The curve for the 4-h infusion was shifted to the right by 6% in the MIC for easier identification of the corresponding prediction intervals (5th to 95th percentiles). The 5th and 95th percentiles for the continuous infusion were omitted for clarity. After continuous infusion, more than 99% of the subjects had an fT>MIC of 100% at an MIC of 0.75 mg/liter.

The PK-PD breakpoints need to be compared to the MICs encountered in clinical practice to put these results into clinical perspective. The MIC90s of flucloxacillin for MSSA are usually reported to be ≤0.5 mg/liter (22, 27, 28, 33). For an MIC of 0.5 mg/liter, continuous infusion and prolonged infusion of 6 g/day had PTAs of more than 99%, whereas short-term infusion of 6 g/day reached a PTA of only 46%, based on the target for near maximal killing. At an MIC of 1.0 mg/liter, a short-term infusion of 2 g every 4 h had a PTA of more than 90% for the target fT>MIC of ≥50% (Table 4; Fig. 3).

The most relevant target depends on the clinical situation of the patient. For patients with uncomplicated infections and an intact immune system, it might be sufficient to achieve bacteriostasis. However, for serious infections the near maximal bactericidal activity of the antibiotic might be required (17). As flucloxacillin is also available for oral treatment, intravenous dosing is most relevant for patients with serious infections.

As flucloxacillin showed no saturation in clearance, doubling of the dose will yield concentrations twice as high and, therefore, PK-PD breakpoints twice as high. Because flucloxacillin exhibited linear pharmacokinetics, the PK-PD breakpoints for an MCS with a daily dose of 12 g will be twice as high as the breakpoints for 6 g/day shown in Table 4. As the PK-PD breakpoint of the short-term infusion q6h of 6 g/day was 0.25 mg/liter, a daily dose of about 12 g would be required to reach a breakpoint of 0.5 mg/liter for near maximal killing. However, 4 g/day as a continuous or a prolonged infusion would be sufficient to reach a breakpoint of 0.5 mg/liter.

The MIC90 of flucloxacillin is typically ≤0.5 mg/liter for MSSA. Even without data on the distribution of the flucloxacillin MIC for MSSA, it is still possible to calculate the expectation value for the population PTA for an MIC90 of 0.5 mg/liter. The PTA-versus-MIC profile (Fig. 3) for continuous or prolonged infusion of 4 g/day with a PK-PD breakpoint of 0.5 mg/liter can be roughly simplified by assuming a PTA of 100% for all MICs ≤ 0.5 mg/liter and a PTA of 0% for all MICs above 0.5 mg/liter. For an MIC90 of 0.5 mg/liter, 90% of the patients (PTA = 100% for MICs ≤ 0.5 mg/liter) will achieve the target (corresponding to the 90th percentile of the MIC distribution) and 10% of the patients (PTA = 0% for MICs > 0.5 mg/liter) will not achieve the target. Therefore, the expectation value for the population PTA will be at least 90% for prolonged or continuous infusion of 4 g/day and an MIC90 of 0.5 mg/liter.
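Written out, with $p_i$ denoting the fraction of isolates with an MIC of $\mathrm{MIC}_i$, this argument is the bound
\[
E[\mathrm{PTA}] \;=\; \sum_i \mathrm{PTA}(\mathrm{MIC}_i)\,p_i \;\ge\; \sum_{\mathrm{MIC}_i \le 0.5} \mathrm{PTA}(\mathrm{MIC}_i)\,p_i \;\approx\; 1 \times 0.9 \;=\; 90\%,
\]
under the step-function simplification of the PTA-versus-MIC profile described above.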
As described above, a dose of 12 g/day given as short-term infusions q6h would be required to reach a PK-PD breakpoint of 0.5 mg/liter and an expectation value of about 90% for an MIC90 of 0.5 mg/liter. This means that a 66% lower daily dose (4 g instead of 12 g) is sufficient for continuous or prolonged infusion compared to the dose required for short-term infusion q6h in order to reach the same expectation value for a successful clinical outcome.

Besides a reduction in drug acquisition costs secondary to the dose reduction for prolonged or continuous infusion, a lower risk of adverse events from flucloxacillin treatment is probably a major advantage of prolonged and continuous infusion. Cholestatic jaundice and hepatitis rarely occur in relation to flucloxacillin intake, with the risk factors being the patient's age, preexisting hepatic impairment, and long-term use (>14 days) (3). In a population-based case-control study, de Abajo et al. (16) identified a dose dependency for these adverse effects. It remains to be determined whether high peak concentrations, total exposure, or trough concentrations contribute to the frequency of adverse events for flucloxacillin. However, prolonged and continuous infusions allow a significant dose reduction and avoid high peak concentrations, while they achieve the same PTAs as short-term infusion.

Whether prolonged or continuous infusion would be used depends on the clinical situation of the patient. The continuous infusion of flucloxacillin may be used for the home treatment of MSSA infections that require high-dose intravenous treatment over prolonged time periods. This procedure has been shown to be safe, convenient, and effective for the follow-up treatment of serious staphylococcal infections (e.g., sepsis, endocarditis, and osteomyelitis) and cellulitis (22, 23), and flucloxacillin solutions have also been reported to have sufficient stability at room temperature (26). However, in the hospital setting and with patients receiving multiple intravenous drugs, one of the major drawbacks of continuous infusion is the need for an additional infusion line to prevent incompatibilities with other drugs and the resulting increase in the risk of line infections (25).

In conclusion, we found that the PK of flucloxacillin is linear for doses of 500 mg and 1,000 mg administered as 5-min intravenous infusions. For near-maximal killing (target, an fT>MIC of ≥50%), flucloxacillin showed a robust (≥90%) PTA up to MICs of 0.75 to 1 mg/liter (PTA of 86% at 1 mg/liter) for prolonged infusions or a continuous infusion of 6 g/day. Short-term infusions of 6 g/day given q6h had a lower breakpoint of 0.25 to 0.375 mg/liter (PTA of 79% at 0.375 mg/liter). For an MIC90 of 0.5 mg/liter, which is typically found for flucloxacillin against MSSA, prolonged and continuous infusion at a daily dose of 4 g had an expectation value for the population PTA of at least 90%. To achieve the same expectation value, a daily dose of 12 g given as short-term infusions q6h would be required. This dose reduction from 12 g/day for short-term infusion to 4 g/day for prolonged and continuous infusion might be a considerable advantage in terms of the risk of adverse events and drug acquisition costs. Future comparative clinical trials are warranted to show whether prolonged and continuous infusions of flucloxacillin at lower dose levels achieve similar or better clinical success rates than short-term infusion.

Received 11 November 2006. Returned for modification 12 April 2007. Accepted 8 June 2007.
Published ahead of print on 18 June 2007.

References

1. Adam, D., P. Koeppe, and H. D. Heilmann. 1983. Pharmacokinetics of amoxicillin and flucloxacillin following the simultaneous intravenous administration of 4 g and 1 g, respectively. Infection 11:150-154.
2. Ambrose, P. G., S. M. Bhavnani, C. M. Rubino, A. Louie, T. Gumbo, A. Forrest, and G. L. Drusano. 2007. Pharmacokinetics-pharmacodynamics of antimicrobial therapy: it's not just for mice anymore. Clin. Infect. Dis. 44:79-86.
3. Anonymous. 2005. British National Formulary 49. British Medical Association and Royal Pharmaceutical Society of Great Britain, London, United Kingdom.
4. Anonymous. 2007, posting date. Electronic Medicines Compendium (eMC). Datapharm Communications Ltd., Surrey, United Kingdom.
5. Anonymous. 2007, posting date. Rote Liste. Arzneimittelinformationen für Deutschland. Rote Liste Service GmbH, Frankfurt am Main.
6. Beal, S. L., A. J. Boeckmann, L. B. Sheiner, and NONMEM Project Group. 1999. NONMEM users guides, version 5 ed. University of California at San Francisco.
7. Bergan, T. 1978. Penicillins. Antibiot. Chemother. 25:1-122.
8. Bergan, T., A. Engeset, W. Olszewski, N. Ostby, and R. Solberg. 1986. Extravascular penetration of highly protein-bound flucloxacillin. Antimicrob. Agents Chemother. 30:729-732.
9. Bergeron, M. G., J. L. Brusch, M. Barza, and L. Weinstein. 1976. Bactericidal activity and pharmacology of flucloxacillin. Am. J. Med. Sci. 271:13-20.
10. Bonten, M. J., S. Slaughter, A. W. Ambergen, M. K. Hayden, J. van Voorhis, C. Nathan, and R. A. Weinstein. 1998. The role of "colonization pressure" in the spread of vancomycin-resistant enterococci: an important infection control variable. Arch. Intern. Med. 158:1127-1132.
11. Craig, W. A. 2002. Pharmacodynamics of antimicrobials: general concepts and applications, p. 1-22. In C. H. Nightingale, T. Murakawa, and P. G. Ambrose (ed.), Antimicrobial pharmacodynamics in theory and clinical practice. Marcel Dekker, New York, NY.
12. Craig, W. A. 1998. Pharmacokinetic/pharmacodynamic parameters: rationale for antibacterial dosing of mice and men. Clin. Infect. Dis. 26:1-10.
13. Craig, W. A., and D. Andes. 1996. Pharmacokinetics and pharmacodynamics of antibiotics in otitis media. Pediatr. Infect. Dis. J. 15:255-259.
14. Craig, W. A., S. Ebert, and Y. Watanabe. 1993. Differences in time above MIC required for efficacy of beta-lactams in animal infection models, abstr. 86. Abstr. 33rd Intersci. Conf. Antimicrob. Agents Chemother. American Society for Microbiology, Washington, DC.
15. Craig, W. A., and S. C. Ebert. 1992. Continuous infusion of beta-lactam antibiotics. Antimicrob. Agents Chemother. 36:2577-2583.
16. de Abajo, F. J., D. Montero, M. Madurga, and L. A. Garcia Rodriguez. 2004. Acute and clinically relevant drug-induced liver injury: a population based case-control study. Br. J. Clin. Pharmacol. 58:71-80.
17. Drusano, G. L. 2004. Antimicrobial pharmacodynamics: critical interactions of 'bug and drug.' Nat. Rev. Microbiol. 2:289-300.
18. Drusano, G. L. 2003. Prevention of resistance: a goal for dose selection for antimicrobial agents. Clin. Infect. Dis. 36:S42-S50.
19. Drusano, G. L., S. L. Preston, C. Hardalo, R. Hare, C. Banfield, D. Andes, O. Vesga, and W. A. Craig. 2001. Use of preclinical data for selection of a phase II/III dose for evernimicin and identification of a preclinical MIC breakpoint. Antimicrob. Agents Chemother. 45:13-22.
20. GlaxoSmithKline. 2005. Flucloxacillin product information (Floxapen). GlaxoSmithKline, Uxbridge, Middlesex, United Kingdom.
21. Gould, I. M. 1999. A review of the role of antibiotic policies in the control of antibiotic resistance. J. Antimicrob. Chemother. 43:459-465.
22. Howden, B. P., and M. J. Richards. 2001. The efficacy of continuous infusion flucloxacillin in home therapy for serious staphylococcal infections and cellulitis. J. Antimicrob. Chemother. 48:311-314.
23. Leder, K., J. D. Turnidge, T. M. Korman, and M. L. Grayson. 1999. The clinical efficacy of continuous-infusion flucloxacillin in serious staphylococcal sepsis. J. Antimicrob. Chemother. 43:113-118.
24. Lodise, T. P., Jr., B. Lomaestro, K. A. Rodvold, L. H. Danziger, and G. L. Drusano. 2004. Pharmacodynamic profiling of piperacillin in the presence of tazobactam in patients through the use of population pharmacokinetic models and Monte Carlo simulation. Antimicrob. Agents Chemother. 48:4718-4724.
25. Lomaestro, B. M., and G. L. Drusano. 2005. Pharmacodynamic evaluation of extending the administration time of meropenem using a Monte Carlo simulation. Antimicrob. Agents Chemother. 49:461-463.
26. Lynn, B. 2002. Recent work on parenteral penicillins. J. Hosp. Pharmacy 29:183-194.
27. Meyer, B., S. Ahmed el Gendy, G. Delle Karth, G. J. Locker, G. Heinz, W. Jaeger, and F. Thalhammer. 2003. How to calculate clearance of highly protein-bound drugs during continuous venovenous hemofiltration demonstrated with flucloxacillin. Kidney Blood Press. Res. 26:135-140.
28. Mouton, J. W., H. P. Endtz, J. G. den Hollander, N. van den Braak, and H. A. Verbrugh. 1997. In-vitro activity of quinupristin/dalfopristin compared with other widely used antibiotics against strains isolated from patients with endocarditis. J. Antimicrob. Chemother. 39(Suppl. A):75-80.
29. Nauta, E. H., and H. Mattie. 1975. Pharmacokinetics of flucloxacillin and cloxacillin in healthy subjects and patients on chronic intermittent haemodialysis. Br. J. Clin. Pharmacol. 2:111-121.
30. Pharsight. 2002. WinNonlin user's guide. Pharsight, Mountain View, CA.
31. Roder, B. L., N. Frimodt-Moller, F. Espersen, and S. N. Rasmussen. 1995. Dicloxacillin and flucloxacillin: pharmacokinetics, protein binding and serum bactericidal titers in healthy subjects after oral administration. Infection 23:107-112.
32. Russmann, S., J. A. Kaye, S. S. Jick, and H. Jick. 2005. Risk of cholestatic liver disease associated with flucloxacillin and flucloxacillin prescribing habits in the UK: cohort study using data from the UK General Practice Research Database. Br. J. Clin. Pharmacol. 60:76-82.
33. Steer, J. A., R. P. Papini, A. P. Wilson, D. A. McGrouther, and N. Parkhouse. 1997. Teicoplanin versus flucloxacillin in the treatment of infection following burns. J. Antimicrob. Chemother. 39:383-392.

Population Pharmacokinetics at Two Dose Levels and Pharmacodynamic Profiling of Flucloxacillin. Antimicrob. Agents Chemother. Aug 2007, 51(9):3290-3297; DOI: 10.1128/AAC.01410-06
Solve the following: Suppose the rod with the balls $A$ and $B$ of the previous problem is clamped at the centre in such a way that it can rotate freely about a horizontal axis through the clamp. The system is kept at rest in the horizontal position. A particle $P$ of the same mass $m$ is dropped from a height $h$ on the ball $B$. The particle collides with $B$ and sticks to it. (a) Find the angular momentum and the angular speed of the system just after the collision. (b) What should be the minimum value of $h$ so that the system makes a full rotation after the collision?

(a) If we consider the two bodies $P$ and $B$ as the system, then during the collision $F_{\text{ext}} = 0$ and $\tau_{\text{ext}} = 0$, so linear and angular momentum are conserved.

Linear momentum: $m\sqrt{2gh} = 2mv' \implies v' = \sqrt{\dfrac{gh}{2}}$.

Angular momentum about the clamp: $L_i = m\sqrt{2gh}\left(\dfrac{L}{2}\right)$.

Setting $L_i = L_f = I\omega$, with $I = \dfrac{2mL^2}{4} + \dfrac{mL^2}{4} = \dfrac{3mL^2}{4}$:

$mL\sqrt{\dfrac{gh}{2}} = \dfrac{3mL^2}{4}\,\omega \implies \omega = \dfrac{\sqrt{8gh}}{3L}$.

(b) For the system to make a full rotation, the rotational kinetic energy just after the collision must be at least the increase in potential energy when the heavier ($B$) side is at the top: $\Delta U = (2m - m)g\dfrac{L}{2} = \dfrac{mgL}{2}$. Since $\dfrac{1}{2}I\omega^2 = \dfrac{1}{2}\cdot\dfrac{3mL^2}{4}\cdot\dfrac{8gh}{9L^2} = \dfrac{mgh}{3}$, the condition $\dfrac{mgh}{3} \geq \dfrac{mgL}{2}$ gives $h_{\min} = \dfrac{3L}{2}$.
Oscillations and asymptotic convergence for a delay differential equation modeling platelet production

Loïs Boullu, Mostafa Adimy, Fabien Crauste and Laurent Pujo-Menjouet

Univ Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, 43 blvd. du 11 novembre 1918, F-69622 Villeurbanne Cedex, France; Inria, Université de Lyon, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex, France. Corresponding author: [email protected]

Discrete & Continuous Dynamical Systems - B, June 2019, 24(6): 2417-2442. doi: 10.3934/dcdsb.2018259. Received July 2017; revised April 2018; published October 2018.

Fund Project: LB was supported by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). LB is also supported by a grant of Région Rhône-Alpes and benefited from the help of the France Canada Research Fund, of the NSERC, and of support from MITACS.

Abstract. We analyze the existence of oscillating solutions and the asymptotic convergence for a nonlinear delay differential equation arising from the modeling of platelet production. We consider four different cell compartments corresponding to different cell maturity levels: stem cells, megakaryocytic progenitors, megakaryocytes, and platelets compartments, and the quantity of circulating thrombopoietin (TPO), a platelet regulation cytokine. Our initial model consists of a nonlinear age-structured partial differential equation system, where each equation describes the dynamics of a single compartment. This system is reduced to a single nonlinear delay differential equation describing the dynamics of the platelet population, in which the delay accounts for a differentiation time. After introducing the model, we prove the existence of a unique steady state for the delay differential equation. Then we determine necessary and sufficient conditions for the existence of oscillating solutions. Next we set up conditions to get local asymptotic stability and asymptotic convergence of this steady state. Finally we present a short analysis of the influence of the conditions at t < 0 on the proof for asymptotic convergence.

Keywords: Megakaryopoiesis, platelet, oscillations, stability, delay. Mathematics Subject Classification: Primary: 34K11, 34K20; Secondary: 92D25.
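The reduced model in this abstract is a scalar nonlinear delay differential equation. As a rough illustration of how oscillations around a steady state can be explored numerically in such models, here is a minimal fixed-step Euler integration in C++ of a generic equation x'(t) = -gamma x(t) + f(x(t - tau)); the equation, parameter values, and the Hill-type feedback below are illustrative stand-ins, not the model derived in the paper.

// Minimal Euler integration of a scalar delay differential equation
//   x'(t) = -gamma * x(t) + f(x(t - tau))
// with a Hill-type negative feedback f. Generic illustration only,
// NOT the platelet model of the paper.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double gamma = 0.1, tau = 20.0, dt = 0.01;
    const double beta = 0.2, theta = 1.0;           // feedback parameters
    const int lag = static_cast<int>(tau / dt);     // delay expressed in steps
    const int steps = 400000;                       // total integration steps

    auto f = [&](double x) { return beta / (1.0 + std::pow(x / theta, 4.0)); };

    // History buffer: constant initial condition x(t) = 0.5 on [-tau, 0].
    std::vector<double> x(steps + 1);
    for (int i = 0; i <= lag; ++i) x[i] = 0.5;

    for (int i = lag; i < steps; ++i) {
        double delayed = x[i - lag];                // x(t - tau)
        x[i + 1] = x[i] + dt * (-gamma * x[i] + f(delayed));
    }
    // Coarse trace; if the steady state is unstable, a sustained
    // oscillation shows up as a repeating pattern in x.
    for (int i = lag; i <= steps; i += (steps - lag) / 40)
        std::printf("t=%8.1f  x=%.4f\n", (i - lag) * dt, x[i]);
}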
Figure 1. Model of megakaryopoiesis. The linear differentiation process, starting from HSC and ending with platelets, is positively regulated by TPO. The quantity of TPO is in turn modulated by the number of platelets: the more platelets, the less circulating TPO.

Figure 2. Oscillations appear when $\alpha_A$ increases. As $\alpha_A$ (the maximum number of platelets that a megakaryocyte can shed, see Equation (6)) increases, $R = rqe^{r(\gamma+p)} - \frac{1}{e}$ becomes positive and $x$ (blue) starts to oscillate around $x^*$ (dashed red). Black marks are placed where $x(t)$ goes through $x^*$. (A) $\alpha_A = 5000$, $R = -0.0492$, and there are no oscillations. (C) $\alpha_A = 10000$, $R = 7.6863$, and there are oscillations. (B) $\alpha_A = 20000$, $R = 83$, and there are oscillations.

Figure 3. Solutions of (18) with or without low initial slope. (Top) The solution goes through $x(t) = x^*$ after $t = r$; it meets the low initial slope criterion. (Bottom) The solution goes through $x(t) = x^*$ before $t = r$; it does not meet the low initial slope criterion.

Figure 4. An example of sequences $(y_n)_{n\in\mathbb{N}}$ and $(z_n)_{n\in\mathbb{N}}$. The decreasing (resp. increasing) sequence $(z_n)_{n\in\mathbb{N}}$ (resp. $(y_n)_{n\in\mathbb{N}}$) bounds $x(t)$ for $t > t_{2n}^*$ (resp. for $t > t_{2n-1}$).

Figure 5. Simplified model of megakaryopoiesis.

Figure 6. Initial slope and initial conditions. Four solutions of equation (44) (blue) where different initial conditions lead to different relative positions for $\tau$, $r$ (dashed green) and the time $t_0$ when $P(t)$ crosses $P^*$ (dashed red). (A) $P(0) = 0.95e^{\gamma\tau}P^*$ such that $t_0 < \tau$. (B) $P(0) = 1.1e^{\gamma\tau}P^*$ such that $t_0 > \tau$ (as implied by (45)). (C) $P(0) = 0.6 M_{mk}$ such that $t_0 > \tau$. (D) $P(0) = 1.1 M_{mk}$ such that $t_0 > r$ (as implied by (46)).
The Perspective and Orthographic Projection Matrix

What Are Projection Matrices and Where/Why Are They Used?
Projection Matrices: What You Need to Know First
Building a Basic Perspective Projection Matrix
The OpenGL Perspective Projection Matrix
About the Projection Matrix, the GPU Rendering Pipeline and Clipping
The OpenGL Orthographic Projection Matrix

What Will We Study in this Chapter?

In the first chapter of this lesson, we said that projection matrices were used in the GPU rendering pipeline. We mentioned that there were two GPU rendering pipelines: the old one, called the fixed-function pipeline, and the new one, which is generally referred to as the programmable rendering pipeline. We also talked about how clipping, the process that consists of discarding or trimming primitives that are either outside or straddling the boundaries of the frustum, happens somehow while the points are being transformed by the projection matrix. Finally, we also explained that, in fact, projection matrices don't convert points from camera space to NDC space but to homogeneous clip space. It is time to provide more information about these different topics. Let's explain what it means when we say that clipping happens while the points are being transformed. Let's explain what clip space is. And finally, let's review how the projection matrices are used in the old and the new GPU rendering pipeline.

Clipping and Clip Space

Figure 1: example of clipping in 2D. At the clipping stage, new triangles may be generated wherever the original geometry overlaps the boundaries of the viewing frustum.

Figure 2: example of clipping in 3D.

Let's recall quickly that the main purpose of clipping is essentially to "reject" geometric primitives which are behind the eye or located exactly at the eye position (this would mean a division by 0, which we don't want) and, more generally, to trim off the parts of geometric primitives which are outside the viewing area (more information on this topic can be found in chapter 2). This viewing area is defined by the truncated pyramid of the perspective or viewing frustum. Any professional rendering system actually needs to implement this step somehow. Note that the process can result in more triangles than the scene initially contained, as shown in Figure 1. The most common clipping algorithms are the Cohen-Sutherland algorithm for lines and the Sutherland-Hodgman algorithm for polygons.

It happens that clipping is more easily done in clip space than in camera space (before vertices are transformed by the projection matrix) or screen space (after the perspective divide). Remember that when the points are transformed by the projection matrix, they are first transformed as you would with any other 4x4 matrix, and the transformed coordinates are then normalized: that is, the x-, y-, and z-coordinates of the transformed points are divided by the transformed point's z-coordinate. Clip space is the space points are in just before they get normalized.

In summary, what happens on a GPU is this. Points are transformed from camera space to clip space in the vertex shader. The input vertex is converted from Cartesian coordinates to homogeneous coordinates, and its w-coordinate is set to 1. The predefined gl_Position variable, in which the transformed point is stored, is also a point with homogeneous coordinates. Though when the input vertex is multiplied by the projection matrix, the normalization step is not yet performed: gl_Position is in homogeneous clip space.
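To make the distinction concrete, here is a minimal C++ sketch of the two steps that come after the vertex shader: testing a gl_Position-style homogeneous point against the clip volume (the inequalities used here are derived just below) and then applying the perspective divide. Keep in mind that a real clipper trims triangles against the planes of this volume; the point test only illustrates the volume itself. The small Vec structs are local helpers, not part of any API.

// Minimal sketch: clip-volume test and perspective divide for a point
// in homogeneous clip space (as left in gl_Position by the vertex shader).
#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// A point is inside the clip volume if -w <= x, y, z <= w (and w > 0).
bool insideClipVolume(const Vec4 &v)
{
    return v.w > 0 &&
           -v.w <= v.x && v.x <= v.w &&
           -v.w <= v.y && v.y <= v.w &&
           -v.w <= v.z && v.z <= v.w;
}

// The perspective divide maps clip space to NDC, where visible
// coordinates fall in the range [-1,1].
Vec3 perspectiveDivide(const Vec4 &v)
{
    return { v.x / v.w, v.y / v.w, v.z / v.w };
}

int main()
{
    Vec4 p = { 0.5f, -1.2f, 1.8f, 2.0f };  // some clip-space point
    if (insideClipVolume(p)) {
        Vec3 ndc = perspectiveDivide(p);
        std::printf("NDC: %f %f %f\n", ndc.x, ndc.y, ndc.z);
    }
}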
When all the vertices have been processed by the vertex shader, triangles whose vertices are now in clip space are clipped. Once clipping is done, all vertices are normalized: the x-, y-, and z-coordinates of each vertex are divided by their respective w-coordinate. This is where and when the perspective divide occurs.

Let's recall that, after the normalization step, points which are visible to the camera are all contained in the range [-1,1] both in x and y. This happens in the last part of the point-matrix multiplication process, when the coordinates are normalized as we just said:

$$\begin{array}{l} -1 \leq \dfrac{x'}{w'} \leq 1 \\ -1 \leq \dfrac{y'}{w'} \leq 1 \\ -1 \leq \dfrac{z'}{w'} \leq 1 \end{array}$$

Or: \(0 \leq \dfrac{z'}{w'} \leq 1\) depending on the convention you are using. Therefore we can also write:

$$\begin{array}{l} -w' \leq x' \leq w' \\ -w' \leq y' \leq w' \\ -w' \leq z' \leq w' \end{array}$$

This is the state x', y' and z' are in before they get normalized by w', or, to say it differently, when the coordinates are in clip space. We can add a fourth equation: \(0 \lt w'\). The purpose of this equation is to guarantee that we will never divide any of the coordinates by 0 (which would be a degenerate case).

These equations work mathematically. You don't really need to try to picture what the vertices look like or what it means to work with a four-dimensional space. All they say is that the clip space of a given vertex whose coordinates are {x, y, z} is defined by the extents [-w,w] (the w value indicates what the dimensions of the clip space are). Note that this clip space is the same for each coordinate of the point, so the clip space of any given vertex is a cube. Note also, though, that each point is likely to have its own clip space (each set of x-, y-, and z-coordinates is likely to have a different w value). In other words, every vertex exists in its own clip space (and basically needs to "fit" in it).

This lesson is only about projection matrices. All we need to know in the context of this lesson is where clipping occurs in the vertex transformation pipeline and what clip space means, which we just explained. Everything else will be explained in the lessons on the Sutherland-Hodgman and the Cohen-Sutherland algorithms, which you can find in the Advanced Rasterization Techniques section.

The "Old" Point (or Vertex) Transformation Pipeline

The fixed-function pipeline is now deprecated in OpenGL and other graphics APIs. Do not use it anymore. Use the "new" programmable GPU rendering pipeline instead. We only kept this section for reference and because you might still come across some articles on the Web referencing methods from the old pipeline. "Vertex" is a better term when it comes to describing how points (vertices) are transformed in OpenGL (or Direct3D, Metal, or any other graphics API you can think of).

OpenGL (and other graphics APIs) had, in the old fixed-function pipeline, two possible modes for modifying the state of the camera: GL_PROJECTION and GL_MODELVIEW. GL_PROJECTION allowed to set the projection matrix itself. As we know by now (see the previous chapter), this matrix is built from the left, right, bottom and top screen coordinates (which are computed from the camera's field of view and near clipping plane), as well as the near and far clipping planes (which are parameters of the camera). These parameters define the shape of the camera's frustum, and all the vertices or points from the scene contained within this frustum are visible.
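As a reminder of what such a matrix looks like in code (it is derived in the previous chapter), here is a sketch that fills a 4x4 perspective projection matrix from these six parameters. It uses the standard OpenGL column-major layout and column-vector convention; if, like most of Scratchapixel's C++ code, you store matrices row-major and multiply row vectors, you would transpose it.

// Sketch: build an OpenGL-style perspective projection matrix from the
// left/right/bottom/top/near/far frustum parameters (the same parameters
// as glFrustum). Column-major layout, column-vector convention.
void setFrustumMatrix(float l, float r, float b, float t,
                      float n, float f, float m[16])
{
    for (int i = 0; i < 16; ++i) m[i] = 0;
    m[0]  = 2 * n / (r - l);
    m[5]  = 2 * n / (t - b);
    m[8]  = (r + l) / (r - l);
    m[9]  = (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1;                      // moves -z into w for the perspective divide
    m[14] = -2 * f * n / (f - n);
}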
In OpenGL, these parameters were passed to the API through a call to glFrustum (which we show an implementation of in the previous chapter):

glFrustum(float left, float right, float bottom, float top, float near, float far);

GL_MODELVIEW mode allowed to set the world-to-camera matrix. A typical OpenGL program set the perspective projection matrix and the model-view matrix using the following sequence of calls:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(l, r, b, t, n, f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, 10);
...

First we would make the GL_PROJECTION mode active (line 1). Next, to set up the projection matrix, we would make a call to glFrustum, passing as arguments to the function the left, right, bottom and top screen coordinates as well as the near and far clipping planes. Once the projection matrix was set, we would switch to the GL_MODELVIEW mode (line 4). Actually, the GL_MODELVIEW matrix could be seen as the combination of the "VIEW" transformation matrix (the world-to-camera matrix) with the "MODEL" matrix, which is the transformation applied to the object (the object-to-world matrix). There was no concept of a world-to-camera transform separate from the object-to-world transform. The two transforms were combined in the GL_MODELVIEW matrix.

$${GL\_MODELVIEW} = M_{object-to-world} * M_{world-to-camera}$$

First, a point \(P_w\) expressed in world space was transformed to camera space (or eye space) using the GL_MODELVIEW matrix. The resulting point \(P_c\) was then projected onto the image plane using the GL_PROJECTION matrix. We ended up with a point expressed in homogeneous coordinates, in which the coordinate w contained the point \(P_c\)'s z-coordinate.

The Vertex Transformation Pipeline in the New Programmable GPU Rendering Pipeline

The pipeline in the new programmable GPU rendering pipeline is more or less the same as the old pipeline, but what is really different in this new pipeline is the way you set things up. In the new pipeline, there is no more concept of GL_PROJECTION or GL_MODELVIEW mode. This step can now be freely programmed in a vertex shader. As mentioned in the first chapter of this lesson, the vertex shader is like a small program. You can program this vertex shader to tell the GPU how the vertices making up the geometry of the scene should be processed. In other words, this is where you should be doing all your vertex transformations: the world-to-camera transformation if necessary, but more importantly the projection transformation. A program using an OpenGL API doesn't produce an image if the vertex and its companion fragment shader are not defined. The simplest form of vertex shader looks like this:

in vec3 vert;

void main()
{
    // does not alter the vertices at all
    gl_Position = vec4(vert, 1);
}

This program doesn't even transform the input vertex with a perspective projection matrix, which in some cases can produce a visible result depending on the size and the position of the geometry as well as how the viewport is set. But this is not relevant in this lesson. What we can see by looking at this code is that the input vertex is converted to a vec4, which is nothing else than a point with homogeneous coordinates. Note that gl_Position too is a point with homogeneous coordinates. As expected, the vertex shader outputs the position of the vertex in clip space (see the diagram of the vertex transformation pipeline above).
In reality, you are more likely to use a vertex shader like this one:

uniform mat4 worldToCamMatrix, projMatrix;
in vec3 vert;

void main()
{
    gl_Position = projMatrix * worldToCamMatrix * vec4(vert, 1);
}

It uses both a world-to-camera and a projection matrix to transform the vertex to camera space and then clip space. Both matrices are set externally in the program, using calls (glGetUniformLocation to find the location of the variable in the shader, and glUniformMatrix4fv to set the matrix variable using the previously found location) that are provided to you by the OpenGL API:

Matrix44f worldToCamera = ...
// See the note below to learn about whether you need to transpose the
// matrix or not before using it in glUniformMatrix4fv
//worldToCamera.transposeMe();
//projMatrix.transposeMe();
GLuint projMatrixLoc = glGetUniformLocation(p, "projMatrix");
GLuint worldToCamLoc = glGetUniformLocation(p, "worldToCamMatrix");
glUniformMatrix4fv(projMatrixLoc, 1, GL_FALSE, projMatrix);
glUniformMatrix4fv(worldToCamLoc, 1, GL_FALSE, worldToCamera);

Edit - January 2017: do I need to transpose the matrix in an OpenGL program or not?

Despite our effort to make things as clear as possible, it is easy to still get confused by questions such as "should I transpose my matrix before passing it to the graphics pipeline?". In the OpenGL specifications, matrices were/are written using the column-major order convention. The confusing part is that API calls such as glUniformMatrix4fv() accept coefficients mapped in memory in the row-major form. In conclusion, if in your code the coefficients of the matrices are laid out in memory in row-major order, then you don't need to transpose the matrix. Otherwise, you may have to. You "may" because in fact this is something you can control via a flag in the glUniformMatrix4fv() function itself. The third parameter of the function, which is set to GL_FALSE in the example above, indicates to the graphics API whether you wish the API to transpose the coefficients of the matrix for you. So even if your coefficients are mapped in memory in column-major order, you don't necessarily need to transpose matrices specifically before using them with glUniformMatrix4fv(). What you can do instead is set the transpose flag of glUniformMatrix4fv() to GL_TRUE.

In fact, things get even more confusing if you look at the order in which the matrices are used in the OpenGL vertex shader. You will notice we write \(Proj * View * vtx\) instead of \(vtx * View * Proj\). The former form is used when you deal with column-major matrices (because it implies that you multiply the matrix by the point rather than the point by the matrix, as explained in our lesson on Geometry). Conclusion? OpenGL assumes matrices are column-major (so this is how you need to use them in shaders), yet coefficients are mapped in memory using a row-major order form. Confusing? Remember that matrices in OpenGL (and vectors) use column-major order. Thus, if you use row vectors like we do on Scratchapixel, you will need to transpose the matrix before setting up the matrix of the vertex shader (line 2). There are other ways of doing this in modern OpenGL, but we will skip them in this lesson, which is not devoted to that topic. This information can easily be found on the Web anyway.
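To summarize the transpose question in code: the two calls below are equivalent ways of uploading a matrix that is stored row-major on the CPU side (as in this lesson's convention). glUniformMatrix4fv and its transpose flag are the real OpenGL API; the Matrix44f type, its operator[] giving access to the coefficient array, and the transposed() helper are assumptions borrowed from this lesson's codebase, not standard OpenGL.

// Two equivalent ways to upload a row-major CPU matrix to a mat4 uniform.
// Option 1: let OpenGL transpose the coefficients for us.
glUniformMatrix4fv(projMatrixLoc, 1, GL_TRUE, &projMatrix[0][0]);

// Option 2: transpose it ourselves, then upload with GL_FALSE.
Matrix44f projTransposed = projMatrix.transposed(); // assumed helper method
glUniformMatrix4fv(projMatrixLoc, 1, GL_FALSE, &projTransposed[0][0]);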
Meanwhile, the APAC has been identified as the fastest growing regional market. The region's massive population size, of which a significant share belongs to the geriatric demographic, is expected to drive growth. Moreover, the region is undergoing healthcare reforms and is increasingly adopting advanced medical technology. Growth opportunities in this regional market are high.

Phenotropil is an over-the-counter supplement similar in structure to Piracetam (and Noopept). This synthetic smart drug has been used to treat stroke, epilepsy and trauma recovery. A 2005 research paper also demonstrated that patients diagnosed with natural lesions or brain tumours see improvements in cognition. Phenylpiracetam intake can also result in minimised feelings of anxiety and depression. This is one of the more powerful unscheduled nootropics available.

By which I mean that simple potassium is probably the most positively mind-altering supplement I've ever tried… About 15 minutes after consumption, it manifests as a kind of pressure in the head or temples or eyes, a clearing up of brain fog, increased focus, and the kind of energy that is not jittery but the kind that makes you feel like exercising would be the reasonable and prudent thing to do. I have done no tests, but feel smarter from this in a way that seems much stronger than piracetam or any of the conventional weak nootropics. It is not just me: I have been introducing this around my inner social circle, and 7 of 10 people felt immediately noticeable effects. The 3 that didn't notice much were vegetarians and less likely to have been deficient. Now that I'm not deficient, it is of course not noticeable as mind-altering, but still serves to be energizing, particularly for sustained mental energy as the night goes on… Potassium chloride initially, but since bought some potassium gluconate pills… research indicates you don't want to consume large amounts of chloride (just moderate amounts).

Power times prior times benefit, minus the cost of experimentation: (0.20 x 0.30 x $540) - $41 = -$9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).

This is one of the few times we've actually seen a nootropic supplement take a complete leverage on the nootropic industry with the name Smart Pill. To be honest, we don't know why other companies haven't followed suit yet: it's an amazing name. Simple, and to the point. Coming from supplement maker Only Natural, Smart Pill makes some pretty bold claims regarding their pills being completely natural, whilst maintaining good quality. This is their niche, or Only Natural's niche, for that matter. They create supplements, in this case Smart Pill, with the…
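Returning to the value-of-information arithmetic a few paragraphs above, here is the calculation spelled out term by term (all figures are the ones given in the text):

VoI = (power) x (prior probability the result changes my mind) x (benefit) - (cost)
    = 0.20 x 0.30 x $540 - $41
    = $32.40 - $41
    ≈ -$9

and with 40% power: 0.40 x 0.30 x $540 - $41 = $64.80 - $41 = +$23.80, which is why the higher-powered version of the experiment would be worth running.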
An older technique, transcranial direct current stimulation (tDCS), has become the subject of renewed research interest and has proven capable of enhancing the cognitive performance of normal healthy individuals in a variety of tasks. For example, Flöel, Rösser, Michka, Knecht, and Breitenstein (2008) reported enhancement of learning, and Dockery, Hueckel-Weng, Birbaumer, and Plewnia (2009) reported enhancement of planning with tDCS.

While the commentary makes effective arguments - that this isn't cheating, because cheating is based on what the rules are; that this is fair, because hiring a tutor isn't outlawed for being unfair to those who can't afford it; that this isn't unnatural, because humans with computers and antibiotics have been shaping what is natural for millennia; that this isn't drug abuse any more than taking multivitamins is - the authors seem divorced from reality in the examples they provide of effective stimulant use today.

Two increasingly popular options are amphetamines and methylphenidate, which are prescription drugs sold under the brand names Adderall and Ritalin. In the United States, both are approved as treatments for people with ADHD, a behavioural disorder which makes it hard to sit still or concentrate. Now they're also widely abused by people in highly competitive environments, looking for a way to remain focused on specific tasks.

l-Theanine (Examine.com) is occasionally mentioned on Reddit or Imminst or LessWrong but is rarely a top-level post or article; this is probably because theanine was discovered a very long time ago (>61 years ago), and it's a pretty straightforward substance. It's a weak relaxant/anxiolytic (Google Scholar) which is possibly responsible for a few of the health benefits of tea, and which works synergistically with caffeine (and is probably why caffeine delivered through coffee feels different from the same amount consumed in tea - in one study, separate caffeine and theanine were a mixed bag, but the combination beat placebo on all measurements). The half-life in humans seems to be pretty short, with van der Pijl 2010 putting it at ~60 minutes. This suggests to me that regular tea consumption over a day is best, or at least that one should lower caffeine use - combining caffeine and theanine into a single-dose pill has the problem that caffeine's half-life is much longer, so the caffeine will still be acting after the theanine has been largely eliminated. The problem with getting it via tea is that teas can vary widely in their theanine levels and the variations don't seem to be consistent either, nor is it clear how to estimate them. (If you take a large dose of theanine like 400mg in water, you can taste the sweetness, but it's subtle enough I doubt anyone can actually distinguish the theanine levels of tea; incidentally, r-theanine - the useless racemic other version - anecdotally tastes weaker and less sweet than l-theanine.)

Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question!
(What actually wound up infuriating me the most was when capsules would stick in either the bottom or top tray - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)

The blood half-life is 12-36 hours; hence two or three days ought to be enough to build up and wash out. A week-long block is reasonable since that gives 5 days for effects to manifest, although month-long blocks would not be a bad choice either. (I prefer blocks which fit in round periods because it makes self-experiments easier to run if the blocks fit in normal time-cycles like day/week/month. The most useless self-experiment is the one abandoned halfway.)

My general impression is positive; it does seem to help with endurance and extended the effect of piracetam+choline, but is not as effective as that combo. At $20 for 30g (bought from Smart Powders), I'm not sure it's worthwhile, but I think at $10-15 it would probably be worthwhile. Sulbutiamine seems to affect my sleep negatively, like caffeine. I bought 2 or 3 canisters for my third batch of pills along with the theanine. For a few nights in a row, I slept terribly and stayed awake thinking until the wee hours of the morning; eventually I realized it was because I was taking the theanine pills along with the sleep-mix pills, and the only ingredient in the batch that was a stimulant was - sulbutiamine. I cut out the theanine pills at night, and my sleep went back to normal. (While very annoying, this, like the creatine & taekwondo example, does tend to prove to me that sulbutiamine was doing something and it is not pure placebo effect.)

Learning how products have worked for other users can help you feel more confident in your purchase. Similarly, your opinion may help others find a good quality supplement. After you have started using a particular supplement and experienced the benefits of nootropics for memory, concentration, and focus, we encourage you to come back and write your own review to share your experience with others.

Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like a short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects to nicotine, so I gave it a try.

My answer is that this is not a lot of research, nor very good research (not nearly as good as the research on nicotine, e.g.), and assuming it's true, I don't value long-term memory that much because LTM is something that is easily assisted or replaced (personal archives, and spaced repetition). For me, my problems tend to be more about akrasia and energy and not getting things done, so even if a stimulant comes with a little cost to long-term memory, it's still useful for me. I'm going to continue to use the caffeine.
It's not so bad in conjunction with tea, is very cheap, and I'm already addicted, so why not? Caffeine is extremely cheap, addictive, has minimal effects on health (and may be beneficial, from the various epidemiological associations with tea/coffee/chocolate & longevity), and costs extra to remove from drinks popular regardless of their caffeine content (coffee and tea again). What would be the point of carefully investigating it? Suppose there was conclusive evidence on the topic; the value of this evidence to me would be roughly $0 or, since ignorance is bliss, negative money - because unless the negative effects were drastic (which current studies rule out, although tea has other issues like fluoride or metal contents), I would not change anything about my life. Why? I enjoy my tea too much. My usual tea seller doesn't even have decaffeinated oolong in general, much less the various varieties I might want to drink, apparently because de-caffeinating is so expensive it's not worthwhile. What am I supposed to do, give up my tea and caffeine just to save on the cost of caffeine? Buy de-caffeinating machines (which I couldn't even find any prices for, googling)? This also holds true for people who drink coffee or caffeinated soda. (As opposed to a drug like modafinil, which is expensive, and so the value of a definitive answer is substantial and would justify some more extensive calculating of cost-benefit.)

Another well-known smart drug classed as a cholinergic is Sulbutiamine, a synthetic derivative of thiamine which crosses the blood-brain barrier and has been shown to improve memory while reducing psycho-behavioral inhibition. While Sulbutiamine has been shown to exhibit cholinergic regulation within the hippocampus, the reasons for the drug's discernible effects on the brain remain unclear. This smart drug, available over the counter as a nutritional supplement, has a long history of use, and appears to have no serious side effects at therapeutic levels.

Some people aren't satisfied with a single supplement - the most devoted self-improvers buy a variety of different compounds online and create their own custom regimens, which they call "stacks." According to Kaleigh Rogers, writing in Vice last year, companies will now take their customers' genetic data from 23andMe or another source and use it to recommend the right combinations of smart drugs to optimize each individual's abilities. The problem with this practice is that there's no evidence the practice works. (And remember, the FDA doesn't regulate supplements.)

It is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration- and motivation-boosting properties.

"Cavin Balaster knows brain injury as well as any specialist. He survived a horrific accident and came out on the other side stronger than ever. His book, "How To Feed A Brain", details how changing his diet helped him to recover further from the devastating symptoms of brain injury such as fatigue and brain fog. Cavin is able to thoroughly explain complex issues in a simplified manner so the reader does not need a medical degree to understand. The book also includes comprehensive charts to simplify what the body needs and how to provide the necessary foods.
"How To Feed A Brain" is a great resource for anyone looking to improve their health through diet, brain injury not required." And there are other uses that may make us uncomfortable. The military is interested in modafinil as a drug to maintain combat alertness. A drug such as propranolol could be used to protect soldiers from the horrors of war. That could be considered a good thing – post-traumatic stress disorder is common in soldiers. But the notion of troops being unaffected by their experiences makes many feel uneasy. Never heard of OptiMind before? This supplement promotes itself as an all-natural nootropic supplement that increases focus, improves memory, and enhances overall mental drive. The product first captured our attention when we noticed that their supplement blend contains a few of the same ingredients currently present in our editor's #1 choice. So, of course, we grew curious to see whether their formula was as (un)successful as their initial branding techniques. Keep reading to find out what we discovered… Learn More... By the end of 2009, at least 25 studies reported surveys of college students' rates of nonmedical stimulant use. Of the studies using relatively smaller samples, prevalence was, in chronological order, 16.6% (lifetime; Babcock & Byrne, 2000), 35.3% (past year; Low & Gendaszek, 2002), 13.7% (lifetime; Hall, Irwin, Bowman, Frankenberger, & Jewett, 2005), 9.2% (lifetime; Carroll, McLaughlin, & Blake, 2006), and 55% (lifetime, fraternity students only; DeSantis, Noar, & Web, 2009). Of the studies using samples of more than a thousand students, somewhat lower rates of nonmedical stimulant use were found, although the range extends into the same high rates as the small studies: 2.5% (past year, Ritalin only; Teter, McCabe, Boyd, & Guthrie, 2003), 5.4% (past year; McCabe & Boyd, 2005), 4.1% (past year; McCabe, Knight, Teter, & Wechsler, 2005), 11.2% (past year; Shillington, Reed, Lange, Clapp, & Henry, 2006), 5.9% (past year; Teter, McCabe, LaGrange, Cranford, & Boyd, 2006), 16.2% (lifetime; White, Becker-Blease, & Grace-Bishop, 2006), 1.7% (past month; Kaloyanides, McCabe, Cranford, & Teter, 2007), 10.8% (past year; Arria, O'Grady, Caldeira, Vincent, & Wish, 2008); 5.3% (MPH only, lifetime; Du-Pont, Coleman, Bucher, & Wilford, 2008); 34% (lifetime; DeSantis, Webb, & Noar, 2008), 8.9% (lifetime; Rabiner et al., 2009), and 7.5% (past month; Weyandt et al., 2009). MPH was developed more recently and marketed primarily for ADHD, although it is sometimes prescribed off label or used nonmedically to increase alertness, energy, or concentration in conditions other than ADHD. Both MPH and AMP are on the list of substances banned from sports competitions by the World Anti-Doping Agency (Docherty, 2008). Both also have the potential for abuse and dependence, which detracts from their usefulness and is the reason for their classification as Schedule II controlled substances. Although the risk of developing dependence on these drugs is believed to be low for individuals taking them for ADHD, the Schedule II classification indicates that these drugs have a high potential for abuse and that abuse may lead to severe dependence. Taking the tryptophan is fairly difficult. The powder as supplied by Bulk Nutrition is extraordinarily dry and fine; it seems to be positively hydrophobic. The first time I tried to swallow a teaspoon, I nearly coughed it out - the power had seemed to explode in my mouth and go down my lungs. Thenceforth I made sure to have a mouth of water first. 
After a while, I took a different tack: I mixed in as much Hericium as would fit in the container. The mushroom powder is wetter and chunkier than the tryptophan, and seems to reduce the problem. Combining the mix with chunks of melatonin inside a pill works even better.

The ethics of cognitive enhancement have been extensively debated in the academic literature (e.g., Bostrom & Sandberg, 2009; Farah et al., 2004; Greely et al., 2008; Mehlman, 2004; Sahakian & Morein-Zamir, 2007). We do not attempt to review this aspect of the problem here. Rather, we attempt to provide a firmer empirical basis for these discussions. Despite the widespread interest in the topic and its growing public health implications, there remains much researchers do not know about the use of prescription stimulants for cognitive enhancement.

The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: with millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them - yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away.

We can read off the results from the table or graph: the nicotine days average 1.1% higher, for an effect size of 0.24; however, the 95% credible interval (the Bayesian equivalent of a confidence interval) goes all the way from 0.93 down to -0.44, so we cannot exclude an effect of 0 and certainly cannot claim confidence that the effect size must be >0.1. Specifically, the analysis gives a 66% chance that the effect size is >0.1; the short sketch at the end of this passage shows how that figure can be recovered from the quoted interval. (One might wonder if any increase is due purely to a training effect - getting better at DNB. Probably not.)

But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage, and eventually success might be dependent on access to these mind-improving drugs. No major studies have been conducted on the long-term effects. Some neuroscientists fear that, over time, these memory-enhancing pills may cause people to store too much detail, cluttering the brain.
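As a rough illustration of the arithmetic above: if the posterior for the effect size is approximated as normal, the quoted mean and 95% credible interval pin down the distribution, and the probability that the effect exceeds 0.1 falls out directly. This is a simplification of whatever full Bayesian model produced the numbers, not a reproduction of it.

```python
from scipy.stats import norm

# Posterior for the nicotine effect size, approximated as normal:
# mean 0.24, with a 95% credible interval of (-0.44, 0.93).
mean = 0.24
sd = (0.93 - (-0.44)) / (2 * 1.96)   # ~0.35, recovered from the interval width

# Probability that the true effect size exceeds 0.1
p = 1 - norm.cdf(0.1, loc=mean, scale=sd)
print(round(p, 2))  # ~0.66, matching the 66% quoted above
```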
Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15 mg) iodine supplement or a daily placebo supplement for 32 weeks…After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32-week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), or Letter-Number Sequencing (p=0.408).

The evidence? Found helpful in reducing bodily twitching in myoclonus epilepsy, a rare disorder, but otherwise little studied. Mixed evidence from a study published in 1991 suggests it may improve memory in subjects with cognitive impairment. A meta-analysis published in 2010 that reviewed studies of piracetam and other racetam drugs found that piracetam was somewhat helpful in improving cognition in people who had suffered a stroke or brain injury; the drugs' effectiveness in treating depression and reducing anxiety was more significant.

One study of helicopter pilots suggested that 600 mg of modafinil given in three doses can be used to keep pilots alert and maintain their accuracy at pre-deprivation levels for 40 hours without sleep.[60] However, significant levels of nausea and vertigo were observed. Another study of fighter pilots showed that modafinil given in three divided 100 mg doses sustained the flight control accuracy of sleep-deprived F-117 pilots to within about 27% of baseline levels for 37 hours, without any considerable side effects.[61] In an 88-hour sleep loss study of simulated military ground operations, 400 mg/day doses were mildly helpful at maintaining alertness and performance of subjects compared to placebo, but the researchers concluded that this dose was not high enough to compensate for most of the effects of complete sleep loss.

Two additional studies used other spatial working memory tasks. Barch and Carter (2005) required subjects to maintain one of 18 locations on the perimeter of a circle in working memory and then report the name of the letter that appeared there in a similarly arranged circle of letters. d-AMP caused a speeding of responses but no change in accuracy. Fleming et al. (1995) referred to a spatial delay response task, with no further description or citation. They reported no effect of d-AMP in the task except in the zero-delay condition (which presumably places minimal demand on working memory). Two additional studies assessed the effects of d-AMP on visual-motor sequence learning, a form of nondeclarative, procedural learning, and found no effect (Kumari et al., 1997; Makris, Rush, Frederich, Taylor, & Kelly, 2007). In a related experimental paradigm, Ward, Kelly, Foltin, and Fischman (1997) assessed the effect of d-AMP on the learning of motor sequences from immediate feedback and also failed to find an effect.

These are quite abstract concepts, though. There is a large gap, a grey area, between these concepts and our knowledge of how the brain functions physiologically - and it's in this grey area that cognitive enhancer development has to operate.
Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."

With the right lifestyle and the right stack of supplements and nootropics, you can enjoy enhanced mental clarity, easier flow, and better vision. The best nootropics for your needs will depend on how much you want to spend, how often you want to take them, and what you want to take them for. Nutritional supplements should be taken daily, for the cumulative effect, but smart drugs such as noopept and modafinil are usually taken on an as-needed basis, for those times when you are aiming for hyperfocus, better clarity, and better recall, or the ability to process a huge amount of incoming visual information quickly and accurately and to pick up on details that you might otherwise miss.

Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200 mg armodafinil was probably too much to take during the day, with its long half-life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted, as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics were omitted because it was a daytime usage.

I don't believe there's any need to control for training with repeated within-subject sampling, since there will be as many samples on both control and active days drawn from the later trained period as from the initial untrained period. But yes, my D5B scores seem to have plateaued pretty much and only very slowly increase; you can look at the stats file yourself.

"A system that will monitor their behavior and send signals out of their body and notify their doctor? You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia," says Paul Appelbaum, director of Columbia University's psychiatry department, in an interview with the New York Times.

…Four subjects correctly stated when they received nicotine, five subjects were unsure, and the remaining two stated incorrectly which treatment they received on each occasion of testing. These numbers are sufficiently close to chance expectation that even the four subjects whose statements corresponded to the treatments received may have been guessing.

There is no shortage of nootropics available for purchase online that can be shipped to you nearly anywhere in the world. Yet many of these supplements and drugs have very few studies, particularly human studies, confirming their results.
While this lack of research may not scare away more adventurous neurohackers, many people would prefer to […]

There's been a lot of talk about the ketogenic diet recently - proponents say that minimizing the carbohydrates you eat and ingesting lots of fat can train your body to burn fat more effectively. It's meant to help you both lose weight and keep your energy levels constant. The diet was first studied and used in patients with epilepsy, who suffered fewer seizures when their bodies were in a state of ketosis. Because seizures originate in the brain, this discovery showed researchers that a ketogenic diet can definitely affect the way the brain works. Brain hackers naturally started experimenting with diets to enhance their cognitive abilities, and now a company called HVMN even sells ketone esters in a bottle; to achieve these compounds naturally, you'd have to avoid bread and cake.

Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced.

Soldiers should never be treated like children, because then they will act like them. However, there's a reason why the 1SG is known as the Mother of the Company and the Platoon Sergeant is known as a Platoon Daddy: they run the day-to-day operations of the household, get the kids to school so to speak, and focus on the minutiae of readiness and operational execution in all its glory. Officers forget they are the second link in the chain of command, and a well-operating duo of Team Leader and Squad Leader should be handling 85% of all Soldier issues, while the Platoon Sergeant handles the other 15% with the 1SG. Platoon Leaders and Commanders should always be present: training, leading by example, focusing on culture building, tracking and supporting NCOs. They should be focused on the big-picture side of things, stepping in to administer punishment or to award and reward performance. If an officer at any level is having to step into a Soldier's day-to-day life, an NCO at some level is failing. Officers should be junior Officers and junior Enlisted right alongside their counterparts instead of eating their young and touting their "maturity" or status. If anything, Officers should be asking their NCOs where they should effect, assist, support, or provide cover toward initiatives and plans that create consistency and controlled chaos for the growth of individuals two levels up and one level down in operational capabilities at every echelon of command.

But how to blind myself? I used my pill maker to make 9 OO pills of piracetam mix, and then 9 OO pills of piracetam mix plus the Adderall, and I put them in a baggy. The idea is that I can blind myself as to which pill I am taking that day, since at the end of the day I can just look in the baggy and see whether a placebo or an Adderall pill is missing: the big capsules are transparent, so I can see whether there is a crushed-up blue Adderall inside or not. If there are fewer Adderall than placebo, I took an Adderall, and vice versa.
Now, since I am checking at the end of each day, I also need to remove or add the opposite pill to maintain the ratio and make it easy to check the next day; more importantly, I need to replace or remove a pill because otherwise the odds will be skewed and I will know how they are skewed. (Imagine I started with 4 Adderalls and 4 placebos, and then 3 days in a row I draw placebos but I don't add or remove any pills; the next day, because most of the placebos have been used up, there's only a small chance I will get a placebo…)

We have established strict criteria for reviewing brain enhancement supplements. Our reviews are clear, detailed, and informative to help you find supplements that deliver the best results. You can read our reviews, learn about the best nootropic ingredients, compare formulas, and find out how each supplement performed according to specific criteria.

One often-cited study published in the British Journal of Pharmacology looked at cognitive function in the elderly and showed that racetam helped to improve their brain function. Another study, which was published in Psychopharmacology, looked at adult volunteers (including those who are generally healthy) and found that piracetam helped improve their memory.

My first dose on 1 March 2017, at the recommended 0.5 ml/1.5 mg, was miserable, as I felt like I had the flu and had to nap for several hours before I felt well again, requiring 6 h to return to normal; after waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3×1.5 mg, but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.

The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), its popularity in college as a study drug, and reported moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall were the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was, and the next day I guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512$! The experiment probably used up no more than an hour or two total. (A short sketch at the end of this passage spells out this arithmetic.)

One claim was partially verified in passing by Eliezer Yudkowsky (Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar…About the same as drinking a cup of coffee - i.e., it works as a perker-upper, somehow.
I'm not sure, since it doesn't do anything for me except possibly mitigate foot cramps.)

It is a known fact that cognitive decline is often linked to aging. It may not be as visible as skin aging, but the brain does in fact age. Often, cognitive decline is not noticeable because it can be as mild as forgetting the names of people. However, research has shown that even in healthy adults, cognitive decline can start as early as the late twenties or early thirties.
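A minimal sketch of the cost-benefit formula quoted earlier, reading the $\ln 1.05$ term as continuous discounting of a $200/year perpetuity at 5%; the probabilities are the ones stated in the text, while the perpetuity framing is my interpretation rather than anything spelled out there.

```python
import math

annual_value = 200     # $/year saved if Adderall turns out to be worth using
prior_works  = 0.50    # stated prior that Adderall is useful
info_quality = 0.25    # penalty for the informal, underpowered experiment
discount     = 0.05    # annual discount rate

# Value of a perpetual $200/year stream under continuous discounting,
# scaled by the probability it applies and by the quality of the evidence.
voi = annual_value / math.log(1 + discount) * prior_works * info_quality
print(round(voi))  # ~512, matching the figure in the text
```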
Design of an Executable ANFIS-based Control System to Improve the Attitude and Altitude Performances of a Quadcopter Drone

Mohammad Al-Fetyani 1, Mohammad Hayajneh 2, Adham Alsharkawi 1

1 Department of Mechatronics Engineering, The University of Jordan, Amman 11942, Jordan
2 Department of Mechatronics Engineering, The Hashemite University, Zarqa 13133, Jordan

Citation: Mohammad Al-Fetyani, Mohammad Hayajneh, Adham Alsharkawi. Design of an executable ANFIS-based control system to improve the attitude and altitude performances of a quadcopter drone. International Journal of Automation and Computing. DOI: 10.1007/s11633-020-1251-2. Accepted: 2020-08-28; available online: 2020-11-06.

Mohammad Al-Fetyani received the B.Sc. degree in mechatronics engineering with an excellent grade at The University of Jordan, Jordan in 2020. He is currently working as a research assistant at The University of Jordan and intends to complete his postgraduate studies. His research interests include robotics and intelligent systems. E-mail: [email protected]. ORCID iD: 0000-0002-7360-2403.

Mohammad Hayajneh received the B.Sc. and M.Sc. degrees in mechatronics engineering from Jordan University of Science and Technology, Jordan in 2010 and 2012. He received the Ph.D. degree in automatic control and operational research from the University of Bologna, Italy in 2016. He is currently an assistant professor in mechatronics engineering at The Hashemite University, Jordan. His research interests include the design and development of control and navigation methods for aerial and ground robots and their applications. E-mail: [email protected] (Corresponding author). ORCID iD: 0000-0001-6869-1862.

Adham Alsharkawi received the B.Sc. degree in mechatronics engineering from Tafila Technical University, Jordan in 2010. He received the M.Sc. degree in advanced control and systems engineering from the University of Manchester, UK in 2013. He received the Ph.D. degree in automatic control and systems engineering from the University of Sheffield, UK in 2017. He is currently an assistant professor in the Mechatronics Engineering Department, The University of Jordan, Jordan. He has been working on projects related to wheeled mobile robots, quadcopters, and solar thermal power plants. His interests include system dynamics, automatic control, artificial intelligence, and solar energy. E-mail: [email protected].

Keywords: Quadcopter / proportional integral derivative (PID) control / fuzzy control / adaptive neuro-fuzzy / altitude control / attitude control

Abstract: Nowadays, quadcopters are present in many real-life applications which require automatic takeoff, trajectory tracking, and automatic landing. Thus, researchers are aiming to enhance the performance of these vehicles through low-cost sensing solutions and the design of executable and robust control techniques. Due to high nonlinearities, strong couplings and under-actuation, the control design process of a quadcopter is a rather challenging task. Therefore, the main objective of this work is demonstrated through two main aspects.
The first is the design of an adaptive neuro-fuzzy inference system (ANFIS) controller to improve the attitude and altitude control of a quadcopter. The second is to create a systematic framework for implementing flight controllers in embedded systems. A suitable model of the quadcopter is also developed by taking aerodynamic effects into account. To show the effectiveness of the ANFIS approach, the performance of a well-trained ANFIS controller is compared to a classical proportional-derivative (PD) controller and a properly tuned fuzzy logic controller. The controllers are compared and tested under several different flight conditions, including the capability to reject external disturbances. In the first stage, performance evaluation takes place in a nonlinear simulation environment. Then, the ANFIS-based controllers, alongside attitude and position estimators and precision landing algorithms, are implemented for execution in a real-time autopilot. In precision landing systems, an IR camera is used to detect an IR beacon on the ground for precise positioning. Several flight tests of a quadcopter are conducted to validate the results. Both simulations and experiments demonstrated superior results for quadcopter stability in different flight scenarios.

Figure 1. Coordinate systems of vision-based landing
Figure 2. Vision-based landing strategy
Figure 3. Membership functions of the fuzzy-PD controller: (a) altitude error input; (b) altitude error rate input; (c) altitude output; (d) attitude error input; (e) attitude error rate input; (f) attitude output
Figure 4. Rule base surfaces of the fuzzy-PD controller: (a) altitude; (b) attitude
Figure 5. ANFIS multi-layer architecture
Figure 6. Membership functions of the ANFIS-PD controller: (a) altitude error input; (b) altitude error rate input; (c) attitude error input; (d) attitude error rate input
Figure 7. Rule base surfaces of the ANFIS-PD controller: (a) altitude; (b) attitude
Figure 8. Quadcopter
Figure 9. UAV hardware and software architecture
Figure 10. Simulation results for Case 1: (a) position z; (b) angle φ; (c) angle θ; (d) angle ψ
Figure 11. Control inputs for Case 1: (a) PD controller; (b) fuzzy-PD controller; (c) ANFIS-PD controller
Figure 14. Simulation results for Case 3: (a) external force; (b) altitude
Figure 16. Flight tests: (a) manual flight mode; (b) loiter flight mode
Figure 17. Quadcopter's attitude in manual mode
Figure 18. Quadcopter's attitude in loiter mode
Figure 19. Quadcopter's altitude in manual and loiter modes
Figure 20. Quadcopter path in precision landing
Figure 21. Attitude and altitude of the quadcopter during precision landing
Figure 22. Drone's velocities in highly responsive flight
Figure 23. Drone's attitude in highly responsive flight

Table 1. Rule base of the fuzzy-PD controller

Error rate \ Error | NB | NS | Z  | PS | PB
NB                 | SN | FP | FP | FP | FP
NS                 | FN | SN | SP | SP | FP
Z                  | FN | SN | S  | SP | FP
PS                 | SN | S  | SN | S  | FP
PB                 | S  | FN | FN | SN | S

Table 2. Rule base of the ANFIS-PD controller

Input 2 (Error rate) \ Input 1 (Error) | In1mf1  | In1mf2   | In1mf3   | In1mf4   | In1mf5
In2mf1                                 | Out1mf1 | Out1mf6  | Out1mf11 | Out1mf16 | Out1mf21
In2mf2                                 | Out1mf2 | Out1mf7  | Out1mf12 | Out1mf17 | Out1mf22
In2mf3                                 | Out1mf3 | Out1mf8  | Out1mf13 | Out1mf18 | Out1mf23
In2mf4                                 | Out1mf4 | Out1mf9  | Out1mf14 | Out1mf19 | Out1mf24
In2mf5                                 | Out1mf5 | Out1mf10 | Out1mf15 | Out1mf20 | Out1mf25

Table 3. Quadcopter parameters

Parameter             | Value     | Unit
g                     | 9.810     | m/s²
Mass, m               | 1.568     | kg
Arm length, l         | 0.225     | m
Rx                    | 0.50      | kg/s
Ry                    | 0.50      | kg/s
Rz                    | 0.50      | kg/s
Ixx                   | 0.04856   | kg·m²
Iyy                   | 0.04856   | kg·m²
Izz                   | 0.08801   | kg·m²
Drag coefficient, k   | 2.980×10⁻⁶ |
Thrust coefficient, b | 1.140×10⁻⁷ |
Table 4. PD controller parameter values

Parameter | Value
Kz,D      | 2.50
Kz,P      | 1.5
Kφ,D      | 1.75
Kφ,P      | 6.0
Kθ,D      | 1.75
Kθ,P      | 6.0
Kψ,D      | 1.75
Kψ,P      | 6.0

Table 5. Case 1: Response characteristics of the three controllers

                      | PD    | Fuzzy-PD | ANFIS-PD
Settling time (s)
  Position z          | > 5   | 1.39     | 1.22
  Angle φ             | 4.43  | 2.68     | 1.48
  Angle θ             | 4.47  | 2.60     | 2.26
  Angle ψ             | 4.40  | 2.78     | 1.55
Overshoot peak value
  Position z          | −     | −0.01    | −0.01
  Angle φ             | −3.03 | −0.01    | −0.17
  Angle θ             | −2.99 | −0.02    | −0.23
  Angle ψ             | −2.66 | −0.01    | −0.14

Table 6. Case 2: Quadcopter time-varying desired state

Time t (s)    | Position z (m)  | Angle φ (deg)    | Angle θ (deg)   | Angle ψ (deg)
0 ≤ t ≤ 5     | (1/5)t          | 0                | 0               | (20/5)t
5 ≤ t ≤ 10    | 1               | (40/5)(t−5)      | 0               | −(50/5)(t−5)+20
10 ≤ t ≤ 15   | (1/5)(t−10)+1   | 40               | −(80/5)(t−10)   | −30
15 ≤ t ≤ 20   | −(1/5)(t−15)+2  | 40               | −80             | −30
20 ≤ t ≤ 25   | 1               | −(40/5)(t−20)+40 | −80             | −30
25 ≤ t ≤ 30   | −(1/5)(t−25)+1  | 0                | (80/5)(t−25)−80 | (30/5)(t−25)−30

Table 7. Case 3: Performance evaluation

Controller          | MSE    | MAE
PD controller       | 0.5146 | 0.5706
Fuzzy-PD controller | 0.2518 | 0.4071
ANFIS-PD controller | 0.1186 | 0.2711

Table 8. Environmental conditions during flight time

IMU temperature (°C)       | 59−60
Barometer temperature (°C) | 54−60
External temperature (°C)  | 37−40 (forecast)
Wind speed (km/h)          | 15−20 (forecast)
Number of satellites       | 6−8

References

[1] S. Bouabdallah, A. Noth, R. Siegwart. PID vs LQ control techniques applied to an indoor micro quadrotor. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, Sendai, Japan, pp. 2451−2456, 2004. DOI: 10.1109/IROS.2004.1389776.
[2] T. Luukkonen. Modeling and control of quadcopter. Journal of American Society for Mass Spectrometry, vol. 22, pp. 1134−1145, 2011.
[3] P. E. I. Pounds, D. R. Bersak, A. M. Dollar. Stability of small-scale UAV helicopters and quadrotors with added payload mass under PID control. Autonomous Robots, vol. 33, no. 1−2, pp. 129–142, 2012. DOI: 10.1007/s10514-012-9280-5.
[4] D. K. Tiep, Y. J. Ryoo. An autonomous control of fuzzy-PD controller for quadcopter. International Journal of Fuzzy Logic and Intelligent Systems, vol. 17, no. 2, pp. 107–113, 2017. DOI: 10.5391/IJFIS.2017.17.2.107.
[5] F. Santoso, M. A. Garratt, S. G. Anavatti. Fuzzy logic-based self-tuning autopilots for trajectory tracking of a low-cost quadcopter: A comparative study. In Proceedings of International Conference on Advanced Mechatronics, Intelligent Manufacture, and Industrial Automation, IEEE, Surabaya, Indonesia, pp. 64–69, 2015. DOI: 10.1109/ICAMIMIA.2015.7508004.
[6] B. E. Demir, R. Bayir, F. Duran. Real-time trajectory tracking of an unmanned aerial vehicle using a self-tuning fuzzy proportional integral derivative controller. International Journal of Micro Air Vehicles, vol. 8, no. 4, pp. 252–268, 2016. DOI: 10.1177/1756829316675882.
[7] M. Rabah, A. Rohan, Y. J. Han, S. H. Kim. Design of fuzzy-PID controller for quadcopter trajectory-tracking. International Journal of Fuzzy Logic and Intelligent Systems, vol. 18, no. 3, pp. 204–213, 2018. DOI: 10.5391/IJFIS.2018.18.3.204.
[8] P. Garcia-Aunon, M. S. Peñas, J. M. de la Cruz García. Parameter selection based on fuzzy logic to improve UAV path-following algorithms. Journal of Applied Logic, vol. 24, pp. 62–75, 2017.
DOI: 10.1016/j.jal.2016.11.025.
[9] E. Kayacan, R. Maslim. Type-2 fuzzy logic trajectory tracking control of quadrotor VTOL aircraft with elliptic membership functions. IEEE/ASME Transactions on Mechatronics, vol. 22, no. 1, pp. 339–348, 2017. DOI: 10.1109/TMECH.2016.2614672.
[10] A. Dorzhigulov, B. Bissengaliuly, B. F. Spencer Jr, J. Kim, A. P. James. ANFIS based quadrotor drone altitude control implementation on raspberry pi platform. Analog Integrated Circuits and Signal Processing, vol. 95, no. 3, pp. 435–445, 2018. DOI: 10.1007/s10470-018-1159-8.
[11] D. Domingos, G. Camargo, F. Gomide. Autonomous fuzzy control and navigation of quadcopters. IFAC-PapersOnLine, vol. 49, no. 5, pp. 73–78, 2016. DOI: 10.1016/j.ifacol.2016.07.092.
[12] S. Kundu, D. R. Parhi. Reactive navigation of underwater mobile robot using ANFIS approach in a manifold manner. International Journal of Automation and Computing, vol. 14, no. 3, pp. 307–320, 2017. DOI: 10.1007/s11633-016-0983-5.
[13] O. Salah, A. A. Ramadan, S. Sessa, A. A. Ismail, M. Fujie, A. Takanishi. ANFIS-based sensor fusion system of sit-to-stand for elderly people assistive device protocols. International Journal of Automation and Computing, vol. 10, no. 5, pp. 405–413, 2013. DOI: 10.1007/s11633-013-0737-6.
[14] M. Gheisarnejad, M. H. Khooban. Supervised control strategy in trajectory tracking for a wheeled mobile robot. IET Collaborative Intelligent Manufacturing, vol. 1, no. 1, pp. 3–9, 2019. DOI: 10.1049/iet-cim.2018.0007.
[15] M. Gheisarnejad, M. H. Khooban, J. Boudjadar. Adaptive network based fuzzy inference system for frequency regulation in modern maritime power systems. In Proceedings of the 5th International Forum on Research and Technology for Society and Industry, IEEE, Florence, Italy, pp. 302−307, 2019. DOI: 10.1109/RTSI.2019.8895596.
[16] M. Gheisarnejad, P. Karimaghaee, J. Boudjadar, M. H. Khooban. Real-time cellular wireless sensor testbed for frequency regulation in smart grids. IEEE Sensors Journal, vol. 19, no. 23, pp. 11656–11665, 2019. DOI: 10.1109/JSEN.2019.2934599.
[17] M. Panda, B. Das, B. Subudhi, B. B. Pati. A comprehensive review of path planning algorithms for autonomous underwater vehicles. International Journal of Automation and Computing, vol. 17, no. 3, pp. 321–352, 2020. DOI: 10.1007/s11633-019-1204-9.
[18] F. Santoso, M. A. Garratt, S. G. Anavatti. Adaptive neuro-fuzzy inference system identification for the dynamics of the AR.drone quadcopter. In Proceedings of International Conference on Sustainable Energy Engineering and Application, IEEE, Jakarta, Indonesia, pp. 55−60, 2016. DOI: 10.1109/ICSEEA.2016.7873567.
[19] S. Rezazadeh, M. A. Ardestani, P. S. Sadeghi. Optimal attitude control of a quadrotor UAV using adaptive neuro-fuzzy inference system (ANFIS). In Proceedings of the 3rd International Conference on Control, Instrumentation, and Automation, IEEE, Tehran, Iran, pp. 219−223, 2013. DOI: 10.1109/ICCIAutom.2013.6912838.
[20] S. Khatoon, I. Nasiruddin, M. Shahid. Design and simulation of a hybrid PD-ANFIS controller for attitude tracking control of a quadrotor UAV. Arabian Journal for Science and Engineering, vol. 42, no. 12, pp. 5211–5229, 2017. DOI: 10.1007/s13369-017-2586-z.
[21] T. Bresciani. Modelling, Identification and Control of a Quadrotor Helicopter, Master dissertation, Lund University, Sweden, 2008.
[22] D. Kotarski, Z. Benić, M. Krznar. Control design for unmanned aerial vehicles with four rotors. Interdisciplinary Description of Complex Systems: INDECS, vol. 14, no. 2, pp. 236–245, 2016.
[23] N. S. Bao, X. Ran, Z. F. Wu, Y. F. Xue, K. Y. Wang. Research on attitude controller of quadcopter based on cascade PID control algorithm. In Proceedings of the 2nd Information Technology, Networking, Electronic and Automation Control Conference, IEEE, Chengdu, China, pp. 1493−1497, 2017. DOI: 10.1109/ITNEC.2017.8285044.
[24] G. Ononiwu, O. Onojo, O. Ozioko, O. Nosiri. Quadcopter design for payload delivery. Journal of Computer and Communications, vol. 4, no. 10, pp. 1–12, 2016. DOI: 10.4236/jcc.2016.410001.
[25] E. Kuantama, T. Vesselenyi, S. Dzitac, R. Tarca. PID and fuzzy-PID control model for quadcopter attitude with disturbance parameter. International Journal of Computers Communications & Control, vol. 12, no. 4, pp. 519–532, 2017. DOI: 10.15837/ijccc.2017.4.2962.
[26] M. Hayajneh, M. Melega, L. Marconi. Design of autonomous smartphone based quadrotor and implementation of navigation and guidance systems. Mechatronics, vol. 49, pp. 119–133, 2018. DOI: 10.1016/j.mechatronics.2017.11.012.
[27] M. R. Hayajneh, A. R. E. Badawi. Automatic UAV wireless charging over solar vehicle to enable frequent flight missions. In Proceedings of the 3rd International Conference on Automation, Control and Robots, Prague, Czech Republic, pp. 44−49, 2019.
[28] MATLAB. PX4 autopilots support from Embedded Coder, [Online], 2020. Available: https://www.mathworks.com/hardware-support/px4-autopilots.html.
[29] T-Motor Air 2213 920 KV brushless motor, [Online]. Available: http://dekasto.com/catalog/view/364.
[30] C. S. Subudhi, D. Ezhilarasi. Modeling and trajectory tracking with cascaded PD controller for quadrotor. Procedia Computer Science, vol. 133, pp. 952–959, 2018. DOI: 10.1016/j.procs.2018.07.082.
[31] G. Szafranski, R. Czyba. Different approaches of PID control UAV type quadrotor. In Proceedings of International Micro Air Vehicle Conference and Flight Competition, Delft, Netherlands, 2011.
[32] Z. Mustapa, S. Saat, S. H. Husin, N. Abas. Altitude controller design for multi-copter UAV. In Proceedings of International Conference on Computer, Communications, and Control Technology, IEEE, Langkawi, Malaysia, pp. 382−387, 2014. DOI: 10.1109/I4CT.2014.6914210.
[33] J. A. Paredes, C. Jacinto, R. Ramírez, I. Vargas, L. Trujillano. Simplified fuzzy-PD controller for behavior mixing and improved performance in quadcopter attitude control systems. In Proceedings of IEEE ANDESCON, IEEE, Arequipa, Peru, 2016. DOI: 10.1109/ANDESCON.2016.7836217.
[34] E. S. Filatova, A. V. Devyatkin, A. I. Fridrix. UAV fuzzy logic stabilization system. In Proceedings of IEEE International Conference on Soft Computing and Measurements, IEEE, St. Petersburg, Russia, pp. 132−134, 2017. DOI: 10.1109/SCM.2017.7970517.
1. Introduction

Due to their simple construction and low cost, as well as the massive development of their flight control systems, quadcopters have been extensively used in many civilian and military fields over the past two decades. Although the quadcopter system is nonlinear and under-actuated, which makes it difficult to stabilize its dynamics and to improve its trajectory tracking accuracy, linear controllers such as proportional integral derivative (PID) have obtained satisfactory results in controlling its movements[1-3]. However, linear controllers are limited by their inability to stabilize the quadcopter in large regions around the equilibrium point, preventing it from flying aggressively. In addition, the presence of external disturbances increases the deviation from the flight path as long as the quadcopter is operating. Therefore, many researchers have improved on the PID controller by tuning its gains online using classical fuzzy logic systems for stabilizing a quadcopter and controlling its attitude for trajectory tracking[4-7]. Others have recently applied fuzzy logic controllers to improve the trajectory followed by a quadcopter depending on factors such as the curvature of the path and the velocity of the quadcopter[8]. In other words, they applied fuzzy logic to tune the distance and velocity of a virtual point on the curve as well as the pitch and roll speeds of the quadcopter.

The fuzzy logic controllers listed above may no longer be appropriate when there is a change or variation in system parameters or working conditions, since they need to be redesigned and adjusted frequently to deal with various uncertainties in the system[9]. In addition, this method requires considerable experience in designing and tuning the IF-THEN rules that represent nonlinear approximations of the system[10, 11]. The approach to addressing this issue is to automatically adjust rule conditions and input/output parameters through learning algorithms. Therefore, fuzzy logic can be combined with the learning ability of neural network methods in a form called the adaptive neuro-fuzzy inference system (ANFIS). Some successful applications of the ANFIS approach in motion control can be found in [12-14]. The ANFIS approach has also found success in some other areas[15-17].

Few works in the literature have used an ANFIS controller to control the movement of a quadcopter. One of these works compared the results of the controller on the attitude of the quadcopter with the performance of linear system identification techniques, without any validation of how the results improve the overall performance[18]. On the other hand, the ANFIS technique is applied in [10, 19, 20] to stabilize quadcopter attitude and/or altitude, and it has shown better results than those produced by a PID controller, especially when external disturbances are applied. However, the attempts in [10, 19, 20] discussed theoretical aspects and did not go beyond them to the practical application of the claimed solutions.

The aim of this paper is to apply ANFIS to control the altitude and attitude of a quadcopter. This work is completed in two main stages. The first stage includes simulation studies of quadcopter models in Matlab Simulink.
In simulations, the performance of a well-trained ANFIS controller is compared to a classical proportional-derivative (PD) controller and a properly tuned fuzzy logic controller. The ANFIS controller is tested under several different flight conditions, including external disturbances. Furthermore, aerodynamic effects are modeled and simulated to ensure more realistic behavior of the quadcopter. The second stage focuses on the practical implementation and experimental validation of the proposed controllers. More precisely, a C++ model of the proposed controllers is generated with Embedded Coder through a support package in Matlab Simulink. The attitude and altitude models are carefully customized to be run by the PX4 autopilot, and to use on-board sensor data and other calculations at run time. In addition, the quadcopter is equipped with a vision-based system that includes an IR camera and a beacon for precision landing purposes. Hence, a precision landing algorithm is adopted to estimate the relative horizontal positions of the drone with respect to the IR beacon coordinates. It is necessary to mention that the construction of the quadcopter underwent several examinations to ensure the integrity of the components and to achieve reliable long-term flights.

The remainder of this paper is organized as follows. Section 2 briefly describes the quadcopter mathematical model, which is essential for simulation and control design. Furthermore, this section discusses the adopted strategy for precision landing. Section 3 discusses the control design of a PD controller, a fuzzy-PD controller and an ANFIS-PD controller. Section 3 also provides a simple PD position controller and complementary filters for attitude and position estimation. This is followed by Section 4, where the hardware and software implementations of the quadcopter system are discussed for real flight experiments. Then the simulation and experimental results and the main findings are discussed in Section 5. Finally, some concluding remarks are given in Section 6.

2. System overview

This section presents the quadcopter's dynamic and kinematic models, which are adopted in this work for simulation and for the design of the ANFIS-based attitude and altitude controllers. This section also presents the strategy that has been built for precision landing purposes.

2.1. Quadcopter model

Quadcopter dynamics are easily derived through a Newton-Euler method since the quadcopter is assumed to be a rigid body. The quadcopter equations of motion, including the drag force, are expressed in (1). Details of the derivation can be found in [21].

$$ \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = -g \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} + \frac{T}{m} \begin{bmatrix} C_\psi S_\theta C_\phi+S_\psi S_\phi \\ S_\psi S_\theta C_\phi-C_\psi S_\phi \\ C_\theta C_\phi \end{bmatrix} - \frac{1}{m} \begin{bmatrix} R_x \dot{x}\\ R_y \dot{y}\\ R_z\dot{z} \end{bmatrix} + \frac{1}{m} \begin{bmatrix} 0\\ 0\\ c_z \end{bmatrix} \tag{1} $$

where $C_\psi=\cos(\psi)$, $C_\phi=\cos(\phi)$, $C_\theta=\cos(\theta)$, $S_\theta=\sin(\theta)$, $S_\psi=\sin(\psi)$, $S_\phi=\sin(\phi)$, $m$ is the total mass of the quadcopter and $g$ is the gravitational acceleration.
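To make the model concrete, the following is a minimal Python sketch of the translational dynamics in (1). The parameter values follow Table 3; the $c_z$ term is omitted (set to zero), and the function names are illustrative rather than taken from the authors' code.

```python
import numpy as np

m, g = 1.568, 9.81                    # mass (kg) and gravity (m/s^2), from Table 3
R = np.array([0.50, 0.50, 0.50])      # aerodynamic drag terms Rx, Ry, Rz (kg/s)

def translational_accel(v, T, phi, theta, psi):
    """Linear acceleration from (1): gravity + tilted thrust + linear drag."""
    c, s = np.cos, np.sin
    # Direction of the thrust vector expressed in the inertial frame
    thrust_dir = np.array([
        c(psi)*s(theta)*c(phi) + s(psi)*s(phi),
        s(psi)*s(theta)*c(phi) - c(psi)*s(phi),
        c(theta)*c(phi),
    ])
    return np.array([0.0, 0.0, -g]) + (T/m)*thrust_dir - (R/m)*v

# Hover sanity check: level attitude and T = m*g should give ~zero acceleration
v = np.zeros(3)
print(translational_accel(v, m*g, 0.0, 0.0, 0.0))   # ~[0, 0, 0]
```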
Euler angular velocities expressed in the inertial frame are related to the body angular velocities expressed in the body frame by the following transformation matrix:

$$ \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 1 & S_\phi T_\theta & C_\phi T_\theta \\ 0 & C_\phi & -S_\phi \\ 0 & S_\phi /C_\theta & C_\phi /C_\theta \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix} \tag{2} $$

where $T_\theta = \tan(\theta)$, and $p$, $q$ and $r$ are the angular velocities expressed in the body frame. The quadcopter equations of angular motion can be expressed in the body frame as follows:

$$ \begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix} = \begin{bmatrix} (I_{yy}-I_{zz})qr/I_{xx} \\ (I_{zz}-I_{xx})pr/I_{yy} \\ (I_{xx}-I_{yy})pq/I_{zz} \end{bmatrix} + \begin{bmatrix} \tau_\phi/I_{xx} \\ \tau_\theta/I_{yy} \\ \tau_\psi/I_{zz} \end{bmatrix} \tag{3} $$

where $I_{xx}$, $I_{yy}$ and $I_{zz}$ are the moments of inertia of the quadcopter about the corresponding body frame axes.

2.2. Vision-based landing

Unlike the traditional landing method used in unmanned aerial vehicles (UAVs), which is based on the location from GPS, vision-based approaches provide precision landing on a specific spot, accurate to within centimeters. In this case, the UAV camera detects a landing spot, or marker, which defines the relative distance between the vehicle and the marker frames. Particularly in this work, an IR-lock camera mounted on the quadcopter detects an IR beacon on the ground, as illustrated in Fig. 1. Hence, the position of the beacon with respect to the camera frame C is calculated by a vision algorithm, while the movements of the quadcopter are defined with respect to the start-point inertial frame I. Once the IR beacon is detected by the IR camera, the drone attempts to move laterally (i.e., in the x, y directions) towards the center of the beacon frame and starts precision landing from there. While the drone is descending, the ANFIS attitude controller compensates for the effect of wind and the inherent instability.

In this work, the adopted landing strategy is described by Fig. 2. The IR beacon can be detected by the drone's camera from a height of 10 m. Therefore, the landing process begins after the drone approaches the landing station. At this stage, the loiter mode is activated, which keeps the drone hovering over the station and allows it to search for the beacon. If the beacon is detected within 40 s, the precision landing controller is activated; otherwise, a GPS-based landing is performed. In the case of precision landing, the drone's lateral positions are always estimated with respect to the beacon frame, ideally approaching zero. The drone keeps moving horizontally, and the precision landing process is executed continuously as long as the height is large (i.e., higher than 0.5 m). When the drone is near the ground, at a distance of less than 0.5 m, the descent is handled with a special technique that ensures a safe landing and disarms the motors.

3. Control design

The quadcopter is an unstable system and requires closed-loop feedback to obtain stability. The quadcopter is controlled using the four motor angular velocities ($\omega_1$, $\omega_2$, $\omega_3$ and $\omega_4$), which drive the six states of the quadcopter: the linear positions and the angular positions. The quadcopter can be stabilized by controlling four states: altitude ($z$) and attitude ($\phi$, $\theta$ and $\psi$).
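A companion sketch for the rotational equations (2) and (3): body rates are mapped to Euler-angle rates, and angular accelerations follow from the gyroscopic terms plus the control torques. Inertia values come from Table 3; this is an illustrative implementation of the stated equations, not the authors' flight code.

```python
import numpy as np

Ixx, Iyy, Izz = 0.04856, 0.04856, 0.08801   # inertia (kg*m^2), Table 3

def euler_rates(phi, theta, p, q, r):
    """Equation (2): body angular rates -> Euler angle rates."""
    c, s, t = np.cos, np.sin, np.tan
    W = np.array([
        [1, s(phi)*t(theta),  c(phi)*t(theta)],
        [0, c(phi),          -s(phi)],
        [0, s(phi)/c(theta),  c(phi)/c(theta)],
    ])
    return W @ np.array([p, q, r])

def body_angular_accel(p, q, r, tau_phi, tau_theta, tau_psi):
    """Equation (3): gyroscopic coupling plus control torques."""
    return np.array([
        (Iyy - Izz)*q*r/Ixx + tau_phi/Ixx,
        (Izz - Ixx)*p*r/Iyy + tau_theta/Iyy,
        (Ixx - Iyy)*p*q/Izz + tau_psi/Izz,
    ])
```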
The motor angular velocities can be obtained through inverse dynamics[22]; thus, the system has to be simplified to provide an inverse model that is easy to deal with. To this end, the movement of the quadcopter is considered to be near the hovering state, in which only small angular changes occur, and the rotation matrix between the Euler angular velocities and the body angular velocities is deemed close to identity. Hence, the simplified quadcopter equations used in the control design are given as

$$ \begin{split} \ddot{z} & = -g+(\cos(\theta) \cos(\phi))\frac{T}{m}\\ \ddot{\phi} & = \frac{\tau_\phi}{I_{xx}} \\ \ddot{\theta} & = \frac{\tau_\theta}{I_{yy}} \\ \ddot{\psi} & = \frac{\tau_\psi}{I_{zz}} \end{split} \tag{4} $$

3.1. PID controller

A classical PID controller is still one of the most widely used controllers due to its simple structure and ease of implementation[23, 24]. One way to implement a PID controller is to use a parallel PID structure, which allows a complete decoupling of the proportional, integral and derivative actions. The parallel PID structure can be described as follows:

$$ \begin{split} e(t) & = x_d(t)-x(t)\\ u(t) & = K_P e(t)+K_I\int_{0}^{t}e(\tau)\,{\rm d}\tau+K_D\frac{{\rm d}e(t)}{{\rm d}t} \end{split} \tag{5} $$

where $e(t)$ is the difference between the desired state $x_d(t)$ and the current measured state $x(t)$, $u(t)$ is the control signal, $K_P$ is the proportional coefficient, $K_I$ is the integral coefficient and $K_D$ is the derivative coefficient. Given the system's double-integrator behavior described in (4), four PD controllers are required to stabilize the four desired states of the quadcopter, altitude ($z$) and attitude ($\phi$, $\theta$ and $\psi$), which are derived from (4) as follows:

$$ \begin{split} T & = (g+K_{z,D}(\dot{z_d}-\dot{z})+K_{z,P}(z_d-z))\frac{m}{C_\phi C_\theta}\\ \tau_\phi & = (K_{\phi,D}(\dot{\phi_d}-\dot{\phi})+K_{\phi,P}(\phi_d-\phi))I_{xx} \\ \tau_\theta & = (K_{\theta,D}(\dot{\theta_d}-\dot{\theta})+K_{\theta,P}(\theta_d-\theta))I_{yy} \\ \tau_\psi & = (K_{\psi,D}(\dot{\psi_d}-\dot{\psi})+K_{\psi,P}(\psi_d-\psi))I_{zz} \end{split} \tag{6} $$

The values obtained from (6) are used to calculate the four motor angular velocities, which are the inputs to the quadcopter (a short sketch after this subsection illustrates the computation). The angular velocities can be expressed as

$$ \begin{split} \omega_1^2 & = \frac{T}{4k}-\frac{\tau_\theta}{2kl}-\frac{\tau_\psi}{4b}\\ \omega_2^2 & = \frac{T}{4k}-\frac{\tau_\phi}{2kl}+\frac{\tau_\psi}{4b}\\ \omega_3^2 & = \frac{T}{4k}+\frac{\tau_\theta}{2kl}-\frac{\tau_\psi}{4b}\\ \omega_4^2 & = \frac{T}{4k}+\frac{\tau_\phi}{2kl}+\frac{\tau_\psi}{4b} \end{split} \tag{7} $$

3.2. Fuzzy-PD controller

Fuzzy logic is a multi-valued logic that deals with degrees of membership and degrees of truth. The process of designing a fuzzy logic controller is performed in three main steps[18]. The first step is called fuzzification, in which the inputs are mapped onto membership functions. The membership values are then quantified from a rule base. This step is called rule evaluation. The last step is called defuzzification, in which membership values are converted into crisp outputs. A fuzzy-PD controller is a combination of a classical PD controller and a fuzzy logic controller. The purpose of the fuzzy-PD controller is to establish varying coefficients for the PD controller rather than fixed ones, which allows an optimal output based on a set of rules[25].
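As referenced above, here is a minimal sketch of the altitude/attitude PD laws (6) and the motor allocation (7). Gains are those later listed in Table 4; k, b, l and the inertia values come from Table 3. This illustrates the computation only and is not the generated flight code.

```python
import numpy as np

m, g, l = 1.568, 9.81, 0.225
Ixx, Iyy, Izz = 0.04856, 0.04856, 0.08801
k, b = 2.980e-6, 1.140e-7                  # rotor thrust/drag constants, Table 3
Kp = {'z': 1.5, 'phi': 6.0, 'theta': 6.0, 'psi': 6.0}     # Table 4
Kd = {'z': 2.5, 'phi': 1.75, 'theta': 1.75, 'psi': 1.75}

def pd_outputs(err, derr, phi, theta):
    """Equation (6): total thrust T and the three control torques."""
    T = (g + Kd['z']*derr['z'] + Kp['z']*err['z']) * m/(np.cos(phi)*np.cos(theta))
    tau_phi   = (Kd['phi']*derr['phi']     + Kp['phi']*err['phi'])     * Ixx
    tau_theta = (Kd['theta']*derr['theta'] + Kp['theta']*err['theta']) * Iyy
    tau_psi   = (Kd['psi']*derr['psi']     + Kp['psi']*err['psi'])     * Izz
    return T, tau_phi, tau_theta, tau_psi

def motor_speeds_squared(T, tau_phi, tau_theta, tau_psi):
    """Equation (7): allocate thrust and torques to the four rotors."""
    w1 = T/(4*k) - tau_theta/(2*k*l) - tau_psi/(4*b)
    w2 = T/(4*k) - tau_phi/(2*k*l)   + tau_psi/(4*b)
    w3 = T/(4*k) + tau_theta/(2*k*l) - tau_psi/(4*b)
    w4 = T/(4*k) + tau_phi/(2*k*l)   + tau_psi/(4*b)
    return w1, w2, w3, w4
```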
Fuzzy membership functions, for both the inputs and the output, of the fuzzy-PD controller are shown in Fig. 3. The rule base for the inputs and the output is presented in Table 1. Each input is divided into five fuzzy sets as in [7], i.e., negative big (NB), negative small (NS), zero (Z), positive small (PS) and positive big (PB). The output is also divided into five fuzzy sets, i.e., fast negative (FN), slow negative (SN), slow (S), slow positive (SP) and fast positive (FP). Fuzzy rule surfaces for the altitude and the attitude are presented in Fig. 4.

3.3. ANFIS-PD controller

Designing a fuzzy logic controller can be a complicated process, especially for a complex system. Given a set of input/output training data, ANFIS is an approach that can be used to easily obtain a properly tuned fuzzy logic controller[19]. The strength of the ANFIS approach comes from the combined advantages of fuzzy inference systems (FIS) and neural networks (NN). The ANFIS architecture used for the quadcopter in this paper is presented in Fig. 5. The controller has to be trained only once on a set of training data. The training data set for the altitude is obtained by simulating the quadcopter when controlled by a classical PD controller. A similar process is repeated for the attitude control. A neuro-fuzzy designer is used for training and designing the ANFIS-PD controller. The membership functions obtained after training are presented in Fig. 6. The rule base of the ANFIS-PD controller is presented in Table 2 (a minimal sketch of this kind of inference is given at the end of this section). The output equations of the altitude controller can be found in (A1) in the Appendix, and the output equations of the attitude controllers are found in (A2) in the Appendix. Fuzzy rule surfaces for the altitude and attitude are illustrated in Fig. 7.

3.4. Position controller

The attitude and altitude controllers discussed previously in this section are essential for overcoming the inherent instability of the quadcopter drone. To control the movement of the vehicle horizontally, a closed-loop position controller is needed to approach the desired waypoint. In the quadcopter case, the horizontal motion is coupled with the roll and pitch states. Quadcopter positions can be easily defined by GPS location readings. However, positions from GPS have low accuracy and are not suitable for precise landing on a specific point. Therefore, a vision-based landing process is adopted in this work, as described in Section 2. An outer-loop PD controller is adopted for both the GPS-based and vision-based systems during the landing process as

$$ u(t) = K_P e(t)+K_D\frac{{\rm d}e(t)}{{\rm d}t} \tag{8} $$

where $e(t) = p_i-p_o$ and $\dot{e}(t) = \dot{p}_i-\dot{p}_o$ are the position and velocity error state vectors, respectively. The error state vector is expressed by the difference between the reference states $p_o = [x_o \;\, y_o]^{\rm T}$ and $\dot{p}_o = [\dot{x}_o \;\, \dot{y}_o]^{\rm T}$, and the target states $p_i$ and $\dot{p}_i$. The output of the proposed controller is the control command $u(t) = [u_\theta \;\, u_\phi]^{\rm T}$ that is provided to the attitude controller. $u_\theta$ and $u_\phi$ allow the quadcopter to adjust its horizontal position according to the position estimates from the GPS or vision system. $K_P$ and $K_D$ are proportional and derivative gains, respectively, tuned for better tracking performance.
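As promised in Section 3.3, a minimal sketch of the Sugeno-type inference that ANFIS implements: Gaussian memberships for the two inputs (error and error rate), 25 rules pairing them as in Table 2, and a weighted average of linear consequents. The membership parameters and consequent coefficients below are placeholders; the trained values are what (A1) and (A2) in the Appendix would supply.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5*((x - c)/s)**2)

# Placeholder premise parameters: 5 membership functions per input
centers = np.linspace(-1.0, 1.0, 5)
sigma = 0.4
# Placeholder consequent parameters: one linear output per rule,
# out_ij = a*e + b*de + c (in practice these come from ANFIS training)
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5, 3)) * 0.1

def anfis_output(e, de):
    """First-order Sugeno inference over the 5x5 rule grid of Table 2."""
    mu_e  = gauss(e,  centers, sigma)        # layer 1: fuzzification, input 1
    mu_de = gauss(de, centers, sigma)        # layer 1: fuzzification, input 2
    w = np.outer(mu_e, mu_de)                # layer 2: rule firing strengths
    w = w / w.sum()                          # layer 3: normalization
    rule_out = A[..., 0]*e + A[..., 1]*de + A[..., 2]   # layer 4: consequents
    return float((w * rule_out).sum())       # layer 5: weighted sum
```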
3.5. Attitude and position estimation

Due to the noise and biases of the cheap sensors used in low-cost embedded systems, sensor fusion techniques are used to estimate the drone's attitude and position. This work adopts complementary filtering algorithms for state estimation. A complementary filter is used to fuse accelerometer, magnetometer and gyroscope measurements for better attitude estimation. A second filter is described for altitude estimation using accelerometer readings and barometer measurements. A third filter combines the GPS information and the accelerometer for better horizontal position estimation. For the sake of brevity, these filters are discussed only briefly; for more details about their design and implementation, see our previous works[26, 27].

3.5.1. Attitude estimation

The attitude could be estimated by integrating the angular speeds measured by a gyroscope, but the presence of biases and noise in the readings leads to divergence in these integrations. Therefore, the gyroscope readings are combined with accelerometer readings for better estimation of the roll ($\phi$) and pitch ($\theta$) angles, and with magnetometer measurements to estimate the yaw angle ($\psi$). The complementary filter adopted for these purposes is implemented as follows:

$$ \left\{\begin{array}{l} \dot{\hat{q}} = \dfrac{1}{2} \hat{q} \otimes P\Big(\Omega - \hat{b}+ K_P \displaystyle\sum_{i = 1}^{n} k_i v_i \times \hat{v}_i\Big) \\ \dot{\hat{b}} = -K_I \displaystyle\sum_{i = 1}^{n} k_i v_i \times \hat{v}_i \end{array}\right. \tag{9} $$

where $\hat{q}$ and $\hat{b}$ are the attitude estimate in quaternion representation and the gyro biases, respectively. For better performance, $K_P$, $K_I$ and $k_i$ are tuned properly. $\Omega$ is the vector of rate gyro measurements. The vector $v_i$ is measured in the body frame and the vector $\hat{v}_i$ is its observation in the inertial frame. Two vectors in the inertial frame can be observed in embedded systems: the Earth's gravitational and magnetic fields, measured by an accelerometer and a magnetometer, respectively.

3.5.2. Altitude and horizontal position estimation

Trajectory tracking in automatic missions requires robust position estimation using on-board sensors. Because the GPS and barometer sensors are noisy and low-frequency, a complementary filter is implemented to optimally estimate the drone's 3-D position as follows:

$$ \dot{\hat{p}} = \hat{v}+ \frac{1}{2} \int ({Q a} + g{e_3})\,{\rm d}t+ k_2 ({p}-\hat{p}) \tag{10} $$

where $p$ is the position vector, which consists of two raw measurements of the GPS horizontal positions and one barometer measurement of the vehicle's altitude. The position estimate is a function of $\hat{v}$, the estimated velocity introduced by the following relation:

$$ \left\{\begin{aligned}& \dot{\hat{v}} = k_1 ({v} - \hat{v}) + g{e_3} + {Qa} \\ &\dot{{Q}} = {QS} (\Omega - \hat{b}_\Omega) + k_v ({v} - \hat{v}) {a}^{\rm T} \end{aligned}\right. \tag{11} $$

where $\Omega \in {\bf R}^3$ and ${a} \in {\bf R}^3$ are raw measurements of the angular speeds and linear accelerations measured in the body frame. The vector ${v} \in {\bf R}^3$ contains the actual velocities measured in the inertial frame by the GPS receiver and the pressure sensor. Moreover, ${Q} \in {\bf R}^{3\times 3}$ is a virtual rotation matrix and $g{e_3} = [0, \; 0, \; g]^{\rm T}$ is the local gravity vector in the inertial north-east-down (NED) frame.
4. Real-time implementations

A UAV quadcopter (see Fig. 8) is used in this work to implement and verify the performance of the control algorithms discussed above. The full implementation of the ANFIS-based controllers and the visual-feedback precision landing controller is executed by a Pixhawk autopilot, as shown in Fig. 9. The quadcopter is also equipped with an IR-lock sensor used for precise position estimation during the landing process. Furthermore, a radio telemetry unit is connected in order to send and receive flight parameters (e.g., flight state, waypoints, flight modes, etc.) to and from the ground control station (GCS). Given the diversity of UAV models on the market, it is worth describing the UAV hardware and software architectures adopted in the implementation, which are briefly illustrated in Fig. 9.

4.1. Quadcopter

The flight control algorithms on the proposed quadcopter (see Fig. 8) are executed by a Pixhawk (PX4) autopilot, an open-source and low-cost flight control unit. The Pixhawk autopilot includes several on-board sensors (i.e., an inertial measurement unit (IMU) and a barometer) and supports external sensors such as GPS, ultrasonic, and the IR-lock camera. Sensor data are fed to the control algorithms running on an ARM Cortex-M microprocessor under the NuttX real-time operating system (RTOS). In this work, ANFIS-based control algorithms are adopted for attitude and altitude stability and for automatic landing. These algorithms also apply an allocation matrix to provide the motors with the corresponding pulse width modulation (PWM) outputs. Via the Micro Air Vehicle Link (MAVLink), a lightweight message protocol, the Pixhawk autopilot transmits flight information such as attitude, position, and flight conditions to the GCS, and receives data such as path waypoints from the GCS. The Pixhawk is also connected to a lightweight and low-cost IR-lock camera, which is used for precision landing.

4.2. IR-lock camera

The IR-lock camera, as shown in Fig. 9, is connected to the quadcopter for precision landing purposes. An IR beacon is mounted at the center of the landing point and is detected by the IR camera, which determines the beacon's position in the 2-D image frame. A vision algorithm estimates the horizontal coordinates, $ x_i $ and $ y_i $, measured in pixel units. The x and y components are fused to calculate the estimated positions and velocities with respect to the beacon frame. This system provides highly precise landing, within 20 cm, allowing the vehicle to approach the center of the beacon horizontally while descending toward a specific landing station.
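To make the vision step concrete, here is a hedged sketch of one common way to convert the beacon's pixel coordinates into metric horizontal offsets: the pinhole-camera conversion below, with its placeholder field of view, image resolution and small-angle geometry, is an illustrative reconstruction rather than the IR-lock firmware's actual computation.

```python
import math

def pixel_to_horizontal_offset(px, py, altitude,
                               img_w=320, img_h=200,
                               fov_x=math.radians(60), fov_y=math.radians(35)):
    """Convert beacon pixel coordinates (px, py) into approximate metric
    offsets from the camera axis, assuming a downward-facing pinhole
    camera at the given altitude. The resolution and FOV values are
    illustrative placeholders, not the IR-lock specification."""
    ang_x = (px - img_w / 2) * (fov_x / img_w)   # angle per pixel, x axis
    ang_y = (py - img_h / 2) * (fov_y / img_h)   # angle per pixel, y axis
    # Project the viewing angles onto the ground plane
    return altitude * math.tan(ang_x), altitude * math.tan(ang_y)

x_off, y_off = pixel_to_horizontal_offset(px=180, py=90, altitude=3.0)
print(f"beacon offset: x = {x_off:.2f} m, y = {y_off:.2f} m")
```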
4.3. Ground control station

The ground control station consists of a high-speed laptop computer and a radio controller. The laptop is connected to the quadcopter by a radio telemetry module for flight data and parameter transmission. The computer also runs graphical user interface software (i.e., Mission Planner) that allows a user to monitor flight states and to change parameters. Furthermore, Matlab with the PX4-Simulink support package and the required code compilers is installed on the computer.

4.4. PX4-Simulink support package

The PX4-Simulink support package in Matlab allows users to customize different algorithms in Simulink blocks that leverage Pixhawk sensor data and other calculations at run time. Moreover, C++ code can be generated from the Simulink models and deployed to the Pixhawk flight management unit (FMU) using the PX4 Toolkit. This package also provides a sensor/peripheral block library that allows reading on-board sensor data such as inertial measurements, GPS, and vehicle estimates from the IR-lock sensor, writing PWM signals to the motors' electronic speed controllers (ESCs), and activating the analog-to-digital converter (ADC) and serial Rx/Tx. According to the official package website [28], the package is specifically tailored for Pixhawk flight management units, so the generated code has been optimized for this particular flight controller. During the deployment process, no problems were encountered in terms of either memory usage or computation. For other hardware, however, one can easily extract the fuzzy control system, since the equations for the membership functions and the rules are explicitly provided, each in a separate function generated by the compiler.

5. Simulation and experimental results

This section discusses the results of simulations and flight experiments on a quadcopter drone. The characteristics of the quadcopter used (see Fig. 8) were experimentally identified and are presented in Table 3. In the first stage, the quadcopter model and the different controllers are implemented for simulation in Matlab 2018b, in order to compare the performance of the controllers discussed in this paper. The simulation is performed for three different cases, and the parameter values of the adopted PD controllers are taken from [2] and presented in Table 4. The feasibility of running the controllers on embedded systems depends on the control signals, which are the values of the angular velocities of the motors; thus, the control signals during the simulation are also considered.

5.1. Case 1

The first case is taken from [2]. The initial conditions of the quadcopter, in linear and angular positions, are $[x,y,z]^{\rm{T}} = [0,0,1]^{\rm{T}}$ in meters and $[\phi,\theta,\psi]^{\rm{T}} = [10,10,10]^{\rm{T}}$ in degrees. The desired state of the quadcopter is hovering, i.e., $[z,\phi,\theta,\psi]^{\rm{T}} = [0,0,0,0]^{\rm{T}}$. The simulation runs for 5 s. The output graphs for the altitude and attitude during the simulation are presented in Fig. 10, and the control signals are presented in Fig. 11. In this case, it can be clearly seen that the ANFIS-PD controller is the fastest to reach the desired state, with negligible overshoot and feasible control signals. Although the fuzzy-PD controller may not be the best controller in this case, it still shows good results compared to the PD controller. The oscillatory behavior of the PD controller when controlling the attitude can be easily observed, along with its slow settling time. The settling time and overshoot values for each controller are presented in Table 5. The computational costs for this case were measured in the simulation environment: the average execution times for the PD and ANFIS controllers were estimated to be 6 s and 16 s, respectively. However, executing the ANFIS model on the real-time processor did not cause an overrun; the autopilot processor actively operated the control system, yielding very responsive flight, as shown by the experiments in Section 5.4. In our experimental setup, we used the motor from [29], which delivers a maximum power of 109 W at full throttle. After the angular velocities are converted to the corresponding PWM signals, the average PWM signals required from the motors are found to be 2000 for the PD controller, which corresponds to 100% throttle, and 1621 for the ANFIS controller, which corresponds to approximately 65% throttle. As a result, the PD controller consumes 109 W, while the ANFIS controller consumes only 42.2 W.
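The throttle and power figures above can be reproduced with simple arithmetic. A hedged sketch follows: the standard 1000-2000 us PWM-to-throttle mapping and the quadratic throttle-to-power relation are assumptions that happen to reproduce the reported 42.2 W, and the true motor curve may differ.

```python
def pwm_to_throttle(pwm, pwm_min=1000, pwm_max=2000):
    """Map a standard 1000-2000 us RC PWM command to a throttle fraction."""
    return (pwm - pwm_min) / (pwm_max - pwm_min)

P_MAX = 109.0  # maximum motor power at full throttle (W), from [29]

for name, pwm in [("PD", 2000), ("ANFIS", 1621)]:
    thr = pwm_to_throttle(pwm)
    # Assumption: electrical power scales roughly with throttle squared;
    # this reproduces the ~42 W reported for the ANFIS controller.
    power = P_MAX * thr ** 2
    print(f"{name}: throttle = {thr:.0%}, power ~ {power:.1f} W")
```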
5.2. Case 2

In this case, the desired state is time-varying: the quadcopter is expected to follow several ramp functions, as detailed in Table 6. The simulation runs for 30 s. The altitude and angular positions during the simulation are presented in Fig. 12, and the control signals in Fig. 13. It is quite clear in this case that the PD controller exhibits oscillatory behavior when trying to control the angular positions $ \phi $ and $ \psi $ in the interval between 10 s and 15 s, because the PD controller is tuned for a near-hovering state. As in the first case, the ANFIS-PD controller is the fastest to reach the time-varying desired state compared with both the PD and fuzzy-PD controllers.

5.3. Case 3

The third case investigates the performance of the three controllers in tracking a desired altitude in the presence of disturbances. The simulation runs for 10 s. At the beginning of the simulation, the quadcopter is expected to track a sinusoidal target and then maintain zero altitude. An external disturbance force is applied at the beginning of the simulation and towards its end, as illustrated in Fig. 14(a). The performance of the three controllers and the corresponding control signals are presented in Figs. 14(b) and 15, respectively. The ANFIS-PD controller clearly copes well with the external disturbance force. For clarity, Table 7 reports the deviation from the desired target for the three controllers in terms of mean square error (MSE) and mean absolute error (MAE).

5.4. Experimental flight tests

The proposed ANFIS controllers, the attitude and position estimators, and the precision landing algorithm are executed and verified experimentally on a Pixhawk embedded system on board a quadcopter, as discussed in Sections 3 and 4. Several experiments were conducted to verify the system's efficiency and reliability. First, the ANFIS-based attitude and altitude controllers are tested in manual mode, where a user sends commands through a remote controller to control the position of the drone in space. The commands in manual mode are the throttle signal, which controls the altitude, and the roll, pitch and yaw signals, which control the drone's attitude. Fig. 16(a) illustrates the flight path of the quadcopter in manual mode. The 39-second flight experiment demonstrates the quadcopter's good performance, as shown in Figs. 17 and 18. The comparisons between the desired and the estimated (current) orientation angles (i.e., $ \phi, \theta $ and $ \psi $) of the drone are illustrated in Fig. 17. The angles are presented in degrees, and the MSEs for the orientation angles are 0.23°, 0.38° and 0.018°, respectively.
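The MSE figures quoted here (and the MSE/MAE columns of Table 7) follow directly from the logged desired and estimated signals. A minimal sketch with synthetic log values:

```python
import numpy as np

def mse_mae(desired, estimated):
    """Mean square error and mean absolute error between a desired
    trajectory and the estimated (current) signal."""
    err = np.asarray(desired) - np.asarray(estimated)
    return np.mean(err ** 2), np.mean(np.abs(err))

# Synthetic roll-angle log (degrees): desired vs. estimated samples
t = np.linspace(0, 39, 390)                       # a 39 s flight at 10 Hz
desired = 5 * np.sin(0.5 * t)
estimated = desired + np.random.normal(0, 0.5, t.size)
mse, mae = mse_mae(desired, estimated)
print(f"MSE = {mse:.3f} deg^2, MAE = {mae:.3f} deg")
```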
Secondly, the drone's performance is tested in loiter mode, where the drone automatically holds its horizontal position based on GPS locations and its altitude based on barometer estimates. In this mode, the position controller sends the desired angle and height commands to the attitude and altitude controllers, which keep the drone hovering over a certain point. In this case, the attitude and altitude controllers are primarily responsible for maintaining the drone's balance. Fig. 19 shows excellent attitude results, with MSEs of less than 1°. Furthermore, the altitude controller responds promptly to the throttle commands to reach the desired altitude with MSE = $0.12\;{\rm{m}}$, as shown in Fig. 18. The 3-D path of the quadcopter during this experiment is illustrated in Fig. 16. The deviations in the horizontal location are due to the low accuracy of GPS.

Thirdly, the proposed ANFIS controllers and the estimation system are tested in automatic precision landing. In this experiment, the drone's IR camera detects an IR beacon on the ground in order to land over it automatically and precisely. As shown in Fig. 20, the drone takes off and flies in loiter mode to reach a specific altitude. When the camera detects the beacon, the precision landing mode is activated and the horizontal positions of the quadcopter are calculated with respect to the center of the beacon. The quadcopter adjusts its horizontal positions to align the center of the drone with the center of the beacon as closely as possible. Fig. 20 shows that the drone lands precisely near the beacon, within less than 20 cm of error. Fig. 21 shows how effectively the attitude and altitude controllers perform in this mode: the overall MSEs for $ \phi $ and $ \theta $ are 0.13° and 0.34°, respectively, and the MSE in altitude is 0.036 m.

Finally, a relatively long-term experiment was performed on the drone in order to examine the performance of the adopted system under changing flight conditions, as well as to identify the maximum capabilities of the drone. In this experiment, the flight duration was about 15 min, including a number of take-offs and landings, travelling long distances at different speeds, and reaching high altitudes. Some variables were also intentionally introduced to the drone, such as changing the propeller size, adding extra loads, and changing the flight modes (i.e., stabilize, loiter, land, auto). In addition, some variables of the surrounding environment, listed in Table 8, were taken into account while analyzing the drone's performance. During the 15-minute flight experiment, the adopted system demonstrated reliable performance under the various conditions mentioned above. Because of the large amount of information, two portions of the flight results, from two different periods, were studied to analyze the performance of the drone. Figs. 22 and 23 illustrate the drone's velocities and attitude in three directions (North, East, and Up), respectively. Fig. 22 shows the drone's ability to travel at high speeds, up to 10 m/s in the longitudinal directions and up to 4 m/s in the vertical direction. Moreover, Fig. 23 demonstrates stability at large attitude angles, where the roll and pitch angles reach more than 30°, as well as fast yaw rates.

6. Conclusions

An adaptive neuro-fuzzy control system has been successfully developed to control the altitude and attitude of a quadcopter. Comparing ANFIS with the PD and fuzzy controllers, the simulation results have clearly demonstrated the feasibility and effectiveness of the proposed control approach in terms of reference tracking and disturbance rejection.
The ANFIS controller outperformed a classical PD controller and a properly tuned fuzzy logic controller in three different yet commonplace scenarios. In fact, the performance of the ANFIS controller developed in this paper can be considered satisfactory when compared to the linear controllers reported in [30-32] and the non-linear controllers reported in [7, 33, 34]. This work went beyond simulation to practical application, with a complete flight system designed and implemented on a Pixhawk autopilot using the PX4-Simulink support package. The flight system comprised the ANFIS controllers, the attitude and position estimators, and the vision-based precision landing algorithm, all of which were successfully deployed on the autopilot. Several flight tests were conducted, and the results showed that the controllers performed well in achieving the attitude and altitude stability of the quadcopter. Furthermore, a long flight experiment was conducted and the performance was analyzed under various conditions. The results demonstrate the efficiency of the adopted system and the reliability of flight with the integrated components. Although the results of this work demonstrated satisfactory performance of the Pixhawk-based quadcopter, running high-computational-demand image-processing algorithms alongside the ANFIS controller may reduce the efficiency of the autopilot. This issue forced us to simplify the precision landing controller by using the low-computation IR-lock sensor. In future work, we plan to overcome this limitation by using an external board, such as a Raspberry Pi, to handle the computational demand of the image processing techniques. This solution will also allow us to implement an ANFIS position controller for better trajectory tracking and landing. Future work could also focus on some of the most common commercial applications and uses of quadcopters. Regarding the control system, it is worth exploring the idea of multiple-input multiple-output ANFIS controllers and performing comparisons with other non-linear controllers often used in flight control systems.

The authors acknowledge the Hashemite University for providing the financial and technical support for this project. The authors also thank all colleagues and students at Jordan University and at the Hashemite University for their valuable assistance.

The following are the output equations of the ANFIS controllers.
$$\tag{A1}\begin{aligned}
out1mf1 &= 5.634\,5x^{2} + 5.480\,6x + 5.083\,1\\
out1mf2 &= 5.320\,5x^{2} + 0.000\,3x + 3.609\,0\\
out1mf3 &= 0.084\,6x^{2} + 0.133\,4x + 0.770\,6\\
out1mf4 &= -0.005\,3x^{2} - 0.175\,3x + 3.381\,8\\
out1mf5 &= 4.747\,4x^{2} + 3.135\,3x - 0.209\,9\\
out1mf6 &= -0.011\,6x^{2} + 0.780\,9x - 0.013\,8\\
out1mf7 &= 0.214\,7x^{2} + 3.640\,3x + 0.000\,3\\
out1mf8 &= 5.308\,2x^{2} + 5.096\,1x + 5.477\,8\\
out1mf9 &= 5.643\,0x^{2} - 5.265\,9x - 4.460\,2\\
out1mf10 &= -4.502\,9x^{2} - 0.648\,4x - 0.000\,0\\
out1mf11 &= -0.046\,4x^{2} - 4.593\,6x - 3.522\,2\\
out1mf12 &= -1.401\,7x^{2} - 0.037\,3x + 0.286\,8\\
out1mf13 &= -4.390\,6x^{2} - 4.718\,0x - 4.194\,7\\
out1mf14 &= 0.267\,4x^{2} - 0.055\,7x - 1.400\,1\\
out1mf15 &= -3.186\,8x^{2} - 4.706\,5x + 0.015\,2\\
out1mf16 &= -0.000\,0x^{2} - 0.663\,9x - 4.523\,0\\
out1mf17 &= -4.455\,1x^{2} - 5.264\,9x - 5.370\,3\\
out1mf18 &= -4.768\,8x^{2} - 5.217\,9x - 5.332\,9\\
out1mf19 &= -0.000\,3x^{2} + 2.697\,6x - 3.885\,6\\
out1mf20 &= -3.205\,6x^{2} - 3.354\,0x - 0.066\,9\\
out1mf21 &= 0.014\,9x^{2} + 0.249\,7x + 0.027\,9\\
out1mf22 &= -0.320\,4x^{2} + 0.232\,4x + 0.078\,6\\
out1mf23 &= 3.607\,6x^{2} + 3.107\,2x + 3.861\,0\\
out1mf24 &= -2.793\,5x^{2} + 0.000\,3x + 5.321\,0\\
out1mf25 &= 5.202\,8x^{2} + 4.742\,2x + 5.343\,5
\end{aligned}$$

$$\tag{A2}\begin{aligned}
out1mf1 &= 0.479\,0x^{2} + 0.007\,3x + 0.177\,8\\
out1mf2 &= 0.331\,0x^{2} + 0.124\,3x - 0.063\,9\\
out1mf3 &= 0.255\,8x^{2} + 0.045\,4x + 0.038\,1\\
out1mf4 &= 0.046\,8x^{2} - 0.115\,0x + 0.234\,8\\
out1mf5 &= 0.179\,7x^{2} + 0.161\,8x - 0.231\,4\\
out1mf6 &= 0.018\,9x^{2} - 0.068\,8x + 0.123\,9\\
out1mf7 &= 0.146\,9x^{2} + 0.139\,6x + 0.120\,3\\
out1mf8 &= 0.342\,0x^{2} + 0.138\,2x + 0.167\,5\\
out1mf9 &= 0.363\,9x^{2} - 1.439\,7x + 0.837\,9\\
out1mf10 &= -0.099\,7x^{2} - 0.109\,2x - 0.030\,3\\
out1mf11 &= -1.456\,1x^{2} + 0.424\,1x + 0.183\,6\\
out1mf12 &= -0.010\,9x^{2} + 0.062\,8x - 0.372\,3\\
out1mf13 &= 0.272\,7x^{2} - 1.450\,6x + 0.351\,5\\
out1mf14 &= -0.505\,7x^{2} + 0.011\,3x - 0.068\,0\\
out1mf15 &= 0.307\,7x^{2} + 0.338\,7x - 1.143\,6\\
out1mf16 &= -0.026\,5x^{2} - 0.090\,3x - 0.242\,3\\
out1mf17 &= 0.982\,8x^{2} - 1.354\,2x - 0.325\,0\\
out1mf18 &= -1.040\,9x^{2} - 1.155\,8x - 1.833\,3\\
out1mf19 &= -0.695\,1x^{2} + 0.029\,1x - 0.406\,3\\
out1mf20 &= -0.495\,0x^{2} - 1.425\,0x + 0.017\,0\\
out1mf21 &= 1.050\,1x^{2} + 0.036\,5x + 0.004\,3\\
out1mf22 &= -0.038\,4x^{2} - 1.071\,0x + 0.259\,5\\
out1mf23 &= 1.181\,9x^{2} + 0.322\,1x + 0.329\,0\\
out1mf24 &= -0.236\,4x^{2} + 0.660\,1x + 1.837\,9\\
out1mf25 &= 1.115\,7x^{2} + 1.027\,9x + 0.249\,9.
\end{aligned}$$
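These consequents are combined by the ANFIS inference machinery at run time. As a rough illustration of how a Sugeno-type system turns 25 rule consequents like those in (A1) into one crisp control output, the following is a hedged sketch: the triangular membership functions, rule pairing and firing-strength normalization below are generic textbook choices, not the exact functions produced by the neuro-fuzzy training.

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def sugeno_output(e, de, centers, consequents, w=0.5):
    """Weighted average of rule consequents, as in a Sugeno-type ANFIS:
    each of the 5x5 rules fires with the product of its two input
    memberships, and the crisp output is the normalized weighted sum."""
    mu_e  = [tri_mf(e,  c - w, c, c + w) for c in centers]
    mu_de = [tri_mf(de, c - w, c, c + w) for c in centers]
    num, den = 0.0, 0.0
    for i in range(5):
        for j in range(5):
            strength = mu_e[i] * mu_de[j]           # rule firing strength
            num += strength * consequents[5 * i + j](e)
            den += strength
    return num / den if den > 0 else 0.0

# Illustrative consequents: quadratic polynomials in the error, like (A1)
rng = np.random.default_rng(0)
coeffs = rng.uniform(-1, 1, (25, 3))
consequents = [lambda x, c=c: c[0] * x**2 + c[1] * x + c[2] for c in coeffs]
centers = [-1.0, -0.5, 0.0, 0.5, 1.0]   # NB, NS, Z, PS, PB set centers
print(sugeno_output(0.2, -0.1, centers, consequents))
```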
A neural joint model for entity and relation extraction from biomedical text

Fei Li1, Meishan Zhang2, Guohong Fu2 & Donghong Ji1 (ORCID: orcid.org/0000-0002-9244-2476)

Extracting biomedical entities and their relations from text has important applications in biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Considerable feature engineering is needed when feature-based models are employed. Moreover, pipeline models may suffer from error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities and their relations simultaneously, which can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.

Automatically extracting entities and their relations from biomedical text has attracted much research attention in the biomedical text mining community due to its important applications in knowledge acquisition and ontology construction [1]. Recently, various related tasks have been proposed, such as protein-protein interaction detection (PPI) [2], drug-drug interaction detection (DDI) [3], adverse drug event extraction (ADE) [4] and the bacteria biotope task (BB) [5]. Taking the ADE task as an example, the objective is to recognize mentions of drug and disease entities, and extract possible ADE relations between them. Given the sentence "A woman who was treated for thyrotoxicosis (disease) with methimazole (drug) developed agranulocytosis (disease).", the outputs will be three entity mentions and an ADE relation {methimazole, agranulocytosis}.

Entity and relation extraction is a standard task in text mining or natural language processing (NLP). Most previous work used two-step pipeline models to perform this task. First, entity mentions in a given sentence are recognized using the technologies of named entity recognition (NER). NER is usually cast as a sequence labeling problem solved by conditional random fields (CRFs) [6]. Second, each entity pair is examined to decide whether it has a task-specific relation, using classification models such as support vector machines (SVMs) [7]. In the biomedical community, pipeline models are also frequently used for this task [8–14]. Such pipeline models suffer from two main problems. First, the errors generated in the NER step may propagate to the step of relation classification. For instance, if a drug or disease entity mention is incorrectly recognized, the extraction of its related ADEs will be incorrect. Second, the interactions between the two subtasks cannot be utilized, although these interactions may benefit both subtasks.
For instance, given the sentence "The tire maker still employs 1400" [15], although it may be difficult to recognize "1400" as a person entity, the word "employs" indicates an employment-organization relation, which must involve a person entity. Therefore, such a relation may help the model to recognize "1400" correctly.

Due to the aforementioned disadvantages of pipeline models, joint models, which process entity recognition and relation classification simultaneously, have been proposed. Joint models can alleviate the problem of error propagation; in addition, some model parameters are shared by the submodels of entity recognition and relation classification, and these shared parameters help the models capture the interactions between the two subtasks. Roth and Yih [16] proposed a joint inference framework based on integer linear programming to extract entities and relations. Li and Ji [15] exploited a single transition-based model to accomplish entity recognition and relation classification simultaneously. Kordjamshidi et al. [17] proposed a structured learning model to extract biomedical entities and their relationships. However, these feature-based approaches require much feature engineering, and they also suffer from the feature sparsity problem, since the combined feature space of a joint task is significantly larger than those of its subtasks.

Recently, deep learning with neural networks has received increasing research attention in the artificial intelligence area [18, 19], as well as in the text mining and NLP areas [20, 21]. Compared with other models, deep neural networks adopt low-dimensional dense embeddings to denote features such as words or part-of-speech (POS) tags, which can effectively mitigate the feature sparsity problem. In addition, deep neural networks demand less feature engineering, since they can learn features from training data automatically. Ma and Hovy [22] and Lample et al. [23] exploited similar frameworks combining recurrent neural networks (RNNs) with CRFs and obtained the best results on several benchmark NER datasets. For relation classification, there are two state-of-the-art methods using deep neural networks, namely RNNs [24] and convolutional neural networks (CNNs) [25]. They used RNNs or CNNs to learn relation representations along the words between two target entities or along the words on the shortest dependency path (SDP) of two target entities. Miwa and Bansal [26] proposed an end-to-end relation extraction model and obtained competitive performances on several datasets. However, there is less related work in biomedical entity and relation extraction using deep neural networks. Li et al. [27] and Mehryary et al. [28] used approaches similar to those of [24, 25], but they only focused on relation classification with given entities. Li et al. [29] exploited a transition-based feed-forward neural network to jointly extract drug-disease entity mentions and their ADE relations. Jiang et al. [30] proposed two independent neural models for the DDI and gene mention tagging tasks, respectively.

In this paper, we follow this line of work on deep neural networks and propose a neural joint model to extract biomedical entities and their relations. First, our model uses CNNs to encode the character information of words into character-level representations.
Second, the character-level representations, word embeddings and POS embeddings are fed into a bi-directional (Bi) long short-term memory (LSTM) [31] based RNN to learn the representations of entities and their contexts in a sentence. These representations are used to recognize biomedical entities. Third, another Bi-LSTM-RNN learns the relation representations of two target entities along their SDP. These representations are used to classify their relations. The second Bi-LSTM-RNN is stacked on the first one, i.e., the output vectors of the LSTM units in the first Bi-LSTM-RNN are used as the input vectors of the LSTM units in the second one. The parameters of the LSTM units in the first Bi-LSTM-RNN are shared by both networks, so they are jointly affected by the entity recognition and relation classification tasks during training.

Our neural joint model was evaluated for extracting biomedical entities and their relations on two tasks, namely ADE [4] and BB [5]. Compared with the state-of-the-art model [29] for the ADE task, our model improved the precision and recall of drug-disease entity recognition by 3.2 and 7.1%, and of ADE relation extraction by 3.5 and 12.9%, respectively. Compared with the best system [14] for the BB task, our model boosted the precision and recall of resident relation extraction by 30.5 and 0.8%, respectively. Experimental results showed that our neural joint model could obtain competitive performances with less feature engineering. In addition, our model could obtain better performances than pipeline models by sharing parameters between the submodels. We demonstrate that deep neural networks are also effective for biomedical entity and relation extraction. Therefore, our model is able to facilitate the research on biomedical text mining.

CNN for character-level representations

Character-level features have been demonstrated to be effective for neural NER models. For example, the suffix "bacter" is a strong feature indicating a bacteria entity such as "campylobacter" or "helicobacter". Following previous work [22, 23], CNNs are used to extract morphological information (like the prefix or suffix of a word) from the characters of words. Figure 1 shows the process of extracting character information from a word and encoding it into a character-level vector representation.

The CNN for extracting character-level representations. A rectangular grid indicates a vector and a square indicates one dimension of this vector, so character embeddings or representations can be denoted as n-dimensional vectors. Shaded rectangular grids indicate special padding vectors

Given a word $w = \{c_1, c_2, \ldots, c_N\}$, $c_i$ denotes its i-th character and $emb(c_i)$ denotes the embedding of this character. To use morphological information, the embeddings of consecutive characters within a window of size C are concatenated as the final representation $r_{c_i}$ of $c_i$. For example, if $C = 1$, $r_{c_i} = [emb(c_{i-1}), emb(c_i), emb(c_{i+1})]$, where "[]" denotes the vector concatenation operation. The convolutional kernel of the CNN then performs N convolutions over all the characters in this word, and for each convolution i, the kernel output $o_i$ is computed by

$$ o_{i} = tanh\left(W_{1} r_{c_{i}} + b_{1}\right), $$

where $W_1$ and $b_1$ are a learned parameter matrix and bias vector, and tanh denotes the hyperbolic tangent activation function. To generate the character-level representation $r_w$ of this word w, max-pooling is applied over all kernel outputs $o_1, o_2, \ldots, o_N$. The k-th dimension of $r_w$ is computed by

$$ r_{w_{k}} = \max_{1 \leqslant i \leqslant N} o_{i_{k}}. $$
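The two equations above translate into a few lines of code. The following is a minimal numpy sketch with a window size of C = 3, matching the hyper-parameter settings reported later; the alphabet, output size and random parameters are illustrative placeholders rather than the trained values.

```python
import numpy as np

rng = np.random.default_rng(42)
ALPHABET = "abcdefghijklmnopqrstuvwxyz-"
DIM_CHAR, C = 25, 3                 # character-embedding size and window size
char_emb = {ch: rng.normal(0, 0.01, DIM_CHAR) for ch in ALPHABET}
PAD = np.zeros(DIM_CHAR)            # special padding vector

DIM_IN = (2 * C + 1) * DIM_CHAR     # windowed input size: (2*3+1)*25 = 175
DIM_OUT = 50                        # character-level vector size (placeholder)
W1 = rng.normal(0, 0.01, (DIM_OUT, DIM_IN))
b1 = np.zeros(DIM_OUT)

def char_level_repr(word):
    """Eq. (1): o_i = tanh(W1 r_ci + b1) on each character window;
    Eq. (2): dimension-wise max-pooling over all kernel outputs."""
    embs = [PAD] * C + [char_emb.get(ch, PAD) for ch in word] + [PAD] * C
    outs = []
    for i in range(C, C + len(word)):
        r_ci = np.concatenate(embs[i - C:i + C + 1])   # window of embeddings
        outs.append(np.tanh(W1 @ r_ci + b1))
    return np.max(outs, axis=0)                        # max-pool into r_w

print(char_level_repr("helicobacter").shape)           # -> (50,)
```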
Bi-LSTM-RNN for biomedical entity recognition

Following state-of-the-art neural models [22, 23, 26], biomedical entity recognition is cast as a sequence labeling problem. For example, if the standard BILOU label scheme is utilized in the ADE task, which includes two entity types, namely Drug and Disease, the entity labels can be designed as follows. B-Drug/B-Disease, I-Drug/I-Disease and L-Drug/L-Disease denote the beginning, intermediate and last words of Drug/Disease entities, respectively. U-Drug or U-Disease denotes a single-word Drug or Disease entity. O denotes that the word does not belong to any type of entity. For example, given the sentence "gliclazide-induced acute hepatitis", Fig. 2 shows the process of labeling each word of this sentence by our Bi-LSTM-RNN model.

The Bi-LSTM-RNN for biomedical entity recognition. Rectangular grids indicate vectors of feature embeddings or representations. At the bottom, three kinds of vectors are concatenated and fed into LSTMs. Dashed arrow lines denote bottom-up computations along the network framework and solid arrow lines denote left-to-right computations along the sentence

Given a sentence $w_1/p_1/r_{w_1}, w_2/p_2/r_{w_2}, \ldots, w_N/p_N/r_{w_N}$, $w_i$ denotes the i-th word, $p_i$ denotes the POS tag of $w_i$, and $r_{w_i}$ denotes the character-level representation of $w_i$. For the i-th step of sequence labeling, the Bi-LSTM-RNN layer takes the concatenation of the word embedding, POS tag embedding and character-level representation of $w_i$ as input, given by

$$ t_{i}=\left[emb\left(w_{i}\right), emb\left(p_{i}\right), r_{w_{i}}\right]. $$

Based on $t = \{t_1, t_2, \ldots, t_N\}$, an LSTM unit in the left-to-right direction associates each of them with a hidden state $\overrightarrow{h}_i$, so t corresponds to $\overrightarrow{h} = \{\overrightarrow{h}_1, \overrightarrow{h}_2, \ldots, \overrightarrow{h}_N\}$. Here $\overrightarrow{h}_i$ captures not only the information in the current step, but also that in the previous steps. To capture the information in the following steps, we also add a counterpart $\overleftarrow{h}_i$ of $\overrightarrow{h}_i$ in the reverse direction, so t also corresponds to $\overleftarrow{h} = \{\overleftarrow{h}_1, \overleftarrow{h}_2, \ldots, \overleftarrow{h}_N\}$. In the hidden layer, $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are selected as one input source in the i-th step. Moreover, the last entity label $l^{e}_{i-1}$ is also selected as another input source to consider label dependence (e.g., the label I-Drug should not follow the label O); this is not shown in Fig. 2 for conciseness. The final inputs and outputs of the i-th step in the hidden layer are given by

$$ h^{e}_{i} = tanh\left(W_{2} \left[\overrightarrow{h}_{i}, \overleftarrow{h}_{i}, emb\left(l^{e}_{i-1}\right)\right] + b_{2}\right), $$

where $h^{e}_{i}$ denotes the output vector of the hidden layer, and $W_2$ and $b_2$ denote a learned parameter matrix and bias vector. Finally, the softmax output layer calculates the probabilities $y^{e}$ of all entity labels $L^{e}$, given by

$$ y^{e} = softmax\left(W_{3} h^{e}_{i} + b_{3}\right), $$

where the k-th label with the maximum probability $y^{e}_{k}$ is selected as the label of the i-th word.
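To illustrate the label scheme concretely, here is a hedged sketch of converting gold entity spans into BILOU tags; the (start, end, type) span format is an assumption about the corpus encoding, not the authors' exact data format.

```python
def bilou_encode(tokens, entities):
    """Convert entity spans (start, end, type), end-exclusive over token
    indices, into BILOU tags: B/I/L for multi-word entities, U for
    single-word entities, O elsewhere."""
    tags = ["O"] * len(tokens)
    for start, end, etype in entities:
        if end - start == 1:
            tags[start] = f"U-{etype}"
        else:
            tags[start] = f"B-{etype}"
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{etype}"
            tags[end - 1] = f"L-{etype}"
    return tags

tokens = ["gliclazide", "-", "induced", "acute", "hepatitis"]
entities = [(0, 1, "Drug"), (3, 5, "Disease")]
print(list(zip(tokens, bilou_encode(tokens, entities))))
# [('gliclazide', 'U-Drug'), ('-', 'O'), ('induced', 'O'),
#  ('acute', 'B-Disease'), ('hepatitis', 'L-Disease')]
```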
Bi-LSTM-RNN for relation classification

Once entity recognition is finished, our model starts relation classification to determine whether a task-specific relation exists between each possible entity pair. Prior work has demonstrated the effectiveness of SDPs in dependency trees for relation classification [24, 26]: the words along an SDP concentrate the most relevant information while diminishing less relevant noise. Following these studies, we use the Bi-LSTM-RNN to model the relation representations of two target entities along their SDP. For example, given the sentence "gliclazide-induced acute hepatitis", Fig. 3 shows the process of classifying ADE relations by our Bi-LSTM-RNN.

The Bi-LSTM-RNN for relation classification. The input sentence is tokenized before it is analyzed by a dependency parser. Tokens are indexed by Arabic numerals. The basic (a.k.a. projective) dependency style is utilized to build a tree. The bold lines in the tree denote the shortest dependency path (SDP) between "gliclazide" and "hepatitis", with their lowest common ancestor "induced". $x_i$ indicates the input vector of an LSTM unit, as shown in Eq. 6, and i corresponds to the index of a token. In the Bi-LSTM-RNN layer, solid arrow lines denote bottom-up and top-down computations along the SDP in the dependency tree. $\uparrow h_a$, $\uparrow h_b$, $\downarrow h_a$, $\downarrow h_b$ are listed in Eq. 8

Given an entity pair $e_a$ (e.g., gliclazide) and $e_b$ (e.g., acute hepatitis) in a sentence, the last words a (e.g., gliclazide) and b (e.g., hepatitis) of these entities are used to build the SDP between them. The SDP can be formally represented by $\{a, a_1, \ldots, a_m, c, b_n, \ldots, b_1, b\}$ (e.g., {gliclazide, induced, hepatitis}), where c denotes their lowest common ancestor in the dependency tree (e.g., induced). $a_1, \ldots, a_m$ denote the words occurring between a and c on the SDP, and $b_1, \ldots, b_n$ denote the words occurring between b and c. The SDP can be divided into two parts: $\{a, a_1, \ldots, a_m, c\}$ (e.g., {gliclazide, induced}) and $\{b, b_1, \ldots, b_n, c\}$ (e.g., {hepatitis, induced}) are bottom-up sequences; $\{c, a_m, \ldots, a_1, a\}$ (e.g., {induced, gliclazide}) and $\{c, b_n, \ldots, b_1, b\}$ (e.g., {induced, hepatitis}) are top-down sequences. We extract features from both kinds of sequences by the Bi-LSTM-RNN. The input of each LSTM unit is a concatenation of three parts, given by

$$ x_{i} = \left[\overrightarrow{h}_{i}, \overleftarrow{h}_{i}, emb(d_{i})\right], $$

where $emb(d_i)$ denotes the embedding of the dependency type $d_i$ between the word $w_i$ and its governor in the dependency tree. $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ correspond to the word $w_i$ and are identical to the notations mentioned in Eq. 4. Since $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are used as the inputs of these LSTM units, the Bi-LSTM-RNN for relation classification is stacked on the Bi-LSTM-RNN for entity recognition. Therefore, the two Bi-LSTM-RNNs in our joint model share partial parameters, and these parameters can be tuned during joint training, which helps our joint model capture the interactions between the two subtasks. Miwa and Bansal [26] also demonstrated the effectiveness of this method for neural models.

The last LSTM outputs computed along the bottom-up sequences $\{a, a_1, \ldots, a_m, c\}$ and $\{b, b_1, \ldots, b_n, c\}$ are denoted as $\uparrow h_a$ and $\uparrow h_b$. The last LSTM outputs computed along the top-down sequences $\{c, a_m, \ldots, a_1, a\}$ and $\{c, b_n, \ldots, b_1, b\}$ are denoted as $\downarrow h_a$ and $\downarrow h_b$. In the hidden layer, $\uparrow h_a$, $\uparrow h_b$, $\downarrow h_a$ and $\downarrow h_b$ are selected as one input source, and the entity representations $r_a$ and $r_b$ are used as another input source, computed by

$$ \begin{aligned} r_{a} &= \frac{1}{|K_{a}|}\sum_{k \in K_{a}} \left[\overrightarrow{h}_{k}, \overleftarrow{h}_{k}\right],\\ r_{b} &= \frac{1}{|K_{b}|}\sum_{k \in K_{b}} \left[\overrightarrow{h}_{k}, \overleftarrow{h}_{k}\right], \end{aligned} $$

where $K_a$ and $K_b$ denote the index sets of the words in the two entities, and $\overrightarrow{h}_k$ and $\overleftarrow{h}_k$ are identical to the notations in Eq. 4. The entity representations are used to compensate for information loss, since the SDP is built from the last words of the two target entities. For conciseness, this part is not shown in Fig. 3. Finally, all vector representations of the two input sources are concatenated and then computed in the hidden layer to generate the outputs $h^{r}$, given by

$$ h^{r} = tanh\left(W_{4} \left[\uparrow{h}_{a}, \uparrow{h}_{b}, \downarrow{h}_{a}, \downarrow{h}_{b}, r_{a}, r_{b}\right] + b_{4}\right). $$

A softmax layer calculates the probabilities $y^{r}$ of all relation labels $L^{r}$, given by

$$ y^{r} = softmax\left(W_{5} h^{r} + b_{5}\right), $$

where the k-th label with the maximum probability $y^{r}_{k}$ is selected as the relation type of the two target entities $e_a$ and $e_b$.
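The SDP construction described above reduces to finding the lowest common ancestor of the two entities' last words. A minimal sketch, assuming each token's head index is available from the parser (the toy head indices below are illustrative):

```python
def path_to_root(idx, heads):
    """Token indices from idx up to the root (the root's head is -1)."""
    path = [idx]
    while heads[path[-1]] != -1:
        path.append(heads[path[-1]])
    return path

def shortest_dependency_path(a, b, heads):
    """Return the SDP between tokens a and b, split at their lowest
    common ancestor c: the bottom-up parts {a,...,c} and {b,...,c}."""
    pa, pb = path_to_root(a, heads), path_to_root(b, heads)
    ancestors_b = set(pb)
    c = next(n for n in pa if n in ancestors_b)   # lowest common ancestor
    up_a = pa[:pa.index(c) + 1]                   # {a, a1, ..., am, c}
    up_b = pb[:pb.index(c) + 1]                   # {b, b1, ..., bn, c}
    return up_a, up_b, c

# "gliclazide - induced acute hepatitis": toy head indices after parsing
tokens = ["gliclazide", "-", "induced", "acute", "hepatitis"]
heads = [2, 2, -1, 4, 2]                          # "induced" is the root
up_a, up_b, c = shortest_dependency_path(0, 4, heads)
print([tokens[i] for i in up_a], [tokens[i] for i in up_b])
# -> ['gliclazide', 'induced'] ['hepatitis', 'induced']
```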
Both submodels of our joint model employ the same training algorithm, and AdaGrad [32] is employed to control the update step; we describe their training in one section for conciseness. Online learning is exploited to train the model parameters. Given a sentence with gold-standard entities and relations, we generate training examples for the entity recognition and relation classification submodels. When each example is sent to its corresponding submodel, the cross-entropy loss for this example is computed and gradients are back-propagated to each layer of the submodel for updating parameters. Therefore, we can consider that the two submodels are trained alternately. Moreover, since the parameters of the LSTM units in the entity recognition submodel are shared by both submodels, the loss of each example can propagate to these parameters; therefore, they are affected by both the entity recognition and relation classification tasks. Formally, assuming that the gold-standard label and its predicted probability are $l$ and $prob_l$, the loss for each example is calculated as $-\log prob_l$. Accumulating all losses with an L2 regularization term, the final objective is given by

$$ L(\theta)\ =\ -\sum\limits_{i} \log{prob_{l}}\ +\ \frac{\lambda}{2}\ \lVert\ \theta\ \rVert^{2}_{2}, $$

where θ denotes all model parameters, and λ is the regularization parameter.
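Eq. (10) amounts to summing per-example cross-entropy terms and an L2 penalty. A toy numpy sketch, in which the logits, gold labels and parameter shapes are illustrative values rather than the trained model:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def joint_loss(gold_probs, params, lam=1e-8):
    """Eq. (10): L(theta) = -sum_i log prob_l + (lambda/2) * ||theta||^2."""
    ce = -np.sum(np.log(gold_probs))                 # cross-entropy terms
    l2 = 0.5 * lam * sum(np.sum(p ** 2) for p in params)
    return ce + l2

# Toy predicted distributions for two training examples
logits = [np.array([2.0, 0.5, -1.0]), np.array([0.1, 1.5, 0.3])]
gold = [0, 1]                                        # gold label indices
gold_probs = [softmax(z)[g] for z, g in zip(logits, gold)]
params = [np.random.randn(4, 3), np.random.randn(3)]
print(f"loss = {joint_loss(gold_probs, params):.4f}")
```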
We carried out experiments on two tasks, namely adverse drug event extraction (ADE) [4] and the bacteria biotope task (BB) [5]. The ADE task aims to extract two kinds of entities (drugs and diseases) and relations about which drug is associated with which disease (ADEs). Its dataset is published in the form of independent sentences that come from 1644 PubMed abstracts. Sentences in the dataset are divided into two categories, namely 6821 sentences in which at least one drug/disease entity pair has the ADE relation (i.e., ADE sentences), and 16695 sentences in which no drug/disease entity pair has the ADE relation (i.e., non-ADE sentences). Biocurators only annotated drug/disease entities (i.e., the arguments of ADE relations) in the ADE sentences, so there are no annotated entities in the non-ADE sentences. Following previous work [29], only ADE sentences were used in our experiments, since we need to evaluate the performance of both entity recognition and relation extraction. Similar to prior work [12, 29], 120 relations with nested gold annotations were removed (e.g., "lithium intoxication", where "lithium" is related to "lithium intoxication").

The BB task aims to extract bacteria-related knowledge from PubMed abstracts. We focus on the BB-event+ner subtask, which consists of two parts, namely recognizing bacteria, habitat and geographical entity mentions, and extracting Lives_In relations between bacteria entities and their locations (either habitat or geographical entities). The training, development and test sets of the BB-event+ner subtask include 71, 36 and 54 documents, which contain 1158, 736 and 1049 entities and 327, 223 and 314 relations, respectively. The statistics of the final data used in our experiments are shown in Table 1.

Table 1 Statistics of the ADE and BB data used in our experiments

Evaluation metrics

Standard precision (P), recall (R) and F1 were used as the evaluation metrics of entity and relation extraction, computed by

$$ \begin{aligned} P &= \frac{TP}{TP + FP},\\ R &= \frac{TP}{TP + FN},\\ F1 &= \frac{2 \times P \times R}{P + R}, \end{aligned} $$

where a recognized entity mention was counted as true-positive (TP) if its boundary and type matched those of a gold entity mention. An extracted relation was counted as TP if its relation type was correct, and the boundaries and types of its related entities matched those of the entities in a gold relation. A recognized entity or extracted relation was counted as false-positive (FP) if it did not match the corresponding conditions mentioned above. The number of false-negative (FN) instances was computed by counting the gold entities or relations that had not been identified by our model. Since there was no official development set in the ADE task, we evaluated our model using 10-fold cross-validation, where 10% of the data were used as the development set, 10% were used as the test set and the rest were used as the training set; the final results are displayed as macro-averaged scores. For the BB task, we used P, R and F1 to evaluate our model on the development set. The final results on the test set were given by the official evaluation service [5], which shows only the overall performance of relation extraction in P, R and F1.
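Eq. (11) translates directly into code; the counts in the example below are illustrative, not taken from the experiments.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard P, R and F1 from Eq. (11), guarding against empty counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Illustrative counts for extracted relations
p, r, f1 = precision_recall_f1(tp=250, fp=60, fn=90)
print(f"P = {p:.3f}, R = {r:.3f}, F1 = {f1:.3f}")
```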
Hyper-parameter settings

Some of the hyper-parameter values were tuned on the development set and others were chosen empirically following prior work [22, 26], since it is infeasible to perform a full search over all hyper-parameters. Their final values are shown in Table 2. For conciseness, the dimensions of the model parameter matrices $W_1, W_2, W_3, W_4, W_5$ and bias vectors $b_1, b_2, b_3, b_4, b_5$ are not shown, since they can be easily deduced from this table. Their values were randomly initialized with a uniform distribution.

Table 2 Hyper-parameter settings

The initial AdaGrad learning rate α and regularization parameter λ were set to 0.03 and $10^{-8}$, respectively. The dimension of the word embeddings was set to 200 and those of the other feature embeddings were set to 25. We used pre-trained biomedical word embeddings [33] to initialize our word embeddings, and the other kinds of embeddings were randomly initialized in the range (-0.01, 0.01). All the embeddings were tuned during training, except the word embeddings. For the CNN, the character window size C was set to 3, so the dimension of the convolutional kernel inputs $r_c$ can be computed as $(2 \times 3 + 1) \times 25 = 175$. For the Bi-LSTM-RNN in entity recognition, we set the dimensions of the LSTM hidden states $\overrightarrow{h}_i$ or $\overleftarrow{h}_i$, and of the hidden layer $h^{e}_{i}$, to 100. For the Bi-LSTM-RNN in relation classification, we set the dimensions of the LSTM hidden states $\uparrow h_a$, $\uparrow h_b$, $\downarrow h_a$ or $\downarrow h_b$, and of the hidden layer $h^{r}$, to 100. The dimensions of the entity representations $r_a$ and $r_b$ can thus be computed as 200.

Given a document, we used some heuristic rules to split it into sentences and then tokenized these sentences into words. Tokenization was performed using not only whitespace but also punctuation, since we might not find the node for an entity (e.g., "gliclazide") in the dependency tree if it were not separated from a piece of text (e.g., "gliclazide-induced"). All the words were transformed into their lowercase forms and numbers were replaced by zeroes. Version 3.4 of the Stanford CoreNLP toolkit [34] was used for POS tagging and dependency parsing. To ensure that the dependency structures are trees, we employed basic (a.k.a. projective) dependencies. In addition, discontinuous and nested entities were removed in order to fit our model.
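A hedged sketch of the preprocessing steps described above (punctuation-aware tokenization, lowercasing, digit replacement); the regular expression is an assumption about how the authors' heuristic rules might look, not their exact implementation.

```python
import re

def preprocess(sentence):
    """Tokenize on whitespace and punctuation, lowercase, and map digits
    to 0, so that e.g. 'gliclazide-induced' splits into separate tokens."""
    tokens = re.findall(r"\w+|[^\w\s]", sentence)
    return [re.sub(r"\d", "0", t.lower()) for t in tokens]

print(preprocess("Gliclazide-induced acute hepatitis in 2 patients."))
# -> ['gliclazide', '-', 'induced', 'acute', 'hepatitis', 'in', '0',
#     'patients', '.']
```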
Result comparisons with other work

Table 3 shows the results of prior work that processed the ADE task. Kang et al. [12] utilized a knowledge-based pipeline method, namely recognizing entities via an off-the-shelf tool, and extracting ADEs via the UMLS Metathesaurus and Semantic Network [35]. As shown in Table 3, their method obtained imbalanced precision and recall. One likely reason is that their method did not distinguish between ADE relations and drug-disease treatment relations, due to the limitations of manually designed rules and knowledge bases, so this strategy led to a high recall but a low precision. By contrast, our neural joint model achieved more balanced precision and recall without the assistance of knowledge bases. In addition, the recall of relation extraction is comparable with that of their method.

Table 3 Result (%) comparisons with other work in the ADE task

Li et al. [29] used a feed-forward neural network to jointly extract drug-disease entities and ADE relations. For drug-disease entity recognition, our model improved the precision, recall and F1 by 3.2, 7.1 and 5.1%, respectively. For ADE relation extraction, the precision, recall and F1 were improved by 3.5, 12.9 and 8.0%, respectively. Their method used knowledge bases such as WordNet [36] and CTD [37] to help improve performance. Moreover, they manually designed global features to capture the interactions of entity recognition and relation extraction. By contrast, our model obtained much better results without using any knowledge base and captured the interactions automatically.

Table 4 shows the results of related work that processed the BB task. LIMSI [14] achieved the best F1 in the official evaluation. It leveraged a pipeline framework using a CRF to recognize mentions of bacteria and locations, and an SVM to extract Lives_In relations between two entity mentions. UTS [5] also employed a pipeline framework that relied on two independent SVMs to perform entity recognition and relation classification, respectively. As shown in Table 4, they suffered from either low precision or low recall. Our neural joint model outperformed their methods without using the knowledge bases provided by the task organizers. In addition, neural features reduced the work of feature engineering in the CRF or SVM.

Table 4 Result (%) comparisons with other work in relation extraction of the BB task

All the methods in the BB task achieved lower recall than precision, which might be explained by two reasons. The first reason is that there is much disagreement among annotators on whether to annotate an entity mention or relation as a gold answer, based on the official statistics [5] shown in Table 5. This implies that it is a challenging task to extract Lives_In relations from PubMed abstracts, even for professional annotators. The second reason is that 27% of the relations are inter-sentence relations (i.e., the argument entities of a relation occur in different sentences), based on the official statistics of the BB task, so methods restricted to extracting intra-sentence relations (i.e., the argument entities of a relation occur in the same sentence) will suffer low recall. Nevertheless, the extraction of inter-sentence relations is still a very challenging problem in the text mining and NLP areas, which is not addressed in this paper.

Table 5 The inter-annotator agreement (%) of entity mentions and Lives_In relations [5]

Feature contributions

Experiments were carried out on the development set to explore the contributions of different features. For entity recognition, our features consist of words, characters, POS tags and entity labels. For relation extraction, our features consist of words, dependency types and entity representations. In the feature contribution experiments, we took the model using word features as the baseline, and added only one kind of the other features at a time. In Table 6, entity labels were most useful in the ADE task, improving the precision and recall by 2.4 and 1.9%, respectively, while in the BB task, POS tags contributed the most, improving the precision and recall by 2.3 and 4.1%, respectively. The effectiveness of character features was moderate, improving the F1 by 0.3 and 1.3%.

Table 6 Feature contribution experiments for entity recognition

In Table 7, by adding entity representations, our model achieved the biggest improvements in F1, by 1.0% in the ADE task and 3.0% in the BB task, while dependency type features contributed the most to precision in the BB task.

Table 7 Feature contribution experiments for relation extraction

Based on our experiments, the contributions of these features are not consistent across tasks, which is reasonable given the characteristics of the tasks and their datasets.

Comparisons of joint and pipeline models

Since our model uses parameter sharing to join the two Bi-LSTM-RNN networks, it is necessary to evaluate the effectiveness of this method. To this end, a pipeline model was built without parameter sharing and compared with the joint model. The pipeline model was built by replacing $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ in Eq. 6 with the word embeddings $emb(w_i)$; the connections between the two Bi-LSTM-RNNs were thus cut off and they became independent submodels. To be fair, both the pipeline and joint models used only word embedding features. As shown in Table 8, the performance differences between the pipeline and joint models are slight in the ADE task, while in the BB task the performance of the joint model is much better than that of the pipeline model: the F1 scores of the joint model increase by 2.8 and 4.2% in entity recognition and relation classification, respectively.
Miwa and Bansal [26] performed similar experiments on other datasets and the performance differences varied between 0.8 and 1.1%.

Table 8 Performance comparisons of joint and pipeline models

In general, we believe that parameter sharing between the subtasks of a joint model is effective, since the shared parameters are influenced by the correlated subtasks and can help a joint model capture the interactions between them. Nevertheless, such a strategy may have little effect on performance for a specific task, so the characteristics of the task also need to be considered.

The errors were divided into two parts, namely FP and FN. For entity recognition, both FP and FN errors can be divided into two types: the boundary of an entity is incorrectly recognized, and the type of an entity is incorrectly recognized. For relation extraction, FP errors contain two types: the entity mentions of a relation are incorrect (either boundaries or types), and the entity mentions are correct but their relation is incorrectly predicted. FN errors also consist of two types: first, at least one entity mention of a relation has not been recognized, leading to the loss of this relation; second, both entity mentions of a relation have been recognized, but the model does not determine that they have such a relation. Error analysis was performed on the development sets of the two datasets.

As shown in Table 9, boundary identification seems to be much more difficult than type identification in biomedical entity recognition: the errors of boundary identification account for more than 90% of the total errors in both tasks. This is plausible for the following reasons. First, there are only a few entity types in the ADE (drug/disease) and BB (bacteria/habitat/geographical) tasks, so it is easier for the model to identify entity types. Second, the characteristics of biomedical entities are more distinctive than those of entities in the general domain, which helps the model to identify their types; for example, a bacteria entity "helicobacter" or a drug entity "gliclazide" is much less ambiguous than an organization entity "bank", since "bank" has another meaning, "riverside". Third, the boundary of a biomedical entity is more difficult to identify, since it may include a number of words expressing an integrated biomedical concept, such as the disease entity "bilateral lower leg edema" or the habitat entity "monocyte-like THP-1 cells".

Table 9 Error analysis of entity recognition

In Table 10, the percentage of the first type of FP errors is much higher than that of the second in both tasks (55.7% vs. 3.1% and 22.7% vs. 15.2%), which implies the importance of entity recognition for relation extraction. The proportion of the second type of FP errors in the BB task is larger than that in the ADE task (15.2% vs. 3.1%), which demonstrates that the relations in the BB task are more difficult to predict.

Table 10 Error analysis of relation extraction

In addition, the first type of FN errors accounts for nearly 50% of the total errors in both tasks, which indicates that missing entities is the main reason for missing relations. Therefore, one way to alleviate this problem is to build a high-quality entity recognition model in order to reduce the errors propagating to the subsequent step of relation extraction; another way is to use joint models to alleviate such error propagation. By contrast, the distribution of the second type of FN errors shows obvious differences between the two tasks.
In the ADE task, such errors account for 0.5%, while in the BB task they account for 18.4%. One reason may be that we only used ADE sentences, which contain at least one ADE relation, as our dataset in the ADE task, since the entities in non-ADE sentences were not annotated. The relation expressions in ADE sentences may be apparent, so they are easier for the model to determine. In contrast, we used all sentences in the BB task, which increases the difficulty of relation extraction. Furthermore, the relations in the ADE task were annotated at the sentence level, while those in the BB task were annotated at the document level, so inter-sentence relations were lost.

To further support our observations from the error analysis, we performed additional experiments to compare our model with two relation extraction methods, based respectively on co-occurring entities inside one sentence and on gold entity mentions. As shown in Table 11, the co-occurrence and gold-mention based methods achieved very high performances (>95% in F1) in the ADE task, which demonstrates that the errors of our model mainly come from entity recognition. Therefore, the low error rates of the second FP type (entities correct, relations wrong: 3.1%) and the second FN type (entities found, relations not found: 0.5%) in Table 10 are explainable. Achieving high performances when entities are given is mainly due to the annotation method of the ADE corpus: if drug and disease entities have no ADE relations in a sentence, the entities are not annotated in that sentence either; therefore, if entities are given, the ADE relations are almost determined. By contrast, the relation classification submodel of our model also contributed a number of errors in the BB task, since the co-occurrence and gold-mention based methods achieved only modest performances when entities were given. This also explains the high error rates of the second FP type (entities correct, relations wrong: 15.2%) and the second FN type (entities found, relations not found: 18.4%) in Table 10.

Table 11 Comparisons with the methods based on co-occurring entities inside one sentence and on gold entity mentions
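The co-occurrence baseline used for the comparison in Table 11 can be sketched very simply: predict a relation for every valid entity-type pair inside a sentence. The entity tuples and the drug/disease type constraint below are illustrative.

```python
from itertools import product

def cooccurrence_relations(entities, type_a="Drug", type_b="Disease"):
    """Predict a relation for every (type_a, type_b) entity pair that
    co-occurs inside the same sentence."""
    a = [e for e in entities if e[1] == type_a]
    b = [e for e in entities if e[1] == type_b]
    return [(x[0], y[0]) for x, y in product(a, b)]

# Entities recognized in one sentence, as (mention, type) pairs
ents = [("methimazole", "Drug"), ("thyrotoxicosis", "Disease"),
        ("agranulocytosis", "Disease")]
print(cooccurrence_relations(ents))
# -> [('methimazole', 'thyrotoxicosis'), ('methimazole', 'agranulocytosis')]
```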
Abbreviations

ADE: Adverse drug event extraction; BB: Bacteria biotope task; Bi: Bi-directional; CNNs: Convolutional neural networks; CRFs: Conditional random fields; DDI: Drug-drug interaction detection; FN: False-negative; FP: False-positive; LSTM: Long short-term memory; NER: Named entity recognition; NLP: Natural language processing; POS: Part-of-speech; PPI: Protein-protein interaction detection; RNNs: Recurrent neural networks; SDP: Shortest dependency path; SVMs: Support vector machines; TP: True-positive

References

1. Wei C, Peng Y, Leaman R, Davis AP, Mattingly CJ, Li J, Wiegers TC, Lu Z. Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task. Database. 2016;2016:1–8.
2. Pyysalo S, Ginter F, Heimonen J, Björne J, Boberg J, Järvinen J, Salakoski T. BioInfer: a corpus for information extraction in the biomedical domain. BMC Bioinforma. 2007;8:266–7.
3. Segura-Bedmar I, Martínez P, Herrero-Zazo M. SemEval-2013 task 9: extraction of drug-drug interactions from biomedical texts (DDIExtraction 2013). In: Proceedings of the 7th International Workshop on Semantic Evaluation. Atlanta: Association for Computational Linguistics; 2013.
4. Gurulingappa H, Mateen-Rajput A, Roberts A, Fluck J, Hofmann-Apitius M, Toldo L. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. J Biomed Inform. 2012;45:885–92.
5. Deléger L, Bossy R, Chaix E, Ba M, Ferré A, Bessières P, Nédellec C. Overview of the bacteria biotope task at BioNLP shared task 2016. In: Proceedings of the 4th BioNLP Shared Task Workshop. Berlin: Association for Computational Linguistics; 2016.
6. Finkel JR, Grenager T, Manning C. Incorporating non-local information into information extraction systems by Gibbs sampling. In: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05). Ann Arbor: Association for Computational Linguistics; 2005. p. 363–70.
7. Zhou G, Su J, Zhang J, Zhang M. Exploring various knowledge in relation extraction. In: Proceedings of the 43rd ACL. Ann Arbor: Association for Computational Linguistics; 2005. p. 427–34.
8. Fundel K, Küffner R, Zimmer R. RelEx: relation extraction using dependency parse trees. Bioinformatics. 2007;23:365–71.
9. Airola A, Pyysalo S, Björne J, Pahikkala T, Ginter F, Salakoski T. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC Bioinforma. 2008;9(Suppl 11)(S2):1–12.
10. Nguyen NTH, Tsuruoka Y. Extracting bacteria biotopes with semi-supervised named entity recognition and coreference resolution. In: Proceedings of BioNLP Shared Task 2011 Workshop. Portland: Association for Computational Linguistics; 2011.
11. Gurulingappa H, Mateen-Rajput A, Toldo L. Extraction of adverse drug effects from medical case reports. J Biomed Semant. 2012;3(15):1–10.
12. Kang N, Singh B, Bui C, Afzal Z, Van-Mulligen EM, Kors JA. Knowledge-based extraction of adverse drug events from biomedical text. BMC Bioinforma. 2014;15(64):1–8.
13. Xu J, Wu Y, Zhang Y, Wang J, Lee H-J, Xu H. CD-REST: a system for extracting chemical-induced disease relation in literature. Database. 2016;2016:1–9.
14. Grouin C. Identification of mentions and relations between bacteria and biotope from PubMed abstracts. In: Proceedings of the 4th BioNLP Shared Task Workshop. Berlin: Association for Computational Linguistics; 2016.
15. Li Q, Ji H. Incremental joint extraction of entity mentions and relations. In: Proceedings of the 52nd ACL. Baltimore: Association for Computational Linguistics; 2014. p. 402–12.
16. Roth D, Yih W. Global inference for entity and relation identification via a linear programming formulation. In: Introduction to statistical relational learning. Boston: MIT Press; 2007. http://cogcomp.cs.illinois.edu/papers/RothYi07.pdf.
17. Kordjamshidi P, Roth D, Moens MF. Structured learning for spatial information extraction from biomedical text: bacteria biotopes. BMC Bioinforma. 2015;16(129):1–15.
18. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.
19. Bengio Y, Goodfellow IJ, Courville A. Deep learning. Boston: MIT Press; 2015.
20. Collobert R, Weston J, Bottou L, Karlen M, Kavukcuoglu K, Kuksa P. Natural language processing (almost) from scratch. J Mach Learn Res. 2011;12:2493–537.
21. Andor D, Alberti C, Weiss D, Severyn A, Presta A, Ganchev K, Petrov S, Collins M. Globally normalized transition-based neural networks. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin: Association for Computational Linguistics; 2016. p. 2442–52.
22. Ma X, Hovy E. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin: Association for Computational Linguistics; 2016. p. 1064–74.
23. Lample G, Ballesteros M, Subramanian S, Kawakami K, Dyer C. Neural architectures for named entity recognition. In: Proceedings of the NAACL. San Diego: Association for Computational Linguistics; 2016.
24. Xu Y, Mou L, Li G, Chen Y, Peng H, Jin Z. Classifying relations via long short term memory networks along shortest dependency paths. In: Proceedings of the EMNLP. Lisbon: Association for Computational Linguistics; 2015. p. 1785–94.
25. Wang L, Cao Z, de Melo G, Liu Z. Relation classification via multi-level attention CNNs. In: Proceedings of the ACL. Berlin: Association for Computational Linguistics; 2016.
26. Miwa M, Bansal M. End-to-end relation extraction using LSTMs on sequences and tree structures. In: Proceedings of the ACL. Berlin: Association for Computational Linguistics; 2016.
27. Li H, Zhang J, Wang J, Lin H, Yang Z. DUTIR in BioNLP-ST 2016: utilizing convolutional network and distributed representation to extract complicate relations. In: Proceedings of the 4th BioNLP Shared Task Workshop. Berlin: Association for Computational Linguistics; 2016.
28. Mehryary F, Björne J, Pyysalo S, Salakoski T, Ginter F. Deep learning with minimal training data: TurkuNLP entry in the BioNLP shared task 2016. In: Proceedings of the 4th BioNLP Shared Task Workshop. Berlin: Association for Computational Linguistics; 2016.
29. Li F, Zhang Y, Zhang M, Ji D. Joint models for extracting adverse drug events from biomedical text. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI). Palo Alto: AAAI Press; 2016. p. 2838–44.
30. Jiang Z, Li L, Huang D, Jin L. Training word embeddings for deep learning in biomedical text mining tasks. In: Bioinformatics and Biomedicine (BIBM), 2015 IEEE International Conference on. Washington DC: IEEE; 2015. p. 625–8.
31. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80.
32. Duchi J, Hazan E, Singer Y. Adaptive subgradient methods for online learning and stochastic optimization. J Mach Learn Res. 2011;12:2121–59.
33. Pyysalo S, Ginter F, Moen H, Salakoski T, Ananiadou S. Distributional semantics resources for biomedical text processing. In: LBM. Tokyo: Database Center for Life Science; 2013.
34. Manning CD, Surdeanu M, Bauer J, Finkel J, Bethard SJ, McClosky D. The Stanford CoreNLP natural language processing toolkit. In: Proceedings of the 52nd ACL. Baltimore: Association for Computational Linguistics; 2014. p. 55–60.
35. Bodenreider O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004;32(suppl 1):267–70.
36. Miller GA. WordNet: a lexical database for English. Commun ACM. 1995;38(11):39–41.
37. Davis AP, Grondin CJ, Lennon-Hopkins K, Saraceni-Richards C, Sciaky D, King BL, Wiegers TC, Mattingly CJ. The comparative toxicogenomics database's 10th year anniversary: update 2015. Nucleic Acids Res. 2015;43(D1):914–20.
38. Lavergne T, Grouin C, Zweigenbaum P. The contribution of co-reference resolution to supervised relation detection between bacteria and biotopes entities. BMC Bioinforma. 2015;16(10):1–17.
39. Kilicoglu H, Rosemblat G, Fiszman M, Rindflesch TC. Sortal anaphora resolution to enhance relation extraction from biomedical literature. BMC Bioinforma. 2016;17(1):1–16.
40. Miwa M, Thompson P, Ananiadou S. Boosting automatic event extraction from the literature using domain adaptation and coreference resolution. Bioinformatics. 2012;28(13):1759–65.
41. Zhang M, Yang J, Teng Z, Zhang Y. LibN3L: a lightweight package for neural NLP. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation. Paris: European Language Resources Association (ELRA); 2016.

Acknowledgements
The authors thank the anonymous referees for their careful reading of this manuscript and their extensive comments.

Funding
This work was supported by the National Natural Science Foundation of China (No. 61373108) and the National Philosophy Social Science Major Bidding Project of China (No. 11&ZD189). The funding bodies did not play any role in the design of the study, data collection and analysis, or preparation of the manuscript.

Availability of data and material
The dataset of the ADE task can be downloaded at: https://sites.google.com/site/adecorpus. The dataset of the BB task can be downloaded at: http://2016.bionlp-st.org/tasks/bb2. Our model is implemented based on LibN3L [41]. The code is publicly available under GPL at: https://github.com/foxlf823/njmere.

Authors' contributions
FL and DJ designed the study. FL and MZ implemented the model. FL, MZ and GF performed the experiments and analyses. FL drafted the manuscript, and GF and DJ revised it. All authors have read and approved the final version of this manuscript.

Author information
School of Computer, Wuhan University, Bayi Road, Wuhan, China: Fei Li & Donghong Ji. School of Computer Science and Technology, Heilongjiang University, Xuefu Road, Harbin, China: Meishan Zhang & Guohong Fu. Correspondence to Donghong Ji.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article: Li, F., Zhang, M., Fu, G. et al. A neural joint model for entity and relation extraction from biomedical text. BMC Bioinformatics 18, 198 (2017). doi:10.1186/s12859-017-1609-9

Keywords: Biomedical text; Entity recognition; Relation extraction; Joint model; Sequence analysis (methods)
Quality appraisal for systematic literature reviews of health state utility values: a descriptive analysis

Muchandifunga Trust Muchadeyi, Karla Hernandez-Villafuerte & Michael Schlander

Background: Health state utility values (HSUVs) are an essential input parameter to cost-utility analysis (CUA). Systematic literature reviews (SLRs) provide summarized information for selecting utility values from an increasing number of primary studies eliciting HSUVs. Quality appraisal (QA) of the primary studies included in such SLRs is an important step towards credible HSUV estimates; yet, authors often overlook this crucial process. A scientifically developed and widely accepted QA tool for this purpose is lacking and warranted.

Objective: To comprehensively describe the nature of QA in published SLRs of studies eliciting HSUVs and generate a list of commonly used items.

Methods: A comprehensive literature search was conducted in PubMed and Embase from 01.01.2015 to 15.05.2021. SLRs of empirical studies eliciting HSUVs that were published in English were included. We extracted descriptive data, which included the QA tools, checklists or good practice recommendations used or cited, the items used, and the methods of incorporating QA results into study findings. Descriptive statistics (frequencies of use and occurrences of items, acceptance and counterfactual acceptance rates) were computed, and a comprehensive list of QA items was generated.

Results: A total of 73 SLRs were included, comprising 93 items and 35 QA tools and good practice recommendations. The prevalence of QA was 55% (40/73). Recommendations by NICE and ISPOR guidelines appeared in 42% (16/40) of the SLRs that appraised quality. The most commonly used QA items in the SLRs were response rates (27/40), statistical analysis (22/40), sample size (21/40) and loss to follow-up (21/40). Yet, the most commonly featured items in the QA tools and GPRs were statistical analysis (23/35), confounding or baseline equivalency (20/35), and blinding (14/35). Only 5% of the SLRs used QA to inform the data analysis, with acceptance rates of 100% (in two studies), 67%, 53% and 33%. The mean counterfactual acceptance rate was 55% (median 53%, IQR 56%).

Conclusions: There is a considerably low prevalence of QA in the SLRs of HSUVs. Also, there is wide variation in the QA dimensions and items included in both the SLRs and the extracted tools. This underscores the need for a scientifically developed QA tool for multi-variable primary studies of HSUVs.

The concept of evidence-based medicine (EBM) originated in the mid-nineteenth century in response to the need for a conscientious, explicit, and judicious use of the current best evidence in making healthcare decisions [1]. Emerging from the notion of evidence-based medicine is the systematic and transparent process of Health Technology Assessment (HTA). HTA can be defined as a state-of-the-art method to gather, synthesize and report on the best available evidence on health technologies at different points in their lifecycle [2]. This evidence informs policymakers, insurance companies and national health systems during approval, pricing, and reimbursement decisions. As the world continues to grapple with increased healthcare costs (mainly due to an ageing population and the rapid influx of innovative and expensive treatments), health economic evaluations are increasingly becoming an integral part of the HTA process.
Comparative health economic assessments, mainly in the form of cost-effectiveness analysis and cost-utility analysis (CUA), are currently the mainstay tools for the applied health economic evaluation of new technologies and interventions [3]. Within the framework of CUA, the quality-adjusted life year (QALY) is a generic outcome measure widely used by economic researchers and HTA bodies across the globe [3]. Quality-adjusted life years are calculated by adjusting (multiplying) the length of life gained (e.g., the number of years lived in each health state) by a single weight representing a cardinal preference for that particular state or outcome. In the context of health economics, these cardinal preferences are often called health state utility values (HSUVs), utilities or preferences.

Notably, HSUVs are regarded as one of the most critical and uncertain input parameters in CUA studies [4]. A considerable body of evidence on cost-effectiveness analyses suggests that CUA results are sensitive to the utility values used [3, 5, 6]. A small margin of error in the HSUVs used in a CUA can be enough to alter the estimated quality-adjusted life years and incremental cost-effectiveness ratios, and thereby the reimbursement and pricing decision, with far-reaching consequences for an intervention's accessibility [3, 5, 6]. Besides, HSUVs are inherently heterogeneous. Applying different population groups (patients, general population, caregivers or spouses, and, in some instances, experts or physicians), contexts, assumptions (theoretical grounding), and elicitation methods may generate different utility values for the same health state [7,8,9]. Thus, selecting appropriate, relevant and valid HSUVs is germane to comparative health economic assessments [3, 4, 10].

The preferences reflected in the HSUVs can be elicited directly, using methods such as the time trade-off (TTO), the standard gamble (SG) or the visual analogue scale (VAS) [11]. Alternatively, indirect methods can be employed, using multi-attribute health status classification systems with preference scores, such as the EuroQol 5-Dimension (EQ-5D), the Short-Form Six-Dimension (SF-6D) and the Health Utilities Index (HUI), or mapping from non-preference-based measures onto generic preference-based health measures [12]. However, methodological infeasibility, costs, and time constraints make the empirical elicitation of HSUVs a difficult and sometimes unachievable task. Consequently, researchers often resort to synthesising evidence on HSUVs through rapid or systematic literature reviews (SLRs) [12]. Correspondingly, the number of SLRs of studies eliciting HSUVs has been growing exponentially, particularly in the last five years [13].

The cornerstone of all SLRs is the process of quality appraisal (QA) [14, 15]. Regardless of the source of utility values, HSUVs should be "free from known sources of bias, and measured using a validated method appropriate to the condition and population of interest and the perspective of the decision-maker for whom the economic model is being developed" [4]. The term garbage in, garbage out (GIGO) originates from the information technology world and is often invoked in quality discussions. The use of biased, low-quality HSUV estimates will undoubtedly result in wrong and misleading outcomes, regardless of how robust the other elements of the model are.
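To make the QALY arithmetic described above, and its sensitivity to the utility weights, concrete, the following minimal sketch multiplies the time spent in each health state by its utility weight. All numbers are invented for illustration only and do not come from any reviewed study:

```python
# Illustrative only: invented durations (in years) and utility weights.
health_states = [
    ("progression-free", 2.0, 0.78),   # (label, years in state, HSUV)
    ("progressed",       1.5, 0.52),
]

def qalys(states):
    # QALYs = sum over states of (years lived in state * utility weight)
    return sum(years * utility for _, years, utility in states)

print(qalys(health_states))  # 2.0*0.78 + 1.5*0.52 = 2.34

# A small error in a single utility weight shifts the total directly:
perturbed = [("progression-free", 2.0, 0.73), ("progressed", 1.5, 0.52)]
print(qalys(perturbed))      # 2.24 -> a 0.05 utility error costs 0.10 QALYs
```

Because such a shift in the QALY total feeds directly into the incremental cost-effectiveness ratio, the selection and quality appraisal of HSUVs deserve the attention they receive.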
To avoid using biased estimates, it is imperative that empirical work on HSUVs, the reporting of such work, and subsequent reviews of studies eliciting HSUVs are of the highest level of quality. A robust, scientifically developed and commonly accepted QA tool is one step towards achieving such a requirement. Over the years, some research groups and HTA agencies have developed checklists, ad-hoc tools, and good practice recommendations (GPRs) describing or listing the essential elements to consider when assessing the quality of primary studies eliciting HSUVs. Prominent among these GPRs are the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Task Force report [16], the National Institute for Health and Care Excellence (NICE) Technical Support Document 9 [17], and related peer-reviewed publications [4, 10, 12, 18], hereafter referred to as "NICE/ISPOR tools". Despite this effort and the importance placed on HSUVs and their QA process, there is still no gold-standard, scientifically developed, and widely accepted QA tool for studies eliciting HSUVs.

Several challenges impede the critical appraisal of studies eliciting HSUVs. Common to all QA processes is the significant heterogeneity in the use of the term QA. This heterogeneity leads to misunderstandings of, and/or disagreements on, what should and should not constitute QA [19, 20]. The term quality represents an amorphous and multidimensional concept that should include the quality of reporting, methodological quality (e.g., risk of bias [RoB]) and external validity (applicability) [15, 21, 22]. However, it is often incompletely and/or inappropriately applied by restricting quality to only a subset of its components (mostly one dimension). For example, many SLR authors use the term QA to refer to the RoB assessment [15, 23,24,25], while others refer to the reporting quality assessment [19, 26, 27]. Similarly, several terms for QA have been used interchangeably in the literature, including quality assessment, methodological quality, methodological review, critical appraisal, critical assessment, grading of evidence, data appropriateness, and credibility check [22]. As a result, the domains, components and/or items considered when evaluating the studies' quality also vary considerably [22].

Another challenge in appraising the quality of studies contributing to SLRs is the lack of guidance for applying the QA results to the subsequent stages of a review, particularly summarizing and synthesising the data, interpreting the findings, and drawing conclusions [14, 28]. The trend over the years has been shifting away from scale-based QA towards domain-based RoB assessments [29, 30]. Moreover, there is no consensus regarding the quality threshold for the scale-based approach, nor the summary risk judgment for the domain-based approaches [28].

Specific to SLRs of studies eliciting HSUVs is the unique nature and characteristics of these studies, mainly their study designs. While randomised controlled trials (RCTs) are the gold standard for intervention studies of effect size [31], multiple study designs, including experimental (e.g., RCTs) and observational (e.g., cohort, case–control, cross-sectional) designs, can be used in primary studies on HSUVs [14]. On the one hand, RCTs may suffer from a lack of representation of the real-world setting, mainly due to strict inclusion and exclusion criteria (which is a form of selection bias).
On the other hand, observational studies are, by design, inherently prone to several problems that may bias their results, for example, confounding or baseline population heterogeneity. While confounding is mainly controlled at the design stage through randomisation in RCTs, statistical and analytical methods are vital for controlling confounding in observational studies. Moreover, some QA items, such as the randomisation process, blinding of investigators/assessors, description of the treatment protocol for both intervention and control groups, and use of intention-to-treat analysis [22], tend to be specific to RCTs of interventions and of less value to observational and/or primary studies of HSUVs.

By design, all intervention studies measuring effect size should ideally be comparative and define at least one intervention. The gold standard is to include a control or comparator group that is "equivalent" to the intervention group, with only the intervention under investigation varying. In contrast, not all studies eliciting HSUVs are interventional or comparative studies. Oftentimes, HSUVs are elicited from the population of interest (or the whole population) without regard to an intervention. This distinction between primary studies of HSUVs and intervention studies presents another unique feature of primary studies of HSUVs. QA of empirical studies of HSUVs (except when there is an intervention in question) may not find QA items such as intervention measurement, adherence to the prescribed intervention, randomisation, concealment of allocation, and blinding of subjects and outcomes relevant or feasible.

Furthermore, the various methodologies used to elicit utility values make it challenging to identify a QA tool that allows an adequate comparison between studies. Direct methods are frequently used alongside indirect methods [12]. Consequently, using a single QA tool is insufficient; however, it remains unclear whether using multiple tools would remedy the above-mentioned challenges. Few of the QA tools used in the literature reflect the previously described multi-factorial nature of the QA of studies eliciting HSUVs.

More recently, Yepes-Nuñez et al. [13] summarised the methodological quality (examining RoB) of SLRs of HSUVs published in top-ranking journals. The review culminated in a list of 23 items (grouped into 7 domains) pertinent to the RoB assessment. Nevertheless, RoB is only one necessary, but by itself insufficient, quality dimension [15]. Ara et al. mentioned that a researcher needs a well-reported study to perform any meaningful assessment of the other quality dimensions [10, 18]. Correspondingly, the completeness and transparency of the reporting (i.e., the reporting quality dimension) is also needed. Similar to RoB, a focus on reporting quality without attention to RoB is likewise necessary but, alone, insufficient. Notably, an article can be of good reporting quality, reporting all aspects of the methods and presenting the findings in a clear and easy-to-understand manner, and still be subject to considerable methodological flaws that can bias the reported estimates [3, 32]. Since HSUVs as an outcome can be highly subjective and context-driven compared to the clinical outcomes commonly assessed in clinical effectiveness studies, limiting the QA of studies eliciting HSUVs to the reporting and methodological quality dimensions is not enough (the necessary-but-insufficient rule). The relevance and applicability (i.e., external validity) of the included studies also matter.
Relevance and applicability questions are equally crucial to the decision-maker, including whose utility values were elicited and when and where the assessment was done.

Gathering evidence on the current practices of SLR authors in appraising the quality of primary studies eliciting HSUVs is key to solving the above-mentioned challenges. It forms the precursor to the development, based on a systematic process, of a QA tool that assures a consistent and comparable evaluation of the available evidence. Therefore, the main objective of this study is to review, consolidate, and comprehensively describe the current (within the last five years) nature of QA (methodological, reporting and relevance) in SLRs of HSUVs. Given the challenges hampering QA of studies eliciting HSUVs, we hypothesise that many SLR authors are reluctant to perform QA; hence, we expected a low prevalence. We also hypothesise that there is significant heterogeneity in how QAs are currently done. We precisely aim at:

1. Evaluating the prevalence of QA in published systematic reviews of HSUVs.
2. Determining the nature of QA in SLRs of HSUVs.
3. Exploring the impact of QA on the SLR analysis, its results, and recommendations.
4. Identifying and listing all items commonly used for appraising quality in SLRs of HSUVs and comparing these to the items of existing checklists, tools and GPRs.
5. Identifying and listing all checklists, tools and GPRs commonly used for QA of studies eliciting HSUVs.

A rapid review (RR) of evidence was conducted to identify peer-reviewed and published SLRs of studies eliciting HSUVs from 01.01.2015 to 11.05.2021. Cochrane RR guidelines were followed, with minor adjustments, throughout the RR process [33]. Table 1 defines some key terms applicable to quality and quality appraisal. Notably, since not all published QA tools have been validated, in this study we define a standardised tool as a tool that has been scientifically developed and published, with or without validation.

Table 1 Definitions of key terms related to quality appraisal

Data sources and study eligibility

A search strategy adopted from Petrou et al. 2018 [12], combining terms related to HSUVs, preference-based instruments and systematic literature reviews (SLRs), was run in the PubMed electronic database on 11.05.2021. The search strategy did not impose restrictions on the disease entity or health states, population, intervention, comparators or setting. All retrieved articles were exported to EndNote version X9 software (Clarivate Analytics, Boston, MA, USA), and duplicate records were deleted; a minimal sketch of this deduplication step follows below. The remaining articles were exported to Microsoft Excel for a step-wise screening process. To ensure we did not miss any relevant articles, the PubMed search strategy was translated into Embase search terms and run on 05.09.2022; for example, we converted MeSH and other search terms to Emtree terms and replaced the PubMed-specific field codes with Embase-specific codes. All retrieved articles were exported to Microsoft Excel for a step-wise screening process. Search strings and hits for both databases are summarised in Additional file 1, Supplementary Materials 1 and 2, Tables A.1 and A.2.

One author (MTM) developed the inclusion and exclusion criteria based on the study objectives and previous reviews. All identified SLRs that performed a descriptive synthesis and/or meta-analysis of primary HSUVs studies (direct or indirect elicitation) and were published in English from January 1, 2015, to April 29, 2021 were included.
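The deduplication step referenced above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual EndNote/Excel workflow; the file and column names (`pubmed_export.csv`, `title`, `doi`) are assumptions:

```python
import pandas as pd

# Minimal sketch of merging two database exports and removing duplicate
# records before screening. File and column names are assumed.
pubmed = pd.read_csv("pubmed_export.csv")   # columns assumed: title, doi, year
embase = pd.read_csv("embase_export.csv")

records = pd.concat([pubmed, embase], ignore_index=True)

# Normalise titles so case/whitespace differences do not hide duplicates.
records["title_norm"] = records["title"].str.lower().str.strip()

# Deduplicate on DOI where available (records without a DOI are kept
# apart, since pandas would otherwise treat all missing DOIs as equal),
# then deduplicate on the normalised title.
has_doi = records["doi"].notna()
records = pd.concat([
    records[has_doi].drop_duplicates(subset="doi"),
    records[~has_doi],
]).drop_duplicates(subset="title_norm")

# Hand off to a spreadsheet for the step-wise title/abstract screening.
records.to_excel("screening_sheet.xlsx", index=False)
```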
A pilot exercise was done on 50 randomly chosen titles and abstracts. The inclusion and exclusion criteria were refined after this initial round of screening. Two experienced senior health economists (KHV and MS) reviewed the inclusion and exclusion criteria, with minor adjustments. The final inclusion and exclusion criteria are summarised in Additional file 1, Supplementary Material 3, Table A.3.

Data screening

A step-wise screening process, starting with titles, followed by abstracts and then the full texts, was done by one reviewer (MTM) using the pre-developed inclusion and exclusion criteria. Full-text SLRs that matched the stage-wise inclusion and exclusion criteria (see Additional file 1, Supplementary Material 3, Fig. A.1) were retained for further analysis. The reference lists of the selected SLRs were further examined to identify any relevant additional reviews, tools, and GPRs. MTM repeated the same steps as described above (i.e., title, abstract and full-text scan), based on the mentioned inclusion and exclusion criteria, to identify additional articles from the reference lists of the initially selected SLRs. MTM and KHV discussed any uncertainty about the inclusion of particular studies and mutually decided on the final list of included articles.

Data extraction

A two-stage data extraction process was done using two predefined Microsoft Excel data extraction matrices. MTM designed the first drafts of the data extraction matrices based on a similar review [29, 30] and the research objectives. KHV and MS reviewed both matrices with minor adjustments. First, all the relevant bibliographic and descriptive information on the QA process done by the SLR authors was extracted (see Additional file 1, Supplementary Material 4, Table A.4a). One of our aims was to determine the prevalence of QA in the included SLRs; therefore, we did not appraise the quality of the included SLRs themselves. Since high-quality SLRs must incorporate all the recommended review stages, including the QA stage, we assumed that including only high-quality SLRs might have biased our prevalence point estimates. Second, all QA tools, checklists and GPRs identified or cited in the included SLRs were extracted. Backward tracking was undertaken to identify the original publications of these QA tools, checklists and GPRs. The authors' names and affiliations, the year of first use or publication, and the domains, items or signalling questions contained in each extracted QA tool, checklist and GPR were then harvested using the second data extraction sheet (see Additional file 1, Supplementary Material 4, Table A.4b).

Data synthesis

Narrative and descriptive statistics (i.e., frequencies, percentages, the counterfactual acceptance rate [CAR], and the listing and ranking of items used) were computed for the selected SLRs and the identified QA tools, checklists and GPRs. All graphical visualizations were plotted using the ggplot2 package in R.

Descriptive analysis of included SLRs and extracted checklists, tools and GPRs

The SLRs were first categorised into those that performed a QA of the contributing studies and those that did not.
For those SLRs that appraised the quality of studies, descriptive statistics were calculated based on six stratifications: 1) the QA tool type (i.e., an ad-hoc or custom-made, standardised, or adapted tool); 2) the critical assessment tool format (i.e., scale, domain-based or checklist); 3) the QA dimensions used (i.e., reporting quality, RoB and/or relevancy); 4) how the QA results were summarised (i.e., summary scores, threshold summary scores or risk judgments); 5) the type of data synthesis used (quantitative, including meta-analysis, or qualitative); and 6) how the QA results were used to inform subsequent stages of the analysis (i.e., the synthesis/results and/or the drawing of conclusions). The distribution of the number of QA items and of the existing checklists, tools and GPRs used to generate these items was also tabulated (see Additional file 1, Supplementary Material 4, Table A.4a).

Similarly, the QA tools, checklists and GPRs extracted in the second step of the review were categorised according to: 1) document type (i.e., technical document [recommendations], technical document [recommendations] with a QA tool added, a previous SLR, review, SLR or standardised tool); 2) critical assessment tool format (i.e., domain-based, checklist or scale-based tool); and 3) the QA dimensions included in the tool (i.e., any of the RoB [methodological], reporting or relevancy dimensions) and the items as originally listed (see Additional file 1, Supplementary Material 4, Table A.4b).

Quality appraisal – impact of QA on the synthesis of results

To explore the impact of the QA on the eligibility of studies for data synthesis, we first analysed the acceptance rate for each SLR that used the QA results to exclude articles. We defined the acceptance rate of an SLR as the proportion of primary studies eliciting HSUVs that met a quality threshold predetermined by the SLR's authors. The threshold can be expressed as a particular score for scale-based QA or as an overall quality rating (e.g., high quality) for domain-based QA.

Second, a counterfactual analysis was done on the subset of SLRs that appraised the quality of contributing studies but did not incorporate the QA results into the data synthesis. The counterfactual acceptance rate (CAR) was defined as the proportion of studies that would have been included if the QA results had informed such a decision. Based on a predetermined QA threshold, we defined the CAR as follows:

$$\mathrm{CAR} = \frac{\text{number of studies with quality} > 60\%\ \text{(or a high-quality rating in all domains)}}{\text{total number of eligible studies}} \qquad (1)$$

In the SLR by Marušić et al. [14], the majority (52%, N = 90) of included SLRs used a quality score as a threshold to decide which primary studies qualified for data synthesis. A quality threshold of 3 out of 5 (60%) was used for the Jadad [36] and Oxford [14] scales, and 6 out of 9 (67%) for the Newcastle–Ottawa scale. Consequently, we used a quality threshold of 60% in the CAR calculations (see Eq. 1). Reporting checklists with Yes, No, and Unclear responses were converted into a scale (Yes = 1, No = 0, Unclear = 0); the resulting scores were summed to calculate the overall score percentage.
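A minimal sketch of this computation (our illustration; the response coding and the 60% and all-domains rules follow Eq. 1 and the conversion just described, while the example data are invented):

```python
# Sketch of the counterfactual acceptance rate (CAR) from Eq. 1.
# Checklist responses are coded Yes = 1, No = 0, Unclear = 0.
SCORE = {"yes": 1, "no": 0, "unclear": 0}

def scale_based_pass(responses, threshold=0.60):
    """True if the summed checklist score reaches the 60% threshold."""
    total = sum(SCORE[r.lower()] for r in responses)
    return total / len(responses) >= threshold

def domain_based_pass(domain_ratings):
    """True if a study is rated high quality (low RoB) in all domains."""
    return all(rating.lower() == "low" for rating in domain_ratings)

def car(study_passes):
    """Proportion of eligible studies meeting the quality threshold."""
    return sum(study_passes) / len(study_passes)

# Invented example: three eligible studies appraised with a 5-item checklist.
studies = [
    ["yes", "yes", "yes", "no", "unclear"],  # 3/5 = 60% -> pass
    ["yes", "no", "no", "no", "unclear"],    # 1/5 -> fail
    ["yes", "yes", "yes", "yes", "no"],      # 4/5 -> pass
]
print(car([scale_based_pass(s) for s in studies]))  # 2/3, i.e. ~0.67
```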
Regarding domain-based tools, the ROBINS-I tool [37] gives guidelines for making a summary judgment of the overall RoB as follows: 1) a study is judged at "low" risk of bias if it scores "low" in all RoB domains; 2) a study is judged "moderate" if it scores "moderate" to "low" in any of the RoB domains; and 3) a study is judged at "serious" risk of bias if it scores "serious" or "critical" in any domain. In doing so, the tool assumes that every RoB domain contributes equally to the overall RoB assessment. In contrast, the Cochrane RoB tool [28] requires review authors to pre-specify (depending on the outcomes of interest) which domains are most important in the review context. To apply the Cochrane RoB tool, it is therefore necessary to first rank the domains according to their level of importance. The level of importance, and thus the ranking, depends on both the research question and the context. A context-based ranking approach would be highly recommendable; however, given that the relevant SLR articles refer to different contexts, it was not feasible to establish an informed and justified ranking of the domains for each article. Therefore, while considering the context-based approach highly desirable, we chose the method applied in the ROBINS-I tool [37] to evaluate the CAR of SLRs that used domain-based ratings and did not provide a summary judgment.

Quality appraisal – items used and their relative importance

We separately extracted and listed all original QA items: 1) those used in the SLRs, and 2) those found in the original publications of the QA tools, checklists and GPRs cited, adapted or customised by the authors of the included reviews. Based on a similar approach used by Yepes-Nuñez et al. [13], we iteratively and visually inspected the two lists for items that used similar wording and/or reflected the same construct. Where plausible and feasible, we retained the original names of the items as spelt out in the QA tools, checklists and GPRs or by the SLR authors. A new name or description was assigned to those items that used similar wording and/or reflected the same construct. For example, we assigned the name 'missing (incomplete) data' to all original items phrased as 'incomplete information', 'missing data', or 'the extent of incomplete data'. Similarly, items reflecting preference elicitation groups, preference valuation methods, scaling methods, and/or choice- versus feeling-based methods were named 'technique used to value the health states' (see Additional file 1, Supplementary Material 5a, Tables A.5 and A.6 for the assignment process). In this way, apparent discrepancies in the wording, spelling and expression of the items were matched, and all duplicate items and redundancies were concurrently removed. A single comprehensive list of the items used in the SLRs or in the extracted QA tools, checklists and GPRs was produced (see Additional file 1, Supplementary Material 5a, Table A.7).

Using the comprehensive list of items with assigned names, we counted the frequency of occurrence of each item in 1) the SLRs of studies eliciting HSUVs and 2) the identified QA tools, checklists and GPRs. We regard the frequency of each item in the SLRs as a reasonable proxy for the relative importance that SLR authors place on the item. Similarly, the frequency of occurrence in the QA tools, checklists and GPRs can be regarded as a reasonable proxy for which items are valued most highly in the existing tools commonly used for the QA of studies eliciting HSUVs.
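The item-harmonisation and frequency counting just described can be sketched as follows. The synonym mapping shown is a small excerpt-style illustration built from the examples given above; it is not the full assignment table:

```python
from collections import Counter

# Sketch of harmonising original QA item wordings to assigned names and
# counting their frequency of occurrence across reviewed SLRs.
ASSIGNED_NAME = {
    "incomplete information": "missing (incomplete) data",
    "missing data": "missing (incomplete) data",
    "the extent of incomplete data": "missing (incomplete) data",
    "preference elicitation group": "technique used to value the health states",
    "preference valuation method": "technique used to value the health states",
}

def harmonise(items):
    """Map original wordings to assigned names; the set drops duplicates."""
    return {ASSIGNED_NAME.get(i.lower(), i.lower()) for i in items}

# One list of original item wordings per reviewed SLR (invented example).
items_per_slr = [
    ["Missing data", "Sample size", "Response rates"],
    ["Incomplete information", "Response rates"],
]

frequency = Counter()
for slr_items in items_per_slr:
    frequency.update(harmonise(slr_items))  # each SLR counted once per item

print(frequency.most_common())
# e.g. [('missing (incomplete) data', 2), ('response rates', 2), ('sample size', 1)]
```

The same counting is applied to the items found in the tools, checklists and GPRs, yielding the two frequency lists used as proxies above.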
Additionally, we narrowed the above analysis to two selected groups of items: 1) the 14 items corresponding to the recommendations of the ISPOR Task Force report [16], the NICE Technical Support Document 9 [17] and related peer-reviewed publications [4, 10, 12, 18] (hereafter 'ISPOR items'); and 2) an additional list of 14 items (hereafter 'Additional items') (see Additional file 1, Supplementary Materials 5b and 5c). The Additional items were informed mainly by the literature [38], theoretical considerations [39,40,41,42,43,44,45] and the study team's conceptual understanding of the HSUV elicitation process. Specifically, the Additional items are those that we considered "relevant" (based on the literature and theoretical considerations) but that were not included among the ISPOR items. For example, statistical considerations and the handling of confounders do not appear among the ISPOR items, yet they are relevant to the QA of studies eliciting HSUVs. We considered the combination of both lists (28 combined items) a comprehensive, but not exclusive, list of items that can be deemed "relevant" to the QA of studies contributing to SLRs of studies eliciting HSUVs. Correspondingly, the frequency of the ISPOR items in the SLRs can be considered a reasonable proxy measure of the extent to which SLR authors follow the currently existing GPRs, while the frequency of the Additional items is a proxy for the importance of other "relevant" items in the QA process. The frequency of the ISPOR items and the Additional items in the existing QA tools, checklists and GPRs can be considered a proxy measure of how well the currently used tools cover the "relevant" items for the QA of studies eliciting HSUVs (i.e., their suitability for purpose).

All analyses of the SLRs that appraised quality were further stratified by considering separately: 1) the 16 SLRs [9, 26, 46,47,48,49,50,51,52,53,54,55,56,57,58,59] that adapted or used one or more of the 6 QA tools, checklists and GPRs considered to be NICE, ISPOR and related publications [4, 10, 12, 16,17,18] (hereafter 'QA based on NICE/ISPOR tools'); and 2) the 24 SLRs that adapted, customised or used other QA tools, checklists and GPRs (hereafter 'QA based on other tools'). Similarly, all analyses of the QA tools, checklists and GPRs were further stratified by considering separately: 1) the 6 QA tools and checklists considered to be NICE, ISPOR and related publications [4, 10, 12, 16,17,18] (hereafter 'NICE/ISPOR tools'); and 2) the remaining 29 QA tools, checklists and GPRs (hereafter 'Other tools').

The initial electronic search retrieved 3,253 records (1,997 from PubMed and 1,701 from Embase). After the initial step-wise screening process, 70 articles were selected. Three additional articles were retrieved by snowballing from relevant articles identified in the chosen SLRs. In total, therefore, 73 SLRs were analysed (see Fig. 1).

Fig. 1 PRISMA flow diagram summarising the study selection process.
HRQOL, health-related quality of life; PRO, patient-reported outcomes; CEA, cost-effectiveness analysis; CUA, cost-utility analysis; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; SLR, systematic literature review

Characteristics of included SLRs, checklists, tools and GPRs

The SLRs included in the analysis cover utility values for health states across a wide range of disease areas: cardiovascular diseases (10%); neurological diseases, including Alzheimer's disease, mild cognitive impairment and dementia (10%); cancers of all types (21%); infectious diseases, including human immunodeficiency virus and tuberculosis (10%); musculoskeletal disorders, including rheumatoid arthritis, osteoporosis, chronic pain, osteoarthritis, ankylosing spondylitis, psoriatic arthritis, total hip replacement, and scleroderma (6%); metabolic disorders, including diabetes (3%); gastrointestinal disorders (4%); respiratory disorders (non-infectious), including asthma (4%); and non-specific conditions, including injuries and surgeries (20%). Special attention was also given to mental health and childhood utilities, which accounted for 1% and 10% of the eligible SLRs, respectively (see Additional file 1, Supplementary Material 6, Table A.8).

Table 2 shows the characteristics of the QA tools, checklists and GPRs used to evaluate the quality of studies eliciting HSUVs in the SLRs analysed. A total of 35 tools, checklists and GPRs were extracted directly from the SLRs analysed. Most of these (37%) were standardised tools scientifically developed for the QA of either RCTs or observational studies. Technical documents, which merely offer guidance on appraising quality, accounted for another 37%. Notably, a few SLRs (8%) of studies eliciting HSUVs [60,61,62,63] based their QA methods on those used in previous SLRs [21, 64, 65] or reviews [66, 67], whose authors had in turn used guidance from an earlier SLR [68].

Table 2 Characteristics of 35 QA tools, checklists and GPRs

Regarding the critical assessment format (see Table 1 for the definitions of terms), domain-based tools contributed 26% of the total number of tools, checklists and GPRs extracted, while checklists and scale-based tools accounted for 20% and 17%, respectively (together 37%) (see Additional file 1, Supplementary Material 6, Table A.9, for more details on the 35 QA tools and GPRs).

Prevalence and characteristics of QA in included SLRs

Table 3 shows the prevalence and the current nature of QA in the included SLRs. The number of QA tools and GPRs used or cited per SLR ranged from 1 to 9 (equal mean and median of 2, IQR of 1). Notably, the observed prevalence of QA was 55%. Around a third of the SLR authors (33%) used all three QA dimensions (reporting, RoB [methodological] and relevancy) to appraise the quality of studies eliciting HSUVs. Of the 40 SLRs that appraised quality, 16 (42%) based their QA on NICE/ISPOR tools [9, 26, 46,47,48,49,50,51,52,53,54,55,56,57,58,59].

Table 3 Prevalence and characteristics of QA in included SLRs

Impact of the QA on study outcomes

The 40 studies that appraised quality included 1,653 primary studies eliciting HSUVs, with the number of included studies ranging from 4 to 272 (median = 28, mean = 41, IQR = 33). Surprisingly, most (35/40) of the SLRs that appraised the quality of their included studies did not use the QA findings in synthesising the final results and overall review conclusions.
Of the remaining five articles, three [47, 60, 62] used the QA results to inform the inclusion of studies for meta-analysis (the acceptance rate was 100% for Afshari et al. [60] and Jiang et al. [62], and 53% for Blom et al. [47]). These represent only 15% (3/20) of the studies that performed a quantitative synthesis (i.e., meta-analysis or meta-regression). In the fourth [50] and fifth [69] studies, the QA results were used as a basis for inclusion in the qualitative synthesis, with 33% and 67% of the eligible studies, respectively, being included in the final analysis.

We estimated the counterfactual acceptance rate (CAR) for those SLRs that appraised the quality of contributing studies but did not incorporate the QA results into the data synthesis. Six of the 40 SLRs [48, 53, 55, 56, 70, 71] did not provide sufficient information to calculate the threshold or to summarise the risk-of-bias judgement. For another 6 studies [47, 50, 60, 62, 69, 72], the actual acceptance rate was as reported by the SLR authors. The CAR in the remaining 28 SLRs ranged from 0 to 100% (mean = 53%, median = 48%, IQR = 56%). If all 28 SLRs for which a CAR was estimated had considered the QA results, on average 57% of the 1,053 individual studies eliciting HSUVs would have been deemed ineligible for data synthesis. Had the 28 SLRs used the QA results to decide on the inclusion of studies at the analysis stage, 52% (15/28) would have rejected at least 50% of the eligible studies. Figure 2 shows the estimated CARs and acceptance rates across the 32 analysed studies.

Fig. 2 Counterfactual acceptance rates (CAR) across the SLRs evaluated. Note: For Blom et al. [47], Cooper et al. [50], Afshari et al. [60], Jiang et al. [62], Etxeandia-Ikobaltzeta et al. [72] and Eiring et al. [69], the actual acceptance rates reported by the authors are presented. n = xx represents the total number of articles considered eligible and evaluated for quality after screening. SLRs = systematic literature reviews

Items used for the QA of primary studies in the included SLRs

The majority of the included SLRs (39/40) comprehensively described how the QA process was conducted. One study [70] mentioned that QA was done but did not describe how it was implemented. Furthermore, the terminology used to describe the QA process varied considerably among the SLRs. Terms such as quality appraisal or assessment [9, 23, 24, 48, 49, 51, 53, 55, 57,58,59,60, 73,74,75,76,77,78], critical appraisal [47], risk of bias assessment [25, 62, 63, 72, 79,80,81,82], relevancy and quality assessment [52, 56], assessment of quality and data appropriateness [50], methodological quality assessment [26, 27, 46, 54, 61, 69], reporting quality [71, 83], and credibility checks and methodological review [70] were used loosely and interchangeably. One study [84] mentioned three terms, RoB, methodological quality and reporting quality, in its description of the QA process. Notably, most SLRs that used the term quality assessment incorporated all three QA dimensions (RoB [methodology], reporting and relevance) in the QA.

A comprehensive list of 93 items remained after reviewing the original list of items, assigning new names where necessary, and removing duplicates (see Additional file 1, Supplementary Material 5a, Table A.7). Only 70 of the 93 items appeared in the 40 SLRs that appraised the quality of studies eliciting HSUVs. The number of items used per SLR ranged from 1 to 29 (mean = 10, median = 8, IQR = 8).
Of the 70 items used in the SLRs, only five were used in at least 50% of the 40 SLRs: 'response rates' (68%), 'statistical and/or data analysis' (55%), 'loss to follow-up (attrition or withdrawals)' (53%), 'sample size' (53%) and 'missing (incomplete) data'. Some of the least frequently used items include 'sources of funding', 'administration procedures', 'ethical approval', 'reporting of p-values', 'appropriateness of endpoints', 'generalizability of findings' and 'non-normal distribution of utility values', each used in only one SLR (3%). Twenty-three of the 93 items were not used in the SLRs but appeared in the QA tools, checklists and/or GPRs. Some of these include 'allocation sequence concealment', 'questionnaire response time', 'description and use of anchor states', 'misclassification (bias) of interventions', 'reporting of adverse events', 'integrity of intervention' and 'duration in health states' (see Additional file 1, Supplementary Material 7, Table A.10).

The results for the ISPOR and Additional items are depicted in Fig. 3. The ISPOR item (Panel A) that occurred most frequently in the SLRs was 'response rates' (27/40). Notably, most SLRs that evaluated 'response rates' had developed their QA based on NICE/ISPOR tools (14/27). Similarly, QA based on NICE/ISPOR tools tended to include items such as 'sample size' (12 vs 9), 'loss to follow-up' (13 vs 8), 'inclusion and exclusion criteria' (8 vs 3) and 'missing data' (12 vs 7) more often than QA based on other checklists, tools and GPRs. Moreover, among the ISPOR items, the measure used to describe the health states appeared the least frequently (3/40) in the SLRs. Additionally, none of the 40 SLRs evaluated all 14 ISPOR items, and 10 of these items were considered by less than 50% of the SLRs. This observed trend indicates that adherence to the currently published guidelines is limited.

Fig. 3 Frequency of use of ISPOR and Additional items in SLRs. GPBM, generic preference-based measure; HS, health states; HSUVs, health state utility values

Similar to the ISPOR items, most of the Additional items (Panel B) were used in just a few SLRs, with 12 appearing in less than 25% of the SLRs. The Additional item that appeared most frequently was 'statistical and/or data analysis' (22/40); five of these 22 articles were SLRs that based their QA on NICE/ISPOR tools. Items related to 'administration procedures', 'indifference search procedures' and 'time of assessment' were the least used, each appearing only one to three times among the 40 SLRs analysed. Of note, no SLR that based its QA on NICE/ISPOR tools included items related to 'confounding and baseline equivalence', 'study design features', 'reporting biases' or 'administration procedures', which were used in 17, 9, 5 and 3 of the 40 SLRs, respectively. The figure also suggests that QA based on the other currently existing QA tools, checklists and GPRs focused more on statistical and data analysis issues (17 vs 5) and blinding (8 vs 1).

Items occurring in the checklists, tools, and GPRs extracted from the SLRs

Of the 93 items identified, 81 appeared in the identified checklists, tools and GPRs (see Additional file 1, Supplementary Material 7, Table A.11). The most frequently featured items were 'statistical/data analysis' (23/30) and 'confounding or baseline equivalency of groups' (20/30).
The least frequently occurring items included instrument properties (feasibility, reliability, and responsiveness), 'generalisability of findings', 'administration procedure' and 'ethical approval', each of which featured once. Twelve of the 93 items were not found in the checklists, tools and GPRs, for instance 'bibliographic details (including the year of publication)', 'credible extrapolation of health state valuations', and 'source of tariff (value set)'.

Figure 4 shows the frequency of occurrence of the ISPOR (Panel A) and Additional items (Panel B) in the 35 QA tools, checklists and GPRs. Notably, each ISPOR item featured in less than 50% (18) of the 35 QA tools, checklists and GPRs analysed. The most frequently appearing ISPOR item was 'respondent and recruitment selection' (17/35), followed by 'response rates' (13/35), 'missing or incomplete data' (13/35) and 'sample size' (11/30). The most frequently occurring Additional item was 'statistical/data analysis' (23/35), which appeared in 3 of the 6 NICE/ISPOR tools and 20 of the 29 other checklists, tools and GPRs. This was followed by confounding (20/35), which appeared in only 1 of the 6 NICE/ISPOR tools. Remarkably, items such as 'blinding' (14/35), 'study design features' (11/35) and 'randomisation' (6/35) appeared only in the other checklists, tools and GPRs, which are not considered NICE/ISPOR tools.

Fig. 4 Frequency of occurrence of ISPOR and Additional items in QA tools, checklists, and good practice recommendations. GPBM, generic preference-based measure; HS, health states; HSUVs, health state utility values

Of the 93 items on the comprehensive list, Fig. 5 displays the ten most used items in the SLRs (Panel A) and the ten most frequently occurring items in the QA tools, checklists and GPRs analysed (Panel B). On the one hand, although 'blinding' and 'study/experimental design features' were not among the ten most frequent items in the SLRs, they were highly ranked among the QA tools, checklists and GPRs (fourth [40% occurrence rate] and eighth [31% occurrence rate], respectively). On the other hand, items related to 'response rates' and 'loss to follow-up' ranked higher among the SLRs (first [68%] and third [53%], respectively) than among the checklists, tools and GPRs (seventh [33%] and tenth [26%], respectively).

Fig. 5 Top ten most occurring items in (A) SLRs and (B) QA tools, checklists and GPRs. GPBM, generic preference-based measure; HS, health states; HSUVs, health state utility values

We reviewed 73 SLRs of studies eliciting HSUVs and comprehensively described the nature of the QA undertaken. We identified 35 QA tools, checklists and GPRs considered or mentioned in the selected SLRs and extracted their main characteristics. We then used these two sets of information to generate a comprehensive list of 93 items used in 1) the SLRs (70 items) and 2) the QA tools, checklists and GPRs (81 items) (see Additional file 1, Supplementary File 5).

With only 55% of the SLRs appraising the quality of included studies, the results support our hypothesis of a low prevalence of QA in SLRs of studies eliciting HSUVs. This is evident when compared with other fields, such as sports and exercise medicine, in which the prevalence of QA in SLRs was 99% [30]; general medicine, general practice, public health and paediatrics (90%) [15]; surgery, alternative medicine, rheumatology, dentistry and hepatogastroenterology (97%); and anesthesiology (76%) [14].
In these fields, the high prevalence is partly linked to the availability of standardised QA tools and the presence of generally accepted standards [15, 30]. For instance, a study on sports and exercise medicine [30] estimated that standardised QA tools were used in 65% of the SLRs analysed, compared with 16% in the current study. The majority of the SLRs in the Büttner et al. [30] study were either healthcare intervention (32/66) or observational epidemiology (26/66) reviews, for which standardised QA tools are widely available and accepted. Examples include the Jadad tool [36], Downs and Black [85], the Newcastle–Ottawa Scale (NOS), the Cochrane tools for RoB assessment [28, 86], RoB 1 [37] and RoB 2 [87].

Our results showed that SLR authors incorporate heterogeneous QA dimensions into their QAs. These variations can be attributed to a strong and long-standing lack of consensus on the definition of quality and the overall aim of doing a QA [31]. Overall, the present review identified three QA dimensions, RoB, reporting and relevancy/applicability, which were evaluated to varying extents (see the breakdown in Table 2). This heterogeneity in dimensions often leads to considerable variation in the QA items considered and in the overall conclusions drawn [22, 38]. For instance, Büttner et al. [29, 30] compared QA results based on the Downs and Black checklist with those based on the Cochrane Risk of Bias 2 tool (RoB 2). Interestingly, QA using RoB 2 resulted in 11/11 of the RCTs being rated at high overall RoB, while the Downs and Black checklist resulted in 8/11 of the same studies being judged high-quality trials.

The result from the study by Büttner et al. [29, 30] described above favours focusing only on RoB when appraising the quality of studies included in an SLR. Nevertheless, additional challenges exist when the studies are not well reported. Concluding that a study is prone to bias because it has several methodological flaws is different from concluding that it is prone to bias because its reporting was unclear; in effect, we do not know anything about the RoB in a study that does not provide sufficient detail for such an assessment. Pivotal to any QA in an SLR process is therefore the reporting quality of the included studies. A well-reported study allows reviewers to judge whether the results of primary studies can be trusted and whether they should contribute to meta-analyses [14]. First, the reviewers should assess the studies' methodological characteristics (based on the reported information). Only then, based on the methodological rigour (or flaws) identified, should risk judgements, i.e., the perceived risk that the results of a research study deviate from the truth [29, 30], be inferred. Inevitably, all three quality dimensions are necessary components of a robust QA [88].

A further challenge to the QA of studies eliciting HSUVs is the apparent lack of standardised and widely accepted QA tools to evaluate them. First, this is evident in some of the SLRs [89,90,91,92] that did not appraise the quality of contributing studies and cited the lack of a gold standard as the main barrier to conducting one. Second, most of the SLRs that appraised quality did so by customising elements from different checklists [24, 27, 75, 79, 80], by using standardised tools designed to evaluate quality in other types of studies rather than studies eliciting HSUVs [23, 27, 52, 62], or by following GPRs [9, 26, 46, 47, 50, 54,55,56, 61, 63, 74, 84].
In this regard, we estimated that SLR authors used, on average, two QA tools, checklists or GPRs (max = 9) to construct their customised QA tools, with only 14/40 (35%) SLRs using one tool [24, 25, 49, 51, 53, 57–60, 73, 75, 79, 80, 82]. This finding is not consistent with other fields of research. For instance, Katikireddi et al. [15] conducted a comprehensive review of QA in general practice, public health and paediatrics. Their study estimated that, out of the 678 selected SLRs, 513 (76%) used a single quality/RoB assessment tool. The tools used included the non-modified versions of the Cochrane tool for RoB assessment (36%), the Jadad tool (14%), and the Newcastle–Ottawa scale (6%) [15]. The observed use of multiple tools leads to a critical question regarding the appropriateness of combining or developing custom-made tools to address the challenges present in the QA in SLRs of HSUV studies. Petrou et al.'s guide to conducting systematic reviews and meta-analyses of studies eliciting HSUVs stated that "In the absence of generic tools that encompass all potentially relevant features, it is incumbent on those involved in the review process to describe the quality of contributing studies in holistic terms, drawing where necessary upon the relevant features of multiple checklists" [12]. While this may sound plausible and pragmatic to many pundits, it requires comprehension of, and agreement on, what should be considered "relevant features". Here is where the evidence delineated in this comprehensive review may call into question the notion of Petrou et al. [12]. The analysis of the comprehensive list of 93 items (see Fig. 5 and Additional file 1, Supplementary Material 7, Tables A.10 and A.11) showed: 1) a high heterogeneity among the QA items included in the SLRs and 2) a considerable mismatch between what is included in the existing QA tools, checklists and GPRs (which may be relevant for those who created the tools and the specific fields they were created for) and what is used by SLR authors in the QA of studies eliciting HSUVs. The plethora of QA tools that authors of SLRs can choose from is designed with a strong focus on healthcare intervention studies measuring effect size. Yet, primary studies of HSUVs are not restricted to intervention studies. Accordingly, features that could be considered more relevant to intervention studies than to studies eliciting HSUVs, such as the blinding of participants and outcomes, appeared in 40% of the checklists and GPRs but did not appear in any of the QAs of studies eliciting HSUVs. Their exclusion could indicate that the SLR authors omitted less "relevant" features. However, authors of SLRs also overlooked an essential set of core elements of the empirical elicitation of HSUVs. For instance, Stalmeier et al. [39] provided a shortlist of 10 items necessary to report in the methods sections of studies eliciting HSUVs. The list includes items on how utility questions were administered, how health states were described, which utility assessment method or methods were used, the response and completion rates, the specification of the duration of the health states, which software program (if any) was used, the description of the worst health state (the lower anchor of the scale), whether a matching or choice indifference search procedure was used, when the assessment was conducted relative to treatment, and which (if any) visual aids were used. 
Similarly, the Checklist for Reporting Valuation Studies of Multi-Attribute Utility-Based Instruments (CREATE) [43], which can be considered very close to HSUV elicitation, includes the attribute levels and scoring algorithms used for the valuation process. Regrettably, core elements such as instrument administration procedures, respondent burden, construction of tasks, indifference search procedures, and scoring algorithms were used in less than 22% of the SLRs (see Fig. 3). The lack of these core elements strongly suggests that existing tools may not be suitable for the QA of empirical HSUV studies. Additionally, the most highly ranked items in existing tools are 'statistical analysis' and 'confounding and baseline equivalence', which appeared in 66% and 57%, respectively, of the QA tools, checklists and GPRs evaluated. These items are used in only 55% and 43% of the SLRs that appraised quality. Undeniably, studies eliciting HSUVs are not limited to experimental and randomised protocols, where the investigator has the flexibility to choose which variables to account for and control for during the design stage. It therefore becomes extremely relevant to control for confounding variables in HSUV primary studies (both observational and experimental) and to employ robust statistical methods to control for any remaining confounders. Furthermore, several items found in the currently existing checklists, tools and GPRs reviewed and used by SLR authors may be considered redundant. These include, for example, items on 'sources of funding', 'study objectives and research questions', 'bibliographic details, including the year of publication' and 'reporting of ethical approval'. Another argument against Petrou et al.'s recommendation to resort to multiple QA tools when customising existing tools for QA in SLRs of studies eliciting HSUVs is the need for consistency, reproducibility and comparability of research. Consistency, reproducibility and comparability are key to all scientific research, regardless of domain. Undeniably, using multivariable QA tools and methods informed by many published critical appraisal tools and GPRs (35 in our study) does not ensure consistency, reproducibility and comparability of either QA results or overall conclusions [21, 22]. The 14 ISPOR items drawn from the few available GPRs specific to studies eliciting HSUVs [4, 5, 10, 16–18, 93] and the 14 Additional items, which were informed mainly by the literature [38], theoretical considerations [39–45] and the study team's conceptual understanding of the HSUV elicitation process, can be considered a plausible list of items to include when conducting QA of studies eliciting HSUVs. Nevertheless, besides the list being too extensive or broad, there is, and would continue to be, high heterogeneity in the contribution of these items to QA. Therefore, there is a strong need for a scientific and evidence-based process to streamline the list into a standardised one that can be widely accepted. Although SLRs and checklists, tools, and GPRs shared the same top five ISPOR items (i.e., response rates, loss of follow up, sample size, respondent selection and recruitment, and missing data), the ISPOR items are more often considered in the SLRs than they appear in the checklists, tools, and GPRs reviewed. Moreover, our results showed that the Additional items, which are also valuable in QA, have a considerably lower prevalence than the ISPOR items in the QA presented in the SLRs. 
This is of concern since relying only on NICE/ISPOR tools may overlook items relevant to the QA of studies eliciting HSUVs, such as 'statistical or data analysis', 'confounding', 'blinding', 'reporting of results', and 'study design features'. Arguably too, relying on the current set of QA tools, checklists and GPRs, which pay noticeably little attention (as implied by the low frequency of occurrence) to items that capture the core elements of studies eliciting HSUVs, such as the techniques used to value health states, the population used to collect the HSUVs, the appropriate use of valuation methods, and the proper use of generic preference-based methods, will not address the present challenges. Another critical area where SLR authors are undecided is which QA system to use. While the guidelines seem to favour domain-based systems over checklist- and scale-based ones, SLR authors still seem to favour checklist- and scale-based QA, presumably due to their simplicity. Our results suggest that scale-based checklists were used in more than 66% of the SLRs that appraised quality. The pros and cons of either system are well documented in the literature [15, 21, 22, 38]. Notably, the two systems will produce different QA judgements [15, 21, 22, 38]. The combined effect of such heterogeneity and inconsistencies in QA is a correspondingly wide variation and uncertainty in QA results, conclusions and recommendations for policy. Our analysis also revealed an alarmingly low rate of SLRs in which the conducted QA informed the analysis. Congruent with previous studies in other disciplines of general medicine, public health, and trials of therapeutic or preventive interventions [15, 94, 95], only 11% (5/35) of the SLRs that conducted a QA explicitly informed the synthesis stage based on the QA results [47, 50, 60, 62, 69]. The reasons for this low prevalence of incorporating QA findings into the synthesis stage of SLRs remain unclear. However, it could be attributed to a lack of specific guidance and to disagreements on how QA results can be incorporated into the analysis process [95]. Commonly used methods for incorporating QA results into the analysis process include sensitivity analysis, narrative discussion and exclusion of studies at high RoB [15]. The five SLRs in our review [47, 50, 60, 62, 69] excluded studies with a high or unclear risk of bias (or moderate or low quality) from the synthesis. These findings are a cause for concern since the empirical evidence suggests that combining evidence from low-quality (high-RoB) articles with high-quality ones biases the overall review conclusions, which can be detrimental to policy-making [15]. Therefore, incorporating the QA findings into the synthesis and conclusion drawing of any SLR [28,29,30], especially for HSUVs, which are heterogeneous and considered a highly sensitive input parameter in many CUAs [3, 5, 6], is highly recommended. Nevertheless, the lack of clear guidance and agreement on how to do so remains a significant barrier. To explore the potential impact of QA, we calculated counterfactual acceptance rates for individual studies and corresponding summary statistics (mean, median and IQR). While there has been an increasing number of empirical studies eliciting HSUVs over the years, our results suggest that a staggering 46% of individual studies would be excluded from the SLRs' analyses because of their lower quality. However, this needs to be interpreted with caution. 
First, there is a mixed bag of QA tools (reporting quality vs methodological flaws and RoB, domain-based vs scale-based). Second, there could be an overlap of individual primary studies across the 40 SLRs that appraised quality. Third, although informed by previous studies, the QA threshold we used is arbitrary. There is currently no agreed standard or recommended threshold cut-off point to use during QA. This has resulted in considerable heterogeneity in the thresholds used to exclude studies from synthesis in the previous literature [14]. Fourth, there are variations in the approaches recommended by different tools on how to summarise the individual domain ratings into an overall score [14, 15]. Two main strengths can be highlighted in our review. First, in comparison to Yepes-Nuñez et al. [13], who focused on RoB and included 43 SLRs (to our knowledge, the only review that looked at RoB items considered in the QA of SLRs of studies eliciting HSUVs), our findings are based on a larger sample (73 SLRs) with a broader focus (three dimensions: RoB, reporting, and relevancy/applicability) [13]. Second, in addition to examining QA in SLRs, we systematically evaluated the original articles related to each of the 35 identified checklists, tools, and GPRs [13]. Consequently, our comprehensive list of items reflects both the QA methods applied in the SLRs and the current practices applied in checklists, tools, and GPRs. More importantly, based on both types of articles (i.e., SLRs and checklists, tools and GPRs), we propose a subsample of 28 main items that can serve as the basis for developing a standardised QA tool for the evaluation of HSUVs. A limitation of our study is that our understanding of how QA was done was based solely on our comprehension of the information reported in the SLRs. Since this was a rapid review, we did not contact the corresponding SLR authors for clarifications regarding extracted items and QA methodology. A second limitation is that the SLRs were selected from articles published between 2015 and 2021. We adopted this approach to capture only the recent trends in the QA of studies on HSUVs, including the current challenges. Furthermore, the review by Yepes-Nuñez et al. [13], which reviewed all SLRs of HSUVs from inception to 2015, has been used as part of the evidence that informed the development of the "Additional items". As a result, our list captured all 23 items identified by Yepes-Nuñez et al. and considered relevant before 2015. Our comprehensive review reveals a low prevalence of QA in the identified SLRs of studies eliciting HSUVs. Most importantly, the review depicts wide inconsistencies in approaches to the QA process, ranging from the tools used, the QA dimensions and the corresponding QA items, to the use of scale- or domain-based tools and how the overall QA outcomes are summarised (summary scores vs risk judgements). The origins of these variations can be attributed to an absence of consensus on the definition of quality and the consequent lack of a standardised and widely accepted QA tool to evaluate studies eliciting HSUVs. Overall, the practice of QA of individual studies in SLRs of studies eliciting HSUVs is still in its infancy. There is a strong need to promote QA in such reviews. The use of a rigorously and scientifically developed QA tool specifically designed for studies on HSUVs will, to a great extent, ensure the much-needed consistency, reproducibility and comparability of research. 
A key question remains: is it feasible to have a gold standard, comprehensive and widely accepted tool for the QA of studies eliciting HSUVs? Downs and Black [85] concluded that it is indeed feasible to create a "checklist" for assessing the methodological quality of both randomised and non-randomised studies of health care interventions. Therefore, the next step towards developing a much-needed QA tool in the field of HSUVs is for researchers to reach a consensus on a working definition of quality, particularly for HSUVs, where contextual considerations matter. Once that is established, an agreement on the core dimensions, domains and items that can be used to measure quality, based on the agreed concept of quality, then follows. This work provides a valuable pool of items that should be considered for any future QA tool development. All data are provided in the paper or supplementary material. Footnote 1: The checklist comprises 27 items across four subscales: completeness of reporting (9 items), internal validity (13 items), precision (1 item) and external validity (3 items). Footnote 2: RoB 2 is the revised, second edition of the Cochrane Risk of Bias tool for RCTs, with five RoB domains: 1) bias arising from the randomisation process; 2) bias due to deviations from intended interventions; 3) bias due to missing outcome data; 4) bias in measurement of the outcome; and 5) bias in the selection of the reported results. Abbreviations: CAR: Counterfactual Acceptance Rate; CUA: Cost-Utility Analysis; EQ-5D: EuroQol-5 Dimension; GPBM: Generic Preference-Based Methods; GPR(s): Good Practice Recommendation(s); HSUVs: Health State Utility Value(s); HTA: Health Technology Assessment; HUI: Health Utilities Index; IQR: Interquartile Range; ISPOR: The Professional Society for Health Economics and Outcomes Research; NICE: The National Institute for Health and Care Excellence; QA: Quality Appraisal or Quality Assessment; RCT(s): Randomised Controlled Trial(s); RoB: Risk of Bias; ROBINS-I: Risk Of Bias In Non-Randomised Studies of Interventions; RR(s): Rapid Review(s); SF6D: Short-Form Six-Dimension; SG: Standard Gamble; SLR(s): Systematic Literature Review(s); TTO: Time Trade-Off; VAS: Visual Analogue Scale. References: Masic I, Miokovic M, Muhamedagic B. Evidence based medicine - new approaches and challenges. Acta Inform Med. 2008;16(4):219–25. Health Technology Assessment [https://htaglossary.net/health-technology-assessment] Xie F, Zoratti M, Chan K, Husereau D, Krahn M, Levine O, Clifford T, Schunemann H, Guyatt G. Toward a Centralized, Systematic Approach to the Identification, Appraisal, and Use of Health State Utility Values for Reimbursement Decision Making: Introducing the Health Utility Book (HUB). Med Decis Making. 2019;39(4):370–8. Wolowacz SE, Briggs A, Belozeroff V, Clarke P, Doward L, Goeree R, Lloyd A, Norman R. Estimating Health-State Utility for Economic Models in Clinical Studies: An ISPOR Good Research Practices Task Force Report. Value Health. 2016;19(6):704–19. Ara R, Peasgood T, Mukuria C, Chevrou-Severac H, Rowen D, Azzabi-Zouraq I, Paisley S, Young T, van Hout B, Brazier J. Sourcing and Using Appropriate Health State Utility Values in Economic Models in Health Care. Pharmacoeconomics. 2017;35(Suppl 1):7–9. Ara R, Hill H, Lloyd A, Woods HB, Brazier J. Are Current Reporting Standards Used to Describe Health State Utilities in Cost-Effectiveness Models Satisfactory? Value Health. 2020;23(3):397–405. Torvinen S, Bergius S, Roine R, Lodenius L, Sintonen H, Taari K. Use of patient assessed health-related quality of life instruments in prostate cancer research: a systematic review of the literature 2002–15. 
Int J Technol Assess Health Care. 2016;32(3):97–106. Robinson A, Dolan P, Williams A. Valuing health status using VAS and TTO: what lies behind the numbers? Soc Sci Med (1982). 1997;45(8):1289–97. Li L, Severens JLH, Mandrik O. Disutility associated with cancer screening programs: A systematic review. PLoS ONE. 2019;14(7): e0220148. Ara R, Brazier J, Peasgood T, Paisley S. The Identification, Review and Synthesis of Health State Utility Values from the Literature. Pharmacoeconomics. 2017;35(Suppl 1):43–55. Arnold D, Girling A, Stevens A, Lilford R. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis. BMJ (Clin Res Ed). 2009;339:b2688. Petrou S, Kwon J, Madan J. A Practical Guide to Conducting a Systematic Review and Meta-analysis of Health State Utility Values. Pharmacoeconomics. 2018;36(9):1043–61. Yepes-Nuñez JJ, Zhang Y, Xie F, Alonso-Coello P, Selva A, Schünemann H, Guyatt G. Forty-two systematic reviews generated 23 items for assessing the risk of bias in values and preferences' studies. J Clin Epidemiol. 2017;85:21–31. Marušić MF, Fidahić M, Cepeha CM, Farcaș LG, Tseke A, Puljak L. Methodological tools and sensitivity analysis for assessing quality or risk of bias used in systematic reviews published in the high-impact anesthesiology journals. BMC Med Res Methodol. 2020;20(1):121. Katikireddi SV, Egan M, Petticrew M. How do systematic reviews incorporate risk of bias assessments into the synthesis of evidence? A methodological study. J Epidemiol Community Health. 2015;69(2):189–95. Brazier J, Ara R, Azzabi I, Busschbach J, Chevrou-Séverac H, Crawford B, Cruz L, Karnon J, Lloyd A, Paisley S, et al. Identification, Review, and Use of Health State Utilities in Cost-Effectiveness Models: An ISPOR Good Practices for Outcomes Research Task Force Report. Value Health. 2019;22(3):267–75. Papaioannou D, Brazier J, Paisley S. NICE Decision Support Unit Technical Support Documents. In: NICE DSU Technical Support Document 9: The Identification, Review and Synthesis of Health State Utility Values from the Literature. edn. London: National Institute for Health and Care Excellence (NICE); 2010. Papaioannou D, Brazier J, Paisley S. Systematic searching and selection of health state utility values from the literature. Value Health. 2013;16(4):686–95. Viswanathan M, Patnode CD, Berkman ND, Bass EB, Chang S, Hartling L, Murad MH, Treadwell JR, Kane RL. Recommendations for assessing the risk of bias in systematic reviews of health-care interventions. J Clin Epidemiol. 2018;97:26–34. Ma L-L, Wang Y-Y, Yang Z-H, Huang D, Weng H, Zeng X-T. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better? Mil Med Res. 2020;7(1):7. O'Connor SR, Tully MA, Ryan B, Bradley JM, Baxter GD, McDonough SM. Failure of a numerical quality assessment scale to identify potential risk of bias in a systematic review: a comparison study. BMC Res Notes. 2015;8:224. Armijo-Olivo S, Fuentes J, Ospina M, Saltaji H, Hartling L. Inconsistency in the items included in tools used in general health research and physical therapy to evaluate the methodological quality of randomized controlled trials: a descriptive analysis. BMC Med Res Methodol. 2013;13:116. Park HY, Cheon HB, Choi SH, Kwon JW. Health-Related Quality of Life Based on EQ-5D Utility Score in Patients With Tuberculosis: A Systematic Review. Front Pharmacol. 2021;12:659675. 
Carrello J, Hayes A, Killedar A, Von Huben A, Baur LA, Petrou S, Lung T. Utility Decrements Associated with Adult Overweight and Obesity in Australia: A Systematic Review and Meta-Analysis. Pharmacoeconomics. 2021;39(5):503–19. Landeiro F, Mughal S, Walsh K, Nye E, Morton J, Williams H, Ghinai I, Castro Y, Leal J, Roberts N, et al. Health-related quality of life in people with predementia Alzheimer's disease, mild cognitive impairment or dementia measured with preference-based instruments: a systematic literature review. Alzheimers Res Ther. 2020;12(1):154. Meregaglia M, Cairns J. A systematic literature review of health state utility values in head and neck cancer. Health Qual Life Outcomes. 2017;15(1):174. Li YK, Alolabi N, Kaur MN, Thoma A. A systematic review of utilities in hand surgery literature. J Hand Surg Am. 2015;40(5):997–1005. Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions. 2nd ed. Chichester: Wiley; 2019. Büttner F, Winters M, Delahunt E, Elbers R, Lura CB, Khan KM, Weir A, Ardern CL. Identifying the 'incredible'! Part 1: assessing the risk of bias in outcomes included in systematic reviews. Br J Sports Med. 2020;54(13):798–800. Büttner F, Winters M, Delahunt E, Elbers R, Lura CB, Khan KM, Weir A, Ardern CL. Identifying the 'incredible'! Part 2: Spot the difference - a rigorous risk of bias assessment can alter the main findings of a systematic review. Br J Sports Med. 2020;54(13):801–8. Dechartres A, Charles P, Hopewell S, Ravaud P, Altman DG. Reviews assessing the quality or the reporting of randomized controlled trials are increasing over time but raised questions about how quality is assessed. J Clin Epidemiol. 2011;64(2):136–44. Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. 2016;6(12):e011458. Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, Affengruber L, Stevens A. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13–22. Burls A. What is Critical Appraisal? [Online]. Hayward Medical Communications; 2009. Available: http://www.bandolier.org.uk/painres/download/whatis/What_is_critical_appraisal.pdf. Accessed 5 Nov 2021. Verhagen AP, De Vet HC, De Bie RA, Kessels AG, Boers M, Bouter LM, Knipschild PG. The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol. 1998;51(12):1235–41. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, McQuay HJ. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials. 1996;17(1):1–12. Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ (Clin Res Ed). 2016;355:i4919. Katrak P, Bialocerkowski AE, Massy-Westropp N, Kumar VSS, Grimmer KA. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol. 2004;4(1):22. Stalmeier PF, Goldstein MK, Holmes AM, Lenert L, Miyamoto J, Stiggelbout AM, Torrance GW, Tsevat J. What should be reported in a methods section on utility assessment? Med Decis Making. 2001;21(3):200–7. 
Bridges JF, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, Johnson FR, Mauskopf J. Conjoint analysis applications in health–a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4):403–13. Petrou S, Rivero-Arias O, Dakin H, Longworth L, Oppe M, Froud R, Gray A. The MAPS Reporting Statement for Studies Mapping onto Generic Preference-Based Outcome Measures: Explanation and Elaboration. Pharmacoeconomics. 2015;33(10):993–1011. Petrou S, Rivero-Arias O, Dakin H, Longworth L, Oppe M, Froud R, Gray A. Preferred Reporting Items for Studies Mapping onto Preference-Based Outcome Measures: The MAPS Statement. Pharmacoeconomics. 2015;33(10):985–91. Xie F, Pickard AS, Krabbe PF, Revicki D, Viney R, Devlin N, Feeny D. A Checklist for Reporting Valuation Studies of Multi-Attribute Utility-Based Instruments (CREATE). Pharmacoeconomics. 2015;33(8):867–77. Zhang Y, Alonso-Coello P, Guyatt GH, Yepes-Nuñez JJ, Akl EA, Hazlewood G, Pardo-Hernandez H, Etxeandia-Ikobaltzeta I, Qaseem A, Williams JW Jr, et al. GRADE Guidelines: 19. Assessing the certainty of evidence in the importance of outcomes or values and preferences-Risk of bias and indirectness. J Clin Epidemiol. 2019;111:94–104. Zhang Y, Coello PA, Guyatt GH, Yepes-Nuñez JJ, Akl EA, Hazlewood G, Pardo-Hernandez H, Etxeandia-Ikobaltzeta I, Qaseem A, Williams JW Jr, et al. GRADE guidelines: 20. Assessing the certainty of evidence in the importance of outcomes or values and preferences-inconsistency, imprecision, and other domains. J Clin Epidemiol. 2019;111:83–93. Aceituno D, Pennington M, Iruretagoyena B, Prina AM, McCrone P. Health State Utility Values in Schizophrenia: A Systematic Review and Meta-Analysis. Value Health. 2020;23(9):1256–67. Blom EF, Haaf KT, de Koning HJ. Systematic Review and Meta-Analysis of Community- and Choice-Based Health State Utility Values for Lung Cancer. Pharmacoeconomics. 2020;38(11):1187–200. Buchanan-Hughes AM, Buti M, Hanman K, Langford B, Wright M, Eddowes LA. Health state utility values measured using the EuroQol 5-dimensions questionnaire in adults with chronic hepatitis C: a systematic literature review and meta-analysis. Qual Life Res. 2019;28(2):297–319. Carter GC, King DT, Hess LM, Mitchell SA, Taipale KL, Kiiskinen U, Rajan N, Novick D, Liepa AM. Health state utility values associated with advanced gastric, oesophageal, or gastro-oesophageal junction adenocarcinoma: a systematic review. J Med Econ. 2015;18(11):954–66. Cooper JT, Lloyd A, Sanchez JJG, Sörstadius E, Briggs A, McFarlane P. Health related quality of life utility weights for economic evaluation through different stages of chronic kidney disease: a systematic literature review. Health Qual Life Outcomes. 2020;18(1):310. Di Tanna GL, Urbich M, Wirtz HS, Potrata B, Heisen M, Bennison C, Brazier J, Globe G. Health State Utilities of Patients with Heart Failure: A Systematic Literature Review. Pharmacoeconomics. 2021;39(2):211–29. Golicki D, Jaśkowiak K, Wójcik A, Młyńczak K, Dobrowolska I, Gawrońska A, Basak G, Snarski E, Hołownia-Voloskova M, Jakubczyk M, et al. EQ-5D-Derived Health State Utility Values in Hematologic Malignancies: A Catalog of 796 Utilities Based on a Systematic Review. Value Health. 2020;23(7):953–68. Kua WS, Davis S. PRS49 - Systematic Review of Health State Utilities in Children with Asthma. Value Health. 2016;19(7):A557. Magnus A, Isaranuwatchai W, Mihalopoulos C, Brown V, Carter R. 
A Systematic Review and Meta-Analysis of Prostate Cancer Utility Values of Patients and Partners Between 2007 and 2016. MDM Policy Practice. 2019;4(1):2381468319852332. Paracha N, Abdulla A, MacGilchrist KS. Systematic review of health state utility values in metastatic non-small cell lung cancer with a focus on previously treated patients. Health Qual Life Outcomes. 2018;16(1):179. Paracha N, Thuresson PO, Moreno SG, MacGilchrist KS. Health state utility values in locally advanced and metastatic breast cancer by treatment line: a systematic review. Expert Rev Pharmacoecon Outcomes Res. 2016;16(5):549–59. Petrou S, Krabuanrat N, Khan K. Preference-Based Health-Related Quality of Life Outcomes Associated with Preterm Birth: A Systematic Review and Meta-analysis. Pharmacoeconomics. 2020;38(4):357–73. Saeed YA, Phoon A, Bielecki JM, Mitsakakis N, Bremner KE, Abrahamyan L, Pechlivanoglou P, Feld JJ, Krahn M, Wong WWL. A Systematic Review and Meta-Analysis of Health Utilities in Patients With Chronic Hepatitis C. Value Health. 2020;23(1):127–37. Szabo SM, Audhya IF, Malone DC, Feeny D, Gooch KL. Characterizing health state utilities associated with Duchenne muscular dystrophy: a systematic review. Quality Life Res. 2020;29(3):593–605. Afshari S, Ameri H, Daroudi RA, Shiravani M, Karami H, Akbari Sari A. Health related quality of life in adults with asthma: a systematic review to identify the values of EQ-5D-5L instrument. J Asthma. 2021;59(6):1203–12. Ó Céilleachair A, O'Mahony JF, O'Connor M, O'Leary J, Normand C, Martin C, Sharp L. Health-related quality of life as measured by the EQ-5D in the prevention, screening and management of cervical disease: A systematic review. Qual Life Res. 2017;26(11):2885–97. Jiang M, Ma Y, Li M, Meng R, Ma A, Chen P. A comparison of self-reported and proxy-reported health utilities in children: a systematic review and meta-analysis. Health Qual Life Outcomes. 2021;19(1):45. Rebchuk AD, O'Neill ZR, Szefer EK, Hill MD, Field TS. Health Utility Weighting of the Modified Rankin Scale: A Systematic Review and Meta-analysis. JAMA Netw Open. 2020;3(4):e203767. Herzog R, Álvarez-Pasquin MJ, Díaz C, Del Barrio JL, Estrada JM, Gil Á. Are healthcare workers' intentions to vaccinate related to their knowledge, beliefs and attitudes? a systematic review. BMC Public Health. 2013;13(1):154. Gupta A, Giambrone AE, Gialdini G, Finn C, Delgado D, Gutierrez J, Wright C, Beiser AS, Seshadri S, Pandya A, et al. Silent Brain Infarction and Risk of Future Stroke: A Systematic Review and Meta-Analysis. Stroke. 2016;47(3):719–25. Vistad I, Fosså SD, Dahl AA. A critical review of patient-rated quality of life studies of long-term survivors of cervical cancer. Gynecol Oncol. 2006;102(3):563–72. Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007;85(4):729–68. Gupta A, Kesavabhotla K, Baradaran H, Kamel H, Pandya A, Giambrone AE, Wright D, Pain KJ, Mtui EE, Suri JS, et al. Plaque echolucency and stroke risk in asymptomatic carotid stenosis: a systematic review and meta-analysis. Stroke. 2015;46(1):91–7. Eiring Ø, Landmark BF, Aas E, Salkeld G, Nylenna M, Nytrøen K. What matters to patients? A systematic review of preferences for medication-associated outcomes in mental disorders. BMJ Open. 2015;5(4):e007848. Hatswell AJ, Burns D, Baio G, Wadelin F. 
Frequentist and Bayesian meta-regression of health state utilities for multiple myeloma incorporating systematic review and analysis of individual patient data. Health Econ. 2019;28(5):653–65. Kwon J, Kim SW, Ungar WJ, Tsiplova K, Madan J, Petrou S. A Systematic Review and Meta-analysis of Childhood Health Utilities. Med Decis Making. 2018;38(3):277–305. Etxeandia-Ikobaltzeta I, Zhang Y, Brundisini F, Florez ID, Wiercioch W, Nieuwlaat R, Begum H, Cuello CA, Roldan Y, Chen R, et al. Patient values and preferences regarding VTE disease: a systematic review to inform American Society of Hematology guidelines. Blood Adv. 2020;4(5):953–68. Yuan Y, Xiao Y, Chen X, Li J, Shen M. A Systematic Review and Meta-Analysis of Health Utility Estimates in Chronic Spontaneous Urticaria. Front Med (Lausanne). 2020;7:543290. Ward Fuller G, Hernandez M, Pallot D, Lecky F, Stevenson M, Gabbe B. Health State Preference Weights for the Glasgow Outcome Scale Following Traumatic Brain Injury: A Systematic Review and Mapping Study. Value Health. 2017;20(1):141–51. Van Wilder L, Rammant E, Clays E, Devleesschauwer B, Pauwels N, De Smedt D. A comprehensive catalogue of EQ-5D scores in chronic disease: results of a systematic review. Qual Life Res. 2019;28(12):3153–61. Han R, François C, Toumi M. Systematic Review of Health State Utility Values Used in European Pharmacoeconomic Evaluations for Chronic Hepatitis C: Impact on Cost-Effectiveness Results. Appl Health Econ Health Policy. 2021;19(1):29–44. Brennan VK, Mauskopf J, Colosia AD, Copley-Merriman C, Hass B, Palencia R. Utility estimates for patients with Type 2 diabetes mellitus after experiencing a myocardial infarction or stroke: a systematic review. Expert Rev Pharmacoecon Outcomes Res. 2015;15(1):111–23. Gheorghe A, Moran G, Duffy H, Roberts T, Pinkney T, Calvert M. Health Utility Values Associated with Surgical Site Infection: A Systematic Review. Value Health. 2015;18(8):1126–37. Yang Z, Li S, Wang X, Chen G. Health state utility values derived from EQ-5D in psoriatic patients: a systematic review and meta-analysis. J Dermatol Treat. 2020;33(2):1029–36. Tran AD, Fogarty G, Nowak AK, Espinoza D, Rowbotham N, Stockler MR, Morton RL. A systematic review and meta-analysis of utility estimates in melanoma. Br J Dermatol. 2018;178(2):384–93. Haridoss M, Bagepally BS, Natarajan M. Health-related quality of life in rheumatoid arthritis: Systematic review and meta-analysis of EuroQoL (EQ-5D) utility scores from Asia. Int J Rheum Dis. 2021;24(3):314–26. Foster E, Chen Z, Ofori-Asenso R, Norman R, Carney P, O'Brien TJ, Kwan P, Liew D, Ademi Z. Comparisons of direct and indirect utilities in adult epilepsy populations: A systematic review. Epilepsia. 2019;60(12):2466–76. Khadka J, Kwon J, Petrou S, Lancsar E, Ratcliffe J. Mind the (inter-rater) gap. An investigation of self-reported versus proxy-reported assessments in the derivation of childhood utility values for economic evaluation: A systematic review. Soc Sci Med. 2019;240:112543. Zrubka Z, Rencz F, Závada J, Golicki D, Rupel VP, Simon J, Brodszky V, Baji P, Petrova G, Rotar A, et al. EQ-5D studies in musculoskeletal and connective tissue diseases in eight Central and Eastern European countries: a systematic literature review and meta-analysis. Rheumatol Int. 2017;37(12):1957–77. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84. 
Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savović J, Schulz KF, Weeks L, Sterne JAC. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ (Clin Res Ed). 2011;343:d5928. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, Cheng H-Y, Corbett MS, Eldridge SM, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ (Clin Res Ed). 2019;366:l4898. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36. Blanchard P, Volk RJ, Ringash J, Peterson SK, Hutcheson KA, Frank SJ. Assessing head and neck cancer patient preferences and expectations: A systematic review. Oral Oncol. 2016;62:44–53. Brown V, Tan EJ, Hayes AJ, Petrou S, Moodie ML. Utility values for childhood obesity interventions: a systematic review and meta-analysis of the evidence for use in economic evaluation. Obes Rev. 2018;19(7):905–16. Mohindru B, Turner D, Sach T, Bilton D, Carr S, Archangelidi O, Bhadhuri A, Whitty JA. Health State Utility Data in Cystic Fibrosis: A Systematic Review. Pharmacoecon Open. 2020;4(1):13–25. Xia Q, Campbell JA, Ahmad H, Si L, de Graaff B, Otahal P, Palmer AJ. Health state utilities for economic evaluation of bariatric surgery: A comprehensive systematic review and meta-analysis. Obes Rev. 2020;21(8):e13028. Brazier J, Rowen D. NICE DSU Technical Support Document 11: Alternatives to EQ-5D for Generating Health State Utility Values. National Institute for Health and Care Excellence (NICE). NICE Decision Support Unit Technical Support Documents. School of Health and Related Research, University of Sheffield, UK; 2011. https://www.ncbi.nlm.nih.gov/books/NBK425861/pdf/Bookshelf_NBK425861.pdf. de Craen AJ, van Vliet HA, Helmerhorst FM. An analysis of systematic reviews indicated low incorporation of results from clinical trial quality assessment. J Clin Epidemiol. 2005;58(3):311–3. Hopewell S, Boutron I, Altman DG, Ravaud P. Incorporation of assessments of risk of bias of primary studies in systematic reviews of randomised trials: a cross-sectional study. BMJ Open. 2013;3(8):e003342. The authors thank Rachel Eckford and Tafirenyika Brian Gwenzi for proofreading and editing this manuscript. We also immensely appreciate the members of the DKFZ Division of Health Economics (https://www.dkfz.de/en/gesundheitsoekonomie/index.php) for their insightful comments and suggestions during internal presentations (i.e., team meetings) of the current review process. Open Access funding enabled and organized by Projekt DEAL. This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. 
Division of Health Economics, German Cancer Research Center (DKFZ), Foundation under Public Law, Im Neuenheimer Feld 280, 69120 Heidelberg, Germany: Muchandifunga Trust Muchadeyi, Karla Hernandez-Villafuerte & Michael Schlander. Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany: Muchandifunga Trust Muchadeyi & Michael Schlander. Health Economics, WifOR Institute, Rheinstraße 22, 64283 Darmstadt, Germany: Karla Hernandez-Villafuerte. Alfred Weber Institute for Economics (AWI), University of Heidelberg, Heidelberg, Germany: Michael Schlander. MTM contributed to the conception, development of the search strategy, retrieval of articles for review, step-wise screening of articles, data extraction, data analysis, interpretation and discussion of findings and writing the final manuscript. KHV contributed to the conception, development of the search strategy, step-wise screening of articles (quality checks), data extraction (quality checks), data analysis, interpretation and discussion of findings and writing the final manuscript. MS contributed to the conception, design and analysis of the study, interpretation of findings and writing of the final manuscript. The author(s) read and approved the final manuscript. Correspondence to Muchandifunga Trust Muchadeyi. This rapid review involved no study participants and was exempt from institutional review. All data analysed in the current review came from previously published SLRs. Muchadeyi, M.T., Hernandez-Villafuerte, K. & Schlander, M. Quality appraisal for systematic literature reviews of health state utility values: a descriptive analysis. BMC Med Res Methodol 22, 303 (2022). https://doi.org/10.1186/s12874-022-01784-6. Keywords: quality appraisal; health state utility values.
Realizing drug repositioning by adapting a recommendation system to handle the process Makbule Guclin Ozsoy1, Tansel Özyer2, Faruk Polat1 & Reda Alhajj3 A Correction to this article was published on 02 July 2018. Drug repositioning is the process of identifying new targets for known drugs. It can be used to overcome problems associated with traditional drug discovery by adapting existing drugs to treat newly discovered diseases. Thus, it may reduce the risk, cost and time required to identify and verify new drugs. Nowadays, drug repositioning has received more attention from industry and academia. To tackle this problem, researchers have applied many different computational methods and have used various features of drugs and diseases. In this study, we contribute to the ongoing research efforts by combining multiple features, namely chemical structures, protein interactions and side-effects, to predict new indications of target drugs. To achieve our target, we realize drug repositioning as a recommendation process, which leads to a new perspective on tackling the problem. The utilized recommendation method is based on Pareto dominance and collaborative filtering. It can also integrate multiple data-sources and multiple features. For the computation part, we applied several settings and we compared their performance. Evaluation results show that the proposed method can achieve more concentrated predictions with high precision, where nearly half of the predictions are true. Traditional drug discovery approaches are characterized by high cost and high risk [22]. In 2010, some researchers, e.g., [9], stated that bringing a new drug to the market takes about 15 years and costs between $800 million and $1 billion. A recent study, published in 2014 [7], revealed that developing a new medicine and getting its market approval takes more than 10 years and costs more than $2.5 billion. In response to these costs, drug repositioning has recently received considerable attention as a good alternative which could reduce both the time and cost associated with seeking new drugs for emerging diseases. Instead, existing drugs may be adapted as less risky alternatives. Drug repositioning can be defined as the process of identifying new targets for known drugs [22]. It does not aim to replace traditional drug discovery research, but aims to complement it ([31, 35]). Researchers stated in [9] that the time required to develop a new drug can be reduced by 30–60% by adopting drug repositioning. Having knowledge of unknown but more probable drug-disease relations may help researchers in the drug industry to conduct more targeted laboratory experiments and find new targets for known drugs. Another advantage of drug repositioning compared to new drug development is that drug repositioning reduces risk because it deals with drugs which have already passed toxicity and other tests, and hence have been approved [37]. Some example drug repositioning cases are presented in [9]. 
For instance, Minoxidil was originally tested for hypertension and was then found useful for hair loss; Viagra was originally tested for angina and was then found useful for erectile dysfunction and pulmonary hypertension; Avastin was originally developed for metastatic colon cancer and non-small-cell lung cancer and was then found useful for metastatic breast cancer. As a result of the above-mentioned advantages, drug repositioning has received more attention from industry and academia [9]. Nowadays, with the advancement in technology, researchers are more capable of accessing different types of biological data and complex networks which are composed of different types of interactions among biological components [10]. Using these data sources, many different computational methodologies may be used to predict possible new use-cases (repositions) for drugs. As described in the literature, most researchers tackled the problem by applying methods from data mining and machine learning. These methods use a single feature or a combination of features to model drugs. Some example features used in the process are chemical structures of drugs, protein targets, side-effect profiles and gene expression profiles [41]. In this study, we adapted a method from the recommendation systems literature to handle the drug repositioning problem. The utilized method has already been applied to produce successful recommendation systems in various domains, including location recommendation [29] and bioinformatics for predicting the structure of gene regulatory networks (GRNs) [30]. The recommendation method employed in this study is based on Pareto dominance and collaborative filtering. It is also capable of integrating multiple data-sources and multiple features. Inspired by a state-of-the-art method for drug repositioning [41], we used three types of information, namely chemical properties, protein targets and side-effect profiles. For the calculations, we applied several different settings and we compared their performance results. The conducted experiments revealed some promising results which demonstrate the applicability and effectiveness of the proposed approach. As described in the literature, identifying new targets for known drugs, namely drug repositioning, has recently received more attention from industry and academia. The work described in [9] classifies computational drug repositioning methods into two categories: drug-based and disease-based approaches. Drug-based repositioning methods initiate their analysis from chemical or pharmaceutical features of drugs. Disease-based repositioning methods initiate the analysis from symptomatology or pathology features of diseases. Drug repositioning methods use various features for the computations [41], e.g., chemical structures of drugs, protein-target interaction networks, side-effects of drugs, gene expression levels and textual features. There are many drug repositioning methods described in the literature. However, they mostly use only one feature: the structural and chemical properties of a drug in relation to the diseases it affects. Drugs with high chemical similarity can be used for drug repositioning [9]. The works described in [19, 27] are example methods that use chemical similarity for drug repositioning. The authors of the work described in [5] stated that common segments in protein-protein interaction and protein-target interaction networks can reveal cross-reactions and can be used for drug repositioning. 
The works described in [20, 23] use protein-target interaction networks. Side effects are physiological consequences of drugs' biological activity; they can provide information on the underlying pathways or physiological systems to which drugs are related [9]. Side-effect similarity between drugs may indicate physiological relatedness between them. The works described in [1, 40] use the side-effect similarity of drugs for drug repositioning. Similarities at the molecular level can also be used for drug repositioning [9]. For this purpose, the works described in [12, 13, 34] use gene expressions and molecular activity signatures. Some of the works described in the literature rely on text mining tools to connect drugs and diseases [32]. One such method is described in [2]. It applies text mining methods to associate query and matching terms related to diseases, genes, drugs, mutations and metabolites. It also ranks related sentences and abstracts. Recent drug-repositioning methods have combined multiple features to achieve better performance. For instance, the work described in [22] combined chemical and molecular features to identify similar drugs. The authors applied a bipartite graph based method to predict novel indications of drugs. Luo et al. [26] used drug-drug and disease-disease similarities to create a graph. Then they employed a random walk on this graph to extract new drug-disease relations. Lim et al. [24] used chemical and protein similarities to create a network of drug-disease relations. Then they used matrix factorization to decide on drugs which can be repurposed. They showed that their proposed method is highly scalable. Gottlieb et al. [11] used chemical structures, side effects and drug targets to calculate the pairwise similarity of drugs. They used the calculated similarities as input features for a machine learning method, namely logistic regression, and predicted new drug-disease relations. Zhang et al. [41] used chemical, biological and phenotypical features to calculate drug-drug similarities, which are used to find the k nearest neighbors. The known targets of the neighbors are then used for drug repositioning. Qabaja et al. [32] combined information collected from gene expression profiling and text mining. They applied logistic regression to predict associations between drugs and diseases. Ozgur et al. [28] used text mining techniques to create a parse tree, which was then used to create a protein-protein network. They also applied some social network analysis techniques (e.g., degree centrality, closeness) to prioritize genes' effects on diseases. Rastegar-Mojarad et al. [33] also used text mining techniques to repurpose drugs. They collected user comments on drugs and diseases from social media and applied a combination of machine learning and rule based approaches to extract candidates for drug repurposing. Recent research on big data in bioinformatics can also reveal new ways to find new indications of known drugs. The work described in [15, 16] proposed new methods to identify DNA damage and breaks, which are important for disease investigations and drug design. The work described in [18] focused on cancer and applied several different machine learning methods for data reduction and coding area selection, which is considered a key area for discovering the desired medicine. The research described in [14, 17], which aims to predict primary, secondary and tertiary protein structures and to handle large-volume biological datasets, can also be used for extracting drug-disease relations. 
Compared to the works described in the literature, in this paper we investigate the problem of drug repositioning from a different perspective, which enriches the current literature related to this field and additionally confirms the reported results. In particular, we realize drug repositioning as a recommendation process. In other words, we argue that it is possible to recommend existing drugs for treating emerging diseases based on the characteristics of new diseases as compared to the characteristics of existing diseases in relation to their associated effective drugs. Thus, we apply a method from the recommendation systems domain to tackle the drug repositioning problem. The employed method is able to integrate multiple data-sources and multiple features. Similar to the work of Zhang et al. [41], the proposed method first identifies the drugs most similar to the target drug. Then, it uses the known relations of the neighbor drugs to predict new indications of the target drug. Unlike the work of Zhang et al. [41], we use a Pareto dominance and collaborative filtering based method, which has already been used as part of adapting recommendation systems to other domains, like venue recommendation and, in bioinformatics, predicting the structure of gene regulatory networks. Also, we have applied several settings for the calculation and we have compared their performance. The rest of this paper is organized as follows: "Methods" section presents the proposed drug repositioning method. "Results and discussion" section includes the evaluation process and the results. "Conclusions" section presents conclusions and future work. The aim of this work is to predict new uses of known drugs by analyzing multiple features and multiple data sources. For this purpose, we adapted a recommendation system based method which has been successfully applied in other domains. Fortunately, the results reported from this study clearly demonstrate the effectiveness and applicability of recommendation methods for drug repositioning. In other words, the process could be easily mapped to recommending an existing drug for handling a new disease by studying the characteristics of new diseases in relation to already known diseases and their associated drugs. Zhang et al. [41] stated that similar drugs are indicators for similar diseases. Accordingly, their work uses the diseases indicated for similar drugs to reposition target drugs. Realizing that this approach is similar to collaborative filtering in the recommendation systems domain, we adapted for drug repositioning a method that we previously proposed for classical recommendation purposes [29]. In the following subsections, we first present the proposed method in general, and then we describe the steps of the method in detail. Pareto dominance and collaborative filtering based prediction The utilized recommendation method uses Pareto dominance and collaborative filtering approaches to predict future venue preferences (i.e., check-in locations) of target users. Its idea is based on the observation that similar users tend to visit similar venues. Accordingly, it would be acceptable to recommend to a target user venues that have been visited by similar users. As described in [30], we applied the same concept in the bioinformatics domain for predicting the structure of gene regulatory networks. In the latter work, target genes are used instead of target users, and accordingly regulated genes are predicted. 
The achieved results confirmed promising aspects of adapting a recommendation system to discover gene regulations. The success achieved in studying gene regulatory networks motivated us to investigate the applicability of recommendation systems for drug repositioning. The overall design of the proposed method for drug repositioning is shown in Fig. 1, where the modules and their interactions are presented. The proposed method is composed of three main steps, namely similarity calculation, neighbor selection and item (disease) selection. In the similarity calculation step, each feature is used to determine the similarity between drugs. Then, the similarities are used to find the most similar drugs, namely neighbors, by a Pareto dominance based method. Then, the known connections between neighbor drugs and their indicated diseases are used for prediction. Reported at the end is a prediction list of target drugs and predicted diseases which could be treated by the target drugs. Design of the proposed method Details of the proposed method For the calculations performed in the process, we used three main features, namely chemical properties of drugs, protein targets, and side-effect profiles. In this section, we explain the details of the various steps of the proposed method and how the above-mentioned features are used. Similarity calculation In this step, the similarity between drugs is calculated for each type of feature. We used several similarity measures in the calculation, namely Cosine similarity, Jaccard similarity and a similarity score based on Smith-Waterman sequence alignment. In this section, we present how these similarity measures are calculated. In the evaluation section, we present how these similarity measures have been used and combined, as well as their corresponding performance results. Cosine similarity is calculated as depicted in Eq. 1, where A and B denote drugs. Drugs may be represented as vectors, where a vector contains one value per feature to reflect how a drug is related to the specific feature. The subscript j in Eq. 1 refers to the individual values of a feature vector. For instance, for the "chemical properties" feature, a drug may be represented as a binary vector whose values represent the existence/non-existence of a chemical structure. The similarity between two drugs can then be calculated based on their common chemical structures and the lengths of their feature vectors. $$ sim(A,B)=\frac{\sum\limits_{j=1}^{n} A_{j} \times B_{j}}{\sqrt{\sum\limits_{j=1}^{n} A_{j}^{2}} \times \sqrt{ \sum\limits_{j=1}^{n} B_{j}^{2}}} $$ Jaccard similarity is calculated by invoking Eq. 2, where |A| represents the number of features present in drug A's vector (for a binary vector, the number of ones) and |AB| represents the number of features common to both drugs. This similarity measure is also called the Tanimoto index/similarity when the feature vector is binary. $$ sim(A,B)=\frac{|AB|}{|A| + |B| - |AB| } $$ In the work of Zhang et al. [41], a similarity score based on Smith-Waterman sequence alignment is used. In this study, we also applied the same similarity measure when possible. As explained previously, drugs may be represented as a feature vector. The entries/elements of a vector can themselves be represented as sequences. For instance, a drug can be represented as a vector of proteins, and proteins themselves may be represented as sequences of smaller biological elements. The similarity of these sequences, e.g., protein sequences, can be calculated by the Smith-Waterman sequence alignment method. 
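To make the two vector-based measures concrete, the following is a minimal sketch in Python (our own illustration, not code from the paper), assuming drugs are represented as binary feature vectors such as the presence/absence of chemical substructures; the function names and example vectors are ours.

from math import sqrt

def cosine_similarity(a, b):
    """Eq. 1: cosine similarity between two (binary) feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def jaccard_similarity(a, b):
    """Eq. 2: Jaccard (Tanimoto) similarity for binary feature vectors."""
    common = sum(1 for x, y in zip(a, b) if x and y)  # |AB|
    denom = sum(a) + sum(b) - common                  # |A| + |B| - |AB|
    return common / denom if denom else 0.0

# Two drugs described by five chemical substructures.
drug_a = [1, 0, 1, 1, 0]
drug_b = [1, 1, 1, 0, 0]
print(cosine_similarity(drug_a, drug_b))   # ~0.667
print(jaccard_similarity(drug_a, drug_b))  # 0.5

The sequence-alignment-based score discussed next cannot be reduced to such vector operations, since it compares the underlying sequences (e.g., protein sequences) themselves.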
After obtaining the Smith-Waterman sequence alignment scores, the similarity between drugs can be calculated by the formula given in Eq. 3 (Footnote 1). In Eq. 3, V(A) represents the feature vector of drug A, and each vector element is itself a sequence of smaller elements; these elements are represented as Vi(A). The Smith-Waterman alignment score between sequences Vi(A) and Vj(B), used in Eq. 3, is denoted simSW(Vi(A),Vj(B)). $$ sim(A,B)=\frac{\sum\limits_{i=1}^{|V(A)|} \sum\limits_{j=1}^{|V(B)|} sim_{SW}(V_{i}(A),V_{j}(B))} {|V(A)| \times |V(B)|} $$ Neighbor selection In this step, the drugs most similar to the target drug (i.e., its neighbors) are selected. Neighbors are decided using the similarities calculated in the previous step and by applying a Pareto dominance based method. In this method, drugs not dominated by other drugs are selected as neighbors. The dominance relation between drugs is decided by Eq. 4, where di and dj represent drugs and f indicates features. According to this equation, if drug di has at least one higher similarity value than drug dj and no lower similarity values than drug dj, then drug di dominates drug dj. $$ dom(d_{i},d_{j}) = \left\{ \begin{array}{l l} 1.0 & \quad \forall f \; d_{i}(f)\geq d_{j}(f) \text{ and} \\ & \quad \exists f \; d_{i}(f) > d_{j}(f)\\ 0.0 & \quad \text{otherwise} \end{array} \right. $$ An example input and the non-dominated solutions are given in Fig. 2, where the data-set is composed of eight drugs and the target drug is identified as drug d0. Similarities between drugs for each feature fi are also listed. First, based on these similarities, the dominance matrix is created using Eq. 4. Then, non-dominated drugs (i.e., drugs with a zero column total in the dominance matrix) are selected as neighbors. In this example, d5, d6 and d7 are selected as the drugs most similar to the target drug. Example input and non-dominated solutions As explained in [29], applying the Pareto dominance based approach for a single iteration may provide fewer than the predefined number of neighbors. In order to collect as many neighbors as predefined, an iterative process can be applied. In each iteration, the non-dominated neighbors are found and removed from the set of candidates. The iterations are executed until the predefined number of neighbors is collected. At the end, if the number of collected neighbors is larger than the predefined number (i.e., more non-dominated drugs than expected are found in the last iteration), the neighbor list can be pruned to the exact number of neighbors or left as it is. These preferences are identified in [30]; they are called Multi-Objective Optimization Type (MOT) settings, which can be explained as follows (a code sketch of the selection is given below). Only_Dominates (OD): Execute a single iteration to find non-dominated neighbors. The number of non-dominated drugs is not set; it depends directly on the similarity values. N_Dominates (ND): Execute multiple iterations to find non-dominated neighbors. The number of non-dominated drugs is set to exactly N, i.e., pruning is applied when necessary. At_Least_N_Dominates (AND): Execute multiple iterations to find non-dominated neighbors. The number of non-dominated drugs is set to at least N, i.e., pruning is not applied. Item selection In this step, the items to be recommended are selected. For the problem investigated in this study, the selected items are diseases for which the target drug could be repositioned. First, candidates are identified by collecting items related to the neighbors, i.e., diseases that are listed as indications of the neighbor drugs. 
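Before the candidate scoring is made precise, here is a minimal Python sketch of the neighbor selection step described above (our own illustration), assuming each candidate drug carries one similarity value per feature; dominates implements Eq. 4 and select_neighbors corresponds to the At_Least_N_Dominates (AND) setting. All drug names and similarity values are hypothetical.

def dominates(di, dj):
    """Eq. 4: di dominates dj if it is >= on every feature and > on at least one."""
    return all(x >= y for x, y in zip(di, dj)) and any(x > y for x, y in zip(di, dj))

def non_dominated(candidates):
    """Return the drugs not dominated by any other candidate (the Pareto front)."""
    return [d for d in candidates
            if not any(dominates(candidates[o], candidates[d])
                       for o in candidates if o != d)]

def select_neighbors(similarities, n):
    """AND setting: collect Pareto fronts iteratively until at least n neighbors."""
    remaining = dict(similarities)
    neighbors = []
    while remaining and len(neighbors) < n:
        front = non_dominated(remaining)
        neighbors.extend(front)
        for d in front:
            del remaining[d]
    return neighbors

# Per-feature similarities (chemical, protein, side-effect) to the target drug.
sims = {"d1": (0.2, 0.1, 0.05), "d2": (0.4, 0.5, 0.1),
        "d3": (0.3, 0.6, 0.2), "d4": (0.1, 0.1, 0.1)}
print(select_neighbors(sims, 2))  # ['d2', 'd3']: d1 and d4 are dominated

Under the ND setting, the last front would additionally be pruned to exactly n drugs, while OD would simply return the first front regardless of its size.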
For each candidate item (disease), a score is calculated by Eq. 5, where the score is denoted score(c,t), the candidate item (disease) is denoted c, the target is denoted t, and a neighbor is denoted n. The similarity between the target and a neighbor drug is given as sim(t,n). The function f(n,c) represents the relationship score between neighbor drug n and candidate disease c given in the input data. In general this score may take values other than zero and one, but our dataset is represented as binary vectors indicating whether or not a drug has a relation with a disease, so the value of f(n,c) is either one or zero. A higher item selection score means the target drug has a more promising relation with the candidate disease. $$ score(c, t) = \sum_{n \in \text{Nghb}} sim(t,n) \times f(n,c) $$ For computing the score, two different settings can be used. They are called Item Selection Type (IST) settings, and they are described as follows. Sum (SUM): Without considering the similarities between the target and neighbor drugs, votes (the summation of f(n,c) values) are calculated for each candidate. Items (diseases) with the highest number of votes are presented in the output list. This setting has already been described in [30]. Weighted Sum (WSUM): The sim(t,n) value is also included in the summation, so that more similar drugs have more weight in the prediction. Items (diseases) with the highest scores are included in the output list.

For the evaluation, we used the same dataset used by Zhang et al. [41], which they have shared on their website (see http://astro.temple.edu/~tua87106/drugreposition.html). In the following subsections, we explain the dataset, the evaluation metrics and the evaluation results. As the golden dataset, we used the same drug-disease data provided by Zhang et al. [41]; the dataset was also used by Li and Lu [22]. The dataset integrates three data sources, namely chemical data, protein data and side-effects data. The chemical data contains 122,022 links between 1007 drugs and 881 PubChem [36] chemical substructures. Each drug is represented as a binary vector, where each entry indicates the presence or absence of the related chemical substructure. The sparsity of this data is about 86.25%. The protein data contains 3152 associations between 1007 drugs and 775 UniProt target proteins; the drug-target associations were generated using DrugBank [38]. The sparsity of this data is 99.60%. The side-effects data contains 61,102 connections between 888 drugs and 1385 side-effects, with a sparsity ratio of 95.03%; this data has been generated from the SIDER database [21]. Each data source contains information about a single feature, and features are represented as binary vectors. The drugs listed in each data source are not necessarily the same; because of this, the overall dataset (the combination of all three data sources) contains more than 1007 drugs. Since the drugs in each data source may differ, drugs may have missing information for one or more features. In this work, after obtaining the dataset of Zhang et al. [41], we applied a preprocessing step to collect a list of drug names and to map them to the drug names in the chemical, protein and side-effects data sources. During this process, we noticed that some drugs may have different names (synonyms). For example, we found that one drug is referred to as Ursodiol in the chemical data, while it is referred to as Ursodeoxycholic acid in both the protein and side-effect data. We looked up synonyms on the DrugBank website [8].
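A small sketch of the item selection step of Eq. 5 is given below; the dictionaries and disease names are hypothetical, and weighted=False/True correspond to the SUM and WSUM settings, respectively.

```python
def score_items(neighbors, sim_to_target, drug_disease, k, weighted=True):
    """Eq. 5: score candidate diseases from neighbor drugs.
    neighbors: iterable of neighbor drug ids
    sim_to_target: {drug_id: similarity to the target drug}
    drug_disease: {drug_id: set of indicated diseases} (binary f(n,c))
    Returns the top-k (disease, score) pairs; weighted=False gives SUM,
    weighted=True gives WSUM."""
    scores = {}
    for n in neighbors:
        for c in drug_disease.get(n, set()):
            w = sim_to_target[n] if weighted else 1.0
            scores[c] = scores.get(c, 0.0) + w
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

# Toy example: two neighbor drugs voting for candidate diseases.
neighbors = ["d5", "d6"]
sim_to_target = {"d5": 0.9, "d6": 0.4}
drug_disease = {"d5": {"hypertension"}, "d6": {"hypertension", "migraine"}}
print(score_items(neighbors, sim_to_target, drug_disease, k=2))
# [('hypertension', 1.3), ('migraine', 0.4)]
```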
As a result of the preprocessing step, we obtained 1224 different drugs with the mappings of their names (Footnote 2). The golden dataset, which is also provided by Zhang et al. [41], contains associations between 799 drugs and 719 diseases, with 3250 treatment relations (edges). However, not all drugs listed in this dataset are listed in the input data sources (chemical, protein and side-effect data). Since it is nearly impossible to predict the targets of a drug without any prior information, we did not consider those drugs in the process. The resulting golden dataset contains 781 drugs, 719 diseases and 3179 associations (Footnote 3). Here, it is worth mentioning that this dataset may lack information on recent drug-disease relations which were not available at the time it was created by Zhang et al. [41]. The overall structure of the dataset is shown in Fig. 3. Drug-drug relations are created based on the drugs' similarities to each other using the above-mentioned data sources, namely protein interactions, chemical structures and side-effects. These data sources are represented as binary matrices, where rows represent drugs and columns represent proteins, chemical compounds or side-effects, depending on the information in the data source. In a binary matrix, 1 and 0 indicate whether or not a relationship (like causing a certain side-effect) exists. Drug-disease relations are also represented as a binary matrix, where drugs are listed as rows and diseases as columns. If a drug in a row is known to be used for the treatment of a disease in a column, the intersection cell is set to 1; otherwise the cell is set to 0. In all the data, drugs and diseases are represented using their names as text; no other identifier is used.

Drug-disease relations

Evaluation metrics

For the evaluation, the precision@k, recall@k and F1-measure metrics are used. The formulas for computing these metrics are given in Eqs. 6, 7 and 8, where k indicates the output list length, tp denotes true positives, i.e., predicted and actually indicated diseases, fp denotes false positives, i.e., predicted but not actually indicated diseases, and fn denotes false negatives, i.e., not predicted but actually indicated diseases. $$ Precision_{k} = \frac{tp_{k}}{tp_{k}+fp_{k}} $$ $$ Recall_{k} = \frac{tp_{k}}{tp_{k}+fn_{k}} $$ $$ F1\text{-}measure = 2 \times \frac{Precision \times Recall}{Precision+Recall} $$ For the evaluation, we used the leave-one-out strategy, i.e., we removed the target drug and its relations from the dataset and used the rest in the calculation (Fig. 4). For example, for the target drug Irbesartan we removed the drug-disease relations that already exist in the input dataset. These diseases are known to be cured by Irbesartan, and hence they were used for validation. The output of our methodology, i.e., the predicted diseases which can be cured by Irbesartan, is compared to this validation set. For each target drug, we computed the metrics explained above and reported the average results. Also, since recent drug-disease relationships do not exist in the input dataset (those relations were not known at the time the dataset was generated), we additionally compared our predictions to novel clinical tests using the ClinicalTrials.gov website.

Leave-one-out strategy

We first calculated the upper bounds of the performance metrics. Figure 5 shows the upper bounds of precision, recall and F1-measure for different k values.
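The per-drug metrics of Eqs. 6-8 and the upper bounds just mentioned can both be sketched in a few lines of Python; the disease sets and golden counts below are hypothetical examples, and the simple averaging used for the upper bounds is one plausible reading of how the curves in Fig. 5 could be derived, not necessarily the authors' exact procedure.

```python
def precision_recall_f1(predicted, relevant):
    """Eqs. 6-8 for one target drug.
    predicted: list of up to k predicted diseases
    relevant: set of diseases indicated in the golden dataset."""
    tp = sum(1 for d in predicted if d in relevant)
    fp = len(predicted) - tp
    fn = len(relevant) - tp
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def upper_bounds(golden_counts, k):
    """Best achievable precision@k and recall@k when exactly k
    predictions are forced for every drug; golden_counts holds the
    number of indicated diseases per drug in the golden set."""
    precisions = [min(c, k) / k for c in golden_counts]
    recalls = [min(c, k) / c for c in golden_counts]
    avg = lambda xs: sum(xs) / len(xs)
    return avg(precisions), avg(recalls)

# Leave-one-out style check for a single (hypothetical) target drug.
predicted = ["hypertension", "migraine", "asthma"]
relevant = {"hypertension", "diabetes"}
print(precision_recall_f1(predicted, relevant))  # (0.333..., 0.5, 0.4)

# Toy golden set: three drugs with 2, 5 and 12 indicated diseases.
print(upper_bounds([2, 5, 12], k=10))  # (~0.57, ~0.94)
```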
As expected, precision is highest for smaller k values and decreases as k increases. Recall shows the opposite behavior, i.e., it increases as k increases. F1-measure, which is the harmonic mean of precision and recall, reaches its best value when k is equal to 4. We stopped the evaluation at k=20, since recall had already reached 0.9966.

Upper bounds of recall, precision and F1-measure

Setting the output list size to exactly k has one drawback: not all drugs in the golden dataset are associated with k diseases. If the output list size is set to exactly k, then some predictions will always be wrong. For example, assume that k is set to 10 and that target drug d1 has 5 disease associations in the golden set. Then precision will be at most 0.5. However, if k is used loosely, allowing the method to predict at most 10 items, precision may reach 1.0. The proposed method has the ability to predict at most k associations and does not make any random guesses. We argue that making random guesses is not appropriate for drug repositioning, as it would reduce the benefits of computational drug repositioning compared to traditional methods. Figure 6 shows the upper bounds of precision, recall and F1-measure when random guessing is not allowed. In this figure, precision is always 1.0, as expected. Recall increases as k increases, and this leads to an increase in F1-measure. In our method, we used the value of k loosely, such that the method cannot produce more than k predictions; however, it is possible that the proposed method predicts fewer than k drug-disease relations per target drug. Here it is worth noting that making at most k predictions (without guesses) is more challenging, since the method must decide on the best output list size for each target in addition to making the best prediction.

Upper bounds of recall, precision and F1-measure when random guessing is not allowed

We conducted experiments using several settings, varying the similarity metrics, the Multi-Objective Optimization Type (MOT), and the Item Selection Type (IST). For the similarity type settings, we concentrated on four different settings that use Cosine similarity, Jaccard similarity or Smith-Waterman sequence alignment based similarity scores for the various features, namely the chemical, protein and side-effect features. In the first setting (CCC), Cosine similarity is used for all features. In the second setting (JJJ), Jaccard similarity is used for all features. In the third setting (JJC), Jaccard similarity is used for the chemical and side-effect features and Cosine similarity is used for the protein feature. In the last setting (JJS), Jaccard similarity is used for the chemical and side-effect features and Smith-Waterman sequence alignment based similarity is used for the protein feature. In the experiments, we need to set two variables, namely the neighbor count (N) and the output list size (k). We set the maximum neighbor count and output list size to 20. Instead of testing with a single value, we set N and k to 1, 4, 8, 12, 16 or 20 and conducted experiments using all combinations of N and k values. Figures 7, 8 and 9 present the best performance of the proposed method with different settings. The presented results are calculated for each N×k combination, but only the results of the best performing values for the related setting are shown.
The settings are presented on the x-axis, where each label lists the similarity type (e.g., CCC), the MOT (e.g., ND) and the IST (e.g., SUM), respectively.

Performance results (Precision)

Performance results (Recall)

Performance results (F1-Measure)

Figures 7 and 8 reveal that using weighted summation for item selection (WSUM) performs equally well or better than summation (SUM). The ND and AND settings as MOT types perform equally well; they perform better than OD, which is limited to choosing non-dominated neighbors in a single iteration and hence selects few neighbors. ND and AND are able to choose more neighbors, and the performance results show that choosing more neighbors is more informative. Using different similarity measures during the calculations does not affect the performance much, although using the Smith-Waterman sequence alignment based similarity score for the protein feature (JJS) performs slightly better than the others in terms of precision. Figure 9 shows that the performance of all settings is nearly equal. Considering all figures, the F1-measure results indicate that methods which perform well on precision do not perform well on recall, and vice versa. Table 1 reports the best performance of the settings that use different similarity metrics in more detail. The performance results of each setting are grouped together. In each group, we report the approach which produced the best precision, best recall and best F1-measure scores. As expected, precision was better when there were fewer predictions and recall was better when there were many predictions. While listing only one disease for a target drug produced the best precision, listing many (20) diseases produced the best recall. We observed that using the ND or AND method as the Multi-Objective Optimization Type (MOT) performed better compared to OD. During the experiments, we observed that the OD (Only_Dominates) type usually finds few neighbors, and we further observed that having more neighbors is more useful for making better predictions. Looking at the Item Selection Type (IST), we observe that using weighted sum (WSUM) performs better than using sum (SUM). This indicates that it is more informative to integrate the similarity between a target drug and its neighbors. Also, comparing the results in Table 1 to the upper bounds in Fig. 6 reveals that the proposed method is able to achieve around 33% of the upper-bound performance.

Table 1 The best results when different similarity metrics are used

We observed that several studies described in the drug repositioning literature prefer to present AUC-ROC (Area Under Curve - Receiver Operator Characteristic) results. However, for highly skewed data, it is stated in [6] that using precision-recall is more informative than using ROC curves. Prediction based on data which has few positive relations and many negative relations is commonly described in the information retrieval literature as "searching for a needle in a haystack". The golden data we used has similar characteristics, since there are only 3179 positive relations and 558,360 negative relations. Based on this observation, we also included AUC-PR scores while presenting the performance of the proposed method and settings. Table 2 reports the calculated AUC-PR scores of the proposed method and settings. To compute the AUC-PR values of the proposed methods we used code from https://github.com/andybega/auc-pr/blob/master/auc-pr.r.
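The paper computed AUC-PR with the linked R code; as a language-neutral illustration only, the following Python sketch approximates the area under a precision-recall curve by the trapezoidal rule over (recall, precision) points, which are toy values here rather than measured results.

```python
def auc_pr(points):
    """Approximate area under a precision-recall curve by the
    trapezoidal rule. points: (recall, precision) pairs, e.g. one pair
    per threshold or k value."""
    pts = sorted(points)  # order the points by increasing recall
    area = 0.0
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        area += (r1 - r0) * (p0 + p1) / 2.0
    return area

# PR points measured at increasing k (illustrative numbers).
print(auc_pr([(0.1, 0.9), (0.3, 0.7), (0.6, 0.5), (0.8, 0.4)]))  # 0.43
```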
The results show that using Jaccard and Smith-Waterman sequence alignment based similarity scores can lead to better performance compared to the other settings, especially when the output list size is limited to a few predictions (e.g., k=1).

Table 2 AUC-PR results when different similarity metrics are used

We also compared our proposed method to the methods described in the literature; the results are reported in Table 3. Specifically, we compared our method to the state-of-the-art methods which were evaluated using the same dataset we used in this study, namely Li and Lu [22], Chiang and Butte [3], and Zhang et al. [41]. For the proposed method, we present the two settings which produce the best precision and the best recall. In the table, we include precision, recall and F1-measure results. We did not include AUC-PR results, since the methods described in the literature usually report ROC and AUC-ROC results. To be able to compare the results of the proposed method to those of the other methods described in the literature, we decided to also include sensitivity (recall), specificity and AUC-ROC measures in the table. The importance of using AUC-PR in scale-free networks, like biological networks, is also underlined in the works conducted by Wu et al. [39] and Lotfi Shahreza et al. [25], who stated that PR curves are more informative when the distribution of relations is skewed.

Table 3 Comparison of the proposed method to other state of the art methods from the literature

Sensitivity (recall) and specificity are used to create the ROC. Equation 9 shows how specificity (SPC) is calculated. In the equation, tn refers to true negatives, i.e., not predicted and not actually indicated diseases, and fp represents false positives, i.e., predicted but not actually indicated diseases. Specificity (SPC) measures the performance of the methods on negative links (i.e., no indication for a disease). Finally, the AUC-ROC values of the proposed method were derived using the ROCR library in R. $$ SPC = \frac{tn}{tn+fp} $$ The results reported in Table 3 show that the proposed method with the JJS setting performs better than the other methods in terms of precision and specificity. This indicates that the method is able to make true predictions for both positive and negative relations, i.e., its tp and tn values are high. However, it has low recall, indicating that it cannot predict all true drug-disease relations. This result is expected, since in this setting the number of predictions is set to 1 (k=1). Indeed, the upper bound of recall when k=1 is around 0.25 (Fig. 6), and the proposed method achieves about 33% of this recall performance. The other methods have lower precision and higher recall and AUC-ROC values. This reflects that those methods were able to predict many drug-disease relations (i.e., k has a higher value in their settings), but they also listed many false relations. The golden data we use is very skewed, with 99.44% sparsity, i.e., there are many diseases that are irrelevant to a target drug. We would argue that precision is more important than recall for this dataset and for the drug repositioning problem in general, i.e., making the right prediction for drug-disease relations is more important than finding all relations. Comparing our method to other state of the art methods from the literature shows that the proposed method can achieve higher precision, i.e., when it predicts drug-disease relations, nearly half of those predictions are true.
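Equation 9 is straightforward to evaluate. For instance, taking the 284 false positives reported in the next paragraph for the JJS, k=1 setting together with the roughly 558,360 negative relations in the golden data, specificity comes out close to 1; the exact tn count below is our own back-of-the-envelope figure, not a value reported in the paper.

```python
def specificity(tn: int, fp: int) -> float:
    """Eq. 9: SPC = tn / (tn + fp); performance on negative links."""
    return tn / (tn + fp) if (tn + fp) else 0.0

# With k=1 the method makes at most one prediction per drug, so almost
# all of the ~558,360 negative relations are correctly left unpredicted,
# which is why specificity comes out high (illustrative tn figure).
print(specificity(tn=558_076, fp=284))  # ~0.9995
```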
Lastly, we compared our predictions to novel clinical tests using the ClinicalTrials.gov website, which collects and presents information on publicly and privately supported clinical studies with human participants around the world. From the website, we looked up the drug-disease relations predicted by the setting of the proposed method with the highest precision value, i.e., Proposed Method - JJS with output list size (k) of 1. Comparing the predictions to the golden dataset reveals that the proposed method predicted 269 true positives (predicted and actually true relations) and 284 false positives (predicted, but not actually true relations). When we used ClinicalTrials.gov for comparison to novel clinical tests, we realized that 98 of the false positives, nearly one third of them, were actually clinically tested after the golden dataset was produced. This indicates that these predictions are actually true. For example, the relation between the drug Amifostine and the disease Xerostomia does not exist in the golden dataset; however, our proposed method was able to predict this relation, and the ClinicalTrials.gov website revealed that there actually is a relation between Amifostine and Xerostomia. In Table 4, we present an example set of predictions made by the combination Proposed Method - JJS with output list size (k) set to 1, together with whether these predictions have actually been clinically tested or not (Footnote 4).

Table 4 An example set of predictions (Proposed Method - JJS and k=1)

Drug repositioning is an essential process for linking emerging diseases to existing, well-tested drugs, as opposed to seeking the development of new drugs for such diseases; the latter process is associated with several risks and costs which may not be easily affordable. Thus, repositioning has received considerable attention in industry and academia. In this paper, we described a new approach for drug repositioning which performs well compared to other state-of-the-art approaches described in the literature. The originality of our approach lies in realizing the whole drug repositioning process as a recommendation process, where drugs are recommended based on similarity and on the overlap between the symptoms of diseases and the effectiveness of drugs. This approach opens a new dimension in the drug repositioning literature by demonstrating how existing computational techniques developed for a specific domain can themselves be repositioned and mapped into effective solutions for other emerging domains. We illustrated how various computing techniques may contribute to ongoing efforts in drug repositioning, and hence may help in reducing the risks, cost and time required to identify new drugs. One attraction of our approach is the set of features used in the process. The approaches described in the literature employ a variety of computational methods and various features of drugs and diseases to identify drug-disease coupling. The most common features used in the literature are the chemical structure of drugs, protein target interaction networks, side-effects of drugs, gene expressions and textual features. Computational drug repositioning methods use a single feature or combine several of them. Our recommendation system based method, in contrast, is able to integrate multiple data sources and multiple features. The method is based on Pareto dominance and collaborative filtering to identify the drugs most similar to a target drug; the neighbor drugs are then used to predict new indications of the target drug.
Also, we applied and compared the performance of several different settings that affect the computation. Experimental results show that the proposed method is able to achieve high precision, such that nearly half of the predictions are true. Comparison to the other methods described in the literature shows that the proposed method is better at making concentrated predictions with a higher true-positive ratio. Having concentrated (fewer and to-the-target) predictions helps researchers in biology and chemistry who will use the output drug-disease relation predictions in their laboratory experiments. In general, the results show that it is highly promising to use a recommendation method to tackle drug repositioning. In order to further our research, we intend to use a more up-to-date drug-disease relations dataset and apply the proposed method to it. We plan to use a recent database which integrates multiple data sources and presents more recent drug-disease relations [4]. We also want to integrate other known recommendation methods into handling the drug repositioning problem and to apply these methods to other (larger) datasets to observe and analyze their performance in depth. Lastly, we are aware of the fact that drug-disease relations can be organized in ways other than a flat structure. For example, diseases may have hierarchical relations, or drugs' features (e.g., drug-protein relations) may have multiple levels. Future studies should examine the effects of different structural representations of drug-disease relations. Another idea that future studies may focus on is the representation of drugs and diseases in the input dataset, where identifiers may be preferable to names.

Footnote 1: We used UniProt to collect protein sequence information and ClustalX2 for protein sequence alignment.
Footnote 2: We plan to share the mappings of names on our website.
Footnote 3: We will share the golden set on our website.
Footnote 4: We will present on our website all predictions made by all combinations of the proposed method and similarity metrics with output list size (k) values.

AND: At_Least_N_Dominates; ND: N_Dominates; OD: Only_Dominates; ROC: Receiver operator characteristic; WSUM: Weighted sum

Campillos M, Kuhn M, Gavin A-C, Jensen LJ, Bork P. Drug target identification using side-effect similarity. Science. 2008; 321(5886):263–6. Cheng D, Knox C, Young N, Stothard P, Damaraju S, Wishart DS. Polysearch: a web-based text mining system for extracting relationships between human diseases, genes, mutations, drugs and metabolites. Nucleic Acids Res. 2008; 36(suppl 2):W399–W405. Chiang AP, Butte AJ. Systematic evaluation of drug-disease relationships to identify leads for novel drug uses. Clin Pharmacol Ther. 2009; 86(5):507. Corsello SM, Bittker JA, Liu Z, Gould J, McCarren P, Hirschman JE, Johnston SE, Vrcic A, Wong B, Khan M, et al. The drug repurposing hub: a next-generation drug library and information resource. Nat Med. 2017; 23(4):405–8. Csermely P, Korcsmáros T, Kiss HJ, London G, Nussinov R. Structure and dynamics of molecular networks: a novel paradigm of drug discovery: a comprehensive review. Pharmacol Ther. 2013; 138(3):333–408. Davis J, Goadrich M. The relationship between precision-recall and roc curves. In: Proceedings of the 23rd international conference on Machine learning. USA: ACM: 2006. p. 233–40. DiMasi JA. 2014. Cost of developing a new drug.
Available: http://csdd.tufts.edu/news/complete_story/pr_tufts_csdd_2014_cost_study. Accessed Apr 2016. DrugBank. 2016. Drugbank. Available: http://www.drugbank.ca/. Accessed Apr 2018. Dudley J, Deshpande T, Butte AJ. Exploiting drug-disease relationships for computational drug repositioning. Brief Bioinforma. 2011; 12(4):303–11. Gligorijević V, Pržulj N. Methods for biological data integration: perspectives and challenges. J R Soc Interface. 2015; 12(112). Gottlieb A, Stein GY, Ruppin E, Sharan R. Predict: a method for inferring novel drug indications with application to personalized medicine. Mol Syst Biol. 2011; 7(1):496. Hu G, Agarwal P. Human disease-drug network based on genomic expression profiles. PLoS ONE. 2009; 4(8):e6536. Iorio F, Bosotti R, Scacheri E, Belcastro V, Mithbaokar P, Ferriero R, Murino L, Tagliaferri R, Brunetti-Pierri N, Isacchi A, et al. Discovery of drug mode of action and drug repositioning from transcriptional responses. Proc Natl Acad Sci. 2010; 107(33):14621–6. Kamal MS, Chowdhury L, Khan MI, Ashour AS, Tavares JMR, Dey N. Hidden markov model and chapman kolmogrov for protein structures prediction from images. Comput Biol Chem. 2017; 68:231–44. Kamal MS, Nimmy SF. Strucbreak: a computational framework for structural break detection in dna sequences. Interdisc Sci Comput Life Sci. 2017; 9(4):512–27. Kamal MS, Nimmy SF, Parvin S. Performance evaluation comparison for detecting dna structural break through big data analysis. Comput Syst Sci Eng. 2016; 31:1–15. Kamal MS, Parvin S, Ashour AS, Shi F, Dey N. De-bruijn graph with mapreduce framework towards metagenomic data classification. Int J Inf Technol. 2017; 1(9):59–75. Kamal S, Dey N, Nimmy SF, Ripon SH, Ali NY, Ashour AS, Karaa WBA, Nguyen GN, Shi F. Evolutionary framework for coding area selection from cancer data. Neural Comput & Applic. 2018; 29(4):1015–37. Keiser MJ, Setola V, Irwin JJ, Laggner C, Abbas AI, Hufeisen SJ, Jensen NH, Kuijer MB, Matos RC, Tran TB, et al. Predicting new molecular targets for known drugs. Nature. 2009; 462(7270):175–81. Kotelnikova E, Yuryev A, Mazo I, Daraselia N. Computational approaches for drug repositioning and combination therapy design. J Bioinforma Comput Biol. 2010; 8(03):593–606. Kuhn M, Campillos M, Letunic I, Jensen LJ, Bork P. A side effect resource to capture phenotypic effects of drugs. Mol Syst Biol. 2010; 6(1):343. https://doi.org/10.1038/msb.2009.98. Li J, Lu Z. A new method for computational drug repositioning using drug pairwise similarity. 2013 IEEE Int Conf Bioinforma Biomed. 2012; 0:1–4. Li J, Zhu X, Chen JY. Building disease-specific drug-protein connectivity maps from molecular interaction networks and pubmed abstracts. PLoS Comput Biol. 2009; 5(7):e1000450. Lim H, Poleksic A, Yao Y, Tong H, He D, Zhuang L, Meng P, Xie L. Large-scale off-target identification using fast and accurate dual regularized one-class collaborative filtering and its application to drug repurposing. PLoS Comput Biol. 2016; 12(10):e1005135. Lotfi Shahreza M, Ghadiri N, Mousavi SR, Varshosaz J, Green JR. A review of network-based approaches to drug repositioning. Brief Bioinforma. 2017: bbx017. Luo H, Wang J, Li M, Luo J, Peng X, Wu F-X, Pan Y. Drug repositioning based on comprehensive similarity measures and bi-random walk algorithm. Bioinformatics. 2016; 32(17):2664–71. Noeske T, Sasse BC, Stark H, Parsons CG, Weil T, Schneider G. Predicting compound selectivity by self-organizing maps: Cross-activities of metabotropic glutamate receptor antagonists. ChemMedChem. 2006; 1(10):1066–8.
Özgür A, Vu T, Erkan G, Radev DR. Identifying gene-disease associations using centrality on a literature mined gene-interaction network. Bioinformatics. 2008; 24(13):i277–i285. Ozsoy MG, Polat F, Alhajj R. Multi-objective optimization based location and social network aware recommendation. In: 10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, CollaborateCom 2014, Miami, Florida, USA, October 22-25, 2014. USA: IEEE: 2014. p. 233–42. Ozsoy MG, Polat F, Alhajj R. Inference of gene regulatory networks via multiple data sources and a recommendation method. In: 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). USA: IEEE: 2015. p. 661–4. Pulley JM, Shirey-Rice JK, Lavieri RR, Jerome RN, Zaleski NM, Aronoff DM, Bastarache L, Niu X, Holroyd KJ, Roden DM, et al. Accelerating precision drug development and drug repurposing by leveraging human genetics. ASSAY Drug Dev Technol. 2017; 15(3):113–9. Qabaja A, Alshalalfa M, Alanazi E, Alhajj R. Prediction of novel drug indications using network driven biological data prioritization and integration. J Cheminformatics. 2014; 6(1):1–14. Rastegar-Mojarad M, Liu H, Nambisan P. Using social media data to identify potential candidates for drug repurposing: a feasibility study. JMIR Res Protocol. 2016; 5(2):e121. https://doi.org/10.2196/resprot.5621. Sirota M, Dudley JT, Kim J, Chiang AP, Morgan AA, Sweet-Cordero A, Sage J, Butte AJ. Discovery and preclinical validation of drug indications using compendia of public gene expression data. Sci Transl Med. 2011; 3(96):96ra77. Sisignano M, Parnham MJ, Geisslinger G. Drug repurposing for the development of novel analgesics. Trends Pharmacol Sci. 2016; 37(3):172–83. Wang Y, Xiao J, Suzek TO, Zhang J, Wang J, Bryant SH. Pubchem: a public information system for analyzing bioactivities of small molecules. Nucleic Acids Res. 2009; 37(Web Server issue):W623–W633. Wikipedia. 2016. Wikipedia: Drug repositioning. Available: https://en.wikipedia.org/wiki/Drug_repositioning. Accessed Apr 2018. Wishart DS, Knox C, Guo AC, Cheng D, Shrivastava S, Tzur D, Gautam B, Hassanali M. Drugbank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Res. 2008; 36(suppl 1):D901–D906. Wu Z, Wang Y, Chen L. Network-based drug repositioning. Mol BioSyst. 2013; 9(6):1268–81. Yang L, Agarwal P. Systematic drug repositioning based on clinical side-effects. PLoS ONE. 2011; 6(12):e28025. Zhang P, Agarwal P, Obradovic Z. Computational drug repositioning by ranking and integrating multiple data sources. In: Blockeel H, Kersting K, Nijssen S, Zelezný F, editors. ECML/PKDD (3), ser. Lecture Notes in Computer Science, vol. 8190. Heidelberg: Springer: 2013. p. 579–94.

This research is supported by the TUBITAK-BIDEB 2214/A program. Data and programs will be shared if the paper is accepted.

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey: Makbule Guclin Ozsoy & Faruk Polat. Department of Computer Engineering, TOBB University, Ankara, Turkey: Tansel Özyer. Department of Computer Science, University of Calgary, Calgary, AB, Canada: Reda Alhajj.

All authors developed the methodology. MGO conducted the experiments and wrote the manuscript. All authors proofread the manuscript and validated the results. All authors read and approved the final manuscript. Correspondence to Reda Alhajj. None of the authors have any competing interests.

Ozsoy, M.G., Özyer, T., Polat, F. et al.
Realizing drug repositioning by adapting a recommendation system to handle the process. BMC Bioinformatics 19, 136 (2018). https://doi.org/10.1186/s12859-018-2142-1 Accepted: 27 March 2018

Keywords: Drug repositioning; Multiple features; Pareto dominance; Collaborative filtering; Recommendation systems
Cannabidiol rather than Cannabis sativa extracts inhibit cell growth and induce apoptosis in cervical cancer cells

Sindiswa T. Lukhele1 & Lesetja R. Motadi1

Cervical cancer remains a global health-related issue among females of Sub-Saharan Africa, with over half a million new cases reported each year. Different therapeutic regimens have been suggested in various regions of Africa; however, over a quarter of a million women die of cervical cancer annually. This makes it the most lethal cancer amongst black women and calls for urgent therapeutic strategies. In this study we compare the anti-proliferative effects of a crude extract of Cannabis sativa and its main compound, cannabidiol, on different cervical cancer cell lines. To achieve our aim, phytochemical screening, MTT assay, cell growth analysis, flow cytometry, morphology analysis, Western blot, caspase 3/7 assay, and ATP measurement assay were conducted. The results indicate that both cannabidiol and Cannabis sativa extracts were able to halt cell proliferation in all cell lines at varying concentrations. They further revealed that apoptosis was induced by cannabidiol, as shown by an increased sub-G0/G1 population and by apoptosis detected through Annexin V. Apoptosis was confirmed by overexpression of p53, caspase 3 and Bax. Apoptosis induction was further confirmed by morphological changes, an increase in caspase 3/7 activity, and a decrease in ATP levels. In conclusion, these data suggest that cannabidiol rather than Cannabis sativa crude extracts prevents cell growth and induces cell death in cervical cancer cell lines.

Cannabis sativa is a dioecious plant that belongs to the Cannabaceae family and originates from Central and Eastern Asia [11, 28]. It is widely distributed in countries including Morocco, South Africa, the United States of America, Brazil, India, and parts of Europe [14, 28]. Cannabis sativa grows annually in tropical and warm regions around the world [11]. Different ethnic groups around the world use Cannabis sativa for smoking, for preparing concoctions to treat diseases, and for various cultural purposes [17]. According to [28], it is composed of chemical constituents including cannabinoids, nitrogenous compounds, flavonoid glycosides, steroids, terpenes, hydrocarbons, non-cannabinoid phenols, vitamins, amino acids, proteins, sugars and other related compounds. Cannabinoids are a family of naturally occurring compounds highly abundant in the Cannabis sativa plant [1, 6, 14, 24]. Screening of Cannabis sativa has led to the isolation of at least 66 types of cannabinoid compounds [1, 14, 30]. These compounds are structurally similar or possess identical pharmacological activities and offer various potential applications, including the ability to inhibit cell growth, proliferation and inflammation [22]. One such compound is cannabidiol (CBD), which is among the top three most widely studied compounds, following delta-9-tetrahydrocannabinol (Δ9-THC) [14]. It has been found to be effective against a variety of disorders including neurodegenerative disorders, autoimmune diseases, and cancer [24, 25]. In a research study conducted by [26], it was found that CBD inhibited cell proliferation and induced apoptosis in a series of human breast cancer cell lines including MCF-10A, MDA-MB-231, MCF-7, SK-BR-3, and ZR-7-1, and further studies found it to possess similar characteristics in the PC-3 prostate cancer cell line [25].
However, before we can proceed to clinical trials, a range of cancers should be tested in vitro to give us a clear mechanism. We propose that Cannabis sativa, in particular cannabidiol, plays an important role in helping the body fight cancer through inhibition of pain and cell growth. Therefore, the aim of this study was to evaluate the cytotoxic and anti-proliferative properties of Cannabis sativa and its isolate, cannabidiol, in cervical cancer cell lines. The aggressive HeLa, metastatic ME-180 and primary SiHa cell lines were purchased from ATCC (MD, USA). Camptothecin was supplied by Calbiochem® and cannabidiol was purchased from Sigma-Aldrich and used as a standard reference.

Plant collection and preparation of extracts

Fresh leaves, stem and roots of Cannabis sativa were collected from Nhlazatshe 2, in Mpumalanga province. Air-dried C. sativa plant material was powdered and soaked for 3 days in n-hexane, ethanol and n-butanol, separately. Extracts were filtered using Whatman filter paper and dried. Dimethyl sulfoxide was added to the dried extracts to give a final concentration of 100 mg/ml. Different concentrations (50, 100, and 150 μg/ml) of C. sativa extracts were prepared from the stock and used in treating cells during the MTT assay. HPLC-mass spectrometry was performed to verify the presence of cannabidiol in our extracts. The plant was identified by a forensic specialist in a forensic laboratory in Pretoria. The laboratory number is 201213/2009 and the voucher number is CAS239/02/2009. HeLa, ME-180 and SiHa were cultured in Dulbecco's Modified Eagle Media (DMEM) supplemented with 10 % Fetal Bovine Serum (FBS) (Highveld Biological) and 1 % penicillin/streptomycin (Sigma, USA). Cells were maintained at 37 °C under 5 % carbon dioxide (CO2) and 95 % relative humidity. Every third day, old media was removed and replaced with fresh media to promote growth, until the cells reached ~70–80 % confluence.

MTT assay

Ninety microlitres of HeLa and SiHa cells were seeded into 96-well plates at 5×10³ cells per well and incubated overnight at 37 °C under 5 % CO2 and 95 % relative humidity to promote cell attachment at the bottom of the plate. Media was changed and the cells were treated with Cannabis sativa plant extracts at various concentrations (0, 50, 100, and 150 μg/ml (w/v)) for 24 h. After 24 h, the cells were treated with 10 μl of (5 mg/ml) MTT reagent (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) for 4 h at 37 °C under 5 % CO2. Ninety microlitres of DMSO was added into each well, including the wells containing media only that served as blanks, to dissolve the formazan crystals. Camptothecin and DMSO were included as controls. Optical density was measured using a microplate reader (Bio-Rad) at 570 nm to determine the percentage of viable cells and account for the cell death induced, according to the equation below: $$ \%\ \mathrm{Cell}\ \mathrm{viability}=\frac{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{treated}\ \mathrm{cells}-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{blank}}{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{untreated}\ \mathrm{cells}-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{blank}}\times 100 $$

Cell growth analysis

Before seeding cells, 100 μl of media was added to the 16-well E-plate and placed in the incubator to record background readings. A blank with media only was included to rule out the possibility of the media having a negative effect on the cells.
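As a quick illustration of the % cell viability equation in the MTT assay section above, the following Python sketch converts absorbance readings into percent viability; the optical density values are purely illustrative and not measured data from this study.

```python
def percent_viability(abs_treated, abs_untreated, abs_blank):
    """Percent cell viability from MTT absorbance readings (570 nm),
    following the % cell viability equation above."""
    return ((abs_treated - abs_blank)
            / (abs_untreated - abs_blank)) * 100

# Illustrative optical densities, not measured values.
print(percent_viability(abs_treated=0.45, abs_untreated=0.80,
                        abs_blank=0.05))  # ~53.3 % viable
```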
In each well of the E-plate, 1×10⁴ cells were seeded and left in the incubator for 30 min to allow the cells to adhere to the bottom of the plate. The E-plate was placed and locked in the RTCA machine and the experiment allowed to run for 22 h prior to the addition of the test compounds. Cells were treated with various concentrations (0, 50, 100, 150 μg/ml) of C. sativa hexane extract. Following treatment, the experiment was allowed to run for a further 22 h. Camptothecin (0.3 μM (v/v)) and DMSO (0.1 % (v/v)) were used as controls for comparative purposes. The procedure was repeated for the C. sativa butanol extract.

Apoptosis assay

Cells were washed twice with 100 μl of cold Biolegend's cell staining buffer followed by resuspension in 100 μl of Annexin V binding buffer. A 100 μl aliquot of cell suspension was transferred into a 15 ml Falcon tube, and 5 μl of FITC Annexin V and 10 μl of propidium iodide (PI) solution were added to the untreated and treated cell suspensions. The cells were gently vortexed and incubated at room temperature (25 °C) in the dark for 15 min. After 15 min, 400 μl of Annexin V binding buffer was added to the cells. The stained cells were analysed using a FACSCalibur (BD Biosciences, USA). Five hundred microlitres of 1×10⁴ cells was added onto a 6-well plate containing coverslips. The plate was incubated overnight to allow the cells to attach. Following attachment, media was removed and cells were washed twice with PBS prior to incubation with the IC50 of Cannabis sativa extracts for 24 h. After 24 h, media was removed and cells were washed twice with PBS. Four percent (4 %) fixative was added into each well and the plate incubated for 20 min at room temperature, to allow efficient fixation of the cells. Cells were washed twice with PBS and once with 0.1 % BSA wash buffer and further stained with DAPI and Annexin V/FITC for 5 min. A BX-63 Olympus microscope (Germany) was used to visualize the cells.

Mitochondrial assay (ATP detection)

Twenty five microlitres of 1×10⁴ cells per well were plated in a white 96-well luminometer plate overnight. Cells were treated with 25 μl of the IC50 concentrations of Cannabis sativa crude extracts and cannabidiol dissolved in glucose-free media supplemented with 10 mM galactose. The plate was incubated at 37 °C in a humidified, CO2-supplemented incubator for a period of 24 h. Fifty microlitres of ATP detection reagent was added to each well and the plate further incubated for 30 min. Luminescence was measured using GLOMAX (Promega, USA). The assay was conducted in duplicate and ATP levels were reported as mean Relative Light Units (RLU). The following formula was used to calculate the ATP levels in RLU: $$ \mathrm{RLU}=\mathrm{Luminescence}\left(\mathrm{sample}\right)-\mathrm{Luminescence}\left(\mathrm{blank}\right) $$

Caspase 3/7 activity

One hundred microlitres of 1×10⁴ cells was plated on a 96-well luminometer plate and allowed to attach overnight. The next day, cells were treated with 0.3 μM camptothecin and the IC50 concentrations of Cannabis sativa crude extracts and further incubated for a period of 24 h. The Caspase-Glo 3/7 assay was performed according to the manufacturer's protocol (Promega, USA). Briefly, following treatment, media was replaced with Caspase-Glo 3/7 reagent mixed with substrate at a 1:1 (v/v) ratio of DMEM to Caspase-Glo 3/7 reagent and incubated for 2 h at 37 °C in 5 % CO2. Luminescence was quantified using GLOMAX from Promega (USA).
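The RLU formula above (also used for the caspase 3/7 readout below) amounts to a background subtraction averaged over duplicate wells; a minimal Python sketch with made-up luminescence readings is shown here for illustration.

```python
def mean_rlu(sample_readings, blank_readings):
    """RLU = luminescence(sample) - luminescence(blank), averaged over
    duplicate wells as in the ATP and caspase 3/7 assays."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(sample_readings) - mean(blank_readings)

# Illustrative duplicate luminescence readings, not measured values.
print(mean_rlu([627_900, 627_342], [150, 170]))  # 627461.0
```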
The assay was conducted in duplicate and caspase 3/7 activity was reported as mean Relative Light Units (RLU). The following formula was used to calculate caspase 3/7 activity in RLU: $$ \mathrm{RLU}=\mathrm{Luminescence}\left(\mathrm{sample}\right)-\mathrm{Luminescence}\left(\mathrm{blank}\right) $$ Cells were harvested with 2 ml of 0.05 % trypsin-EDTA. Ten millilitres of media was added to the cells to inactivate the trypsin and the cell suspension was centrifuged at 1500 rpm for 10 min. The supernatant was discarded and the pellet was re-suspended twice in 1 ml PBS. The cell suspension was centrifuged at 5000 rpm for 2–5 min and the PBS was discarded. Seven hundred microlitres of pre-chilled absolute ethanol was added to the cell suspension, followed by storage at −20 °C for 30 min to allow efficient permeabilization and fixing of the cells. After 30 min, cells were centrifuged at 5000 rpm for 5 min to remove the ethanol. The pellet was washed twice with PBS and centrifuged at 5000 rpm to remove the PBS. Five hundred microlitres of FxCycle™ PI/RNase Staining solution (Life Technologies, USA) was added to the cells, which were then vortexed for 30 s. The cells were analysed with a FACSCalibur (BD Biosciences, USA). Following 24 h of treatment with IC50 concentrations, cells were lysed using RIPA buffer (50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1 % NP-40, 0.1 % SDS, 2 mM EDTA). Protein content was measured by the BCA assay and equal amounts were electrophoresed on an SDS polyacrylamide gel and then transferred onto nitrocellulose membranes. Membranes were subsequently immunoblotted with mouse monoclonal p53, Bcl-2, Bax, RBBP6, caspase-3 and caspase-9 antibodies at 1:500–1000 dilutions as primary antibodies, while a goat anti-mouse horseradish peroxidase-conjugated IgG (Santa Cruz, USA) was used at a 1:2000 dilution as the secondary antibody. The membranes were developed using a chemiluminescence detection kit (Santa Cruz Biotechnology, CA) and imaged using a Bio-Rad ChemiDoc MP. Experiments were performed in duplicate. Statistical analysis of the graphical data was expressed as the mean ± standard deviation. The p-value was analysed in comparison to the untreated control using Student's t-test, wherein p < 0.05 was considered significant.

Effect of Cannabis sativa and cannabidiol on SiHa, HeLa, and ME-180 cells

To determine the half-maximal inhibitory concentration (IC50) of both Cannabis sativa and cannabidiol, the MTT assay was used. Camptothecin, as our positive control, significantly reduced cell viability in SiHa (40.36 %), HeLa (47.19 %), and ME-180 cells (32.25 %). As shown in Fig. 1a and d, the IC50 was cell type dependent rather than time dependent, with SiHa at less than 50 μg/ml in both butanol (56 %) and hexane (48.9 %). Similarly, the IC50 in HeLa was 100 μg/ml at p < 0.001 (Fig. 1b), while ME-180 cells treated with the butanolic extract exhibited an IC50 of 100 μg/ml, reducing viability to 48.6 % (Fig. 1c), and the hexane extract IC50 was 50 μg/ml with 54 % death (Fig. 1f). This was not the case with cannabidiol, where the IC50 for SiHa (51 %) and HeLa (50 %) was reached at a much lower dose (3.2 μg/ml), and for ME-180 cells (56 %) at 1.5 μg/ml, when compared to the Cannabis sativa extracts (p < 0.001) (Fig. 1g–i). Dimethyl sulfoxide (DMSO) was included as a vehicle control and inhibited between 4 and 11 %, whereas ethanol inhibited between 7.3 and 7.8 %, since cannabidiol was alcohol-extracted.

Representative cell viability bar graphs of cervical cancer cell lines.
MTT assay was conducted to determine the IC50 following incubation of SiHa, HeLa, and ME-180 cells with different doses of butanol extract (a, b, c), hexane extract (d, e, f), and cannabidiol (g, h, i) for a period of 24 h. Data are expressed as mean value ± standard deviation (SD). The level of significance was determined using Student's t-test with ns representing p > 0.05, *** p < 0.001, ** p < 0.01, and * p < 0.05.

xCELLigence analysis of the cell growth pattern after treatment of cervical cancer cells with Cannabis sativa extracts and cannabidiol. SiHa (a, d, g), HeLa (b, e, h), and ME-180 (c, f, i) cells were seeded for a period of 22–24 h, followed by treatment with the IC50 concentration of butanol (a, b, c), hexane (d, e, f), and cannabidiol (g, h, i).

Effect of Cannabis sativa extracts and cannabidiol on cell growth of cervical cancer cells

The IC50 concentrations obtained from the MTT assay were tested for their ability to alter cell viability in real time. An impedance-based system was employed to evaluate the effect of Cannabis sativa and cannabidiol on SiHa, HeLa, and ME-180. Cells were seeded in an E-plate and allowed to attach. Cells were then treated with the IC50 for a period of 22–24 h, depending on their doubling time. Continuous changes in the impedance were measured and displayed as the cell index (CI). Little can be read from the xCELLigence data except that cannabidiol reduced the cell index in all cell lines, while the plant extracts gave mixed results, sometimes showing a reduction and at other times remaining unchanged (Fig. 3), suggesting that cannabidiol is the more effective compound.

Apoptosis assessment following treatment of cervical cancer cells with IC50 concentrations of Cannabis sativa extract and cannabidiol. These bar graphs are representative of apoptosis induction in SiHa (a and d), HeLa (b and e), and ME-180 (c and f) cells. Cells were treated with the IC50 of Cannabis sativa extracts and cannabidiol for a period of 24 h and further stained with Annexin V/PI. Data represented as mean ± standard deviation with ***p < 0.001, **p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated.

Morphological analysis and assessment of apoptosis in SiHa cells stained with DAPI and Annexin V dye. Cells were incubated with the IC50 of Cannabis sativa extracts for a period of 24 h. Cells were stained with Annexin V and counterstained with DAPI. BX63 fluorescence confocal microscopy was used to visualize the cells.

Morphological analysis and assessment of apoptosis in HeLa cells stained with DAPI and Annexin V dye. Cells were incubated with the IC50 of Cannabis sativa extracts for a period of 24 h. Cells were stained with Annexin V and counterstained with DAPI. BX63 fluorescence confocal microscopy was used to visualize the cells.

Cannabis sativa extracts and cannabidiol induce apoptosis in cervical cancer cells

Flow cytometry revealed a significant increase in SiHa cells undergoing apoptosis during treatment with butanol (from 2 to 28.5 %) and hexane (from 2 to 17.2 %), as compared to camptothecin with 30.4 %. In HeLa cells, apoptosis was increased to 31.9 % by the butanol extract and only 15.3 % by the hexane extract (Fig. 3b). A similar event was observed following treatment of ME-180 cells with the butanol extract, where 44.8 % apoptosis was recorded, and 43.2 % in hexane-treated cells (Fig. 3c). Cannabidiol was also tested for its ability to induce apoptosis in all three cell lines.
The results further confirmed that the type of cell death induced was apoptosis. Figure 3 shows that cannabidiol induced early apoptosis in all three cell lines and was more effective in inducing apoptosis than both extracts of Cannabis sativa. In SiHa cells cannabidiol induced 51.3 % apoptosis (Fig. 3d), 43.3 % in HeLa and 28.6 % in ME-180 cell lines (Fig. 3f).

Effect of Cannabis sativa extracts and cannabidiol on the morphology of SiHa and HeLa cells

To characterise the type of cell death following treatment with our test compounds, cells were stained with DAPI and Annexin V to show whether apoptosis was taking place. Treatment of SiHa and HeLa cells with the IC50 of both butanol and hexane extracts confirmed the type of cell death as apoptosis, since the cells picked up the green Annexin V stain, which binds the phosphatidylserine molecules that appear in the early stages of apoptosis. Similar results were also observed in cannabidiol-treated cells. Another feature representative of cell death is the change in morphology. Live cells displayed round blue nuclei following staining with DAPI. Exposure of SiHa and HeLa cells to the IC50 of Cannabis sativa extracts caused a change in morphology coupled with an uptake of Annexin V. Loss of shape, nuclear fragmentation, reduction in cell size and blebbing of the cell membrane were among the observed morphological features associated with apoptosis (Fig. 6).

Caspase 3/7 activity after treatment of SiHa, HeLa, and ME-180 cells with the IC50 of Cannabis sativa extract and cannabidiol. Cells were treated with the IC50 of Cannabis sativa and cannabidiol extracts for a period of 24 h. Caspase 3/7 reagent was added to the treated cells for 1 h. Luminescence was measured in RLU using the GLOMAX instrument. Data represented as mean ± standard deviation with ***p < 0.001, **p < 0.01, and *p < 0.05 representing the level of significance in comparison to the untreated.

Bar graphs representing changes in the ATP levels following treatment of cervical cancer cells with Cannabis sativa and cannabidiol. Cells were treated with the IC50 of both Cannabis sativa extracts and cannabidiol for a period of 2–24 h. Untreated and camptothecin controls were included for comparative purposes. The level of significance was determined using Student's t-test with ***p < 0.001, **p < 0.01, *p < 0.05, and ns p > 0.05 in comparison to the untreated.

Effect of Cannabis sativa extracts and cannabidiol on the ATP levels

Since adenosine 5′-triphosphate (ATP) acts as a biomarker for cell proliferation and cell death, an ATP assay was conducted. This was done in order to determine whether Cannabis sativa and cannabidiol deplete ATP levels in cervical cancer cells. SiHa, HeLa, and ME-180 cells were treated at different time points, between 2 and 24 h. ATP levels were first detected after 2 h. In general, ATP depletion was cell type dependent. In HeLa cells treated with the butanol and hexane crude extracts, ATP was significantly reduced by 74 % (from 627,621 to 164,208 RLU) and 78 % (from 627,621 to 133,693 RLU), respectively, while in SiHa there was a reduction of 31 % (from 4,719,589 to 3,221,245 RLU) and 22.5 % (from 4,719,589 to 3,655,730 RLU), respectively (figure). In ME-180 there was no change between treated and untreated cells. Similar results were observed in cannabidiol-treated cells.
At 2 h, treatment with the IC50 led to a reduction in ATP levels of ~61 % (from 4,704,419 to 1,802,508 RLU), 93 % (from 627,621 to 40,371 RLU), and 8 % (from 798,688 to 734,039 RLU) in SiHa, HeLa, and ME-180 cells, respectively (figure). A prolonged incubation period (24 h) with the IC50 led to a further decrease in the ATP levels of ~66 % (from 4,486,150 to 1,497,648 RLU), 97 % (from 601,694 to 13,426 RLU), and 8.5 % (from 790,757 to 723,039 RLU) in SiHa, HeLa, and ME-180 cells, respectively. This could mean that cannabidiol depletes ATP levels more than Cannabis sativa extracts and might be the main compound responsible for cell death in cancer cells treated with Cannabis sativa.

Effect of Cannabis sativa and cannabidiol on caspase 3/7 activity of SiHa, HeLa, and ME-180 cells

As shown in Fig. 8a, b and c, we observed an increase in caspase 3/7 activity in all three cell lines following treatment with 0.3 μM camptothecin. Similar results were observed in crude extract treated cells, with increases of 25 % (SiHa) and 40 % (HeLa) for the butanol extract and 50 % (SiHa) and 100 % (HeLa) for the hexane extract. There was no significant change in ME-180 cells (figure). When cells were treated with cannabidiol, caspase 3/7 activity increased in all three cell lines. SiHa cells showed an increase from 200,000 to 2,500,000 RLU, while HeLa increased from 800,000 to 900,000 RLU. ME-180 also showed a modest increase, from 200,000 to 230,000 RLU. All increases were significant and in line with the increases in apoptosis shown by Annexin V staining.

Representative bar graph of the cervical cancer cell cycle before and after treatment with Cannabis sativa extracts and cannabidiol. Cells were harvested and treated with camptothecin and the IC50 concentrations of Cannabis sativa extracts and cannabidiol. Bar graphs a and d represent SiHa cells, b and e represent HeLa cells, and c and f represent ME-180 cells. Data represented as mean ± standard deviation with ***p < 0.001, **p < 0.01, *p < 0.05, and ns p > 0.05 representing the level of significance in comparison to the untreated.

Effect of Cannabis sativa extracts and cannabidiol on cell cycle progression

We further assessed the effects of Cannabis sativa extracts and cannabidiol on cell cycle progression using flow cytometry. Flow cytometry showed that in the presence of Cannabis sativa crude extracts and camptothecin, SiHa cells exhibited a significant increase (p < 0.001) in the sub-G0 population with a decrease in the G0/G1, S, and G2/M phases. With butanol, the sub-G0 phase increased from 4.2 to 20.1 %, while the G0/G1 (from 64.0 to 48.7 %), S (from 9.3 to 6.5 %), and G2/M (from 18.5 to 17 %) phases decreased in the SiHa population (Fig. 9a); with hexane, the sub-G0 phase was at 39.1 % compared to 4.2 % in the untreated, with a decrease in the G0/G1 (from 64.0 to 30.4 %), S (from 9.3 to 6.5 %), and G2/M (from 18.5 to 13.8 %) populations (Fig. 9a). In HeLa cells, the butanol extract reduced G0/G1 to 54.9 % while the S-phase and G2/M significantly increased to 18.4 and 25.7 %; with hexane there was an increase in the G2/M phase (20.3 %) and a decrease in the S-phase (8.1 %). In ME-180 there was an insignificant increase in all cell cycle stages. Each cell line responded differently to cannabidiol treatment. Almost 42.2 % of SiHa cells were observed in the sub-G0 phase (p < 0.001), while cells in the G0/G1 phase were reduced from 57.9 to 42.8 % (Fig. 9d). A similar trend was observed in HeLa cells, but with smaller changes in the sub-G0 (from 5.1 to 17.4 %) and S phase (from 4.8 to 11.2 %) (Fig. 9e).
A similar event was observed during treatment of ME-180 cells: cannabidiol significantly increased the sub-G0 population in ME-180 cells to 34.3 % (Fig. 9f). From these data, we can conclude that cannabidiol induced cell death without cell cycle arrest.

Western blot analysis of the protein expression before and after 24 h treatment with the IC50 of Cannabis sativa extracts and cannabidiol. SiHa (a and d), HeLa (b and e), and ME-180 (c and f) cells were treated for a period of 24 h and protein lysates were separated using an SDS-PAGE gel. Untreated protein was used as a control. Antibodies against pro-apoptotic proteins (p53 and Bax) and anti-apoptotic proteins (Bcl-2 and RBBP6), initiator caspase-9 and effector caspase-3 were included to elucidate apoptosis induction.

A densitometry analysis of SiHa protein was performed using ImageJ quantification software to measure the relative band intensity. CPT represents camptothecin. Data represented as mean ± standard deviation with ***p < 0.001, **p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated. The figures represent the western blot analysis of SiHa and HeLa cells. The genes analyzed are p53 and RBBP6, including caspases. Equal amounts of protein were loaded in each well. Note that darker bands indicate increased expression of the gene.

A densitometry analysis of HeLa protein was performed using ImageJ quantification software to measure the relative band intensity. CPT represents camptothecin. Data represented as mean ± standard deviation with ***p < 0.001, **p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated.

A densitometry analysis of ME-180 protein was performed using ImageJ quantification software to measure the relative band intensity. CPT represents camptothecin. Data represented as mean ± standard deviation with ***p < 0.001, **p < 0.01 and ns p > 0.05 representing the level of significance in comparison to the untreated.

Effect of Cannabis sativa extracts and cannabidiol on the expression of upstream and downstream target proteins

From the apoptosis experiments conducted, it is clear that the mode of cell death induced by cannabidiol and the extracts of Cannabis sativa was apoptosis. However, we needed to confirm whether the apoptosis induced is p53-dependent or p53-independent, as it is well known that p53 is mutated in many cancers. Protein expression analyses of RBBP6, Bcl-2, Bax and p53 were performed and the results recorded. With the butanol extract, p53 was significantly increased in SiHa and HeLa cells while remaining unchanged in ME-180. Similar results were observed in hexane-treated cells. In all cell lines, the level of the p53 negative regulator was reduced by all treatments. Following treatment of cervical cancer cells, Bax protein was upregulated and Bcl-2 was downregulated. Western blot analysis revealed that cannabidiol effectively caused an increase in the expression of the pro-apoptotic proteins p53 and Bax, while simultaneously decreasing the anti-apoptotic proteins RBBP6 and Bcl-2, in all three cervical cancer cell lines (SiHa, HeLa, and ME-180 cells). Since caspases play an effective role in the execution of apoptosis, the initiator caspase-9 and executioner caspase-3 were included in our western blot to check whether they played a role in inducing apoptosis. With all Cannabis sativa extracts, caspase-3 and caspase-9 were upregulated in all cell lines. Similar results were also observed in cannabidiol-treated cells, with upregulation of both caspase-3 and -9.
Cervical cancer remains a burden for women of Sub-Saharan Africa. Half a million new cases of cervical cancer and a quarter of a million deaths are reported annually due to the lack of effective treatment [12]. Currently, the recommended therapeutic regimens include chemotherapy, radiation therapy, and surgery. However, they present several limitations, including side effects or ineffectiveness [2]. Therefore, it is important to search for novel therapeutic agents that are naturally synthesized and cheaper, but still effective. Medicinal plants have been used for decades for health benefits and to treat several different diseases [22]. In South Africa, over 80 % of the population still depend on medicinal plants to maintain mental and physical health [27]. However, some of the medicinal plants used by these individuals are not known to be effective, and their safety is still unclear. It is therefore important to scientifically evaluate and validate their efficacy and safety. In the present study, cervical cancer cell lines (SiHa, HeLa, and ME-180) were exposed to different concentrations of Cannabis sativa extracts and of its compound cannabidiol, with the aim of investigating their anti-proliferative activity. We first determined whether Cannabis sativa extracts and cannabidiol possess anti-proliferative effects using the MTT assay. The MTT assay determines the IC50, the half-maximal concentration that induces 50 % cell death. Cannabis sativa extracts were able to reduce cell viability and increase cell death in SiHa, HeLa, and ME-180 cells. These results agree with the findings of [23], who reported reduced cell proliferation in colorectal cancer cell lines following treatment with Cannabis sativa. According to [7, 24, 25], Cannabis sativa extracts rich in cannabidiol were able to induce cell death in the prostate cancer cell lines LNCaP, DU145, and PC3 at low doses (20–70 μg/ml). It was suggested that cannabidiol might be responsible for the reported activities. Therefore, in this study, cannabidiol was included as a reference standard in order to determine whether the reported pharmacological activities displayed by Cannabis sativa extracts might have been due to the presence of this compound. Camptothecin was included as a positive control for inhibitory activity. Camptothecin functions as an inhibitor of topoisomerase I, an enzyme that regulates the winding of DNA strands [19, 20]. This in turn causes DNA strands to break in the S-phase of the cell cycle [20]. A study conducted by [19] demonstrated the ability of camptothecin to be cytotoxic against the MCF-7 breast cancer cell line and also to induce apoptosis as a mode of cell death at 0.25 μM. We observed a similar cytotoxic pattern, whereby camptothecin induced cell death in HeLa, SiHa, and ME-180 cells, however at a much higher concentration. The xCELLigence system continuously monitors cell growth, adhesion, and morphology in real time in the presence of a toxic substance. Upon treatment of SiHa and HeLa cells with the IC50 of the butanol extract, we noted little to no inhibitory effect on cell growth. The growth curve continued its exponential course in all cells, including the treated, untreated, and 0.1 % DMSO controls. However, at a similar IC50 of 100 μg/ml, a reduction in cell viability was observed following treatment of HeLa cells with the hexane extract.
On the other hand, ME-180 cells responded after a period of 2 h following treatment with the IC50 of the butanol and hexane extracts. In comparison to the butanol and hexane extracts, cannabidiol reduced the cell index of ME-180 cells after 2 h of treatment, signalling growth inhibition. Differences in the findings could be attributable to the fact that the two methods have different principles and mechanisms of action. The MTT assay is an end-point method based on the reduction of a tetrazolium salt into formazan crystals by the mitochondrial succinate dehydrogenase enzyme. Mitochondrial succinate dehydrogenase is only active in live cells with an intact metabolism [8, 13]. Induction of cell death by Cannabis sativa crude extracts decreases the activity of this enzyme following treatment of HeLa, SiHa, and ME-180 cervical cancer cell lines. The xCELLigence system, on the other hand, is a continuous method that relies on the use of E-plates engraved with gold microelectrodes at the bottom of the plate. The xCELLigence system is based on changes in impedance influenced by cell number, size, and attachment [13]. Therefore, we concluded that dead cells might have remained attached to the bottom of the E-plate after treatment. Cell death can be characterized by a decrease in energy levels as a result of dysfunction of the mitochondria [8]. Therefore, to evaluate the effect of treatment on the energy content of the cells, we conducted a mitochondrial assay, using only the IC50 values indicated by the MTT assay. ATP acts as a determinant of both cell death and cell proliferation [15]. Exposure of SiHa, HeLa, and ME-180 cells to the IC50 of Cannabis sativa extracts caused a reduction in ATP levels. Treatment of cells with cannabidiol either slightly or severely depleted the ATP levels. According to [16], a reduction in ATP levels compromises the status of the cell and often leads to cell death either by apoptosis or necrosis, while an increase is indicative of cell proliferation. Therefore, we concluded that the reduction of ATP might have been a result of cell death induction, since the cells' ATP production did not recover. Following confirmation that Cannabis sativa and cannabidiol have anti-proliferative activity, we had to verify whether both treatments have the ability to induce cell cycle arrest in all three cell lines. This method uses propidium iodide (PI) staining and flow cytometry to measure the relative amount of DNA present in the cells. In this study, PI was used to stain the cells. Propidium iodide can only intercalate into the DNA of fixed and permeabilized cells with a compromised plasma membrane, or of cells in the late stage of apoptosis. Viable cells with an intact plasma membrane cannot take up the dye. The intensity of stained cells correlates with the amount of DNA within the cells. HeLa, SiHa, and ME-180 cervical cancer cells were stained with PI and analysed using flow cytometry. Treatment of SiHa cells with the butanol and hexane extracts led to the accumulation of cells in the cell death phase (sub-G0 phase) without cell cycle arrest. When compared to the S-phase and G2/M phase of untreated cells, exposure of HeLa cells to the Cannabis sativa butanol extract resulted in the accumulation of cells in the S-phase of the cell cycle and slight cell death induction, which, according to [3], signals DNA synthesis and cell proliferation.
A decrease in the S-phase and an increase in the G2/M phase of HeLa cells following treatment with the hexane extract suggest a blockage of mitosis and an induction of cell cycle arrest. Interestingly, treatment of ME-180 cells with both extracts led to an increase in the S-phase population, which favours replication and duplication of DNA. This was not the case following treatment of cells with cannabidiol. Cannabidiol resulted in the accumulation of cells in the cell death phase of the cell cycle: SiHa, HeLa, and ME-180 cells were all committed to the cell death phase. In summary, Cannabis sativa induces cell death with or without cell cycle arrest, while cannabidiol induces cell death without cell cycle arrest. Apoptosis plays a major role in determining cell survival. Annexin V/FITC and PI were used to stain the cells in order to distinguish between viable, apoptotic, and necrotic cells. Annexin V/FITC can only bind to phosphatidylserine residues exposed on the surface of the cell membrane, while PI intercalates into the nucleus and binds to fragmented DNA. Viable cells cannot take up either dye due to the presence of an intact cell membrane. Since treatment caused the accumulation of cells in the sub-G0 phase, also known as the cell death phase, and the severe depletion of ATP levels by cannabidiol, we further conducted an apoptosis assay. Treatment of all three cell lines with camptothecin and the IC50 of Cannabis sativa and cannabidiol showed that the type of cell death induced was apoptosis. Sharma et al. [25] showed a similar pattern of cell death, whereby treatment of prostate cancer cell lines with Cannabis sativa resulted in the induction of apoptosis. Apoptosis is characterized by morphological changes and biochemical features, which include condensation of chromatin, convolution of nuclear and cellular outlines, nuclear fragmentation, formation of apoptotic blebs within the plasma membrane, cell shrinkage due to the leakage of organelles into the cytoplasm, as well as the presence of green-stained cells at either late or early apoptosis [5, 17, 28]. Annexin V/FITC and DAPI were used to visualize the cells under a fluorescence confocal microscope. According to [18], an uptake of Annexin V/FITC suggests the induction of apoptosis, since it can only bind to externalized PS residues. This also proves that during cell growth analysis, SiHa and HeLa cells were undergoing cell death while still attached to the surface of the flask. Apoptosis is known to occur via two pathways, the death receptor pathway and the mitochondrial pathway [30]. Cannabis sativa isolates, including cannabidiol, have been implicated in apoptosis induction via the death receptor pathway, by binding to the Fas receptor or through an activation of Bax triggered by the synthesis of ceramide in the cells [4]. However, not much has been reported on the induction of apoptosis via activation of p53 by Cannabis sativa. Our focus in this study was also to identify downstream molecular effects of the extracts. One such important gene is p53, which acts as a transcription factor for a number of target genes [29]. Under normal conditions, p53 levels are maintained through constant degradation by MDM2 and its monomers [29]. RBBP6 is one of the monomers that helps degrade p53, due to the presence of a RING finger domain that promotes the interaction of both proteins [14].
In response to stress stimuli such as DNA damage, hypoxia, UV light, and radiation, p53 becomes activated and causes MDM2 expression to decrease [10]. Mutation of p53, implicated in approximately 50 % of all human cancers, promotes tumorigenesis. Bax and Bcl-2 form part of the proteins that regulate apoptosis via the mitochondria [21]. Following activation, p53 translocates into the cytosol and triggers the oligomerization of Bcl-2 with BAD, resulting in the inhibition of Bcl-2 activity [17]. This in turn allows Bax protein to be translocated to the mitochondria and participate in the release of cytochrome c through poration of the outer mitochondrial membrane [9, 17]. An imbalance between Bax and Bcl-2 has been linked to the development and progression of tumours through resistance to apoptosis [17]. It is therefore crucial to design drugs that would effectively target these genes involved in the execution of apoptosis via the mitochondrial pathway. Camptothecin, the hexane extract, and cannabidiol effectively up-modulated the expression of p53 in all three cell lines, leading to a decrease in RBBP6 protein expression. In contrast to its effect in SiHa and HeLa cells, the butanol extract failed to up-modulate p53 in ME-180 cells. Interestingly, the butanol extract nonetheless reduced the expression of RBBP6 protein in ME-180 cells. The mechanism behind the failure of butanol to up-modulate p53 while down-modulating RBBP6 is unclear. However, we concluded that butanol induces apoptosis independently of p53. We further demonstrated that Cannabis sativa extracts, cannabidiol, and camptothecin were able to down-modulate the expression of Bcl-2 protein and up-modulate Bax expression. Caspases play an effective role in the execution of apoptosis through either the extrinsic or the intrinsic pathway [9]. In this study, we wanted to validate whether caspase-9 and caspase-3 were involved in the initiation and execution of apoptosis. We demonstrated the ability of Cannabis sativa to initiate apoptosis by activating caspase-9. However, execution of apoptosis proceeded either with or without the presence of caspase-3, depending on the cell line. Western blotting revealed that the Cannabis sativa hexane extract induced apoptosis via the activation of caspase-9 and caspase-3 when compared to untreated cells in all three cell lines. Similar results were obtained during treatment of all three cell lines with camptothecin. This was not the case with butanol. The butanol extract up-modulated caspase-9 and caspase-3 in SiHa and HeLa cells only; caspase-3 was not up-modulated in ME-180 cells. The caspase 3/7 activity assay revealed the up-modulation of caspase 3/7 following treatment of cervical cancer cells. However, on the basis of the western blot results, wherein the butanol extract failed to up-modulate caspase-3, we can conclude that caspase-7 was responsible for the reported activity. Cannabidiol effectively up-modulated caspase-9 and caspase-3 in all three cell lines when compared to the untreated cells and the Cannabis sativa extracts. From these results we can conclude that apoptosis induction was caspase-dependent. The aim of this study was to evaluate the anti-growth effects of Cannabis sativa extracts and to determine the mode of cell death following treatment. The activity of Cannabis sativa extracts was compared to that of cannabidiol, in order to verify whether the reported results were due to the presence of this compound. The study showed that the activity of one of the extracts might have been due to the presence of cannabidiol.
It further demonstrated the ability of Cannabis sativa to induce apoptosis with or without cell cycle arrest, via the mitochondrial pathway. More research needs to be done to elucidate the mechanism linking the active ingredients and the molecular targets involved in the regulation of the cell cycle.
BAD: Bcl-2-associated death promoter
Bak-1: Bcl2-antagonist/killer 1
Bax: Bcl2-associated X protein
Bcl-2: B-cell lymphoma 2
BH: Bcl-2 homology domain
Bid: BH3 interacting-domain
Bik: Bcl-2-interacting killer
FITC: Fluorescein isothiocyanate
HPLC: High performance liquid chromatography
p53: Tumour protein 53
RBBP6: Retinoblastoma binding protein 6
Alexander A, Smith PF, Rosengren RJ. Cannabinoids in the treatment of cancer. Cancer Lett. 2009;285:6–12. Arbyn M, Castellsague X, de Sanjose S, Bruni L, Saraiya M, Bray F, Ferlay J. Worldwide burden of cervical cancer in 2008. Ann Oncol. 2011;22:2675–86. Armania N, Yazan LS, Ismail IS, Foo JB, Tor YS, Ishak N, Ismail N, Ismail M. Dillenia suffruticosa extract inhibits proliferation of human breast cancer cell lines (MCF-7 and MDA-MB-231) via induction of G2/M arrest and apoptosis. Molecules. 2013;18(11):13320–39. Blázquez C, Galve-Roperh I, Guzmán M. De novo-synthesized ceramide signals apoptosis in astrocytes via extracellular signal-regulated kinase. FASEB J. 2000;14:2315–22. Bortner CD, Oldenburg NB, Cidlowski JA. The role of DNA fragmentation in apoptosis. Trends Cell Biol. 1995;5:21–6. Caffarel MM, Andradas C, Perez-Gomez E, Guzman M, Sanchez C. Cannabinoids: a new hope for breast cancer therapy? Cancer Treat Rev. 2012;38:911–8. Chen P, Yu J, Chalmers B, Drisko J, Yang J, Li B, Chen Q. Pharmacologic ascorbate induces cytotoxicity in prostate cancer cells through ATP depletion and the induction of autophagy. Anticancer Drugs Preclinical Rep. 2011;23:437–44. Choene M, Motadi L. Validation of the antiproliferative effects of Euphorbia tirucalli extracts in breast cancer cell lines. Mol Biol. 2016;50(1):115–28. Chipuk TE, Kuwana T, Bouchier-Hayes L, Droin MN, Newmeyer DD, Schuler M, Green DR. Direct activation of Bax by p53 mediates mitochondrial membrane permeabilization and apoptosis. Science. 2004;303:1010. de Bruin EC, Medema JP. Apoptosis and non-apoptotic deaths in cancer development and treatment response. Cancer Treat Rev. 2008;34:737–49. Flemming R, Muntendam T, Steup T, Kayser O. Chemistry and biological activity of tetrahydrocannabinol and its derivatives. Top Heterocycl Chem. 2007;10:1–42. GLOBOCAN 2012 v1.0, Cancer Incidence and Mortality Worldwide: IARC Cancer Base No. 11 [Internet]. Lyon, France: International Agency for Research on Cancer. Available from http://globocan.iarc.fr Accessed 25 July 2014. Gumulec J, Balvan J, Sztalmachova M, Raudeska M, et al. Cisplatin-resistant prostate cancer model: differences in antioxidant system, apoptosis, and cell cycle. Int J Oncol. doi:10.3892/ijo.2013.2223. Happyana N, Agnolet S, Muntendam R, Van Dam A, Schneider B, Kayser O. Analysis of cannabinoids in laser-micro dissected trichomes of medicinal Cannabis sativa using LCMS and cryogenic NMR. Phytochemistry. 2013;87:51–9. Lemasters JJ, Nieminen A, Qian T, Frost LC, et al. The mitochondrial permeability transition in cell death: a common mechanism in necrosis, apoptosis, and autophagy. Biochim Biophys Acta. 1998;1366:177–96. Ligresti A, Moriello AS, Matias I, et al. Anti-tumor activity of plant cannabinoids with the emphasis on the effect of cannabidiol on human breast cancer. J Pharmacol Exp Ther. 2006;318(3):1375–87. Li-Weber M.
Targeting apoptosis pathways in cancer by Chinese medicine. Cancer Lett. 2013;332:304–12. Lozano I. The therapeutic use of Cannabis sativa in Arabic medicine. J Cannabis Ther. 2001;1(1):63–70. Moela P, Choene MS, Motadi LR. Silencing RBBP6 (retinoblastoma binding protein 6) sensitizes breast cancer cells MCF-7 to camptothecin and staurosporine-induced cell death. Immunology. 2013;219:1–9. Nobili S, Lippi D, Witort E, Donnini M, et al. Natural compounds for cancer treatment and prevention. Pharmacol Res. 2009;59(6):365–78. O'Brien MA, Kirby R. Apoptosis: a review of pro-apoptotic and anti-apoptotic pathways and dysregulation in disease. J Vet Emerg Crit Care. 2008;18(6):572–85. Rao GV, Kumar S, Islam M, Mansour ES. Folk medicines for anticancer therapy-a current status. Cancer Ther. 2008;6:913–22. Romano B, Borrelli F, Pagano E, Cascio MG, Pertwee RG, Izzo AA. Inhibition of colon carcinogenesis by a standardized Cannabis sativa extract with high content of cannabidiol. Phytomedicine. 2014;21(5):631–9. Sarfaraz S, Adhami VM, Syed DN, Afaq F, Mukhtar H. Cannabinoids for cancer treatment: progress and promise. Cancer Res. 2008;68(2):339–44. Sharma M, Hudson JB, Adomat H, Guns E, Cox ME. In vitro anticancer activity of plant-derived cannabidiol on prostate cancer cell line. Pharmacol Pharm. 2014;5:806–20. Shrivastava A, Kuzontkoski PM, Groopman JE, Prasad A. Cannabidiol induces programmed cell death by coordinating the cross-talk between apoptosis and autophagy. Mol Cancer Ther. 2011;10(7):1161–72. Street RA, Prinsloo G. Commercially important medicinal plants of South Africa: a review. J Chem. 2013:1–16. doi:10.1155/2013/205048. Thafeni M, Sayed Y, Motadi L. Euphorbia mauritanica and Kedrostis hirtella extracts induce cell death in lung cancer cells. J Mol Biol. 2012;39(12):10785–94. Turner CE, Hadley KW, Holley HJ, Billets S, Mole Jr LM. Constituents of Cannabis sativa L. VIII: possible biological application of a new method to separate cannabidiol and cannabichromene. J Pharm Sci. 1975;64(5):810–4. Yamaori S, Kushihara M, Yamamoto I, Watanabe K. Characterization of major phytocannabinoids, cannabidiol and cannabinol, as isoform-selective and potent inhibitors of human CYP1 enzymes. Biochem Pharmacol. 2010;79:1691–8.
Our gratitude goes to the South African MRC for funding assistance. The work was funded by the MRC. The datasets supporting the conclusions of this article are included within the article and its additional files. STL was responsible for the experimental design and LRM prepared the manuscript. Both authors read and approved the final manuscript. The authors give their consent for publication. This study was approved by the Human Research Ethics Committee (Medical): M140801.
Department of Biochemistry, North-West University (Mafikeng campus), Private Bag X1290, Potchefstroom, 2520, South Africa
Sindiswa T. Lukhele & Lesetja R. Motadi
Correspondence to Lesetja R. Motadi.
Lukhele, S.T., Motadi, L.R. Cannabidiol rather than Cannabis sativa extracts inhibit cell growth and induce apoptosis in cervical cancer cells. BMC Complement Altern Med 16, 335 (2016). doi:10.1186/s12906-016-1280-0
Optimization of medium compositions to improve a novel glycoprotein production by Streptomyces kanasenisi ZX01
Yong Zhou1, Yu-Bo Sun1, Hong-Wei He1, Jun-Tao Feng1,2, Xing Zhang1,2 & Li-Rong Han1,2
Streptomyces kanasenisi ZX01 was previously found to produce a novel glycoprotein, GP-1, which is secreted into the medium and has significant activity against tobacco mosaic virus. However, the low production of GP-1 by strain ZX01 limited its further study. In order to improve the yield of GP-1, a series of statistical experimental design methods were applied to optimize the medium of strain ZX01 in this work. Millet medium was chosen as the optimal starting medium for optimization. Soluble starch and yeast extract were identified as the optimal carbon and nitrogen sources using the one-factor-at-a-time method. Response surface methodology was used to optimize the medium compositions (soluble starch, yeast extract, and inorganic salts). A higher yield of GP-1, 601.33 µg/L, was obtained after optimization. The optimal composition of the medium was: soluble starch 13.61 g/L, yeast extract 4.19 g/L, NaCl 3.54 g/L, CaCO3 0.28 g/L, millet 10 g/L. The yield of GP-1 in a 5 L fermentor using the optimized medium was 2.54 mg/L, which is much higher than the result obtained in shake flasks. This work will be helpful for improving GP-1 production on a large scale and lays a foundation for developing GP-1 into a novel anti-plant-virus agent.
Streptomyces is a well-known genus of prokaryotic gram-positive mycelial soil bacteria and has become one of the most important microbial resources in recent years, owing to its ability to produce many kinds of natural products, especially antibiotics (Demain and Sanchez 2009; Watve et al. 2001). Some antibiotics show great activity against plant viruses, such as tunicamycin from Streptomyces lysosuperificus (Takatsuki et al. 1971), herbimycin B from Streptomyces hygroscopicus (Iwai et al. 1980), ningnanmycin from Streptomyces noursei var. xichangenisi (Deng et al. 2004), and cytosinpeptidemycin from Streptomyces achygroscopicus var. liaoningensis (Zhu et al. 2005). Non-antibiotics such as polysaccharides (He et al. 2010), proteins (Gomes et al. 2001), and glycoproteins (Nwodo et al. 2012) can also be produced by Streptomyces. However, it has rarely been reported that these biopolymers produced by Streptomyces have high activity against plant viruses. Streptomyces kanasenisi ZX01 (CGMCC 4893) was isolated from soil around Kanas Lake, Xinjiang Province, China. Our previous research indicated that strain ZX01 can produce a novel glycoprotein (GP-1) with significant activity against some plant viruses, especially tobacco mosaic virus (TMV) (Han et al. 2015). GP-1 is a heat-sensitive glycoprotein with an approximate molecular weight of 8.5 kDa, which contains 40.23% carbohydrate with N-linked and O-linked glycans (Zhang et al. 2015). Due to the extremely low production of GP-1 and the long fermentation time of strain ZX01, it is necessary to improve the GP-1 yield in batch culture. Nutrition plays a significant role in the process by which microorganisms produce secondary metabolites, not only because limiting the supply of some essential nutrients is an effective way to restrict growth, but also because the choice of limiting nutrient can have specific metabolic and regulatory effects. To achieve the maximum yield, it is necessary to design an appropriate fermentation medium for an efficient fermentation process. There is usually a relationship between the medium compositions and the secondary metabolites (Azma et al. 2011; Elibol 2004).
Different statistical design methods can be used to optimize the fermentation medium. The conventional method of one-factor-at-a-time optimization keeps all other factors constant and changes one independent variable at a time. This method is not only time consuming, but also cannot describe the interactions between different factors, leading to unreliable results. These limitations of the one-factor-at-a-time method can be overcome by response surface methodology (RSM) (Sayyad et al. 2007; Zhao et al. 2013). Optimization through RSM is a common practice in biotechnology. Various researchers have applied this technique, especially for the optimization of culture conditions and the determination of optimal values for processing parameters such as pH, temperature, and aeration (Kalil et al. 2000). RSM, which includes factorial design and regression analysis, can be used to build models to determine relationships, select the optimal conditions of the variables for a desirable response, and estimate the interactions between a set of controlled experimental factors (Muntari et al. 2012). The objective of the present research was to optimize the fermentation medium of strain ZX01 for the maximum yield of GP-1 using both one-factor-at-a-time optimization and response surface methodology. The conventional one-factor-at-a-time method was applied to screen the medium compositions, such as carbon sources and nitrogen sources. Optimal values of these compositions were obtained by response surface methodology. Moreover, a further study on scale-up fermentation was carried out in a 5 L bench fermentor to explore the possibility of scaling up GP-1 production from shake flask to fermentor.
Streptomyces kanasenisi ZX01, obtained from the Research and Development Center of Biorational Pesticide, Yangling, China, was isolated from soil of Kanas Lake, Xinjiang Province, China. Strain ZX01 is registered at the China General Microbiological Culture Collection Center (CGMCC) under strain number CGMCC 4893. The strain was maintained on Gause's No. 1 agar medium and subcultured at one-month intervals, or stored in 20% glycerol at −70 °C.
Inoculum preparation
Inoculum was prepared by inoculating a loopful of strain ZX01 grown on a Gause's No. 1 agar plate for 72 h into a 250 mL flask containing 100 mL of Gause's No. 1 liquid medium. The compositions of Gause's No. 1 liquid medium were (g/L): soluble starch, 20; NaCl, 0.5; FeSO4·7H2O, 0.01; K2HPO4, 0.5; KNO3, 1; MgSO4·7H2O, 0.5. The medium was adjusted to a final pH of 7.0 to provide suitable conditions for growth of strain ZX01. The flasks were incubated at 28 °C on a shaker at 180 rpm for 72 h. The inoculum quantity was kept at 5% in all fermentation experiments.
Extraction and measurement of GP-1
The fermentation broth was centrifuged at 10,000 rpm for 20 min to separate the precipitate and supernatant. The supernatant was concentrated to a volume of 10 mL by rotary evaporator and then precipitated by adding 4-fold volumes of ethanol at 4 °C. The precipitate was redissolved in distilled water (10 mL) and centrifuged (10,000 rpm, 10 min) again to remove water-insoluble materials. The supernatant was loaded onto a DEAE-52 cellulose anion-exchange column (2 cm × 60 cm) eluted first with deionized water and then with 0.1 M NaCl at a flow rate of 5 mL/min. The 0.1 M NaCl fraction was collected and centrifuged (10,000 rpm, 15 min) with centrifugal filter devices (3K, 0.5 mL) to remove NaCl.
The fraction was subjected to a HiTrap™ Con A 4B column eluted sequentially with binding buffer (20 mM Tris–HCl, 0.5 M NaCl, 1 mM MnCl2, 1 mM CaCl2, pH 7.4) and elution buffer (0.1 M methyl-α-d-glucoside, 20 mM Tris–HCl, 0.5 M NaCl, pH 7.4) at a flow rate of 1 mL/min. The fraction eluted with elution buffer, which contained GP-1, was concentrated to 100 µL and evaluated by high performance liquid chromatography (HPLC). The concentration of GP-1 was analyzed by HPLC (Waters, USA) with a gel filtration column (TSK-gel G2000SWXL, 7.8 × 300 mm, 5 µm, TOSOH, Japan) and a 996 photodiode array detector at 280 nm. HPLC was performed on 10 µL samples with 20% acetonitrile at a flow rate of 0.5 mL/min and 28 °C. GP-1 purified previously (purity >99%) was diluted to 10, 5, 2.5, 1.25, 0.625, and 0.3125 mg/mL as standards.
Anti-TMV activity assay
The anti-TMV activity was tested by the half-leaf method. The fermentation broth, diluted to one twentieth, was mixed in equal parts with TMV (50 µg/mL). After 10 min, the mixture was mechanically inoculated onto the left side of the leaves of Nicotiana glutinosa as the treatment, while the right side of the leaves was inoculated with a mixture of distilled water and TMV as the negative control. N. glutinosa plants were kept in a culture chamber at 28 °C for 2–3 days, and then the number of local lesions on the leaves was recorded. The TMV inhibition rate was calculated as follows:
$$\text{Inhibition rate}\,(\%) = \left(1 - \frac{T}{C}\right) \times 100\%$$
where T is the average number of local lesions of the treatment and C is the average number of local lesions of the negative control. All experiments were conducted in triplicate. TMV was stored in the systemic host Nicotiana tabacum K326 and purified as described by Gooding and Hebert (1967).
Selection of the optimal fermentation medium
Ten different media were used to find the optimal fermentation medium. The compositions of the media were (g/L): bean broth medium (soybean leach liquor, 20; soluble starch, 20; yeast extract, 5; peptone, 2; NaCl, 5; CaCO3, 2); modified bean broth medium (soybean leach liquor, 20; soluble starch, 5; sucrose, 10; yeast extract, 2; peptone, 2; NaCl, 2; K2HPO4, 0.5; MgSO4·7H2O, 0.5; CaCO3, 3); millet medium (millet leach liquor, 10; glucose, 10; peptone, 3; NaCl, 2.5; CaCO3, 0.2); Gause's No. 1 medium (soluble starch, 20; NaCl, 0.5; FeSO4·7H2O, 0.01; K2HPO4, 0.5; KNO3, 1; MgSO4·7H2O, 0.5); ISP1 (sucrose, 30; NaNO3, 2; K2HPO4, 1; MgSO4·7H2O, 0.5; KCl, 0.5; FeSO4·7H2O, 0.01); ISP2 (yeast extract, 4; malt extract, 1; glucose, 4; trace salt solution, 1); ISP3 (oatmeal leach liquor, 20; trace salt solution, 1); ISP4 (soluble starch, 10; NaCl, 1; KH2PO4, 1; MgSO4·7H2O, 1; (NH4)2SO4, 2; CaCO3, 2; trace salt solution, 1); ISP5 (l-asparagine, 1; glycerol, 10; KH2PO4, 1; trace salt solution, 1); PDA (potato leach liquor, 200; glucose, 20). The media were mixed fully and then sterilized at 120 °C for 30 min. 5 mL of inoculum was inoculated into 100 mL of medium in a 250 mL flask. All the flasks were incubated at 28 °C on a shaker at 200 rpm for 7 days. All experiments were conducted in triplicate.
Selection of the optimal carbon and nitrogen source
The one-factor-at-a-time method was used to investigate the best carbon and nitrogen sources. We used 7 different carbon sources and 7 different nitrogen sources (Table 3) to replace the corresponding carbon and nitrogen sources in the optimal fermentation medium while keeping the other compositions constant at their original concentrations. All experiments were conducted in triplicate.
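To make the half-leaf calculation above concrete, the sketch below (Python) computes the inhibition rate from triplicate lesion counts; the counts used here are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the half-leaf inhibition-rate calculation.
# The triplicate lesion counts below are hypothetical, for illustration only.

def inhibition_rate(treated, control):
    """Inhibition rate (%) = (1 - T/C) * 100, where T and C are the mean
    local-lesion counts of the treated and control half-leaves."""
    t = sum(treated) / len(treated)
    c = sum(control) / len(control)
    return (1.0 - t / c) * 100.0

print(inhibition_rate(treated=[4, 6, 5], control=[48, 52, 50]))  # -> 90.0
```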
Experimental design by response surface methodology
Response surface methodology (RSM) based on central composite design (CCD) was used to find the optimal levels of the medium compositions. The concentrations of carbon source (X1), nitrogen source (X2), and inorganic salts (X3) were selected as the independent variables (main factors). Inorganic salts were defined as the total concentration of the inorganic salts of the medium, kept in their original proportions. The levels of the variables are presented in Table 1. The total number of experimental combinations was estimated according to the equation:
$$N = 2^{k} + 2k + n_{0}$$
where N, k, and n0 are the number of experimental combinations, the number of variables, and the number of repetitions of experiments at the central point, respectively.
Table 1 The levels of the independent variables through CCD
A total of 20 experiments were performed, including 2³ = 8 cube points, 6 axial points, and 6 repetitions at the central point. The selected independent variables (Xi) were coded as xi according to the equation:
$$x_{i} = \frac{X_{i} - \bar{X}_{i}}{\Delta X_{i}} \quad (i = 1, 2, 3, \ldots, k)$$
where xi is the coded value of the variable, Xi is the actual value of the variable, \(\bar{X}_{i}\) is the actual value of the variable at the central point, and \(\Delta X_{i}\) is the step change value. The mathematical relationship between the response variable (the yield of GP-1) and the independent variables can be described by the following equation:
$$Y = b_{0} + \sum_{i} b_{i} x_{i} + \sum_{i} \sum_{j} b_{ij} x_{i} x_{j} + \sum_{i} b_{ii} x_{i}^{2}$$
where Y is the predicted response; b0, bi, bij, and bii are the regression coefficients for the intercept, the linear effects, the interactions, and the quadratic effects, respectively; and xi and xj are coded values of the independent variables (i < j).
The yield of GP-1 by S. kanasenisi ZX01 in a bench fermentor
This part of the experiment was performed in a 5 L fermentor (GBCN-5C, Zhenjiang East Biotech Equipment and Technology Co., Ltd, China) with a working volume of 3 L. The fermentor was equipped with a temperature probe, pH sensor, and dissolved oxygen (DO) sensor. The height and diameter of the 5 L fermentor were 0.35 and 0.2 m, respectively. The agitation system was a coupled stirrer consisting of two propellers on one axle, each propeller with four flat blades. The agitation rate was controlled by electromagnetic impulse. The aeration system was an air inlet through a ring sparger with an air-flow meter and filter. The fermentor and all its parts, containing 3 L of medium, were sterilized at 121 °C for 30 min. After sterilization, the fermentation medium was inoculated with 5% (v/v) seed culture. Diluted antifoaming agent was added when foam appeared in the fermentor during the fermentation process. The temperature and aeration rate were maintained at 28 °C and 3 L/min, respectively, during the whole fermentation process. The agitation speed was controlled in the range of 150–300 rpm to maintain the DO concentration above 20% saturation, ensuring that the oxygen supply was sufficient for cell growth. Samples were taken from the fermentor at 24 h intervals for analysis of GP-1 production and dry cell weight (DCW).
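The run count and the variable coding above translate directly into code. The following Python sketch reproduces the CCD bookkeeping; the center value and step size in the example are placeholders, not the actual Table 1 levels.

```python
# Sketch of the CCD bookkeeping described above. The center value and
# step size used in the example are placeholders, not the Table 1 levels.

k, n0 = 3, 6                # three variables, six center-point replicates
N = 2**k + 2*k + n0         # 8 cube + 6 axial + 6 center points = 20 runs
assert N == 20

def code(actual, center, step):
    # x_i = (X_i - X̄_i) / ΔX_i
    return (actual - center) / step

# Example: coding a hypothetical soluble starch level of 12 g/L,
# assuming a center of 10 g/L and a step of 2.5 g/L:
print(code(12.0, center=10.0, step=2.5))  # -> 0.8
```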
Effect of different media on the yield of GP-1
The effect of different media on GP-1 yield by strain ZX01 is presented in Table 2. The results indicated that the yield of GP-1 was highest in millet medium (345.35 µg/L), which differed significantly from the other media, followed by modified bean broth medium (280.47 µg/L), ISP2 (264.31 µg/L), bean broth medium (260.34 µg/L), ISP3 (244.60 µg/L), ISP5 (240.26 µg/L), and ISP4 (225.23 µg/L). On the other hand, GP-1 concentrations in Gause's No. 1 (190.74 µg/L), PDA (185.04 µg/L), and ISP1 (177.52 µg/L) were comparatively low, below 200.00 µg/L. Among all the tested media, millet medium was chosen as the optimal starting medium for the carbon and nitrogen source selection experiments, given its highest yield of GP-1.
Table 2 Effect of different media on the yield of GP-1 by strain ZX01
Effect of different carbon and nitrogen sources on the yield of GP-1
Based on millet medium, the effect of different carbon and nitrogen sources on GP-1 yield by strain ZX01 is presented in Table 3. The results showed that Streptomyces sp. ZX01 had the maximum GP-1 yield with soluble starch (482.36 µg/L) as the carbon source. Lactose (438.14 µg/L), sucrose (427.83 µg/L), and fructose (407.93 µg/L) formed a second tier with no significant differences among them. In contrast, glycerol (211.28 µg/L) seemed to have a negative influence on GP-1 yield.
Table 3 Effect of different carbon and nitrogen sources on the yield of GP-1 by strain ZX01
As for the nitrogen sources, the maximum yield of GP-1 was obtained with yeast extract (534.83 µg/L). Peptone, soy peptone, tryptone, and beef extract ranged from 500.67 down to 452.75 µg/L. The yields of GP-1 with fish meal (378.58 µg/L) and urea (352.20 µg/L) were lower than with the other nitrogen sources. Therefore, soluble starch and yeast extract were chosen as the optimal carbon and nitrogen sources, respectively, for the following experiments.
Optimization of medium compositions
After confirming the carbon and nitrogen sources of the fermentation medium, central composite design (CCD) was used to determine the optimal concentration of each medium composition. The levels of the three independent variables, viz. soluble starch, yeast extract, and inorganic salts, are given in Table 1. A total of 20 experiments with different combinations of the three independent variables were performed according to the CCD (Table 4). Each run was performed in triplicate, and thus the experimental values of GP-1 yield given in Table 4 are averages of three sets of experiments, while the predicted values were obtained from the quadratic polynomial equation mentioned below. The results were analyzed using Design Expert 8.0.5, and the following quadratic polynomial equation was found to describe the relationship between the independent variables and GP-1 yield:
$$\begin{aligned}Y = {} & 539.66 + 49.10x_{1} + 24.54x_{2} + 12.58x_{3} \\ & - 2.83x_{1}x_{2} + 6.93x_{1}x_{3} + 10.66x_{2}x_{3} \\ & - 22.24x_{1}^{2} - 13.90x_{2}^{2} - 14.94x_{3}^{2}\end{aligned}$$
where Y is the predicted response of GP-1 concentration, and x1, x2, and x3 are the coded values of soluble starch, yeast extract, and inorganic salts, respectively.
Table 4 Experimental design by using CCD, experimental value and predicted value
In order to evaluate the significance and adequacy of the quadratic model, an analysis of variance (ANOVA) was conducted (Table 5).
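Before turning to the ANOVA, note that the fitted surface above can be interrogated directly: its stationary point satisfies ∇Y = 0, a 3 × 3 linear system in the coded variables. A minimal NumPy sketch using the regression coefficients of the equation above (the printed optimum anticipates the values derived later from the contour plots):

```python
# Locate the stationary point of the fitted quadratic model by solving
# grad Y = b + 2 B x = 0, with the coefficients of the regression equation.
import numpy as np

b = np.array([49.10, 24.54, 12.58])            # linear coefficients
B = np.array([[-22.24,  -2.83/2,   6.93/2],    # symmetric form: Y = b0 + b.x + x.B.x
              [-2.83/2, -13.90,   10.66/2],
              [ 6.93/2,  10.66/2, -14.94]])
x_opt = np.linalg.solve(-2.0 * B, b)           # coded optimum, approx (1.20, 1.19, 1.12)
y_opt = 539.66 + b @ x_opt + x_opt @ B @ x_opt
print(x_opt, y_opt)                            # y_opt approx 590.9 µg/L, as reported below
```

Converting the coded optimum back through the Table 1 center points and step sizes gives the actual concentrations quoted below.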
The ANOVA of the quadratic model indicates that the model is highly significant, as shown by Fisher's F test (F model = mean square regression/mean square residual = 68.92) with a very low P value (P < 0.0001). The model's goodness of fit can be checked by the determination coefficient (R²) and the correlation coefficient (R). The R² value is always between 0 and 1; the closer the R² value is to 1, the better the correlation between the experimental and predicted values (Wang et al. 2011). Here, the adjusted R² (0.9699) demonstrates that about 97% of the variability in the response is attributed to the independent variables, and only about 3% of the variability cannot be explained by the model. The lack of fit measures the failure of the model to represent data in the experimental domain at points that are not included in the regression (Xu et al. 2008). The F value (3.06) of the lack of fit is lower than the tabulated F value (F0.01(9,5) = 10.15) and is not significant (P = 0.1227). The coefficient of variation (CV) is related to the precision and reliability of the experiment: the higher the CV, the lower the reliability (Zhou et al. 2010). Here, a low CV value (3.11%) indicates good precision and reliability of the experiment.
Table 5 The ANOVA of the quadratic model
The significance of each coefficient is determined by its t value and P value, which are listed in Table 6. The Student's t test and P value also indicate the interaction strength between the independent variables: the larger the t value and the smaller the P value, the more significant the corresponding coefficient. The results show that the linear coefficients of X1 and X2 and the quadratic coefficient of X1² are more significant than the other factors, followed by X3, X2², and X3². This implies that the concentrations of soluble starch and yeast extract have a strong influence on GP-1 yield. The interaction coefficients of X1X2, X1X3, and X2X3 appear to be insignificant, which means that the interactions of any two variables have an insignificant effect on GP-1 yield.
Table 6 Estimated regression coefficients and corresponding t values and P values
The 3D response surface plots and the 2D contour plots described by the regression model were drawn to expose the optimal values of the independent variables and the interactive effects of each independent variable on the response. Both kinds of plot are presented in Figs. 1, 2, and 3. From the 3D response surface plots and the corresponding 2D contour plots, the optimal values of the independent variables and the maximum response can be predicted, and the interactions between the independent variables can be understood. Each contour curve stands for a response value influenced by two independent variables, with the remaining one maintained at its zero level. The maximum predicted value is found within the smallest ellipse in the 2D contour plots. The smallest ellipse also indicates a strong interaction between the independent variables, whose optimal values lie nearby (Xu et al. 2008; Yin et al. 2011).
Contour plot and response surface plot of GP-1 yield: the combined effect of soluble starch and yeast extract on the yield of GP-1
Contour plot and response surface plot of GP-1 yield: the combined effect of soluble starch and inorganic salts on the yield of GP-1
Contour plot and response surface plot of GP-1 yield: the combined effect of yeast extract and inorganic salts on the yield of GP-1
As shown in Figs.
1, 2 and 3, the optimal values of the medium compositions for obtaining the maximum yield of GP-1 lie in the following ranges: soluble starch 12–14 g/L, yeast extract 3.67–4.33 g/L, and inorganic salts 3.37–4.03 g/L. Through analysis with the Design Expert software, the optimal values of the independent variables in uncoded (actual) units are: soluble starch 13.61 g/L, yeast extract 4.19 g/L, and inorganic salts 3.82 g/L. The inorganic salts correspond to NaCl at 3.54 g/L and CaCO3 at 0.28 g/L. The model predicted that the maximum yield of GP-1 obtained using the above optimal concentrations of the medium compositions would be 590.90 µg/L. A verification experiment using the optimized medium was carried out. The maximum yield of GP-1 was found to be 601.33 µg/L, which is in close agreement with the model prediction (590.90 µg/L). This indicated that the model was suitable and accurate for enhancing the yield of GP-1 by strain ZX01. The yield of GP-1 using the optimized medium (soluble starch 13.61 g/L, yeast extract 4.19 g/L, NaCl 3.54 g/L, CaCO3 0.28 g/L, millet 10 g/L) was 601.33 µg/L, while the yield of GP-1 using the original medium (glucose 10.00 g/L, peptone 3.00 g/L, NaCl 2.50 g/L, CaCO3 0.20 g/L, millet 10 g/L) was 345.35 µg/L. After optimization, the yield of GP-1 was thus improved by 74.12%. Moreover, an anti-TMV activity assay was also conducted; the anti-TMV inhibition rate after optimization was 91.10%, in contrast with 74.88% using the original millet medium.
Fermentation in a 5 L bench fermentor
Based on the medium obtained from the shake flask optimization experiments, a scale-up fermentation was conducted in a 5 L bench fermentor. The time courses of GP-1 production and DCW are presented in Fig. 4. DCW increased sharply to a maximum value (3.2 g/L) at 48 h, and then decreased to 1.5 g/L by the end of the fermentation process. The highest yield of GP-1, 2.54 mg/L, was achieved at 120 h; this is about 4.23 times the yield obtained with the optimized medium in shake flasks (601.33 µg/L), and 7.36 times that obtained with the original medium (345.35 µg/L).
Time course of GP-1 production and DCW by S. kanasenisi ZX01 using optimized medium in a 5 L fermentor (squares: GP-1 production; circles: DCW)
GP-1 is a novel glycoprotein produced by S. kanasenisi ZX01 with significant activity against TMV (Zhang et al. 2015). However, the extremely low yield of GP-1 and the long fermentation period of strain ZX01 have restricted its further research and market application. For this reason, this work aimed at improving the yield of GP-1, laying the foundation for developing GP-1 into a novel anti-plant-virus agent. Any microbial fermentation process can be affected by medium compositions and process parameters, and therefore statistical experimental design methods were used to optimize the fermentation medium of strain ZX01 efficiently. Among 10 different media previously reported for the growth of Streptomyces, millet medium was selected as the original fermentation medium. Based on the millet medium, one-factor-at-a-time optimization was used to determine the carbon source and nitrogen source. The comparison of several commonly used carbon sources indicated that soluble starch was the optimal carbon source. It is well known that carbohydrates are the energy source of the organism and play a key role in metabolite biosynthesis; with soluble starch, the yield of GP-1 was improved by 39.67% compared with the original medium.
We also found that polysaccharides as carbon sources were better than monosaccharides and disaccharides for strain ZX01 to produce a higher yield of GP-1. The reason may be that soluble starch is hydrolyzed to glucose slowly in liquid medium, at a rate that is very slow compared with that of glucose uptake, leading to alleviation of the catabolite repression on growth caused by glucose (Chen et al. 2010). Similarly, yeast extract was screened out as the best nitrogen source. Yeast extract is an ideal organic nitrogen source in the fermentation industry, since it is inexpensive and can be easily absorbed by microorganisms (Hernández-Cortés et al. 2016). The optimal concentrations of carbon source, nitrogen source, and inorganic salts were further optimized by RSM based on CCD. RSM proved to be a powerful tool for optimizing GP-1 yield by strain ZX01. The RSM model equation demonstrated that soluble starch, yeast extract, and inorganic salts were positively significant factors for GP-1 production. From the equation, interactions between pairs of factors were also found: both soluble starch and yeast extract interacted positively with inorganic salts, whereas there was a negative interaction between soluble starch and yeast extract. The final optimized fermentation medium was as follows: soluble starch 13.61 g/L, yeast extract 4.19 g/L, NaCl 3.54 g/L, CaCO3 0.28 g/L, millet 10 g/L. Theoretically, the predicted value of GP-1 production could reach 590.90 µg/L using this medium. In practice, the maximum yield of GP-1 was found to be 601.33 µg/L in the verification test, which also proved that the model was able to predict GP-1 yield accurately. Compared with the original fermentation medium, the yield of GP-1 by strain ZX01 increased by 74.12% after optimization. A scale-up fermentation of S. kanasenisi ZX01 was further carried out in a 5 L bench fermentor. The yield of GP-1 was improved to 2.54 mg/L when the DO was sufficient, which was much higher than that in shake flasks. There is a big difference between fermentor and shake flask in agitation and aeration, and therefore the reason why the yield of GP-1 in the 5 L fermentor was much higher than that in shake flasks might be the oxygen transfer rate (OTR). OTR is an important parameter that depends on the aeration and agitation conditions in the fermentor (Bandaiphet and Prasertsan 2006; Mantzouridou et al. 2002). This result provides a useful idea for the improvement of GP-1 production in the 5 L bench fermentor through optimizing aeration and agitation. Our medium optimization study provides first-hand data and information that are fundamental and useful for the development of the strain ZX01 fermentation process and the improvement of GP-1 production on a large scale. Moreover, owing to the rapid development of modern bio-engineering technology, this information makes it possible for strain ZX01 to become an efficient engineering strain when combined with genetic engineering methods, which could provide more glycoproteins with activity against plant viruses, and even various natural products for pesticide discovery.
RSM: response surface methodology
CCD: central composite design
TMV: tobacco mosaic virus
ANOVA: analysis of variance
DCW: dry cell weight
OTR: oxygen transfer rate
Azma M, Mohamed MS, Mohamad R, Rahim RA, Ariff AB (2011) Improvement of medium composition for heterotrophic cultivation of green microalgae, Tetraselmis suecica, using response surface methodology. Biochem Eng J 53(2):187–195.
doi:10.1016/j.bej.2010.10.010 Bandaiphet C, Prasertsan P (2006) Effect of aeration and agitation rates and scale-up on oxygen transfer coefficient, kLa in exopolysaccharide production from Enterobacter cloacae WD7. Carbohyd Polym 66(2):216–228. doi:10.1016/j.carbpol.2006.03.004 Chen Z-M, Li Q, Liu H-M, Yu N, Xie T-J, Yang M-Y, Shen P, Chen X-D (2010) Greater enhancement of Bacillus subtilis spore yields in submerged cultures by optimization of medium composition through statistical experimental designs. Appl Microbiol Biotechnol 85(5):1353–1360. doi:10.1007/s00253-009-2162-x Demain AL, Sanchez S (2009) Microbial drug discovery: 80 years of progress. J Antibiot 62(1):5–16 Deng G, Wan B, Hu H, Chen J, Yu M (2004) Biological activity of ningnanmycin on tobacco mosaic virus. Ying Yong Yu Huan Jing Sheng Wu Xue Bao (in Chinese) 10(6):695–698 Elibol M (2004) Optimization of medium composition for actinorhodin production by Streptomyces coelicolor A3(2) with response surface methodology. Process Biochem 39(9):1057–1062. doi:10.1016/S0032-9592(03)00232-2 Gomes RC, Sêmedo LTAS, Soares RMA, Linhares LF, Ulhoa CJ, Alviano CS, Coelho RRR (2001) Purification of a thermostable endochitinase from Streptomyces RC1071 isolated from a cerrado soil and its antagonism against phytopathogenic fungi. J Appl Microbiol 90(4):653–661. doi:10.1046/j.1365-2672.2001.01294.x Gooding GV, Hebert TT (1967) A simple technique for purification of tobacco mosaic virus in large quantities. Phytopathology 57(11):1285 Han L, Zhang G, Miao G, Zhang X, Feng J (2015) Streptomyces kanasensis sp. nov., an antiviral glycoprotein producing actinomycete isolated from forest soil around Kanas Lake of China. Curr Microbiol 71(6):627–631. doi:10.1007/s00284-015-0900-0 He F, Yang Y, Yang G, Yu L (2010) Studies on antibacterial activity and antibacterial mechanism of a novel polysaccharide from Streptomyces virginia H03. Food Control 21(9):1257–1262. doi:10.1016/j.foodcont.2010.02.013 Hernández-Cortés G, Valle-Rodríguez JO, Herrera-López EJ, Díaz-Montaño DM, González-García Y, Escalona-Buendía HB, Córdova J (2016) Improvement on the productivity of continuous tequila fermentation by Saccharomyces cerevisiae of Agave tequilana juice with supplementation of yeast extract and aeration. AMB Express 6(1):47. doi:10.1186/s13568-016-0218-8 Iwai Y, Nakagawa A, Sadakane N, Omura S, Oiwa H, Matsumoto S, Takahashi M, Ikai T, Ochiai Y (1980) Herbimycin B, a new benzoquinonoid ansamycin with anti-TMV and herbicidal activities. J Antibiot (Tokyo) 33(10):1114–1119 Kalil SJ, Maugeri F, Rodrigues MI (2000) Response surface analysis and simulation as a tool for bioprocess design and optimization. Process Biochem 35(6):539–550. doi:10.1016/S0032-9592(99)00101-6 Mantzouridou F, Roukas T, Kotzekidou P (2002) Effect of the aeration rate and agitation speed on β-carotene production and morphology of Blakeslea trispora in a stirred tank reactor: mathematical modeling. Biochem Eng J 10(2):123–135. doi:10.1016/S1369-703X(01)00166-8 Muntari B, Amid A, Mel M, Jami MS, Salleh HM (2012) Recombinant bromelain production in Escherichia coli: process optimization in shake flask culture by response surface methodology. AMB Express 2(1):12. doi:10.1186/2191-0855-2-12 Nwodo UU, Agunbiade MO, Green E, Mabinya LV, Okoh AI (2012) A freshwater Streptomyces, isolated from Tyume river, produces a predominantly extracellular glycoprotein bioflocculant. Int J Mol Sci 13(7):8679–8695. 
doi:10.3390/ijms13078679 Sayyad SA, Panda BP, Javed S, Ali M (2007) Optimization of nutrient parameters for lovastatin production by Monascus purpureus MTCC 369 under submerged fermentation using response surface methodology. Appl Microbiol Biotechnol 73(5):1054–1058. doi:10.1007/s00253-006-0577-1 Takatsuki A, Arima K, Tamura G (1971) Tunicamycin, a new antibiotic. I. Isolation and characterization of tunicamycin. J Antibiot 24(4):215–223 Wang Y, Fang X, An F, Wang G, Zhang X (2011) Improvement of antibiotic activity of Xenorhabdus bovienii by medium optimization using response surface methodology. Microb Cell Fact 10(1):1–15. doi:10.1186/1475-2859-10-98 Watve MG, Tickoo R, Jog MM, Bhole BD (2001) How many antibiotics are produced by the genus Streptomyces? Arch Microbiol 176(5):386–390. doi:10.1007/s002030100345 Xu H, Sun L-P, Shi Y-Z, Wu Y-H, Zhang B, Zhao D-Q (2008) Optimization of cultivation conditions for extracellular polysaccharide and mycelium biomass by Morchella esculenta As51620. Biochem Eng J 39(1):66–73. doi:10.1016/j.bej.2007.08.013 Yin X, You Q, Jiang Z (2011) Optimization of enzyme assisted extraction of polysaccharides from Tricholoma matsutake by response surface methodology. Carbohyd Polym 86(3):1358–1364. doi:10.1016/j.carbpol.2011.06.053 Zhang G, Han L, Zhang G, Zhang X, Feng J (2015) Purification and characterization of a novel glycoprotein from Streptomyces sp. ZX01. Int J Biol Macromol 78:195–201. doi:10.1016/j.ijbiomac.2015.04.012 Zhao L, Fan F, Wang P, Jiang X (2013) Culture medium optimization of a new bacterial extracellular polysaccharide with excellent moisture retention activity. Appl Microbiol Biotechnol 97(7):2841–2850. doi:10.1007/s00253-012-4515-0 Zhou W-W, He Y-L, Niu T-G, Zhong J-J (2010) Optimization of fermentation conditions for production of anti-TMV extracellular ribonuclease by Bacillus cereus using response surface methodology. Bioproc Biosyst Eng 33(6):657–663. doi:10.1007/s00449-009-0330-0 Zhu C, Wu Y, Wang C, Zhao X, Wang Y, Du C (2005) Inhibition of cytosinpeptidemycin on Tobacco mosaic virus. Plant Prot (in Chinese) 31(4):52–54
YZ, YBS and HWH carried out all the experiments. YZ collected and calculated all data, created the tables and figures, and wrote this manuscript. LRH took charge of the preservation of strain ZX01. XZ and JTF designed the experiments. All authors read and approved the final manuscript. We gratefully acknowledge the funding support from the National Key Technology R&D Program of China (2014BAD23B01) and the National Natural Science Foundation of China (NSFC 31201536). All the data and materials used in the study are publicly available.
Research and Development Center of Biorational Pesticides, Northwest A & F University, Yangling, 712100, Shaanxi, China
Yong Zhou, Yu-Bo Sun, Hong-Wei He, Jun-Tao Feng, Xing Zhang & Li-Rong Han
Shaanxi Research Center of Biopesticides Engineering and Technology, Northwest A & F University, Yangling, 712100, Shaanxi, China
Jun-Tao Feng, Xing Zhang & Li-Rong Han
Correspondence to Xing Zhang or Li-Rong Han.
Zhou, Y., Sun, YB., He, HW. et al. Optimization of medium compositions to improve a novel glycoprotein production by Streptomyces kanasenisi ZX01. AMB Expr 7, 6 (2017). https://doi.org/10.1186/s13568-016-0316-7
Performance of the Cell processor for biomolecular simulations
G. De Fabritiis
Physics, 2006, DOI: 10.1016/j.cpc.2007.02.107
Abstract: The new Cell processor represents a turning point for computing-intensive applications. Here, I show that for molecular dynamics it is possible to reach an impressive sustained performance in excess of 30 Gflops, with a peak of 45 Gflops for the non-bonded force calculations, over one order of magnitude faster than a single-core standard processor.
Dynamical geometry for multiscale dissipative particle dynamics
G. De Fabritiis, P. V. Coveney
Abstract: In this paper, we review the computational aspects of a multiscale dissipative particle dynamics model for complex fluid simulations based on the feature-rich geometry of the Voronoi tessellation. The geometrical features of the model are critical since the mesh is directly connected to the physics by the interpretation of the Voronoi volumes of the tessellation as coarse-grained fluid clusters. The Voronoi tessellation is maintained dynamically in time to model the fluid in the Lagrangian frame of reference, including imposition of periodic boundary conditions. Several algorithms to construct and maintain the periodic Voronoi tessellations are reviewed in two and three spatial dimensions and their parallel performance discussed. The insertion of polymers and colloidal particles in the fluctuating hydrodynamic solvent is described using surface boundaries.
On size and growth of business firms
G. De Fabritiis, F. Pammolli, M. Riccaboni
Abstract: We study size and growth distributions of products and business firms in the context of a given industry. Firm size growth is analyzed in terms of two basic mechanisms, i.e. the increase of the number of new elementary business units and their size growth. We find a power-law relationship between size and the variance of growth rates for both firms and products, with an exponent between -0.17 and -0.15, with a remarkable stability upon aggregation. We then introduce a simple and general model of proportional growth for both the number of firm independent constituent units and their size, which conveys a good representation of the empirical evidence. This general and plausible generative process can account for the observed scaling in a wide variety of economic and industrial systems. Our findings contribute to shedding light on the mechanisms that sustain economic growth in terms of the relationships between the size of economic entities and the number and size distribution of their elementary components.
ACEMD: Accelerating bio-molecular dynamics in the microsecond time-scale
M. J. Harvey, G. Giupponi, G. De Fabritiis
Abstract: The high arithmetic performance and intrinsic parallelism of recent graphical processing units (GPUs) can offer a technological edge for molecular dynamics simulations. ACEMD is a production-class bio-molecular dynamics (MD) simulation program designed specifically for GPUs which is able to achieve supercomputing-scale performance of 40 nanoseconds/day for all-atom protein systems with over 23,000 atoms. We illustrate the characteristics of the code, its validation and performance. We also run a microsecond-long trajectory for an all-atom molecular system in explicit TIP3P water on a single workstation computer equipped with just 3 GPUs.
ACEMD: Accelerating bio-molecular dynamics in the microsecond time-scale
M. J. Harvey, G. Giupponi, G. De Fabritiis
Abstract: The high arithmetic performance and intrinsic parallelism of recent graphical processing units (GPUs) can offer a technological edge for molecular dynamics simulations. ACEMD is a production-class bio-molecular dynamics (MD) simulation program designed specifically for GPUs which is able to achieve supercomputing-scale performance of 40 nanoseconds/day for all-atom protein systems with over 23,000 atoms. We illustrate the characteristics of the code, its validation and performance. We also run a microsecond-long trajectory for an all-atom molecular system in explicit TIP3P water on a single workstation computer equipped with just 3 GPUs. This performance on cost-effective hardware allows ACEMD to reach microsecond timescales routinely, with important implications in terms of scientific applications.

A hybrid method coupling fluctuating hydrodynamics and molecular dynamics for the simulation of macromolecules
G. Giupponi, G. De Fabritiis, P. V. Coveney
Abstract: We present a hybrid computational method for simulating the dynamics of macromolecules in solution which couples a mesoscale solver for the fluctuating hydrodynamics (FH) equations with molecular dynamics to describe the macromolecule. The two models interact through a dissipative Stokesian term first introduced by Ahlrichs and Dünweg [J. Chem. Phys. 111, 8225 (1999)]. We show that our method correctly captures the static and dynamical properties of polymer chains as predicted by the Zimm model. In particular, we show that the static conformations are best described when the ratio $\frac{\sigma}{b}=0.6$, where $\sigma$ is the Lennard-Jones length parameter and $b$ is the monomer bond length. We also find that the decay of the Rouse modes' autocorrelation function is better described with an analytical correction suggested by Ahlrichs and Dünweg. Our FH solver permits us to treat the fluid equation of state and transport parameters as direct simulation parameters. The expected independence of the chain dynamics on various choices of fluid equation of state and bulk viscosity is recovered, while excellent agreement is found for the temperature and shear viscosity dependence of centre-of-mass diffusion between simulation results and predictions of the Zimm model. We find that Zimm model approximations start to fail when the Schmidt number $Sc \lessapprox 30$. Finally, we investigate the importance of fluid fluctuations and show that using the preaveraged approximation for the hydrodynamic tensor leads to around 3% error in the diffusion coefficient for a polymer chain when the fluid discretization size is greater than $50\,\AA$.

Foundations of Dissipative Particle Dynamics
Eirik G. Flekkøy, Peter V. Coveney, Gianni De Fabritiis
Physics, 2000. DOI: 10.1103/PhysRevE.62.2140
Abstract: We derive a mesoscopic modeling and simulation technique that is very close to the technique known as dissipative particle dynamics. The model is derived from molecular dynamics by means of a systematic coarse-graining procedure. Thus the rules governing our new form of dissipative particle dynamics reflect the underlying molecular dynamics; in particular, all the underlying conservation laws carry over from the microscopic to the mesoscopic descriptions. Whereas previously the dissipative particles were spheres of fixed size and mass, now they are defined as cells on a Voronoi lattice with variable masses and sizes. This Voronoi lattice arises naturally from the coarse-graining procedure, which may be applied iteratively and thus represents a form of renormalisation-group mapping. It enables us to select any desired local scale for the mesoscopic description of a given problem. Indeed, the method may be used to deal with situations in which several different length scales are simultaneously present. Simulations carried out with the present scheme show good agreement with theoretical predictions for the equilibrium behavior.
Determination of the chemical potential using energy-biased sampling
R. Delgado-Buscalioni, G. De Fabritiis, P. V. Coveney
Physics, 2005. DOI: 10.1063/1.2000244
Abstract: An energy-biased method to evaluate ensemble averages requiring test-particle insertion is presented. The method is based on biasing the sampling within the subdomains of the test-particle configurational space with energies smaller than a given value freely assigned. These energy-wells are located via unbiased random insertion over the whole configurational space and are sampled using the so-called Hit&Run algorithm, which uniformly samples compact regions of any shape immersed in a space of arbitrary dimensions. Because the bias is defined in terms of the energy landscape, it can be exactly corrected to obtain the unbiased distribution. The test-particle energy distribution is then combined with the Bennett relation for the evaluation of the chemical potential. We apply this protocol to a system with relatively small probability of low-energy test-particle insertion, liquid argon at high density and low temperature, and show that the energy-biased Bennett method is around five times more efficient than the standard Bennett method. A similar performance gain is observed in the reconstruction of the energy distribution.
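For readers unfamiliar with the Hit&Run sampler mentioned in this abstract, its basic move is easy to state: pick a random direction through the current point, intersect that line with the region, and jump to a uniformly random point on the resulting chord. Below is a minimal Python sketch for a simple compact region (a unit ball, where the chord endpoints are analytic); this is the generic textbook sampler, not the energy-biased variant developed in the paper.

import numpy as np

rng = np.random.default_rng(1)

def hit_and_run_ball(x0, n_steps, dim=3):
    """Uniform sampling of the unit ball by Hit-and-Run.

    Each step: draw a random direction, intersect the line through the
    current point with the region (analytic for a ball), then jump to a
    uniformly random point on that chord. For a general compact region one
    would replace the analytic chord by a membership oracle plus a
    bracketing search."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)        # uniform direction on the sphere
        # Solve |x + t d|^2 = 1 for the chord endpoints t- < t+.
        b = x @ d
        c = x @ x - 1.0
        disc = np.sqrt(b * b - c)     # real whenever x is inside the ball
        t = rng.uniform(-b - disc, -b + disc)
        x = x + t * d
        samples.append(x.copy())
    return np.array(samples)

pts = hit_and_run_ball(np.zeros(3), 10000)
print("mean |x| of samples:", np.linalg.norm(pts, axis=1).mean())
# For the uniform distribution on the 3D unit ball, E|x| = 3/4; the chain
# should approach this value as it mixes.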
Multiscale modelling of liquids with molecular specificity
G. De Fabritiis, R. Delgado-Buscalioni, P. V. Coveney
Physics, 2006. DOI: 10.1103/PhysRevLett.97.134501
Abstract: The separation between molecular and mesoscopic length and time scales poses a severe limit to molecular simulations of mesoscale phenomena. We describe a hybrid multiscale computational technique which addresses this problem by keeping the full molecular nature of the system where it is of interest and coarse-graining it elsewhere. This is made possible by coupling molecular dynamics with a mesoscopic description of realistic liquids based on Landau's fluctuating hydrodynamics. We show that our scheme correctly couples hydrodynamics and that fluctuations, at both the molecular and continuum levels, are thermodynamically consistent. Hybrid simulations of sound waves in bulk water and reflected by a lipid monolayer are presented as illustrations of the scheme.

Fluctuating hydrodynamic modelling of fluids at the nanoscale
G. De Fabritiis, M. Serrano, R. Delgado-Buscalioni, P. V. Coveney
Physics, 2006. DOI: 10.1103/PhysRevE.75.026307
Abstract: A good representation of mesoscopic fluids is required to combine with molecular simulations at larger length and time scales (De Fabritiis et al., Phys. Rev. Lett. 97, 134501 (2006)). However, accurate computational models of the hydrodynamics of nanoscale molecular assemblies are lacking, at least in part because of the stochastic character of the underlying fluctuating hydrodynamic equations. Here we derive a finite volume discretization of the compressible isothermal fluctuating hydrodynamic equations over a regular grid in the Eulerian reference system. We apply it to fluids such as argon at arbitrary densities and water under ambient conditions. To that end, molecular dynamics simulations are used to derive the required fluid properties. The equilibrium state of the model is shown to be thermodynamically consistent and correctly reproduces linear hydrodynamics including relaxation of sound and shear modes. We also consider non-equilibrium states involving diffusion and convection in cavities with no-slip boundary conditions.

A stochastic Trotter integration scheme for dissipative particle dynamics
M. Serrano, G. De Fabritiis, P. Español, P. V. Coveney
Abstract: In this article we show in detail the derivation of an integration scheme for the dissipative particle dynamics model (DPD) using the stochastic Trotter formula [De Fabritiis et al., Physica A, 361, 429 (2006)]. We explain some subtleties due to the stochastic character of the equations and exploit analyticity in some interesting parts of the dynamics. The DPD-Trotter integrator demonstrates the absence of spurious spatial correlations in the radial distribution function for an ideal gas equation of state. We also compare our numerical integrator to other available DPD integration schemes.
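The Trotter idea underlying such integrators is easiest to see in the deterministic limit: a symmetric splitting of the propagator, $e^{(A+B)\,\delta t} \approx e^{A\,\delta t/2}\, e^{B\,\delta t}\, e^{A\,\delta t/2}$, applied to the momentum and position parts of the dynamics yields the familiar velocity-Verlet scheme. The Python sketch below shows this generic construction for a harmonic oscillator; it illustrates the splitting principle only and is not the stochastic, DPD-specific scheme derived in the paper.

import numpy as np

def velocity_verlet(x, v, force, dt, n_steps, mass=1.0):
    """Symmetric (Trotter) splitting of the propagator:
    half kick -> full drift -> half kick, repeated n_steps times."""
    f = force(x)
    traj = []
    for _ in range(n_steps):
        v += 0.5 * dt * f / mass      # e^{A dt/2}: momentum half-step
        x += dt * v                   # e^{B dt}:   position full step
        f = force(x)
        v += 0.5 * dt * f / mass      # e^{A dt/2}: momentum half-step
        traj.append((x, v))
    return np.array(traj)

# Harmonic oscillator test: the symmetric splitting is symplectic, so the
# energy should stay bounded, with fluctuations of order dt^2.
k = 1.0
traj = velocity_verlet(1.0, 0.0, lambda x: -k * x, dt=0.01, n_steps=10000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * k * traj[:, 0] ** 2
print("relative energy drift:", (energy.max() - energy.min()) / energy[0])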
Numerically computing normal, binormal, and tangent directions of a non-parametric curve in $\mathbb{R}^{3}$

Let's say that I have numerical data for a curve in $\mathbb{R}^{3}$, but I do not have the parametric equations of the curve; all that I have is a sampling of $N$-many points that lie on this curve, $R_{i}=[x_{i} \, y_{i} \, z_{i}]$ for rows $1\leq i \leq N$. How would I numerically determine the normal, binormal, and tangent vectors for each point along this curve? I do know the direction that points on the curve travel, and so I can numerically compute/approximate the tangent vector using, say, a central difference scheme: $$\vec{T}(R_{i}) \approx \frac{R_{i+1}-R_{i-1}}{2\Delta t},$$ where $\Delta t$ is the time delay in the sampling of this parametric curve (and $i$ runs over $2 \leq i \leq N-1$). Conventionally, the normal and binormal directions require the parametric equation(s) of motion in order to be computed (e.g., via the Frenet-Serret equations). Are there methods for numerically approximating the normal $\vec{N}$ and binormal $\vec{B}$, just as I have numerically approximated the tangent vector $\vec{T}$ at $R_{i}$? The key, I think, is the normal vector, because once $\vec{N}(R_{i})$ and $\vec{T}(R_{i})$ have been found, $\vec{B}(R_{i})$ can then be computed by its definition as $\vec{B} = \vec{T} \times \vec{N}$. In particular, $\vec{N}$ should be orthogonal to $\vec{T}$ and $\vec{B}$, and it should point in the direction that the curve bends, i.e., the direction in which the curve deviates from straight-line motion. The binormal vector points in the direction around which the tangent vector turns.

Tags: linear-algebra, differential-geometry, numerical-methods, frenet-frame — asked by matt1011

Comments:
- I wonder... if you take three nearby points and calculate the unit normal vector to the plane containing them (with direction determined by the orientation of the triple in an appropriate way), then under what assumptions would it be true that this normal vector converges to the binormal vector as the three points get closer together? – Daniel Schepler, Jul 11 at 19:47
- You could use spline interpolation between the given points and use that interpolated curve to estimate those quantities. – WimC, Jul 11 at 19:48
- @WimC (and Daniel Schepler) - I like your thoughts, which seem convergent to the same idea. Using splines is an interesting suggestion, but I wonder if it would be computationally expensive. I've edited my question to provide more background on the normal vector $\vec{N}$. My gut tells me that the properties of $\vec{T}$, $\vec{N}$, and $\vec{B}$, and the relationships between these vectors, should give us a way to numerically approximate these vectors using only this information. What do you think? – matt1011, Jul 12 at 11:23
- Similarly to how you've computed $R'$ via finite differences, you could also compute $R''$, and then use the formulas for the Frenet frame in terms of those. – Rahul, Jul 12 at 12:06
- You could interpolate the points using B-splines, or the barycentric rational interpolator. These are both differentiable representations, and then the rest is just assembling the pieces into what you want. – user14717, Jul 12 at 12:29

This question is at the heart of the general field of "Discrete Differential Geometry". It's also the kind of thing my colleague Tom Banchoff has studied for more than 50 years.
The first question you have to ask is "Does the underlying curve have a Frenet frame?" If, for instance, with finer and finer sampling you get a sequence of tangent vectors that don't converge, you've got a problem. (A good example of this: look at the graph of $y = |x|$ near the origin.) Even if there is a limiting tangent vector, it's possible that the curvature ends up zero, in which case there's no Frenet frame.

The second question (which maybe should have been the first) is "is there an underlying curve at all?" If I draw, say, 6 random points from a gaussian distribution around the origin, you can "connect the dots" to form a curve, but the next point I draw from that distribution is unlikely to lie on it. Indeed, the underlying set isn't a curve at all -- it's all of $\Bbb R^2$. That's an extreme case, but even things as simple as the solutions of polynomial equations like $x^2 + y^2 = 1$ might have problems, as $y^3 = x^2$ will show you.

Having asked those questions, which I beg you not to ignore, you can pretend everything is fine and then there's a nice solution: to compute the normal at point $i$, look at the points $P_{i-1}, P_i, P_{i+1}$; these three lie on a plane, and the curve normal is the vector in that plane that's closest to being orthogonal to the tangent. Letting $v = P_i - P_{i-1}$ and $u = P_{i+1} - P_i$, the normal to the plane (not the curve normal!) is $n = u \times v$, and you can then approximate $N$ (the curve normal) as $N = \pm n \times T$ (I leave you to work out the sign). By the way, the vector $\pm n$ (again, you get to work out the sign) is a good approximation of the binormal (once you make it a unit vector).

To be honest, the approach in the previous paragraph will work OK, but near a point of inflection, the three points will be collinear (or nearly so), and then you're screwed. Then again, at such a point, you don't actually have a Frenet frame either, so maybe you're lucky that the computation will tell you that. More likely, you'll just get a really short vector for $n$, and thus a short vector for $N$; you'll normalize it without thinking, get a vector in some random direction due to numerical glitches, and then claim that my reasoning was wrong. (Sigh.)

A slightly improved approach is to estimate the best-fit plane through a sequence of $5$ or $7$ or more points, i.e., $P_{i-k},\ldots, P_{i-1}, P_i, P_{i+1}, \ldots, P_{i+k}$; if your sampling is fine enough, this'll give a great approximation of $n$, and hence $N$, at least when $N$ is well-defined. If your sampling is too coarse, then a large $k$ will lead to garbage. The problem is that "how big should $k$ be" depends on your sampling rate, and on the unknown curvature of the underlying curve. I have no advice on how to address this particular tradeoff.

Why does this work? Well, it helps to go back to classical differential geometry, where the "osculating plane" (the "TN plane" in Frenet terms!) is defined by some weird phrase like "it's the plane passing through three successive points of the curve". Once you understand that kind of phrase, translating to numerical computation isn't too tough.
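To make the three-point recipe concrete, here is a minimal numerical sketch in Python, assuming a dense, well-behaved sampling of the curve; the sign convention (backward chord crossed with forward chord) and the collinearity guard are illustrative choices rather than part of the original recipe.

import numpy as np

def frenet_frames(R, dt=1.0, eps=1e-12):
    """Approximate tangent T, normal N and binormal B at the interior
    samples of a curve given as an (n, 3) array of points R. Points whose
    three-point triple is (nearly) collinear are left as NaN, since the
    Frenet frame is not defined there."""
    R = np.asarray(R, dtype=float)
    n_pts = len(R)
    T = np.full((n_pts, 3), np.nan)
    N = np.full((n_pts, 3), np.nan)
    B = np.full((n_pts, 3), np.nan)
    for i in range(1, n_pts - 1):
        t = (R[i + 1] - R[i - 1]) / (2.0 * dt)  # central-difference tangent
        u = R[i + 1] - R[i]                     # forward chord
        v = R[i] - R[i - 1]                     # backward chord
        n = np.cross(v, u)  # plane normal; this order gives a right-handed frame
        if np.linalg.norm(n) < eps * np.linalg.norm(u) * np.linalg.norm(v):
            continue        # near-collinear triple (inflection): skip
        T[i] = t / np.linalg.norm(t)
        B[i] = n / np.linalg.norm(n)            # unit binormal
        N[i] = np.cross(B[i], T[i])             # curve normal, since B x T = N
    return T, N, B

# Sanity check on a helix, whose exact frame is known in closed form:
s = np.linspace(0.0, 4.0 * np.pi, 400)
helix = np.c_[np.cos(s), np.sin(s), 0.2 * s]
T, N, B = frenet_frames(helix, dt=s[1] - s[0])
print(N[200])  # should point (approximately) toward the helix axis

For noisy data, the same structure extends to the best-fit-plane variant described above: replace the single cross product by, e.g., the smallest singular vector of the centered $(2k+1)$-point neighborhood.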
As another (and better, in my opinion) approach, you could regard the sequence of points you've got as a polygon, and ask what the Frenet frame of a polygon actually "should be". Once you think about it, you find things like "the tangent vector along an edge should be that edge", and "the binormal vector at a vertex should be the cross product of the two adjacent edge vectors." What about the binormal vector along the edge? Well, there's a pretty good case for extending from each end towards the edge-midpoint as a constant, so that the binormal becomes a locally-constant function (rather than a continuous one), and then filling in, at the edge-midpoint, with an arc of vectors that transitions from one binormal to another... but that's too tough to describe here. This idea belongs (as far as I know) to Tom Banchoff, who told it to me over a cup of coffee one day. I'm pretty sure that Eitan Grinspun has used this idea (and related stuff for surfaces) in some of his computer graphics work as well, probably having discovered it independently. You might want to take a look at that.

answered Jul 12 at 11:56 by John Hughes
Correlated Electrons in Transition-Metal Compounds: New Challenges

opening - Roderich Moessner, director of the MPIPKS & scientific coordinators

chair: Igor Mazin

09:00 - 09:25 J. Paul Attfield (University of Edinburgh)
Orbital molecules in oxides
Orbital molecules are weakly bonded clusters of transition metal ions within an orbitally ordered solid. The importance of these quantum states has become apparent in recent years following the discovery of 'trimeron' orbital molecules in the ground state of magnetite (Fe3O4). Determination of the full superstructure below the famous Verwey transition at 125 K showed that Fe2+/Fe3+ charge ordering occurs with a pronounced orbital ordering of Fe2+ states that leads to localization of electrons in the linear, three-Fe trimerons. CaFe3O5 provides a new example of electronic phase separation driven by trimeron formation. Vanadium oxides also provide many examples of orbital molecule orders, associated with NTE (negative thermal expansion) in the orbital polymer material V2OPO4. Persistence of large orbital molecules to high temperatures is discovered in the spinels AlV2O4 and the new analog GaV2O4.

09:25 - 09:50 Paolo Radaelli (University of Oxford)
Rust to riches: the physics of magnetic vortices in \(\alpha-Fe_2O_3\)
Vortices are among the simplest topological structures, and occur whenever a flow field 'whirls' around a one-dimensional core. Although ubiquitous to many branches of physics, vortex formation in the crystalline state is rare, since it is generally hampered by long-range interactions. Here, we present the discovery of a novel form of crystalline vortices in antiferromagnetic (AFM) hematite ($\alpha$-Fe$_2$O$_3$) epitaxial films, in which the primary whirling parameter is the non-ferroic staggered magnetisation. Remarkably, ferromagnetic (FM) topological objects known as half-skyrmions or merons with the same vorticity and winding number as the $\alpha$-Fe$_2$O$_3$ vortices are imprinted onto an ultra-thin Co ferromagnetic over-layer by exchange proximity. Our data suggest that the vortex/meron pairs are relatively robust well beyond the Co coercive field, but can be manipulated by the application of a larger in-plane magnetic field $H_{\parallel}$ ($H_{\parallel} \sim 100$ mT), giving rise to large-scale vortex-antivortex annihilation.

09:50 - 10:15 Andrew Mackenzie (Max-Planck-Institut für Chemische Physik fester Stoffe)
Surface and bulk magnetism in layered delafossite metals

chair: Hua Wu

10:45 - 11:10 D.D. Sarma (Indian Institute of Science, Bengaluru)
Layer-resolved electronic structure of oxide heterostructures using high energy photoelectron spectroscopy
There is a rapidly expanding field over the last few decades that deals with emergent properties at a variety of interfaces formed in heterostructured materials. Specifically, it has been shown that an atomically flat interface between two highly insulating oxide materials can exhibit properties not found in either of the bulk systems defining the interface, such properties covering realms of magnetic-nonmagnetic transitions, insulator-metal transitions, and the emergence of superconductivity, depending on specific systems and synthesis conditions. Interestingly, there are only a few investigations to probe the nature of the charge carriers in such systems, arising from difficulties inherent in the problem.
It is in general difficult to probe directly the nature of such interface states since these are typically buried at some depth and represent a very small volume fraction of the entire sample. Photoemission spectroscopy, capable of directly mapping out the electronic structure, has only been used under on-resonance conditions to enhance the spectral intensity manyfold, thereby making the signal visible. However, this has the potential problem of distorting spectral features in an uncontrolled manner, since the resonance condition itself depends on the energy state of the electron. We present layer-resolved electronic structures of two oxide heterostructures, namely the originally discovered LaAlO3-SrTiO3 as well as LaTiO3-SrTiO3, using non-resonant photoemission and contrast strikingly different behaviours between the two systems.

11:10 - 11:35 Warren Pickett (University of California, Davis)
Chern insulators: from design, toward realization
Topological insulators are seemingly ordinary insulators in the bulk, but have guaranteed surface states that are gapless and present the promise of unique functionalities, viz. next generation electronic devices with smaller dimension and much less use of power. If these materials break time-reversal symmetry, more specific properties emerge: a quantized anomalous Hall effect that takes strong advantage of the spin of the electron. We will discuss lessons learned from two types of systems, both of which incorporate Khomskii physics: the naturally occurring compound $BaFe_2(PO_4)_2$ (BFPO), and the designed system consisting of $Ru^{3+}$ on a honeycomb lattice (2LRO). Both systems contain Chern insulating phases in their multidimensional phase diagram. Both display competition between many energy scales including: Coulomb repulsion U, Hund's J, intersublattice hopping t, crystal subfield splitting $\Delta$, lattice distortion $\delta t$, spin-orbit coupling $\xi$, and strain, all accounted for within the rotationally invariant DFT+U approach. BFPO is a rare Ising ferromagnetic insulator that we find to be near, but not within, a Chern insulating phase in its ground state; its large Ising anisotropy and large orbital moment are reproduced correctly. While displaying related transitions between Chern and trivial insulator phases as the entire set of degrees of freedom are relaxed and practically all symmetries are broken, 2LRO (a [111] perovskite bilayer of $LaRuO_3$), unlike BFPO, settles down into a Chern insulator ground state with a calculated and potentially very useful gap of 130 meV. Crucial parts of both stories involve Khomskii physics ("blame it on Daniel"). We theorists have successfully written a grant for experimental testing of the predictions. Spoiler alert: the results of the (rather challenging) synthesis and characterization are not yet in.

11:35 - 12:00 Cristian Daniel Batista (University of Tennessee, Knoxville & Oak Ridge National Laboratory)
\(Ba_3CoSb_2O_9\) and the dynamical structure factor of the triangular Heisenberg model
We will review recent inelastic experiments in the triangular lattice $S=1/2$ Heisenberg antiferromagnet Ba3CoSb2O9 [1-5], revealing large deviations from the dynamical spin structure factor obtained from non-linear spin wave theory (NSWT). We will see that, while NSWT works very well inside the magnetic field induced magnetization plateau (up-up-down phase) [5], it fails at zero magnetic field (120-degree ordering).
This observation strongly suggests that the failure of a semiclassical treatment is due to strong quantum fluctuations, which are indeed expected for frustrated 2D antiferromagnets. In an attempt to find alternative ways of modelling the dynamics of ordered low-dimensional antiferromagnets, we will derive the zero-temperature dynamical structure factor of the triangular lattice Heisenberg model using a Schwinger boson approach that includes the Gaussian fluctuations ($1/N$ correction) around the saddle point solution [5]. While the ground state of this model exhibits a well-known 120 degree magnetic ordering, the excitation spectrum revealed by this approach has a strong quantum character, which is not captured by low-order $1/S$ expansions. The low-energy magnons consist of two-spinon bound states confined by the gauge fluctuations of the auxiliary fields. This composite nature of the magnons potentially leads to an internal structure of the magnon peaks. In addition, the continuum of high-energy spinon modes extends up to three times the single-magnon bandwidth.
[1] Takuya Susuki, et al., Phys. Rev. Lett. 110, 267201 (2013). [2] Koutroulakis, G. et al., Phys. Rev. B 91, 024410 (2015). [3] Ma, J. et al., Phys. Rev. Lett. 116, 087201 (2016). [4] Ito, S. et al., Nat. Comm. 8, 235 (2017). [5] Y. Kamiya, et al., arxiv/1701.07971, Nat. Comm., in press. [6] E. A. Ghioldi, et al., arxiv/1802.06878.

chair: Paolo Radaelli

Robert J. Cava (Princeton University)
New transition metal oxides with odd numbers of metal-oxygen octahedra per cell

14:50 - 15:15 Nicola Spaldin (ETH Zurich)
Beyond Moscow in the '50s
From the early days of the "Renaissance" of the field of multiferroics I have enjoyed enthusiastic and stimulating discussions with Daniel Khomskii, and benefited tremendously from his extensive knowledge of the Russian literature. After some years, our discussions evolved from his pointing out (very helpfully) that my favorite ideas had been studied already in Moscow in the '50s, as I progressed to ideas that had been discovered in St. Petersburg in the '70s. Recently, Daniel paid me the strongest possible compliment, when he commented that one of our projects was interesting enough to have been done in Moscow in the '50s but to his surprise it had not been. I will talk about this project -- our search for a magnetic monopole at a magnetoelectric surface -- today.

15:15 - 15:40 Bernhard Keimer (Max-Planck-Institut für Festkörperforschung)
Resonant x-ray scattering from quantum materials

15:40 - 16:05 Liu Hao Tjeng (Max-Planck-Institut für Chemische Physik fester Stoffe)
Core-level non-resonant inelastic x-ray scattering: An extremely powerful method to determine the local ground state wave function
A prerequisite for a microscopic understanding of the physical properties of materials is the identification of quantum numbers that characterize, for example, the charge, spin, and orbital degrees of freedom of the atomic constituents. This is especially true for strongly correlated or narrow band systems. While invaluable insight has been obtained using a wide range of x-ray based spectroscopic techniques, most of these are based on dipolar electronic transitions and as such have limitations, e.g. asymmetries with higher than twofold rotational symmetry cannot be detected (unless they are accompanied by a sufficiently large energy difference). Here we utilize the opportunities provided by a new technique, namely non-resonant inelastic x-ray scattering (NIXS).
This photon-in-photon-out technique with hard x-rays has become feasible thanks to the high brilliance of modern synchrotrons and advanced instrumentation. The available large momentum transfers allow for the study of excitations that are well beyond the dipole limit. While so far most of the NIXS studies were carried out on powder samples, it becomes very clear that the directional dependence of the momentum transfer observed in experiments on single crystals has the potential to give a very detailed insight into the ground-state symmetry of the ion of interest. It is this very aspect, i.e. the direction dependence of the scattering function beyond the dipole limit, that we utilize for our study of strongly correlated systems for which the orbital symmetry remained elusive so far [1-6]. The interpretation of the spectra is straightforward and quantitative, facilitated also by the fact that the multipolar excitations are more excitonic than the dipole ones. Our excitement about NIXS is further enhanced by the fact that this element-specific technique is also bulk sensitive, requires only tiny samples not larger than 0.1 mm, and allows for more demanding sample environments. We now have also the first results from our own Max Planck inelastic scattering beamline at PETRA III in Hamburg.
*Work done in close collaboration with Martin Sundermann and Andrea Severing from the Physics Institute 2, University of Cologne, and with Maurits Haverkort, Institute for Theoretical Physics, Heidelberg University.
[1] T. Willers, F. Strigari, N. Hiraoka, Y. Q. Cai, M.W. Haverkort, K.-D. Tsuei, Y. F. Liao, S. Seiro, C. Geibel, F. Steglich, L. H. Tjeng, and A. Severing, Phys. Rev. Lett. 109, 046401 (2012). [2] J.-P. Rueff, J.M. Ablett, F. Strigari, M. Deppe, M.W. Haverkort, L.H. Tjeng, and A. Severing, Phys. Rev. B 91, 201108(R) (2015). [3] M. Sundermann, F. Strigari, T. Willers, H. Winkler, A. Prokofiev, J.M. Ablett, J.-P. Rueff, D. Schmitz, E. Weschke, M. Moretti Sala, A. Al-Zein, A. Tanaka, M.W. Haverkort, D. Kasinathan, L.H. Tjeng, S. Paschen, and A. Severing, Sci. Rep. 5:17937 (2015). [4] M. Sundermann, M.W. Haverkort, S. Agrestini, A. Al-Zein, M. Moretti Sala, Y.K. Huang, M. Golden, A. de Visser, P. Thalmeier, L.H. Tjeng, and A. Severing, PNAS 113 (49), 13898 (2016). [5] M. Sundermann, K. Chen, H. Yavaş, H. Lee, Z. Fisk, M. W. Haverkort, L. H. Tjeng, and A. Severing, Europhys. Lett. 117 (2017) 17003. [6] M. Sundermann, H. Yavaş, K. Chen, D. J. Kim, Z. Fisk, D. Kasinathan, M. W. Haverkort, P. Thalmeier, A. Severing and L. H. Tjeng, Phys. Rev. Lett. 120 (2018) 016402.

CETMC18 colloquium (chair: Inti Sodemann, MPIPKS)

George Sawatzky (University of British Columbia)
An effective molecular orbital approach to electron phonon and pairing interactions in skipped valence an
In high oxidation state oxides like the trivalent nickel oxides, tetravalent Co and Fe oxides, as well as the parent superconductors $BaBiO_3$ and $SrBiO_3$ and high-Tc hole-doped cuprates, the cation electron affinity in the formal valence could end up larger than the O2- ionization potential, leading to a so-called negative charge transfer gap. If the charge transfer energy is strongly negative, then we should really adopt starting electronic configurations such as Ni2+ rather than 3+ or Bi3+ rather than 4+, with compensating holes in the O 2p valence band for charge neutrality.
If in addition the lowest energy cation ionization states are strongly hybridized with the valence O 2p states, the low energy scale electronic structure can be well described by a molecular orbital type of approach (1,2). This is a new approach to the Wannier function description (3), but with explicit inclusion of the O states, which provides a natural path to inclusion of the electron-phonon coupling, charge density wave formation, potential bipolaron formation and pairing interactions in superconductors. We discuss recent developments in this approach and show that the effective electron-phonon coupling involving these molecular-like orbitals is much stronger than that estimated from density functional approaches. We also show that this leads to Peierls-like charge-density-wave ground states, and we describe how the electron-phonon coupling involving the hopping integrals rather than the on-site energies evolves into a large effective attractive interaction between low energy scale electrons. I will also briefly describe how these effects lead to our conclusion that the Li-ion battery material $LiNiO_2$ should be viewed as an "entropy-stabilized charge- and bond-disproportionated glass".

chair: Kliment Kugel

19:00 - 19:45 Giniyat Khaliullin (Max-Planck-Institut für Festkörperforschung)
Kugel-Khomskii models: opening Pandora's box

chair: Leon Balents

09:00 - 09:25 Roderich Moessner (Max-Planck-Institut für Physik komplexer Systeme)
Dynamics of quantum spin liquids

09:25 - 09:50 Simon Trebst (Universität zu Köln)
Field-driven Higgs transition in two-dimensional Kitaev materials

09:50 - 10:15 Claudia Felser (Max-Planck-Institut für Chemische Physik fester Stoffe)
Magnetic Weyl semimetals!
Claudia Felser1, Johannes Gooth1, Kaustuv Manna1, Enke Liu1 and Yan Sun1
1Max Planck Institute Chemical Physics of Solids, Dresden, Germany
Topology, a mathematical concept, has recently become a hot topic in condensed matter physics and materials science. One important criterion for the identification of topological materials is, in the language of chemistry, the inert pair effect of the s-electrons in heavy elements, together with the symmetry of the crystal structure [1]. Besides Weyl and Dirac fermions, new fermions can be identified in compounds via linear and quadratic 3-, 6- and 8-band crossings stabilized by space group symmetries [2]. Binary phosphides are the ideal material class for a systematic study of Dirac and Weyl physics. Weyl points, a new class of topological phases, were also predicted in NbP, NbAs, TaP, MoP and WP2 [3-7]. In magnetic materials the Berry curvature and the classical AHE help to identify interesting candidates. Magnetic Heusler compounds such as Co2YZ [8-10], Mn3Sn [11,12] and Co3Sn2S2 [13] were already identified as Weyl semimetals. The anomalous Hall angle even helps to identify materials in which a QAHE should be possible in thin films. Besides this k-space Berry curvature, Heusler compounds with non-collinear magnetic structures also possess real-space topological states in the form of magnetic antiskyrmions, which have not yet been observed in other materials [14].
[1] Bradlyn et al., Nature 547, 298 (2017); arXiv:1703.02050 [2] Bradlyn et al., Science 353, aaf5037 (2016). [3] Shekhar et al., Nature Physics 11, 645 (2015) [4] Liu et al., Nature Materials 15, 27 (2016) [5] Yang et al., Nature Physics 11, 728 (2015) [6] Shekhar et al., preprint arXiv:1703.03736
[7] Kumar et al., Nature Com., preprint arXiv:1703.04527 [8] Kübler and Felser, Europhys. Lett. 114, 47005 (2016) [9] Chang et al., Scientific Reports 6, 38839 (2016) [10] Kübler and Felser, EPL 108, 67001 (2014) [11] Nayak et al., Science Advances 2, e1501870 (2016) [12] Nakatsuji, Kiyohara and Higo, Nature 527, 212 (2015) [13] Liu et al., preprint arXiv:1712.06722 [14] Nayak et al., Nature 548, 561 (2017)

group photo (to be published on the workshop web page)

chair: Andrea Damascelli

10:45 - 11:10 Markus Braden (Universität zu Köln)
Ferromagnetic and quasiferromagnetic fluctuations in ruthenates
Triplet pairing in Sr$_{2}$RuO$_{4}$ was initially suggested based on the hypothesis of strong ferromagnetic spin fluctuations, but so far there is little evidence for these. Magnetic excitations are dominated by antiferromagnetic incommensurate excitations, but these do not change when entering into the superconducting state for energies well below twice the superconducting gap. This observation renders their dominant role in the superconducting pairing rather unlikely. Using polarized inelastic neutron scattering, we accurately determine the full spectrum of spin fluctuations in Sr$_{2}$RuO$_{4}$. Besides the well-studied incommensurate magnetic fluctuations we do find a sizeable quasiferromagnetic signal, quantitatively consistent with all macroscopic and microscopic probes. We use this result to address the possibility of magnetically-driven triplet superconductivity in Sr$_{2}$RuO$_{4}$. We conclude that, even though the quasiferromagnetic signal is stronger and sharper than previously anticipated, spin fluctuations alone are not enough to generate a triplet state, strengthening the need for additional interactions. The quasiferromagnetic fluctuations in Sr$_{2}$RuO$_{4}$ considerably differ from the expectations for a nearly ferromagnetic material. In contrast, SrRuO$_{3}$ exhibits ferromagnetic order and true ferromagnetic magnons, which show a strange temperature dependence of the anisotropy gap and of the stiffness constant, possibly related to the impact of Weyl fermions.

11:10 - 11:35 Je-Geun Park (Seoul National University)
Orbital physics in Ru oxides
The orbital degree of freedom is one of the least understood degrees of freedom compared with the other three: spin, charge, and lattice. Nonetheless, it has proven to be at the very heart of several intriguing problems of condensed matter physics, and hence has attracted significant attention over the past decades. In this talk, I am going to highlight how the orbital degree of freedom plays an important role in the metal-insulator transition of several Ru compounds, covering several aspects of its physics: Tl2Ru2O7 [1], Li2RuO3 [2, 3], SrRuO3 thin film [4].
[1] Seongsu Lee, et al., Nature Materials 5, 471-476 (2006). [2] J. Park, et al., Scientific Reports 6, 25238 (2016). [3] S. Yun, et al., to be submitted. [4] S. Kang, et al., to be submitted.

11:35 - 12:00 Andrew Millis (Columbia University)
Strain and the Mott transition in and out of equilibrium
Correlation-driven metal-insulator transitions often involve orbital ordering, which couples strongly both to local (octahedral distortion) and long wavelength (strain) lattice distortions. I present a theory of the intertwined electronic and lattice transitions in Ca2RuO4 and show how it accounts for the dramatic variation of the metal-insulator transition with epitaxial strain and for the stripe patterns observed at the metal-insulator interface in materials under current flow.
Generalizations to Ca3Ru2O7, to VO2, and to other correlated electron materials will also be presented.

chair: Achim Rosch

Antoine Georges (College de France, Paris & Flatiron Institute, NYC)
Transition-metal oxides: a dynamical mean-field theory perspective

14:50 - 15:15 Roser Valenti (Johann Wolfgang Goethe-Universität Frankfurt)
On dimers, quasimolecular orbitals, Kitaev physics and more in honeycomb lattices
In this talk I will review recent work, deeply linked to collaborations with Daniel Khomskii, on the electronic and magnetic properties of two-dimensional honeycomb-lattice materials ranging from Li$_2$RuO$_3$, Li$_2$RhO$_3$, Na$_2$IrO$_3$ and $\alpha$-Li$_2$IrO$_3$ to RuCl$_3$, where dimers, quasi-molecular orbitals, Kitaev physics and possible novel states of matter emerge.
[1] Hermann et al., PRB 97, 020104(R) (2018). [2] Biesner et al., arXiv:1802.10060 [3] Winter et al., PRL 120, 077203 (2018). [4] Kimber et al., PRB 89, 081408(R) (2014). [5] Foyevtsova et al., PRB 88, 035107 (2013). [6] Mazin et al., PRL 109, 197201 (2012).

chair: Simon Trebst

15:45 - 16:10 Matthias Vojta (Technische Universität Dresden)
Novel phases in bilayer Kitaev models
Kitaev's honeycomb-lattice spin-1/2 model has become a paradigmatic example for $Z_2$ quantum spin liquids, both gapped and gapless. Here we study the fate of these spin-liquid phases in differently stacked bilayer versions of the Kitaev model. Increasing the ratio between the inter-layer Heisenberg coupling $J_\perp$ and the intra-layer Kitaev couplings $K_{x,y,z}$ destroys the topological spin liquid in favor of a paramagnetic dimer phase. We study phase diagrams as a function of $J_\perp/K$ and Kitaev coupling anisotropies using Majorana-fermion mean-field theory, series expansions, and effective models. We find that the phase diagrams depend sensitively on the nature of the stacking and anisotropy strength. While in some stackings and at strong anisotropies we find a single transition between the Kitaev and dimer phases, other stackings are more involved: most importantly, we prove the existence of two novel macro-spin phases which can be understood in terms of Ising chains which can be either coupled ferromagnetically, or remain degenerate, thus realizing a classical spin liquid. In addition, our results suggest the existence of a flux phase with spontaneous inter-layer coherence.
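For orientation, the model class studied in this talk can be written schematically as (a notational sketch following the abstract's conventions; the stacking-dependent details are deliberately omitted):
$$H = \sum_{\ell=1,2}\,\sum_{\langle ij\rangle_\gamma} K_\gamma\, S^{\gamma}_{i,\ell} S^{\gamma}_{j,\ell} \;+\; J_\perp \sum_{i} \mathbf{S}_{i,1}\cdot\mathbf{S}_{i,2},$$
where $\langle ij\rangle_\gamma$ runs over nearest-neighbor bonds of type $\gamma\in\{x,y,z\}$ within layer $\ell$, the $K_\gamma$ are the intra-layer Kitaev couplings, and the $J_\perp$ term couples sites stacked on top of each other, so that the phase diagram is explored as a function of $J_\perp/K$ and the anisotropy among the $K_\gamma$.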
16:10 - 16:35 Achim Rosch (Universität zu Köln)
Quantum thermal Hall effect in \(RuCl_3\): the role of phonons
The recent observation [1] of a half-integer quantized thermal Hall effect in $\alpha$-RuCl$_3$ is interpreted as a unique signature of a chiral spin liquid with a Majorana edge mode. In the talk we will discuss why it is possible to observe an approximately quantized Hall effect in the presence of phonons [2]. In the experiment phonons carry 99.9% of the heat, and the coupling to phonons destroys ballistic transport in the edge channels. Nevertheless, the thermal Hall conductivity remains approximately quantized, and we argue that the coupling of phonons to the edge mode is a necessary condition for the observation of the quantized thermal Hall effect. The coupling to phonons activates a gravitational anomaly which pumps heat in the transversal direction into the phonon system. Furthermore, we calculate the intrinsic Hall effect of acoustic phonons in a spin liquid and argue that it gives only a tiny correction to the quantized thermal Hall effect.
[1] Y. Kasahara et al., Nature 559, 227-231 (2018) [2] Y. Vinkler-Aviv, A. Rosch, Phys. Rev. X 8, 031032 (2018)

16:35 - 17:00 Manfred Fiebig (ETH Zurich)
Fun stuff to do with multiferroic order parameters
Requirements for "good multiferroics" are tough. They are supposed to have a spontaneous magnetization and polarization, preferably parallel to each other, with a strong magnetoelectric coupling between them. Inevitably, this leads to a multiferroic state that is described by a very complex set of order parameters – complex enough to provide the symmetry degrees of freedom to fulfil so many requirements at once. With the focus on electric-field-controlled magnetic order, it goes unnoticed that these degrees of freedom will permit many functionalities other than a refined magnetoelectric coupling. In my talk, I will describe the quest for such functionalities in our group. Here are some examples of the cases I may discuss: (i) For the multiferroic hexagonal manganites I will show that the amplitude and phase of the order parameter may exhibit different coherence lengths. Taking this into account, we resolve the long-standing controversial question of how exactly the topological ferroelectric state in this system arises. (ii) The emergence of a magnetic bulk phase transition out of the spin structure in the domain walls is shown for (Tb,Dy)FeO$_3$. (iii) Inversion of a ferroelectric and a ferromagnetic multi-domain state in homogeneous external fields is demonstrated: in each domain, the direction of the order parameter is reversed but the domain pattern as such is left untouched.

poster session (focus on posters with odd poster numbers)

chair: Sang-Wook Cheong

09:00 - 09:25 Philipp Gegenwart (Universität Augsburg)
Competing phases in \(Li_2IrO_3\)
Due to the presence of sizable nearest-neighbor bond-dependent Ising interactions between effective spin-1/2 local moments, hexagonal 4d and 5d metal oxides have been intensively scrutinized as candidates for the realization of the Kitaev model, although deviation from ideal bond symmetry and the presence of additional exchange interactions drive these materials away from the Kitaev limit. We discuss recent hydrostatic pressure experiments on $\alpha$- and $\beta$-Li$_2$IrO$_3$ [1,2], which indicate much richer physics, including classical spin-liquid behavior induced by off-diagonal exchange, as well as structural dimerization.
[1] V. Hermann, M. Altmeyer, J. Ebad-Allah, F. Freund, A. Jesche, A.A. Tsirlin, M. Hanfland, P. Gegenwart, I.I. Mazin, D.I. Khomskii, R. Valentí, C.A. Kuntscher, Phys. Rev. B 97, 020104(R) (2018). [2] M. Majumder, R.S. Manna, G. Simutis, J.C. Orain, T. Dey, F. Freund, A. Jesche, R. Khasanov, P.K. Biswas, E. Bykova, N. Dubrovinskaia, L.S. Dubrovinsky, R. Yadav, L. Hozoi, S. Nishimoto, A.A. Tsirlin, P. Gegenwart, arXiv:1802.06819, Phys. Rev. Lett., in press.

09:25 - 09:50 Gang Cao (University of Colorado at Boulder)
Control of quantum states in canted antiferromagnetic insulators
This talk offers a brief review of current experimental studies of iridates [1] and emphasizes discrepancies between experimental confirmation and theoretical proposals that address superconducting, topological and quantum spin liquid phases. It then reports our recent study of electrical-current controlled behavior in iridates [2]. Electrical control of structural and physical properties is a long-sought, but elusive goal of contemporary science and technology.
This work demonstrates that a combination of strong spin-orbit interactions and a canted antiferromagnetic Mott state is sufficient to attain that goal and points the way to novel possibilities for functional materials and devices [2].
References: 1. "The Challenge of Spin-Orbit-Tuned Ground-States in the Iridates: A Key Issues Review", Gang Cao and P. Schlottmann, Reports on Progress in Physics 81, 042502 (2018); https://doi.org/10.1088/1361-6633/aaa979 2. "Electrical Control of Structural and Physical Properties via Spin-Orbit Interactions in Sr2IrO4", G. Cao, J. Terzic, H. D. Zhao, H. Zheng, Peter Riseborough, L. E. DeLong, Phys. Rev. Lett. 120, 017201 (2018); https://doi.org/10.1103/PhysRevLett.120.017201; Editor's Suggestion

09:50 - 10:15 Jak Chakhalian (Rutgers, The State University of New Jersey)
Adventures in the world of topology and strong correlations
For the past decade, condensed matter physics has witnessed a tremendous shift from the understanding of materials based on bands and bonds towards non-trivial geometric properties of the symmetry protected bands where Dirac or Weyl equations govern electrons. In parallel, a new paradigm of quantum topology (QT) has emerged. This QT framework encompasses Majorana and Haldane states, various quantum spin liquids, the FQHE, and all that is characterized by topological order. From the materials standpoint, however, the fundamental challenge is to discover broad materials architectures which can host these exotic phases. In this talk, I will present a recently developed approach collectively known as geometrical lattice engineering and illustrate how a potential quantum spin liquid and the QAHE can be realized in practice.

chair: Roser Valenti

10:45 - 11:10 Judit Romhanyi (Okinawa Institute of Science and Technology)
Spin-orbit dimers and non-collinear phases in \(d^1\) cubic double perovskites
Novel quantum phases of matter arising in heavy transition metal compounds due to the strong relativistic spin-orbit coupling have attracted a lot of interest recently. Current experiments on the molybdenum [1-3] and osmium [4-6] based double perovskites suggest that unusual ordered and disordered quantum states are hosted by these materials. We formulate and study a microscopic spin-orbital model for a family of cubic double perovskites with $d^1$ ions occupying a frustrated fcc sublattice. Relying on variational approaches and a complementary analytical analysis, we find a rich variety of phases emerging from the interplay of Hund's coupling and spin-orbit interaction. The phase diagram contains non-collinear ordered states, with or without net moments, and, remarkably, a large window of a magnetically disordered spin-orbit dimer phase [7]. We discuss the physical origin of the unusual amorphous valence bond state experimentally suggested for Ba$_2${\it B}MoO$_6$ ({\it B}=Y,Lu), and predict possible ordered patterns in Ba$_2${\it B}OsO$_6$ ({\it B}=Na,Li) compounds. Additionally, we provide a theoretical background for the available experimental observations in these materials [7]. The proposed physical picture applies to a broad family of heavy transition metal compounds and helps uncover the origins of magnetism in spin-orbit assisted Mott insulators.
[1] T. Aharen, et al, Phys. Rev. B, 81, 224409 (2010). [2] M. de Vries, et al, Phys. Rev. Lett., 104, 177202 (2010). [3] M. de Vries, et al, New Journal of Physics, 15, 043024 (2013). [4] K. Stitzer, et al, Solid State Sciences, 4, 311 (2002). [5] A. Erickson, et al, Phys. Rev. Lett., 99, 016404 (2007).
[6] A. Steele, et al, Phys. Rev. B 84, 144416 (2011). [7] J. Romhanyi, L. Balents, and G. Jackeli, Phys. Rev. Lett. 118, 217202 (2017).

11:10 - 11:35 Sergey Streltsov (Russian Academy of Sciences, Ekaterinburg)
Spin-orbit-entangled \(j_{eff}\)=1/2 state in a 3d transition metal oxide: \(CuAl_2O_4\)
Spin-orbit (SO) Mott insulators are regarded as a new paradigm of magnetic materials, whose properties are largely influenced by the SO coupling and featured by highly anisotropic bond-dependent exchange interactions between the spin-orbital entangled Kramers doublets, as manifested in 5d iridates and 4d ruthenates. I will show that a very similar situation can be realized in cuprates, when the Cu$^{2+}$ ions reside in a tetrahedral environment, like in spinel compounds. Special attention will be paid to CuAl$_2$O$_4$, which was experimentally found to retain a cubic structure and does not show any long-range magnetic ordering down to very low temperatures (0.5 K). We argue that strong Coulomb correlations and the spin-orbit coupling conspire to suppress the Jahn-Teller distortions in CuAl$_2$O$_4$. The spin-orbit-entangled $j_{eff}$=1/2 state is then naturally realized in the situation of a $t_{2g}^5$ configuration and a degenerate $t_{2g}$ subshell. This in turn explains the unusual magnetic properties of CuAl$_2$O$_4$. Using first-principles electronic structure calculations, we construct a realistic model for the diamond lattice of the Cu$^{2+}$ ions in CuAl$_2$O$_4$ and show that the magnetic properties of this compound are largely controlled by anisotropic compass-type exchange interactions that dramatically modify the magnetic ground state by lifting the spiral spin-liquid degeneracy and stabilizing a commensurate single-q spiral.

11:35 - 12:00 Andrea Damascelli (University of British Columbia)
New approaches in spin and time resolved ARPES
I will review some of the new approaches we have developed in spin and time-resolved ARPES, and their application to unconventional superconductors and Dirac materials.

chair: Dieter Vollhardt

Yoshinori Tokura (RIKEN)
Emergent properties of Dirac and Weyl semimetals of iridates
Fusion of strong correlation and quantum topology may provide a new arena for materials physics toward emerging quantum technology. Here, we target the Ir-oxides with orthodox structures of perovskite ({\it A}IrO$_{3}$) and pyrochlore ({\it R}$_{2}$Ir$_{2}$O$_{7}$), which are both characterized by strong electron correlation and large spin-orbit coupling. In those compounds, intriguing magneto-transport properties emerge in the proximity of the Mott transition, such as unusually high electron mobility, large thermoelectric effect, large topological Hall response, and large magnetoresistance in the symmetry-protected Dirac semimetal states of {\it A}IrO$_{3}$ and the magnetic-order-induced multiple Weyl semimetal states of {\it R}$_{2}$Ir$_{2}$O$_{7}$.

14:50 - 15:15 Leon Balents (University of California, Santa Barbara)
Transport and topology in some exotic quantum states
Topology induces remarkable new phases of electronic matter with emergent types of particles such as Weyl and Majorana fermions. To *observe* these particles is trickier! I will discuss some recent work showing some ways they can be revealed in transport, and give connections to recent experiments on topological semimetals and quantum spin liquids. I may also discuss topological effects in twisted bilayer graphene, and their relation to flat band correlation physics.
chair: Bernd Büchner

15:45 - 16:10 Naoto Nagaosa (The University of Tokyo)
Magnetism and phonon
The electron-phonon interaction in solids is considered to be mainly related to the charge degrees of freedom. However, the spin-phonon interaction is also relevant to a variety of phenomena in magnets. In particular, the modulation of the spin-orbit interaction by phonons has recently turned out to be strong, both experimentally and theoretically. In this talk, I will discuss the interplay between magnetism and phonons in several situations of interest, including the phonon Hall effect and orbital magnetism, nonreciprocal spin-phonon propagation, and ultrasonic attenuation in a magnetic monopole system.

16:10 - 16:35 Takashi Mizokawa (Waseda University)
Effect of oxygen holes in complex transition-metal oxides with small charge-transfer energy
The fundamental electronic structure of transition-metal oxides (TMOs) is characterized by the on-site Coulomb interaction $U$ between d electrons and the charge-transfer energy $\Delta$ from the oxygen 2p to transition-metal d orbitals [1]. In TMOs with small or negative $\Delta$, the effect of oxygen 2p orbitals becomes relevant for the understanding of their physical properties. The exotic electronic structure of such TMOs can be investigated thanks to a combination of advanced crystal growth and x-ray spectroscopy techniques. In the present work, we focus on the effect of oxygen holes in novel perovskite-type TMOs in which both A-site and B-site metals are electronically active. In BiNiO$_3$ and PbCoO$_3$, the valence instability of Bi or Pb is coupled with spin-charge-orbital orderings of Ni or Co 3d electrons, and provides giant negative thermal expansion [2,3]. Recent x-ray photoemission measurements indicate that the unique valence states of BiNiO$_3$ and PbCoO$_3$ are stabilized by O 2p hole transfer between the Ni-O or Co-O bond and the Bi-O or Pb-O bond. In a similar manner, A-site ordered perovskites such as CaCu$_3$Co$_4$O$_{12}$ exhibit charge transfer between A-site and B-site transition-metal ions which is mediated by O 2p holes involved in the A-O and B-O bond formations [4,5]. The effect of oxygen 2p holes on the various valence and magnetic transitions in TMOs will be discussed in detail based on the results of x-ray spectroscopy measurements and model calculations. The authors would like to thank Prof. M. Azuma, Prof. Y. Shimakawa, Prof. M. Takano, Prof. A. Fujimori, Prof. D. I. Khomskii, and Prof. G. A. Sawatzky for the long-term collaborations, and Mr. K. Murota, Mr. K. Yamamoto, and Mr. J. Komiyama for their contributions to the recent works.
[1] D. I. Khomskii, Transition Metal Compounds (Cambridge University Press, 2014). [2] M. Azuma et al., Nat. Commun. 2, 347 (2011). [3] Y. Sakai et al., J. Am. Chem. Soc., 139, 4574 (2017). [4] T. Mizokawa et al., Phys. Rev. B 80, 125105 (2009). [5] M. Mizumaki et al., Phys. Rev. B 84, 094418 (2011).

16:35 - 17:00 Daniel Khomskii (Universität zu Köln)
Oxides vs peroxides
D.I. Khomskii, II. Physikalisches Institut, Universitaet zu Koeln, Germany
In this talk I will discuss some effects occurring in transition metal compounds with small or negative charge transfer gap and with large contribution of ligand (e.g. oxygen) holes. In this case, when a lot of holes are transferred to oxygens (or in general to ligands, e.g. S, Se, Te), one of the options is that instead of the usual oxides, like say Ti4+(O2-)2, one could form peroxides, e.g. with pyrite structure, such as for example Mg2+(O2)2- or Fe2+(S2)2-.
This is a very interesting class of compounds, having nontrivial magnetic and sometimes orbital properties. Specifically, we consider the recently synthesized material FeO2 [1], which, according to our theoretical calculations [2], is a system "in between" the usual dioxides, like TiO2 and VO2, and peroxides M2+(O2)2-: in FeO2 the valence of Fe is neither 4+ as in dioxides nor 2+ as in pyrite, but 3+. This specific material can play a very important role in the physics of the deep Earth's mantle, especially at the early stages of the Earth's history. Peroxides can also be important ingredients in the attempts to make better cathode materials for rechargeable batteries, and in many other applications.
[1] Hu, Q. et al., "FeO2 and FeOOH under deep lower-mantle conditions and Earth's oxygen-hydrogen cycles", Nature 534, 241-244 (2016). [2] S.V. Streltsov, A.O. Shorikov, S.L. Skornyakov, A.I. Poteryaev, D.I. Khomskii, "Unexpected 3+ valence of iron in FeO2, a geologically important material lying 'in between' oxides and peroxides", Sci. Rep. 7, 13005 (2017)

workshop dinner at the restaurant Italienisches Dörfchen

chair: Paul H.M. van Loosdrecht

09:00 - 09:25 Sang-Wook Cheong (Rutgers, The State University of New Jersey)
Broken symmetries, non-reciprocity and multiferroicity
When the motion of an object in one direction is different from its motion in the opposite direction, this is called non-reciprocal directional dichroism (or simply a non-reciprocal effect). The object can be an electron, phonon (lattice wave), magnon (spin wave), or light in crystalline solids, and the best known example is non-reciprocal charge transport (i.e. diode) effects in p-n junctions, where a built-in electric field (E) breaks the directional symmetry. In addition to p-n junctions, numerous technological devices such as optical isolators or spin current diodes can utilize non-reciprocal effects. We first introduce the concept of Symmetry-Operational Equivalence (SOE). Then, we will discuss how non-reciprocal effects can be understood in terms of SOE. Furthermore, we will demonstrate that the SOE approach can readily explain various mechanisms for multiferroicity with magnetism-induced electric polarization.

09:25 - 09:50 Dirk van der Marel (Université de Genève)
Tantalizing \(SrTiO_3\)
TANTALIZING STRONTIUM TITANATE
A. Stucky, Willem Rischau, Dorota Pulmannova, G. Scheerer, Z. Ren, D. Jaccard, J.-M. Poumirol, C. Barreteau, E. Giannini and D. van der Marel* (University of Geneva, Switzerland; * presenting author)
SrTiO3 in its pristine state is a band insulator. Doping with 1 electron per 10'000 formula units suffices to render the material superconducting. The maximum Tc of 0.5 Kelvin occurs for about 1 percent doping. The doped electrons are coupled to the lattice parameters, and from a wealth of experiments it is known that this causes a factor of two mass enhancement, corresponding to the limit of large (and highly mobile) polarons [1]. The pairing interaction in this limit has significant momentum and energy dependence, a state of affairs which is appropriately captured by the Kirzhnits-Maksimov-Khomskii formalism [2]. The insulating parent compound can be made ferroelectric by substituting 18O on the oxygen sites [3,4]. Recently it has been suggested that superconductivity may be mediated by coupling to fluctuations of the ferroelectric order parameter, and that this results in an anomalous isotope effect [5].
Inspired by this prediction we carried out studies where we substituted $^{18}$O for $^{16}$O in SrTiO3, and observed a very strong enhancement of Tc [6,7]. The observed isotope effect is more than a factor of 20 stronger than the BCS prediction and, even more tantalizing, it is of the opposite sign. While this observation is in agreement with Ref. [5], the $^{18}$O-induced Tc enhancement is also a consequence of polaronic band narrowing [6,8]. Clearly the last word hasn't been said on the puzzle of superconductivity in doped SrTiO3.
1. For a review see D. van der Marel et al., Phys. Rev. B 84, 205111 (2011).
2. D. A. Kirzhnits, E. G. Maksimov and D. I. Khomskii, J. Low Temp. Phys. 10, 79 (1973).
3. K. A. Mueller and H. Burkard, Phys. Rev. B 19, 3593-3602 (1979).
4. S. E. Rowley et al., Nat. Phys. 10, 367-372 (2014).
5. J. M. Edge et al., Phys. Rev. Lett. 115, 247002 (2016).
6. A. Stucky et al., Scientific Reports 6, 37582 (2016).
7. W. Rischau et al., to be published (2018).
8. A. S. Alexandrov, Phys. Rev. B 46, 14932 (1992).

09:50 - 10:15 Tanusri Saha Dasgupta (Indian Association for the Cultivation of Science) Hybridization-switching induced Mott transition in \(ABO_3\) perovskites
We propose the concept of a hybridization-switching induced Mott transition which is relevant to a broad class of $ABO_3$ perovskite materials, including $BiNiO_3$ and $PbCrO_3$, which feature extended 6s orbitals on the A-site cation (Bi or Pb). Using ab initio electronic structure and slave rotor theory calculations, we show that such systems exhibit a breathing-phonon-driven A-site to oxygen hybridization-wave instability which conspires with strong correlations on the B-site transition metal ion (Ni or Cr) to induce a Mott insulator. In contrast to perovskites with passive A-site cations, these Mott insulators with active A-site orbitals are shown to undergo a pressure-induced insulator to metal transition accompanied by a colossal volume collapse due to ligand hybridization switching. Work carried out in collaboration with Atanu Paul, Anamitra Mukherjee, Indra Dasgupta, and Arun Paramekanti.

chair: Radu Coldea

10:45 - 11:10 Henrik M. Rønnow (École Polytechnique Fédérale de Lausanne) Spin waves in undoped cuprates - quantum effect, Hubbard heritage and what about the lattice?
The undoped cuprates display a Mott insulating antiferromagnetically ordered ground state whose excitations can, to first order, be described by simple spin-wave theory. I will present three extensions to this picture: i) a quantum effect around the $(0,\pi)$ zone boundary point, which indicates spinon deconfinement or strong spin-wave interaction; ii) a derivation of the spin-wave dispersion by projecting an effective one-band Hubbard model, which allows a quantitative comparison of the hopping in different cuprate families; iii) finally, we point out that inserting the thermal and zero-point ionic motion into the expressions for the distance and angle dependence of $t$ and $J$ leads to significant modulations, posing the question as to whether lattice effects need to be included when describing the magnetism of cuprates.
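For orientation on point (iii) above: at half filling and $t \ll U$, the one-band Hubbard model reduces to a Heisenberg model with superexchange $J = 4t^2/U$ (textbook strong-coupling bookkeeping, not a result of the talk), so a small modulation of the hopping $t$ by thermal or zero-point ionic motion appears doubled in the exchange $J$:

$$H = -t\sum_{\langle ij\rangle,\sigma}\left(c^{\dagger}_{i\sigma}c_{j\sigma}+\mathrm{h.c.}\right)+U\sum_i n_{i\uparrow}n_{i\downarrow} \;\longrightarrow\; J\sum_{\langle ij\rangle}\vec{S}_i\cdot\vec{S}_j,\qquad J=\frac{4t^2}{U},\qquad \frac{\delta J}{J}\simeq 2\,\frac{\delta t}{t}.$$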
11:10 - 11:35 Thomas Lorenz (Universität zu Köln) Quantum phase transitions in spin-chain materials

11:35 - 12:00 Igor Mazin (Naval Research Laboratory) Magnetic interactions: DFT meets Hubbard, or why should we learn foreign languages

chair: John Freeland

Maxim Mostovoy (University of Groningen) Electric excitation of topological magnetic defects in Mott insulators
Mott insulators with competing Heisenberg exchange interactions form a new class of materials in which topological magnetic defects, such as skyrmions, can exist in the absence of inversion symmetry breaking [1-4]. Skyrmions in centrosymmetric materials have more degrees of freedom and show more complex dynamics than skyrmions in chiral magnets. In addition, the electric polarization induced by non-collinear spin textures couples the topological magnetic defects to an applied electric field [5]. This magnetoelectric coupling allows for electric control of skyrmions in Mott insulators accompanied by low energy losses. In my talk I will discuss the stability, dynamics and ferroelectric properties of skyrmions and merons in frustrated magnets. I will also discuss materials that can host these topological defects.
[1] T. Okubo, S. Chung and H. Kawamura, Phys. Rev. Lett. 108, 017206 (2012).
[2] A. O. Leonov and M. Mostovoy, Nature Commun. 6, 8275 (2015); 8, 14394 (2017).
[3] S. Hayami, S.-Z. Lin, and C. D. Batista, Phys. Rev. B 93, 184413 (2016).
[4] Y. A. Kharkov, O. P. Sushkov and M. Mostovoy, Phys. Rev. Lett. 119, 207201 (2017).
[5] S.-W. Cheong and M. Mostovoy, Nature Materials 6, 13 (2007).

14:50 - 15:15 Antoine Maignan (CNRS ENSICAEN) A DK-thlon in some transition metal oxides: major results and recent outcomes
With DK and other co-workers, we have co-authored ten papers (a decathlon). Starting from these common results, obtained mostly on manganites and cobaltites and including spin blockade [1], quantum tunnelling of the magnetization [2] and magnetothermopower [3], recent results will be shown. First, cobaltites and sulphides crystallizing in structures derived from delafossite will be chosen: 2D metals (Fig. 1), bad metals and multiferroics [4]. Second, ferrites containing Jahn-Teller $Fe^{2+}$, such as the "429" $Fe_4(Nb/Ta)_2O_9$ phases [5] or the $Fe^{2+}:Fe^{3+}$-containing $Fe_3BO_5$ vonsenite [6], will be taken as examples to illustrate the richness of the ferrites for multiferroic or magnetoelectric properties. Most of the common work with DK was made possible through several EU projects, including OXSEN, SCOOTMO and SOPRANO.
[1] A. Maignan et al., Phys. Rev. Lett. 93, 026401 (2004)
[2] A. Maignan et al., J. Mater. Chem. 14, 1231 (2004)
[3] A. Maignan et al., J. Phys.: Condens. Matter 15, 2711 (2003)
[4] R. Daou et al., Sci. Technol. Adv. Mater. 18, 919 (2017)
[5] A. Maignan and C. Martin, Phys. Rev. B 97, 161106(R) (2018); Phys. Rev. Mater. (2018)
[6] A. Maignan et al., J. Solid State Chem. 246, 209 (2017)

chair: Markus Braden

15:45 - 16:10 Bernd Büchner (IFW Dresden) Field and pressure-induced spin gaps in the Kitaev-Heisenberg magnet \(\alpha-RuCl_3\)

16:10 - 16:35 Paul H.M. van Loosdrecht (Universität zu Köln) Raman scattering in \(\alpha-RuCl_3\) in high magnetic fields

16:35 - 17:00 Alessandro Revelli (Universität zu Köln) RIXS on edge-sharing and face-sharing iridates

poster session (focus on posters with even poster numbers)

chair: Henrik M. Rønnow
09:00 - 09:25 Oleg Janson (IFW Dresden) DFT+DMFT for honeycomb lattices in (111) oxide heterostructures
The physics of oxide heterostructures is often governed by electronic correlations, implying that charge, orbital and spin degrees of freedom are simultaneously present. The combination of density functional theory and dynamical mean-field theory, DFT+DMFT, is a state-of-the-art method to address such complicated cases [1]. Recently, (111) heterostructures came into focus as prospective correlated analogs of graphene: perovskite bilayers grown in the [111] direction form a buckled honeycomb lattice. Here, we apply DFT+DMFT to explore the electronic, magnetic and topological properties of two different (111) heterostructures: i) bilayers of $SrRuO_3$ (a $t_{2g}$ system) in $SrTiO_3$ and ii) bilayers of $LaNiO_3$ (an $e_g$ system) in $LaAlO_3$. For (111) $SrRuO_3$ bilayers, we find half-metallic ferromagnetism with a remarkably high Curie temperature of 500 K and an ordered magnetic moment of 2 $\mu_B$/Ru [2]. This resilient ferromagnetism is attributed to the orbital degeneracy of the $t_{2g}$ electrons, which is largely preserved in (111) layers, in contrast to conventional (001) heterostructures. Moreover, the topologically nontrivial quantum anomalous Hall (QAH) state can be induced by doping [2]. The strength of the interaction parameters in nickelates is under debate. Hence, for (111) $LaNiO_3$ in $LaAlO_3$, we study a wide range of interaction parameters and find an involved phase diagram comprising four phases: a ferromagnetic metal, a paramagnetic metal, an antiferro-orbitally-ordered insulator, and a paramagnetic insulator [3]. Interestingly, breathing distortions of the $NiO_6$ octahedra have only a minor impact on the phase diagram. By comparing with the experimental data and earlier model DMFT studies, we argue that (111) $LaNiO_3$ bilayers are close to the metal-insulator transition and may exhibit a high magnetoresistance at low temperatures [3].
[1] O. Janson, Z. Zhong, G. Sangiovanni, and K. Held, Dynamical mean field theory for oxide heterostructures, in Spectroscopy of Complex Oxide Interfaces, Springer Series in Materials Science 266, eds. C. Cancellieri and V. Strocov (Springer International Publishing, Basel, 2018), pp. 215-243.
[2] L. Si, O. Janson, G. Li, Z. Zhong, Z. Liao, G. Koster, and K. Held, Phys. Rev. Lett. 119, 026402 (2017).
[3] O. Janson and K. Held, Phys. Rev. B 98, 115118 (2018).

09:25 - 09:50 Radu Coldea (University of Oxford) Quantum effects in the spin dynamics of the frustrated pyrochlore magnets \(Yb_2Ti_2O_7\) and \(Er_2Ti_2O_7\)
We report high-resolution inelastic neutron scattering measurements of the spin dynamics as a function of applied magnetic field in the frustrated quantum pyrochlore magnets Yb2Ti2O7 and Er2Ti2O7, which show persistent zero-point quantum fluctuations in their magnetically ordered ground states. In Yb2Ti2O7 we observe directly how magnons become unstable to decay into a continuum of multi-particle excitations upon decreasing magnetic field, and we attribute this to anomalously strong quantum fluctuations due to the proximity to a phase boundary between ferromagnetic and antiferromagnetic orders for a Kitaev-Gamma Hamiltonian. Er2Ti2O7 is located well inside an antiferromagnetic XY phase, and here magnons remain well-defined throughout most of the spectrum, but we find strong field-dependent quantum renormalizations of their dispersion relations and magnon decay effects at high energies.
[1] J.D. Thompson, P.A. McClarty, D. Prabhakaran, I. Cabrera, T. Guidi, D. Voneshen and R. Coldea, Phys. Rev. Lett. 119, 057203 (2017).

09:50 - 10:15 Joachim Hemberger (Universität zu Köln) Dynamics of magnetic monopoles in the magnetoelectric spin ice
In systems with competing interactions, frustration can create complex ground states with exotic excitations. Spin ice is a solid manifestation of such a degenerate ground state, in which the magnetic degrees of freedom carry zero-point entropy (in analogy to water ice) and for which the corresponding excitations behave like magnetic monopoles [1]. The density of these monopoles is a function of temperature and external magnetic field, and the corresponding phase diagram exhibits a phase transition into a "monopole-ordered" state (again resembling the gas-liquid transition in water). Accordingly, there exists a critical end-point for the monopole condensation [1]. It had been postulated by Daniel Khomskii that the emergent magnetic monopoles in addition carry electric dipole moments [2]. Using dielectric and magnetic spectroscopy we demonstrate the existence of such an electric dipole moment coupled to magnetic monopole excitations in $Dy_2Ti_2O_7$ [3]. Furthermore, we are able to examine the monopole dynamics via this magnetoelectric coupling. We can track the relaxation time of the electrically dipolar contribution down to low temperatures in the mK range as a function of an external magnetic field along the crystalline [111] direction, and are able to identify different contributions to the response function. Analyzing the dynamics at temperatures above the critical end-point, we see the crossover from the conventional slowing down of the fluctuation dynamics to a critical speeding up.
[1] C. Castelnovo, R. Moessner, and S. L. Sondhi, Nature 451, 42 (2008)
[2] D. I. Khomskii, Nature Communications 3, 1 (2012)
[3] C.P. Grams, M. Valldor, M. Garst, and J. Hemberger, Nature Communications 5:4853 (2014)

chair: Giniyat Khaliullin

10:45 - 11:10 John Mitchell (Argonne National Laboratory) Nickel oxides: insights in charge and orbitals from new crystals

11:10 - 11:35 George Jackeli (Universität Stuttgart) Dimer phases of orbitally degenerate quantum antiferromagnets
Orbitally degenerate magnetic compounds are known to often develop nonmagnetic ground states with a spin gap. I will discuss a possible theory behind this phenomenon. I argue that the orbital degrees of freedom induce a spontaneous dimerization of spins and drive them into a highly degenerate nonmagnetic manifold spanned by hard-core dimer (spin-singlet) coverings of the lattice. Two possible mechanisms of lifting this extensive degeneracy will be discussed: (i) order-out-of-disorder due to virtual triplet fluctuations, and (ii) magneto-elastic interaction. They lead to dimer condensation into a valence bond crystal pattern and provide an explanation for the dimerized superstructures seen experimentally.

11:35 - 12:00 Andrzej M. Oles (Jagiellonian University) Entanglement in the one-dimensional \(SU(2)\otimes SU(2)\) models
In insulating states of transition metal oxides with orbital degeneracy, the effective interactions are described in terms of spin-orbital superexchange [1]. Consequently, the resulting models are often inherently frustrated, and the quasi-empirical Goodenough-Kanamori rules may be violated, leading to (intersite) spin-orbital entanglement which excludes the use of a mean-field procedure separating the spin and orbital degrees of freedom [2].
Here we present a brief overview of the earlier study of the one-dimensional (1D) SU(2)$\otimes$SU(2) symmetric models, which show entangled spin-orbital bound states and enhanced spin-orbital fluctuations, both for antiferromagnetic and ferromagnetic exchange [3], and we analyze the respective phase diagrams. We show that in general spin-orbital fluctuations are enhanced near the highly symmetric SU(4) model. Next, we concentrate on a 1D spin-orbital model with lower symmetry that is able to describe a wider class of transition metal oxides [4]. It turns out that, even though such a model exhibits more complex spin-orbital entanglement, amazingly it can under some circumstances be reduced to an exactly solvable free-fermion model. This property gives more insight into the nature of spin-orbital correlations.
[1] A. M. Oleś, J. Phys.: Condens. Matter 24, 313201 (2012).
[2] A. M. Oleś, P. Horsch, L. F. Feiner, and G. Khaliullin, Phys. Rev. Lett. 96, 147205 (2006).
[3] W.-L. You, P. Horsch, and A. M. Oleś, Phys. Rev. B 92, 054423 (2015).
[4] D. Gotfryd, E. Paerschke, K. Wohlfeld et al., in preparation (2018).
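For reference, the 1D SU(2)$\otimes$SU(2) spin-orbital models referred to above are commonly written in the form below (a sketch of the usual parametrization; the conventions for the constants $x$ and $y$ vary between papers). Here $\vec{S}_j$ are spin-$1/2$ operators and $\vec{T}_j$ pseudospin-$1/2$ orbital operators, and at $x=y=1/4$ the symmetry is enhanced to SU(4), the point near which the abstract notes enhanced spin-orbital fluctuations:

$$H = J\sum_{j}\left(\vec{S}_j\cdot\vec{S}_{j+1} + x\right)\left(\vec{T}_j\cdot\vec{T}_{j+1} + y\right).$$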
CommonCrawl
MathOverflow

In what sense does the Fraissean viewpoint show that Model Theory can be done without any formal syntax and deduction rules?

In this post I want to look at an issue I was in doubt about when reading the comment of F. G. Dorais on the post "In model theory, does compactness easily imply completeness?". F. G. Dorais's remark was:

"...The first, which comes through rather clearly, is that Model Theory could ultimately be done without any formal syntax and deduction rules..."

I think F. G. Dorais was talking about Fraisse's development of model theory via back and forth. It is, however, not clear to me that this is more free from syntax and deduction rules than the traditional approach in any meaningful way. I think the Fraissean view does show that Model Theory can be done without a specific choice of syntax. But it seems unreasonable to think that traditional model theorists would believe that a specific choice of syntax matters.

The main question I want to ask: Is there any difference between the Fraissean point of view and the traditional point of view BEYOND switching from formal syntax and deduction rules to their informal counterparts? It is not immediately clear to me that the Fraisseans did anything more than this. If that is the case, then there is nothing genuinely new about the Fraissean point of view compared to the traditional one.

For example, there seems to be no fundamental difference between writing $N \vDash S0+S0=SS0$ and saying out loud "in $N$, one plus one is equal to two". In both cases, we use a language which is ultimately meaningless. There is no more meaning in the utterance of "one" than in writing $S0$. (If a parrot shouted "one plus one is equal to two", its statement would have no meaning.) In both cases meaning is given by the interpretation. The only difference is that the interpretation in the case of "one" is more familiar than in the case of $S0$, and this difference has nothing to do with the subject matter of mathematics. Likewise, there is no meaningful difference between "there exists ... in $M$" and $M \vDash \exists\ldots$.

lo.logic model-theory mathematical-philosophy

asked by abcdxyz

To clarify a little, in my comment I intended "syntax and deduction rules" as a single entity, i.e. some form of proof theory. Your question focuses on the syntax part, which is the least important part of that compound. – François G. Dorais♦ Jan 14 '10 at 16:12

It is true that the above question focuses more on the syntax part. But I wonder whether a similar concern might also arise for the deduction rules. The respective question would be: Is there any difference between the Fraissean point of view and the traditional point of view BEYOND switching from a formal deduction-rule system to informal deduction rules? (After all, the formal deduction rules were the formalization of some informal deduction rules.) The picture is misty here, and I don't know whether this is a correct concern. – abcdxyz Jan 14 '10 at 17:35

Can you illustrate what concerns you about formal/informal deduction rules? Maybe reformulate the question accordingly? Remember that the two approaches look at exactly the same structures and they view them in exactly the same way at the atomic level, i.e. $N \vDash 1+1=2$ has exactly the same meaning in both approaches. – François G. Dorais♦ Jan 14 '10 at 21:48
After your answer, I think I understand better where you see a problem. I don't think you fully appreciate the way of interpreting formulas from Fraïssé's point of view. For simplicity, I will follow your lead and stick with the case of a language with just one relation symbol. It's not hard to generalize, but that would introduce some unnecessary tedium.

First, think about how you would actually define a formula in the Fraïssean style. What you have are the $n$-types (${\sim_\omega}$-equivalence classes of $n$-tuples). There is a (Hausdorff, in fact zero-dimensional) topology on the space $S_n$ of $n$-types which is induced by the ${\sim_p}$-equivalences. A good way to think about this topology is to think of the set $S_n$ of $n$-types as the inverse limit of the ${\sim_p}$-quotients, so the basic clopen sets of the topology on $S_n$ correspond to ${\sim_p}$-equivalence classes for some $p < \omega$. Formulas can then be viewed as clopen sets in $S_n$. The meaning of ${\land}$, ${\lor}$, ${\lnot}$ is clear since clopen sets form a Boolean algebra.

Before thinking about quantifiers, let's see what it means to satisfy a formula $\phi$, i.e. a clopen set in $S_n$. Let's take a structure $(M,R)$ and pick an $n$-tuple $\bar{a}$ from $M$. Then $(M,R,\bar{a})$ has a specific type, which may or may not belong to the clopen set $\phi$ of $S_n$. To (ab)use classical symbolism, we can write $(M,R) \vDash \phi(\bar{a})$ when the type of $(M,R,\bar{a})$ does belong to $\phi$. This gives the usual interpretation of ${\land}$, ${\lor}$, ${\lnot}$.

Returning to quantifiers, the existential quantifier is, in a certain sense, the projection $\exists x_{n+1}:S_{n+1} \to S_n$ which simply forgets the last coordinate (suggested by the dummy variable symbol $x_{n+1}$). More precisely, if $\psi$ is a clopen set in $S_{n+1}$, then the set $\exists x_{n+1}\psi$ of $n$-types that extend to an $(n+1)$-type in $\psi$ is a well-defined subset of $S_n$. The fact that this is a clopen set is however not immediately obvious, nor is the fact that this is correct. (It's easier to think about this when $\psi$ is a basic clopen set, i.e. a ${\sim_p}$-equivalence class for some $p < \omega$. Working through the definitions, you see that $\exists x_{n+1}\psi$ is easy to understand in terms of ${\sim_{p+1}}$-equivalence. It's also easier to see that this is indeed correct.)

Now that we understand how to view formulas from Fraïssé's point of view, how does compactness enter? The Compactness Theorem, in Fraïssé's view, simply says that the spaces $S_n$ are all compact. (Or, in a more restricted sense, that $S_0$ is compact.) In our case, the fact that the spaces $S_n$ are compact is obvious, since they are inverse limits of finite spaces. However, this fact uses the cheat that we're considering a language with only one relation symbol. For the general case, the ultrafilter construction reduces the problem to the case where the language is finite. (This works well in a relational language with constants; to handle functions you need some magic tricks.) The point here is that you prove that the spaces $S_n$ are compact directly; you don't need to know that clopen sets are actually formulas. The Classical Compactness Theorem then follows from the simple observation that formulas are closed sets. The other big theorems follow in the same way. For example, the Omitting Types Theorem follows from the fact that the Baire Category Theorem holds for compact spaces, again without explicit mention of formulas.

What about the Completeness Theorem? Here, you definitely need formulas (or at least sentences), but we know how to interpret those so it's not a big deal. The Compactness Theorem tells us that any inconsistent set of sentences has a finite inconsistent subset. As a collection of deduction rules, we can take all rules $\phi_1,\dots,\phi_k \vdash \psi$ where $\{\phi_1,\dots,\phi_k,\lnot\psi\}$ is an inconsistent set of sentences. This is a horrible system, but it's finitary, sound, and complete for semantical consequence. (You can do something similar if you want deduction rules for formulas, but there's really no point to any of this.) This is a completely useless completeness theorem since it does not give a useful description of this set of deduction rules. You would have a very hard time proving the Gödel Incompleteness Theorems from this...

– François G. Dorais♦
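Summary of the dictionary used in the answer above, writing $S_n^{(p)}$ for the finite set of ${\sim_p}$-classes of $n$-tuples:

$$S_n = \varprojlim_{p<\omega} S_n^{(p)}\ (\text{hence compact}), \qquad \text{formulas in } n \text{ free variables} \;\leftrightarrow\; \text{clopen subsets of } S_n,$$
$$(M,R) \vDash \phi(\bar a) \;\leftrightarrow\; \text{the type of } (M,R,\bar a) \text{ lies in } \phi, \qquad \land,\ \lor,\ \lnot \;\leftrightarrow\; \cap,\ \cup,\ \text{complement},$$
$$\exists x_{n+1} \;\leftrightarrow\; \text{image under the projection } S_{n+1}\to S_n, \qquad \text{Compactness Theorem} \;\leftrightarrow\; \text{compactness of each } S_n.$$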
Thanks a lot. This definitely clarifies a lot of things for me. I have not fully digested the strength of the Fraissean view yet. I think I should take a pen and work it out. Also, I have a vague objection about your comment on the Completeness Theorem. I think that even working on the model side, there is still a deductive system that "forces itself upon us": the deductive system given by intersecting the "formulas", going from a smaller "formula" to a bigger one, and projecting a "formula" to one with fewer variables. Let me check it, but I won't be able to return to this for a few days. – abcdxyz Jan 16 '10 at 19:03

Yes, the Completeness Theorem in the sense of Gödel is still true. In other words, the standard rules of logic still hold and are complete for Fraïssean Model Theory. It's just not obvious to see that unless you happen to know the classical theory too. In practice, there are no sides to take; both views have their advantages and disadvantages. The smart thing to do is to flip sides whenever it's convenient... – François G. Dorais♦ Jan 16 '10 at 19:16

This is a partial answer to the above question. It is too long for a comment. I write it here hoping to hear ideas from those more senior than me, and in case it is useful.

Here is some background; you might skip it if you are familiar with Poizat's definitions. There are two definitions of $p$-equivalence given in Poizat's A Course in Model Theory, one via local isomorphisms and the other via a formal language. I think that to compare the two views, it is crucial to compare these two definitions. Let ${\bf M}=(M, R)$ and ${\bf N}=(N, S)$ be structures with $R, S$ $m$-ary relations.

Formal language definition: Two $n$-tuples $\vec{a},\vec{b}$ are $p$-equivalent if and only if they satisfy the same formulas of quantifier rank at most $p$.

Local isomorphism definition (Fraissean point of view): A local isomorphism $s$ from $\bf M$ to $\bf N$ is an isomorphism between the restriction of $\bf M$ to a finite set $\vec{a}$ and the restriction of $\bf N$ to a finite set $\vec{b}$. The 0-isomorphisms are the local isomorphisms. A local isomorphism $s$ is a $(p+1)$-isomorphism iff:

1) Forth condition: for any element $c$ in $M$ there are $d$ in $N$ and a $p$-isomorphism $t$ which maps $c$ to $d$ and extends $s$.

2) Back condition: for any element $d$ in $N$ there are $c$ in $M$ and a $p$-isomorphism $t$ which maps $c$ to $d$ and extends $s$.
Two $n$-tuples $\vec{a},\vec{b}$ are $p$-equivalent if there is a $p$-isomorphism that maps one onto the other.

First, we try to answer the question about syntax. We can unravel the local isomorphism definition to make it look more like the language one; we get the following. Two $n$-tuples $\vec{a},\vec{b}$ are $p$-equivalent iff, for every alternation of quantifiers running back and forth between the two domains, such as
$$\forall c_p \in M\ \exists d_p \in N,\quad \forall c_{p-1} \in M\ \exists d_{p-1} \in N,\ \dots$$
(and likewise with the roles of $M$ and $N$ exchanged at each level), we have
$$(\text{all atomic statements about } \vec{a}, c_p, c_{p-1}, \dots, c_1) \;\leftrightarrow\; (\text{all atomic statements about } \vec{b}, d_p, d_{p-1}, \dots, d_1).$$

I think I was wrong in the question: there is some genuine difference between the Fraissean view and the traditional view. In both cases we do use (formal or informal) quantifiers. But in the traditional view each quantifier ranges over a single domain, while in the Fraissean view the quantifiers run back and forth between the two domains. However, this difference does NOT show that the Fraissean viewpoint is any more free from language than the traditional view. (The quantifier prefix is even more complicated, in fact; but that is not the point.)

I think the difference is like this: the traditional viewpoint characterizes the local morphisms in terms of invariance (in this case, they preserve the statements with at most $p$ quantifiers), while the Fraissean viewpoint describes the morphisms directly, by induction. Both are useful. The traditional viewpoint is used in the proof of compactness (I don't know if this can be done in an easy way using the Fraissean view). The Fraissean view is important in many applications, for example to show that many theories are complete.

How about deduction rules? I think we can define the deduction rules on the model side in an ad hoc way (or maybe not so ad hoc), but it seems rather irrelevant here. So my previous concern is not correct. I think it is right that model theory can be developed independently of deduction rules. Perhaps that is because two equivalent statements are exactly the same to all models. I still don't understand exactly the relationship between the Fraissean approach and this.
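As a sanity check on the local-isomorphism definition, here is a minimal computational sketch of the back-and-forth recursion for finite structures with a single binary relation. The function names and the brute-force strategy are mine (the recursion enumerates all one-element extensions, so it is exponential and only usable for tiny structures):

```python
def partial_iso(M_rel, N_rel, a, b):
    """Is the map a[i] |-> b[i] a local isomorphism for one binary relation?
    It must be a well-defined injection that preserves the relation both ways."""
    for i in range(len(a)):
        for j in range(len(a)):
            if (a[i] == a[j]) != (b[i] == b[j]):
                return False  # not a well-defined injection
            if ((a[i], a[j]) in M_rel) != ((b[i], b[j]) in N_rel):
                return False  # relation not preserved
    return True

def p_equivalent(M, M_rel, N, N_rel, p, a=(), b=()):
    """Back-and-forth: the tuples a (in M) and b (in N) are p-equivalent."""
    if len(a) != len(b) or not partial_iso(M_rel, N_rel, a, b):
        return False
    if p == 0:
        return True
    # forth: every extension on the M side is matched by one on the N side
    forth = all(any(p_equivalent(M, M_rel, N, N_rel, p - 1, a + (c,), b + (d,))
                    for d in N) for c in M)
    # back: the symmetric condition
    back = all(any(p_equivalent(M, M_rel, N, N_rel, p - 1, a + (c,), b + (d,))
                   for c in M) for d in N)
    return forth and back

# Linear orders with 2 and 3 elements: 1-equivalent but not 2-equivalent.
lt = lambda xs: {(i, j) for i in xs for j in xs if i < j}
M, N = [0, 1], [0, 1, 2]
print(p_equivalent(M, lt(M), N, lt(N), p=1))  # True
print(p_equivalent(M, lt(M), N, lt(N), p=2))  # False
```

The example reproduces the classical fact that the 2-element and 3-element linear orders agree on all formulas of quantifier rank 1 but are separated at rank 2 (the middle point of the 3-chain has an element above and below it), matching the formal-language definition.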
CommonCrawl
qchem.pnpi.spb.ru Quantum Chemistry Laboratory A.V. Oleynichenko, A. Zaitsevskii, N.S. Mosyagin, A.N. Petrov, E. Eliav, A.V. Titov LIBGRPP: A Library for the Evaluation of Molecular Integrals of the Generalized Relativistic Pseudopotential Operator over Gaussian Functions Symmetry, 15, 1 (2023) V.A. Knyazeva, K.N. Lyashchenko, M. Zhang, D. Yu, O.Yu. Andreev Investigation of two-photon $2s{\rightarrow}1s$ decay in one-electron and one-muon ions Phys. Rev. A, 106, 012809 (2022) S.D. Chekhovskoi, D.V. Chubukov, L.V. Skripnikov, A.N. Petrov, L.N. Labzowsky Atomic-level-mixing contribution to the $\mathcal{P},\mathcal{T}$-odd Faraday effect as an enhancement factor in the search for $\mathcal{P},\mathcal{T}$-odd interactions in nature T. Isaev, D. Makinskii, A. Zaitsevskii Radium-containing molecular cations amenable for laser cooling Chemical Physics Letters, 807, 140078 (2022) I. Kurchavov, A. Petrov $\mathcal{P},\mathcal{T}$-odd energy shifts of the $^{173}\mathrm{YbOH}$ molecule D.E. Maison, L.V. Skripnikov Static electric dipole moment of the francium atom induced by axionlike particle exchange D.E. Maison, L.V. Skripnikov, G. Penyazkov, M. Grau, A.N. Petrov $\mathcal{T},\mathcal{P}$-odd effects in the ${\mathrm{LuOH}}^{+}$ cation M.Y. Kaygorodov, D.P. Usov, E. Eliav, Y.S. Kozhedub, A.V. Malyshev, A.V. Oleynichenko, V.M. Shabaev, L.V. Skripnikov, A.V. Titov, I.I. Tupitsyn, A.V. Zaitsevskii Ionization potentials and electron affinities of Rg, Cn, Nh, and Fl superheavy elements V. Krumins, M. Tamanis, R. Ferber, A.V. Oleynichenko, L.V. Skripnikov, A. Zaitsevskii, E.A. Pazyuk, A.V. Stolyarov, A. Pashov The a3Σ+ state of KCs revisited: hyperfine structure analysis and potential refinement Journal of Quantitative Spectroscopy and Radiative Transfer, 108124 (2022) A.V. Oleynichenko, L.V. Skripnikov, A.V. Zaitsevskii, V.V. Flambaum Laser-coolable ${\mathrm{AcOH}}^{+}$ ion for $\mathcal{CP}$-violation searches A. Petrov, A. Zakharova Sensitivity of the YbOH molecule to $\mathcal{P}\mathcal{T}$-odd effects in an external electric field Phys. Rev. A, 105, L050801 (2022) S.G. Semenov, M.E. Bedrina, T.A. Andreeva, A.V. Titov Hydroxylated buckminsterfullerene complexes with endohedral europium atom The European Physical Journal D, 76, 12, 253 (2022) S.G. Semenov, M.E. Bedrina, V.A. Klemeshev, A.V. Titov Quantum chemical modeling of electron-deficient hollow Tl$_k$Pb$_{12-k}$ and Tl$_k$Bi$_{20-k}$ shells and related endohedral complexes (k = 1; 2) International Journal of Quantum Chemistry, e27046 (2022) V.M. Shakhova, D.A. Maltsev, Yu.V. Lomachuk, N.S. Mosyagin, L.V. Skripnikov, A.V. Titov Compound-tunable embedding potential method: analysis of pseudopotentials for Yb in YbF2, YbF3, YbCl2 and YbCl3 crystals Phys. Chem. Chem. Phys., 24, 19333-19345 (2022) L.V. Skripnikov, S.D. Prosnyak Refined nuclear magnetic dipole moment of rhenium: $^{185}\mathrm{Re}$ and $^{187}\mathrm{Re}$ Phys. Rev. C, 106, 054303 (2022) G. Penyazkov, L.V. Skripnikov, A.V. Oleynichenko, A.V. Zaitsevskii Effect of the neutron quadrupole distribution in the TaO+ cation E. Eliav, A. Borschevsky, A. Zaitsevskii, A.V. Oleynichenko, U. Kaldor Relativistic Fock-Space Coupled Cluster Method: Theory and Recent Applications In: Reference Module in Chemistry, Molecular Sciences and Chemical Engineering (2022) A. Zaitsevskii, N.S. Mosyagin, A.V. Oleynichenko, E.
Eliav Generalized relativistic small-core pseudopotentials accounting for quantum electrodynamic effects: Construction and pilot applications A. Zaitsevskii, L.V. Skripnikov, N.S. Mosyagin, T. Isaev, R. Berger, A.A. Breier, T.F. Giesen Accurate ab initio calculations of RaF electronic structure appeal to more laser-spectroscopical measurements The Journal of Chemical Physics, 156, 4, 044306 (2022) A. Zakharova, A. Petrov Impact of ligand deformation on the P,T-violation effects in the YbOH molecule The Journal of Chemical Physics, 157, 15, 154310 (2022) A. Zakharova Rotating and vibrating symmetric-top molecule ${\mathrm{RaOCH}}_{3}$ in fundamental $\mathcal{P}, \mathcal{T}$-violation searches T. Zalialiutdinov, D. Solovyev, D. Chubukov, S. Chekhovskoi, L. Labzowsky Alternative interpretation of relativistic time-reversal and the time arrow Phys. Rev. Res., 4, L022052 (2022) T. Zalialiutdinov, D. Glazov, D. Solovyev Thermal radiative corrections to hyperfine structure of light hydrogenlike systems T.A. Zalialiutdinov, A.A. Anikin, D.A. Soloviev Thermally Induced Stark Shifts of Highly Excited States of Hydrogen Atom Journal of Experimental and Theoretical Physics, 135, 5, 605-610 (2022) Russian version: Журнал экспериментальной и теоретической физики, 162, 5, 615-622 (2022) D.M. Vasileva, K.N. Lyashchenko, A.B. Voitkiv, D. Yu, O.Yu. Andreev Resonant elastic scattering of polarized electrons on H-like ions K.N. Lyashchenko, O.Yu. Andreev, D. Yu QED calculation of two-electron one-photon transition probabilities in He-like ions P.-M. Hillenbrand, K.N. Lyashchenko, S. Hagmann, O.Yu. Andreev, D. Banas, E.P. Benis, A.I. Bondarev, C. Brandau, E. De Filippo, O. Forstner, J. Glorius, R.E. Grisenti, A. Gumberidze, D.L. Guo, M.O. Herdrich, M. Lestinsky, Yu.A. Litvinov, E.V. Pagano, N. Petridis, M.S. Sanjari, D. Schury, U. Spillmann, S. Trotsenko, M. Vockert, A.B. Voitkiv, G. Weber, T. Stohlker Electron-loss-to-continuum cusp in collisions of U$^{89+}$ with N$_{2}$ and Xe K.N. Lyashchenko, V.A. Knyazeva, O.Yu. Andreev, D. Yu Asymmetry in emission of photons with left- and right-hand circular polarizations in two-photon decay V.N. Kutuzov, D.V. Chubukov, L.V. Skripnikov, A.N. Petrov, L.N. Labzowsky P,T-odd Faraday rotation in intracavity absorption spectroscopy with particle beam as a possible way to improve the sensitivity of the search for the time reflection noninvariant effects in nature Annals of Physics, 434, 168591 (2021) D.V. Chubukov, L.V. Skripnikov, A.N. Petrov, V.N. Kutuzov, L.N. Labzowsky $\mathcal{P},\mathcal{T}\text{-}\mathrm{odd}$ Faraday rotation in intracavity absorption spectroscopy with a molecular beam as a possible way to improve the sensitivity of the search for time-reflection-noninvariant effects in nature Photon-spin-dependent contribution to the P,T -odd Faraday rotation effect for atoms Journal of Physics B: Atomic, Molecular and Optical Physics, 54, 5, 055001 (2021) T.A. Isaev, A.V. Zaitsevskii, A. Oleynichenko, E. Eliav, A.A. Breier, T.F. Giesen, R.F. Garcia Ruiz, R. Berger Ab initio study and assignment of electronic states in molecular RaCl Journal of Quantitative Spectroscopy and Radiative Transfer, 269, 107649 (2021) D.E. Maison, V.V. Flambaum, N.R. Hutzler, L.V. Skripnikov Electronic structure of the ytterbium monohydroxide molecule to search for axionlike particles D.E. Maison, L.V. Skripnikov, A.V. Oleynichenko, A.V. Zaitsevskii Axion-mediated electron–electron interaction in ytterbium monohydroxide molecule D.A. Maltsev, Yu.V. Lomachuk, V.M. Shakhova, N.S. 
Mosyagin, L.V. Skripnikov, A.V. Titov Compound-tunable embedding potential method and its application to calcium niobate crystal ${\mathrm{CaNb}}_{2}{\mathrm{O}}_{6}$ with point defects containing tantalum and uranium Phys. Rev. B, 103, 205105 (2021) N.S. Mosyagin, A.V. Oleynichenko, A. Zaitsevskii, A.V. Kudrin, E.A. Pazyuk, A.V. Stolyarov Ab initio relativistic treatment of the a3Π−X1Σ+, a′3Σ+−X1Σ+ and A1Π−X1Σ+ systems of the CO molecule A. Barzakh, A.N. Andreyev, C. Raison, J.G. Cubiss, P. Van Duppen, S. Peru, S. Hilaire, S. Goriely, B. Andel, S. Antalic, M. Al Monthery, J.C. Berengut, J. Bieron, M.L. Bissell, A. Borschevsky, K. Chrysalidis, T.E. Cocolios, T. Day Goodacre, J.-P. Dognon, M. Elantkowska, E. Eliav, G.J. Farooq-Smith, D.V. Fedorov, V.N. Fedosseev, L.P. Gaffney, R.F. Garcia Ruiz, M. Godefroid, C. Granados, R.D. Harding, R. Heinke, M. Huyse, J. Karls, P. Larmonier, J.G. Li, K.M. Lynch, D.E. Maison, B.A. Marsh, P. Molkanov, P. Mosat, A.V. Oleynichenko, V. Panteleev, P. Pyykko, M.L. Reitsma, K. Rezynkina, R.E. Rossel, S. Rothe, J. Ruczkowski, S. Schiffmann, C. Seiffert, M.D. Seliverstov, S. Sels, L.V. Skripnikov, M. Stryjczyk, D. Studer, M. Verlinde, S. Wilman, A.V. Zaitsevskii Large Shape Staggering in Neutron-Deficient Bi Isotopes Phys. Rev. Lett., 127, 192501 (2021) A. Kruzins, V. Krumins, M. Tamanis, R. Ferber, A.V. Oleynichenko, A. Zaitsevskii, E.A. Pazyuk, A.V. Stolyarov Fourier-transform spectroscopy and relativistic electronic structure calculation on the c3Σ+ state of KCs M.Y. Kaygorodov, L.V. Skripnikov, I.I. Tupitsyn, E. Eliav, Y.S. Kozhedub, A.V. Malyshev, A.V. Oleynichenko, V.M. Shabaev, A.V. Titov, A.V. Zaitsevskii Electron affinity of oganesson V.V. Baturo, P.M. Rupasinghe, T.J. Sears, R.J. Mawhorter, J.-U. Grabow, A.N. Petrov Electric-field-dependent $g$ factor for the ground state of lead monofluoride, PbF E. Tiesinga, J. Klos, M. Li, A. Petrov, S. Kotochigova Relativistic aspects of orbital and magnetic anisotropies in the chemical bonding and structure of lanthanide molecules New Journal of Physics, 23, 8, 085007 (2021) Quantum Chemical Study of X@BikPbm, BikPbm∙X, X@SbkSnm, and SbkSnm∙X Clusters Russian Journal of General Chemistry, 91, 2, 241-250 (2021) Russian version: Журнал Общей Химии, 91, 2, 290-300 (2021) S.G. Semenov, M.V. Makarova, M.E. Bedrina, A.V. Titov Quantum-Chemical Model of the Minimal Cluster in Xenotime S.D. Prosnyak, L.V. Skripnikov Effect of nuclear magnetization distribution within the Woods-Saxon model: Hyperfine splitting in neutral Tl L.V. Skripnikov, D.V. Chubukov, V.M. Shakhova The role of QED effects in transition energies of heavy-atom alkaline earth monofluoride molecules: A theoretical study of Ba+, BaF, RaF, and E120F L.V. Skripnikov, A.V. Oleynichenko, A.V. Zaitsevskii, D.E. Maison, A.E. Barzakh Relativistic Fock space coupled-cluster study of bismuth electronic structure to extract the Bi nuclear quadrupole moment L.V. Skripnikov Approaching meV level for transition energies in the radium monofluoride molecule RaF and radium cation Ra+ by including quantum-electrodynamics effects A. Zakharova, I. Kurchavov, A. Petrov Rovibrational structure of the ytterbium monohydroxide molecule and the P,T-violation searches $\mathcal{P}, \mathcal{T}$-odd effects for the RaOH molecule in the excited vibrational state K. Gaul, M.G. Kozlov, T.A. Isaev, R. Berger Chiral Molecules as Sensitive Probes for Direct Detection of $\mathcal{P}$-Odd Cosmic Fields Parity-nonconserving interactions of electrons in chiral molecules with cosmic fields R.F. 
Garcia Ruiz, R. Berger, J. Billowes, C.L. Binnersley, M.L. Bissell, A.A. Breier, A.J. Brinson, K. Chrysalidis, T.E. Cocolios, B.S. Cooper, K.T. Flanagan, T.F. Giesen, R.P. de Groote, S. Franchoo, F.P. Gustafsson, T.A. Isaev, A. Koszorus, G. Neyens, H.A. Perrett, C.M. Ricketts, S. Rothe, L. Schweikhard, A.R. Vernon, K.D.A. Wendt, F. Wienholtz, S.G. Wilkins, X.F. Yang Spectroscopy of short-lived radioactive molecules Nature, 581, 7809, 396-400 (2020) T.A. Isaev Direct laser cooling of molecules Phys. Usp., 63, 3, 289-302 (2020) Russian version: Усп. физ. наук, 190, 3, 313-328 (2020) Yu. Lomachuk, D. Maltsev, N.S. Mosyagin, L. Skripnikov, R.V. Bogdanov, A.V. Titov Compound-tunable embedding potential: Which oxidation state of uranium and thorium as point defects in xenotime is favorable? D.E. Maison, L.V. Skripnikov, V.V. Flambaum, M. Grau Search for CP-violating nuclear magnetic quadrupole moment using the LuOH+ cation N.S. Mosyagin, A.V. Zaitsevskii, A.V. Titov Generalized relativistic effective core potentials for superheavy elements International Journal of Quantum Chemistry, 120, 2, e26076 (2020) A.V. Oleynichenko, A. Zaitsevskii, E. Eliav Towards High Performance Relativistic Electronic Structure Modelling: The EXP-T Program Package In: Supercomputing, Ed: Voevodin, Vladimir and Sobolev, Sergey, 375-386 (2020) A.V. Oleynichenko, A. Zaitsevskii, L.V. Skripnikov, E. Eliav Relativistic Fock Space Coupled Cluster Method for Many-Electron Systems: Non-Perturbative Account for Connected Triple Excitations Symmetry, 12, 7, 1101 (2020) A.V. Oleynichenko, L.V. Skripnikov, A. Zaitsevskii, E. Eliav, V.M. Shabaev Diagonal and off-diagonal hyperfine structure matrix elements in KCs within the relativistic Fock space coupled cluster theory A.N. Petrov, L.V. Skripnikov Energy levels of radium monofluoride RaF in external electric and magnetic fields to search for $P$- and $T,P$-violation effects I.P. Kurchavov, A.N. Petrov Calculation of the energy-level structure of the ${\mathrm{HfF}}^{+}$ cation to search for parity-nonconservation effects M. Li, J. Klos, A. Petrov, H. Li, S. Kotochigova Effects of conical intersections on hyperfine quenching of hydroxyl OH in collision with an ultracold Sr atom Scientific Reports, 10, 1, 14130 (2020) H. Li, S. Jyothi, M. Li, J. Klos, A. Petrov, K.R. Brown, S. Kotochigova Photon-mediated charge exchange reactions between 39K atoms and 40Ca+ ions in a hybrid trap Three-Center Bonds in closo-Sb2Sn10 Clusters Russian Journal of General Chemistry, 90, 877-879 (2020) S.G. Semenov, M.E. Bedrina, A.V. Titov Modeling the Structure of Endohedral Eu@C60 and (Eu@C60)2 Metallofullerenes L.V. Skripnikov, N.S. Mosyagin, A.V. Titov, V.V. Flambaum Actinide and lanthanide molecules to search for strong CP-violation S.D. Prosnyak, D.E. Maison, L.V. Skripnikov Hyperfine structure in thallium atom: Study of nuclear magnetization distribution effects V. Fella, L.V. Skripnikov, W. Nortershauser, M.R. Buchner, H.L. Deubner, F. Kraus, A.F. Privalov, V.M. Shabaev, M. Vogel Magnetic moment of $^{207}\mathrm{Pb}$ and the hyperfine splitting of $^{207}\mathrm{Pb}^{81+}$ Phys. Rev. Research, 2, 013368 (2020) T. Fleig, L.V. Skripnikov P, T-Violating and Magnetic Hyperfine Interactions in Atomic Thallium Symmetry, 12, 4, 498 (2020) Nuclear magnetization distribution effect in molecules: Ra+ and RaF hyperfine structure E.A. Bormotova, A.V. Stolyarov, L.V. Skripnikov, A.V. 
Titov Ab initio study of R-dependent behavior of the hyperfine structure parameters for the (1)1,3Σ+ states of LiRb and LiCs Chem. Phys. Lett., 760, 137998 (2020) D.V. Chubukov, L.V. Skripnikov, L.N. Labzowsky, G. Plunien Nuclear Spin-Dependent Effects of Parity Nonconservation in Ortho-H2 A. Zaitsevskii, A.V. Oleynichenko, E. Eliav Finite-Field Calculations of Transition Properties by the Fock Space Relativistic Coupled Cluster Method: Transitions between Different Fock Space Sectors Symmetry, 12, 11, 1845 (2020) V. Krumins, A. Kruzins, M. Tamanis, R. Ferber, A. Pashov, A.V. Oleynichenko, A. Zaitsevskii, E.A. Pazyuk, A.V. Stolyarov The branching ratio of intercombination A1Σ+∼b3Π→a3Σ+/X1Σ+ transitions in the RbCs molecule: Measurements and calculations S.V. Kozlov, E.A. Bormotova, A.A. Medvedev, E.A. Pazyuk, A.V. Stolyarov, A. Zaitsevskii A first principles study of the spin–orbit coupling effect in LiM (M = Na, K, Rb, Cs) molecules Phys. Chem. Chem. Phys., 22, 2295-2306 (2020) arXiv e-prints (2020) A. Zakharova, S. Semenov, M. Bedrina, A. Titov Quantum-Chemical Study of Yb@C60, Yb@B2C58, and Gd@B3C57 Molecules A.V. Zakharova, M.E. Bedrina A quantum chemical study of endometallofullerenes: Gd@C70, Gd@C82, Gd@C84, and Gd@C90 The European Physical Journal D, 74, 6, 1-6 (2020) D.V. Chubukov, L.V. Skripnikov, L.N. Labzowsky On the Search for the Electric Dipole Moment of the Electron: P-, T-Odd Faraday Effect on a PbF Molecular Beam JETP Letters, 110, 6, 382-386 (2019) K. Gaul, S. Marquardt, T. Isaev, R. Berger Systematic study of relativistic and chemical enhancements of $\mathcal{P},\mathcal{T}$-odd effects in polar diatomic radicals Phys. Rev. A, 99, 032509 (2019) A.V. Kudrin, A. Zaitsevskii, T.A. Isaev, D.E. Maison, L.V. Skripnikov Towards the Search for Thallium Nuclear Schiff Moment in Polyatomic Molecules: Molecular Properties of Thallium Monocyanide (TlCN) ATOMS, 62, 7 (2019) M.V. Makarova, S.G. Semenov, M.E. Bedrina, A.V. Titov Quantum-Chemical Modeling of the First Coordination Sphere of the Metal Cation in Monazite S.G. Semenov, M.E. Bedrina, A.E. Buzin, A.V. Titov Structural Parameters and Electron Transfer in Ytterbium, Lutetium, and Cerium Compounds with Hydrocarbon Monocycles Russian Journal of General Chemistry, 89, 7, 1424-1432 (2019) D.E. Maison, L.V. Skripnikov, V.V. Flambaum Theoretical study of $^{173}\mathrm{YbOH}$ to search for the nuclear magnetic quadrupole moment Interference between the E1 and M1 Amplitudes of the Transition from the H State to C of a ThO Molecule Optics and Spectroscopy, 126, 4, 331-335 (2019) D.V. Chubukov, L.V. Skripnikov, L.N. Labzowsky, V.N. Kutuzov, S.D. Chekhovskoi Evaluation of the $\mathcal{P}, \mathcal{T}$-odd Faraday effect in Xe and Hg atoms W. Nörtershäuser, J. Ullmann, L.V. Skripnikov, Z. Andelkovic, C. Brandau, A. Dax, W. Geithner, C. Geppert, C. Gorges, M. Hammen, V. Hannen, S. Kaufmann, K. König, F. Kraus, B. Kresse, Y.A. Litvinov, M. Lochmann, B. Maass, J. Meisner, T. Murböck, A.F. Privalov, R. Sanchez, B. Scheibe, M. Schmidt, S. Schmidt, V.M. Shabaev, M. Steck, T. Stöhlker, R.C. Thompson, C. Trageser, M. Vogel, J. Vollbrecht, A.V. Volotka The hyperfine puzzle of strong-field bound-state QED Hyperfine Interactions, 240, 1, 51 (2019) D.V. Chubukov, L.V. Skripnikov, V.N. Kutuzov, S.D. Chekhovskoi, L.N. Labzowsky Optical Rotation Approach to Search for the Electric Dipole Moment of the Electron Atoms, 7, 2 (2019) D.E. Maison, L.V. Skripnikov, D.A.
Glazov Many-body study of the $g$ factor in boronlike argon L.V. Skripnikov, A.N. Petrov, A.V. Titov, V.V. Flambaum ${\mathrm{HfF}}^{+}$ as a candidate to search for the nuclear weak quadrupole moment A. Znotins, A. Kruzins, M. Tamanis, R. Ferber, E.A. Pazyuk, A.V. Stolyarov, A. Zaitsevskii Fourier-transform spectroscopy, relativistic electronic structure calculation, and coupled-channel deperturbation analysis of the fully mixed $A^{1}{\mathrm{\Sigma}}_{u}^{+}$ and $b^{3}{\mathrm{\Pi}}_{u}$ states of ${\mathrm{Cs}}_{2}$ E.A. Pazyuk, V.I. Pupyshev, A.V. Zaitsevskii, A.V. Stolyarov Spectroscopy of Diatomic Molecules in an Adiabatic Approximation Russian Journal of Physical Chemistry A, 93, 10, 1865-1872 (2019) M. Hosseinpour Khanmiri, S.Yu. Yanson, E.V. Fomin, A.V. Titov, A.V. Grebeniuk, Yu.S. Polekhovsky, R.V. Bogdanov Uranium as a possible criterion for the hydro-chemical alteration of betafite Physics and Chemistry of Minerals, 45, 6, 549-562 (2018) M. Hosseinpour Khanmiri, R.V. Bogdanov Nuclear Chemical Effects in the Paragenetic Mineral Association Based on Polycrase Radiochemistry, 60, 1, 79-91 (2018) Affiliation: Saint Petersburg State University On the feasibility of determining the 230Th activity in minerals without the addition of a Th radiotracer Applied Radiation and Isotopes, 133, 57-60 (2018) M. Hosseinpour Khanmiri, D. Goldwirt, N. Platonova, S. Janson, Yu. Polekhovsky, R.V. Bogdanov On the identification of Ti-Ta-Nb-oxides in "wiikites" from Karelia. Mining of Mineral Deposits, 12, 1, 28-38 (2018) $\mathcal{P}, \mathcal{T}$-odd Faraday rotation in heavy neutral atoms A.A. Bondarevskaya, D.V. Chubukov, E.A. Mistonova, K.N. Lyashchenko, O.Yu. Andreev, A. Surzhykov, L.N. Labzowsky, G. Plunien, D. Liesen, F. Bosch, T. Stöhlker Considerations towards the possibility of the observation of parity nonconservation in highly charged ions in storage rings Physica Scripta, 93, 2, 025401 (2018) Yu. Demidov, A. Zaitsevskii Adsorption of the astatine species on a gold surface: A relativistic density functional theory study Chemical Physics Letters, 691, 126-130 (2018) T.A. Isaev, R. Berger Towards Ultracold Chiral Molecules CHIMIA International Journal for Chemistry, 72, 6, 375-378 (2018) Yu.V. Lomachuk, Yu.A. Demidov, L.V. Skripnikov, A.V. Zaitsevskii, S.G. Semenov, N.S. Mosyagin, A.V. Titov Calculation of Chemical Shifts of X-Ray-Emission Spectra of Niobium in Niobium(V) Oxides Relative to Metal Russian version: Оптика и спектроскопия, 124, 4, 455-460 (2018) A. Oleynichenko, A. Zaitsevskii, S. Romanov, L.V. Skripnikov, A.V. Titov Global and local approaches to population analysis: Bonding patterns in superheavy element compounds Chemical Physics Letters, 695, 63-68 (2018) A.N. Petrov, L.V. Skripnikov, A.V. Titov, V.V. Flambaum Evaluation of $\mathit{CP}$ violation in ${\mathrm{HfF}}^{+}$ A.N. Petrov Systematic effects in the ${\mathrm{HfF}}^{+}$-ion experiment to search for the electron electric dipole moment V.V. Baturo, I.N. Cherepanov, S.S. Lukashov, A.N. Petrov, S.A. Poretsky, A.M. Pravilov Hyperfine coupling of the iodine $D\,0_u^+$ and $\beta\,1_g$ ion-pair states S. Lukashov, A. Petrov, A.
Pravilov The Iodine Molecule: Insights into Intra- and Intermolecular Perturbation in Diatomic Molecules Springer International Publishing (2018) A Quantum Chemical Study of C60Cl30, C60(OH)30 Molecules and Fe@C60(OH)30 Endocomplex Journal of Structural Chemistry, 59, 3, 506-511 (2018) Russian version: Журнал структурной химии, 59, 3, 530-535 (2018) Quantum Chemical Study of Niobium and Tantalum M4O10 Oxide Clusters and M4O10− Anions M.V. Makarova, S.G. Semenov, R.R. Kostikov A Quantum Chemical Study of the Acidity of Acetylene and 1,2-Dihydrobuckminsterfullerene Derivatives Journal of Structural Chemistry, 59, 1, 43-46 (2018) Russian version: Журнал структурной химии, 59, 1, 51-53 (2018) V.M. Shakhova, S.G. Semenov, Yu.V. Lomachuk, Yu.A. Demidov, L.V. Skripnikov, N.S. Mosyagin, A.V. Zaitsevskii, A.V. Titov Chemical Shift of the K$\alpha$1 and K$\alpha$2 Lines of the X-ray Emission Spectrum of Yb(II)/Yb(III) Fluorides: a Quantum-Chemical Investigation V. Shakhova, S.G. Semenov, Yu. Lomachuk, Yu. Demidov, L. Skripnikov, N.S. Mosyagin, A.V. Zaitsevskii, A.D. Kudashov, A. Titov Chemical shifts of Kα1 and Kα2 lines in X-ray emission spectra of Yb(II)/Yb(III) fluorides: A quantum chemical study Nonlinear Phenomena in Complex Systems, 21, 56-61 (2018) Nuclear spin-independent effects of parity nonconservation in molecule of hydrogen A.J. Geddes, L.V. Skripnikov, A. Borschevsky, J.C. Berengut, V.V. Flambaum, T.P. Rakitzis Enhanced nuclear-spin-dependent parity-violation effects using the $^{199}\mathrm{HgH}$ molecule S. Schmidt, J. Billowes, M.L. Bissell, K. Blaum, R.F.G. Ruiz, H. Heylen, S. Malbrunot-Ettenauer, G. Neyens, W. Nörtershäuser, G. Plunien, S. Sailer, V.M. Shabaev, L.V. Skripnikov, I.I. Tupitsyn, A.V. Volotka, X.F. Yang The nuclear magnetic moment of 208Bi and its relevance for a test of bound-state strong-field QED Physics Letters B, 779, 324-330 (2018) L.V. Skripnikov, S. Schmidt, J. Ullmann, C. Geppert, F. Kraus, B. Kresse, W. Nörtershäuser, A.F. Privalov, B. Scheibe, V.M. Shabaev, M. Vogel, A.V. Volotka New Nuclear Magnetic Moment of $^{209}\mathrm{Bi}$: Resolving the Bismuth Hyperfine Puzzle A. Arslanaliev, H. Kamada, A. Shebeko, M. Stepanova, H. Witala, S. Yakovlev Applications of the Kharkov potential in the theory of nuclear forces and nuclear reactions Problems of Atomic Science and Technology, 115, 3, 3 (2018) A.V. Zaitsevskii, L.V. Skripnikov, A.V. Kudrin, A.V. Oleinichenko, E. Eliav, A.V. Stolyarov Electronic Transition Dipole Moments in Relativistic Coupled-Cluster Theory: the Finite-Field Method A. Zaitsevskii, E. Eliav Padé extrapolated effective Hamiltonians in the Fock space relativistic coupled cluster method International Journal of Quantum Chemistry, 118, 23, e25772 (2018) M.I. Skriplev, R.V. Bogdanov, L.R. Schwink Non-identical thermochemical behavior of 234U and 238U isotopes in metamict britholite Applied Radiation and Isotopes, 119, 1-5 (2017) D.V. Chubukov, L.V. Skripnikov, O.Yu. Andreev, L.N. Labzowsky, G. Plunien Effects of parity nonconservation in a molecule of oxygen Journal of Physics B: Atomic, Molecular and Optical Physics, 50, 10, 105101 (2017) D.V. Chubukov, L.N. Labzowsky $\mathcal{P},\mathcal{T}$-odd Faraday effect in intracavity absorption spectroscopy T.A. Isaev, A.V. Zaitsevskii, E. Eliav Laser-coolable polyatomic molecules with heavy nuclei Yu.V. Lomachuk, D.A. Maltsev, Yu.A. Demidov, N.S. Mosyagin, L.A. Batalov, E. Fomin, R.V. Bogdanov, A.V. Zaitsevskii, A.V.
Titov Calculations of Chemical Shifts of X-ray Emission Spectra and Effective States of Nb Atom in the Niobates Nonlinear Phenomena in Complex Systems, 20, 2, 170-176 (2017) I.V. Abarenkov, V.A. Fedorova, D.A. Maltsev, A.V. Titov Valence Electronic Structure of CaNb2O6 Crystal with Embedding Potential Method N.S. Mosyagin Generalized relativistic effective core potentials for lanthanides A. Oleynichenko, A. Zaitsevskii Projection population analysis for molecules with heavy and superheavy atoms A.N. Petrov, L.V. Skripnikov, A.V. Titov Zeeman interaction in the $^{3}\mathrm{\Delta}_{1}$ state of ${\mathbf{HfF}}^{+}$ to search for the electron electric dipole moment A. Petrov, C. Makrides, S. Kotochigova Laser controlled charge-transfer reaction at low temperatures Rabi frequency of the $H^{3}\mathrm{\Delta}_{1}$ to $C^{1}\mathrm{\Pi}$ transition in ThO: Influence of interaction with electric and magnetic fields M. Li, A. Petrov, C. Makrides, E. Tiesinga, S. Kotochigova Pendular trapping conditions for ultracold polar molecules enforced by external electric fields J.F.E. Croft, C. Makrides, M. Li, A. Petrov, B.K. Kendrick, N. Balakrishnan, S. Kotochigova Universality and chaoticity in ultracold K+KRb chemical reactions Nature Communications, 8, 15897 (2017) Simulation of metal ion coordination sphere in crystals with fluorite structure Russian Journal of General Chemistry, 87, 11, 2750-2753 (2017) Russian version: Журнал общей химии, 87, 11, 1928-1931 (2017) S.G. Semenov, A.V. Titov Valence of an atom and bond indices in the relativistic theory of electronic structure of chemical compounds S.G. Semenov, M.E. Bedrina, M.V. Makarova, A.V. Titov A quantum chemical study of the Fe@C60 endocomplex S.G. Semenov, M.E. Bedrina A quantum chemical study of gallium(III) ($\mu$-oxo)bis[phthalocyaninate] and gallium(III) ($\mu$-oxo)bis[perfluorophthalocyaninate] molecules V.M. Shakhova, Yu.V. Lomachuk, Yu.A. Demidov, L.V. Skripnikov, N.S. Mosyagin, A.V. Zaitsevskii, A.V. Titov Chemical shifts of X-ray emission spectra and effective states of ytterbium in fluorides: embedded cluster modeling of YbF2 and YbF3 crystals Radiation and Applications, 2, 3, 169-174 (2017) Communication: Theoretical study of HfF+ cation to search for the T,P-odd interactions L.V. Skripnikov, A.V. Titov, V.V. Flambaum Enhanced effect of $CP$-violating nuclear magnetic quadrupole moment in a ${\mathrm{HfF}}^{+}$ molecule L.V. Skripnikov, D.E. Maison, N.S. Mosyagin Scalar-pseudoscalar interaction in the francium atom A. Zaitsevskii, N.S. Mosyagin, A.V. Stolyarov, E. Eliav Approximate relativistic coupled-cluster calculations on heavy alkali-metal diatomics: Application to the spin-orbit-coupled ${A}^{1}{\mathrm{\Sigma}}^{+}$ and ${b}^{3}\mathrm{\Pi}$ states of RbCs and ${\mathrm{Cs}}_{2}$ E.V. Puchkova, R.V. Bogdanov, R. Gieré Redox states of uranium in samples of microlite and monazite American Mineralogist, 101, 8, 1884 (2016) Polyatomic Candidates for Cooling of Molecules with Lasers from Simple Theoretical Concepts N.S. Mosyagin, A.V. Zaitsevskii, L.V. Skripnikov, A.V. Titov Generalized relativistic effective core potentials for actinides International Journal of Quantum Chemistry, 116, 4, 301-315 (2016) Near-resonant rovibronic Raman scattering from 0g+ (bb) valence state via the D0u+ ion-pair state in iodine molecule R. Roy, R. Shrestha, A. Green, S. Gupta, M. Li, S. Kotochigova, A. Petrov, C.H. Yuen Photoassociative production of ultracold heteronuclear ${\mathrm{YbLi}}^{*}$ molecules M.V. Makarova, S.G. Semenov, O.A.
Guskova Computational study of structure, electronic, and microscopic charge transport properties of small conjugated diketopyrrolopyrrole-thiophene molecules International Journal of Quantum Chemistry, 116, 20, 1459-1466 (2016) S.G. Semenov, M.E. Bedrina, N.V. Egorov Quantum chemical calculation of spectroscopic and photoelectronic characteristics of [n]staffanes Quantum-chemical study of lutetium and ytterbium bis- and tetrakis(phthalocyaninates) Quantum-chemical study of ytterbium fluorides and of complex F2YbF2CeF2 S.G. Semenov, M.E. Bedrina, N.V. Egorov, A.V. Titov Quantum-chemical study of lutetium, ytterbium, and gadolinium phthalocyaninates PcLnCl Combined 4-component and relativistic pseudopotential study of ThO for the electron electric dipole moment search L.V. Skripnikov, A.V. Titov LCAO-based theoretical study of PbTiO3 crystal to search for parity and time reversal violating interaction in solids Reply to the Comment on "Theoretical study of thorium monoxide for the electron electric dipole moment search: Electronic properties of $H^3\Delta_1$ in ThO" A.V. Zaitsevskii, L.V. Skripnikov, A.V. Titov Chemical bonding and effective atomic states of actinides in higher oxide molecules Mendeleev Communications, 26, 4, 307-308 (2016) A. Zaitsevskii, Yu. Demidov, N.S. Mosyagin, L.V. Skripnikov, A.V. Titov First Principle Based Modeling and Interpretation of Chemical Experiments on Superheavy Element Identification Rad. Applic., 1, 2, 132-137 (2016) R.V. Bogdanov, R.A. Kuznetsov, E.V. Puchkova, E.E. Prudnikov, V.N. Epimahov, A.V. Titov The Geoceramics Based on Apatite ORE Tailings as Matrices for Immobilization of Sr-Cs-fractions of HLW In: Multi Vol. Series "Energy science and technology". Vol. 4: "Nuclear Energy", 641-673 (2015) R.A. Kuznetsov, N.V. Platonova, R.V. Bogdanov A polyphase geoceramic matrix for joint immobilization of the strontium-cesium and rare earth fractions of high-level waste Radiochemistry, 57, 2, 200-206 (2015) A comparative study of molecular hydroxides of element 113 (I) and its possible analogs: Ab initio electronic structure calculations Yu.A. Demidov, A. Zaitsevskii Chemical Pseudo-Homologues of Superheavy Element 113 In: Exotic Nuclei: EXON-2014 - Proceedings of International Symposium, Ed: Penionzhkevich, Y. and Sobolev, Y., 285-289 (2015) T. Maier, H. Kadau, M. Schmitt, M. Wenzel, I. Ferrier-Barbut, T. Pfau, A. Frisch, S. Baier, K. Aikawa, L. Chomaz, M.J. Mark, F. Ferlaino, C. Makrides, E. Tiesinga, A. Petrov, S. Kotochigova Emergence of Chaotic Scattering in Ultracold Er and Dy Phys. Rev. X, 5, 041029 (2015) A. Frisch, M. Mark, K. Aikawa, S. Baier, R. Grimm, A. Petrov, S. Kotochigova, G. Quéméner, M. Lepers, O. Dulieu, F. Ferlaino Ultracold Dipolar Molecules Composed of Strongly Magnetic Atoms A. Dunning, A. Petrov, S.J. Schowalter, P. Puri, S. Kotochigova, E.R. Hudson Photodissociation spectroscopy of the dysprosium monochloride molecular ion ac Stark effect in ThO ${H}^{3}{\mathrm{\Delta}}_{1}$ for the electron electric-dipole-moment search C. Makrides, J. Hazra, G.B. Pradhan, A. Petrov, B.K. Kendrick, T. González-Lezana, N. Balakrishnan, S. Kotochigova Ultracold chemistry with alkali-metal–rare-earth molecules W. Dowd, R.J. Roy, R.K. Shrestha, A. Petrov, C. Makrides, S. Kotochigova, S. Gupta Magnetic field dependent interactions in an ultracold Li–Yb($^3P_2$) mixture Magnetic control of ultra-cold $^6$Li and $^{174}$Yb($^3P_2$) atom mixtures with Feshbach resonances S.G. Semenov, M.V.
Makarova Quantum chemical study Ca@C60 and Sc+@C60 endo complexes in the gas phase and pyridine S.G. Semenov, V.M. Shakhova, M.V. Makarova Quantum chemical study of structure of ionic complexes of I and II groups metals with xenon or krypton M.V. Makarova, S.G. Semenov The estimation of the optical activity of polyatomic molecules by DFT methods Quantum-chemical study of barbaralone in singlet and triplet states Quantum chemical study of ethynylfullerenes Russian Journal of Organic Chemistry, 51, 2, 273-276 (2015) Russian version: Журнал органической химии, 51, 2, 283-286 (2015) A quantum chemical study of diethynyl derivatives of dodecahedrane and buckminsterfullerene in vacuum and in tetrahydrofuran Optics and Spectroscopy, 118, 1, 46-49 (2015) Russian version: Оптика и спектроскопия, 118, 1, 50-53 (2015) V.M. Shakhova, S.G. Semenov, A.V. Titov Quantum chemical study of pentafluorophenylxenonium pentafluorobenzoate in the gas phase and acetonitrile solution Quantum-chemical estimation of the relaxation of equilibrium structure upon radiochemical reactions of iodine-containing molecules and ions L.V. Skripnikov, A.N. Petrov, A.V. Titov, R.J. Mawhorter, A.L. Baum, T.J. Sears, J.-U. Grabow Further investigation of $g$ factors for the lead monofluoride ground state L.V. Skripnikov, A.N. Petrov, N.S. Mosyagin, A.V. Titov, V.V. Flambaum TaN molecule as a candidate for the search for a T,P-violating nuclear magnetic quadrupole moment Theoretical study of ${\mathrm{ThF}}^{+}$ in the search for $T,P$-violation effects: Effective state of a Th atom in ${\mathrm{ThF}}^{+}$ and ThO compounds Theoretical study of thorium monoxide for the electron electric dipole moment search: Electronic properties of H3Δ1 in ThO E.A. Pazyuk, A.V. Zaitsevskii, A.V. Stolyarov, M. Tamanis, R. Ferber Laser synthesis of ultracold alkali metal dimers: optimization and control Russian Chemical Reviews, 84, 10, 1001 (2015) Affiliation: Moscow State University A. Zaitsevskii Plutonium and transplutonium element trioxides: molecular structures, chemical bonding, and isomers A.V. Zaitsevskii, A.V. Polyaev, Yu.A. Demidov, N.S. Mosyagin, Y.V. Lomachuk, A.V. Titov Superheavy Element Chemistry by Relativistic Density Functional Theory Electronic Structure Modeling M. Deminsky, S. Adamson, I. Chernysheva, N. Dyatko, A. Eletzkii, I. Kochetov, A. Napartovich, E. Rykova, L. Sukhanov, S. Umanskii, A. Zaitsevskii, D.J. Smith, T.J. Sommerer, J. Costas, B. Potapkin Comparative nonempirical analysis of emission properties of the Ar–MeI n glow discharge (Me = Ga, Zn, Sn, In, Bi, Tl) Journal of Physics D: Applied Physics, 48, 20, 205202 (2015) Yu.A. Demidov, A.V. Zaitsevskii Simulation of chemical properties of superheavy elements from the island of stability Russian Chemical Bulletin, 63, 8, 1647-1655 (2014) Russian version: Известия Академии наук. Серия химическая, 63, 8, 1647-1655 (2014) Yu. Demidov, A. Zaitsevskii, R. Eichler First principles based modeling of the adsorption of atoms of element 120 on a gold surface A.D. Kudashov, A.N. Petrov, L.V. Skripnikov, N.S. Mosyagin, T.A. Isaev, R. Berger, A.V. Titov Ab initio study of radium monofluoride (RaF) as a candidate to search for parity- and time-and-parity--violation effects A.N. Petrov, L.V. Skripnikov, A.V. Titov, N.R. Hutzler, P.W. Hess, B.R. O'Leary, B. Spaun, D. DeMille, G. Gabrielse, J.M. Doyle Zeeman interaction in ThO $H^{3}{\Delta}_{1}$ for the electron electric-dipole-moment search P. Puri, S.J. Schowalter, S. Kotochigova, A. Petrov, E.R. 
Hudson Action spectroscopy of SrCl+ using an integrated ion trap time-of-flight mass spectrometer A. Frisch, M. Mark, K. Aikawa, F. Ferlaino, J.L. Bohn, C. Makrides, A. Petrov, S. Kotochigova Quantum chaos in ultracold collisions of gas-phase erbium atoms Nature, 507, 475-479 (2014) A. Khramov, A. Hansen, W. Dowd, R.J. Roy, C. Makrides, A. Petrov, S. Kotochigova, S. Gupta Ultracold Heteronuclear Mixture of Ground and Excited State Atoms S.G. Semenov, M.E. Bedrina, V.A. Klemeshev, M.V. Makarova Spin populations and free valences in excited molecules and in radicals Quantum chemical study of the products and hypothetical intermediates in the dehydrobromination of bromobullvalene in furan Russian Journal of Organic Chemistry, 50, 7, 1003-1005 (2014) Russian version: Журнал органической химии, 50, 7, 1021-1023 (2014) Quantum-chemical study of tautomers of reduced forms of anthraquinone Quantum chemical study of $\alpha$-diazocarbonyl bullvalene derivatives and related heterocyclic compounds A quantum-chemical study of intermediates of the 1O2 photogeneration sensitized by buckminsterfullerene and accompanying photochemical reactions A quantum chemical study of the structure of dodecasilsequioxane H12Si12O18 $CP$-Violating Effect of the Th Nuclear Magnetic Quadrupole Moment: Accurate Many-Body Study of ThO L.V. Skripnikov, A.D. Kudashov, A.N. Petrov, A.V. Titov Search for parity- and time-and-parity--violation effects in lead monofluoride (PbF): Ab initio molecular study A.V. Titov, Yu.V. Lomachuk, L.V. Skripnikov Concept of effective states of atoms in compounds to describe properties determined by the densities of valence electrons in atomic cores A. Zaitsevskii, W.H.E. Schwarz Structures and stability of AnO4 isomers, An = Pu, Am, and Cm: a relativistic density functional study R.V. Bogdanov, M.I. Skriplev, A.A. Petrunin, A.V. Titov The thermochemistry of uranium and cerium in native britholite Journal of Nuclear Materials, 440, 1, 440-444 (2013) A.A. Rusakov, Yu.A. Demidov, A. Zaitsevskii Estimating the adsorption energy of element 113 on a gold surface Central European Journal of Physics, 11, 11, 1537-1540 (2013) A comparative study of the chemical properties of element 120 and its homologs A.D. Kudashov, A.N. Petrov, L.V. Skripnikov, N.S. Mosyagin, A.V. Titov, V.V. Flambaum Calculation of the parity- and time-reversal-violating interaction in ${}^{225}$RaO Yu.V. Lomachuk, A.V. Titov Method for evaluating chemical shifts of x-ray emission lines in molecules and solids Phys. Rev. A, 88, 6, 062511 (2013) N.S. Mosyagin, A.N. Petrov, A.V. Titov, A.V. Zaitsevskii Generalized relativistic effective core potential calculations of the adiabatic potential curve and spectroscopic constants for the ground electronic state of the Ca2 molecule A.N. Petrov, L.V. Skripnikov, A.V. Titov, R.J. Mawhorter Centrifugal correction to hyperfine structure constants in the ground state of lead monofluoride V.V. Flambaum, Y.V. Stadnik, M.G. Kozlov, A.N. Petrov Enhanced effects of temporal variation of the fundamental constants in ${}^{2}{\Pi}_{1/2}$-term diatomic molecules: ${}^{207}$Pb${}^{19}$F External field control of spin-dependent rotational decoherence of ultracold polar molecules Molecular Physics, 111, 12-13, 1731-1737 (2013) L.V. Skripnikov, A.N. Petrov, A.V. Titov Communication: Theoretical study of ThO for the electron electric dipole moment search A. Le, T.C. Steimle, L. Skripnikov, A.V. Titov The molecular frame electric dipole moment and hyperfine interactions in hafnium fluoride, HfF J. Lee, J. 
Chen, L.V. Skripnikov, A.N. Petrov, A.V. Titov, N.S. Mosyagin, A.E. Leanhardt Optical spectroscopy of tungsten carbide for uncertainty analysis in electron electric-dipole-moment search L.V. Skripnikov, N.S. Mosyagin, A.V. Titov Relativistic coupled-cluster calculations of spectroscopic and chemical properties for element 120 Theoretical study of the parity and time reversal violating interaction in solids A. Zaitsevskii, N.S. Mosyagin, A.V. Titov, Yu.M. Kiselev Relativistic density functional theory modeling of plutonium and americium higher oxide molecules A.V. Zaitsevskii Molecular anions of uranium fluorides and oxides: First principle based relativistic calculations A. Zaitsevskii, A.V. Titov Interaction of copernicium with gold: Assessment of applicability of simple density functional theories A.V. Zaitsevskii, A.V. Titov, S.S. Mal'kov, I.G. Tananaev, Yu.M. Kiselev On the existence of oxide molecules of plutonium in highest oxidation states Doklady Chemistry, 448, 1, 1-3 (2013) A. Petrov, E. Tiesinga, S. Kotochigova Anisotropy-Induced Feshbach Resonances in a Quantum Dipolar Gas of Highly Magnetic Atoms B. Neyenhuis, B. Yan, S.A. Moses, J.P. Covey, A. Chotia, A. Petrov, S. Kotochigova, J. Ye, D.S. Jin Anisotropic Polarizability of Ultracold Polar $^{40}\mathrm{K}^{87}\mathrm{Rb}$ Molecules K.C. Cossel, D.N. Gresh, L.C. Sinclair, T. Coffey, L.V. Skripnikov, A.N. Petrov, N.S. Mosyagin, A.V. Titov, R.W. Field, E.R. Meyer, E.A. Cornell, J. Ye Broadband velocity modulation spectroscopy of HfF+: Towards a measurement of the electron electric dipole moment Chemical Physics Letters, 546, 1-11 (2012) N.S. Mosyagin, A.N. Petrov, A.V. Titov The effect of the iterative triple and quadruple cluster amplitudes on the adiabatic potential curve in the coupled cluster calculations of the ground electronic state of the Yb dimer Hyperfine and Zeeman interactions of the $a(1)[{}^{3}{\Sigma}_{1}^{+}]$ state of PbO G. Quéméner, J.L. Bohn, A. Petrov, S. Kotochigova Universalities in ultracold reactions of alkali-metal polar molecules W.G. Rellergert, S.T. Sullivan, S. Kotochigova, A. Petrov, K. Chen, S.J. Schowalter, E.R. Hudson Measurement of a Large Chemical Reaction Rate between Ultracold Closed-Shell $^{40}\mathrm{Ca}$ Atoms and Open-Shell $^{174}\mathrm{Yb}^{+}$ Ions Held in a Hybrid Atom-Ion Trap S. Kotochigova, A. Petrov, M. Linnik, J. Kłos, P.S. Julienne Ab initio properties of Li-group-II molecules for ultracold matter studies S. Kotochigova, A. Petrov Anisotropy in the interaction of ultracold dysprosium L.D. Alphei, J. Grabow, A.N. Petrov, R. Mawhorter, B. Murphy, A. Baum, T.J. Sears, T.Z. Yang, P.M. Rupasinghe, C.P. McRaven, N.E. Shafer-Ray Precision spectroscopy of the $^{207}\mathrm{Pb}$$^{19}\mathrm{F}$ molecule: Implications for measurement of $P$-odd and $T$-odd effects K. Chen, S.J. Schowalter, S. Kotochigova, A. Petrov, W.G. Rellergert, S.T. Sullivan, E.R. Hudson Molecular-ion trap-depletion spectroscopy of BaCl${}^{+}$ L.V. Skripnikov, A.V. Titov, A.N. Petrov, N.S. Mosyagin, O.P. Sushkov Enhancement of the electron electric dipole moment in Eu${}^{2+}$ A. Zaitsevskii, A.V. Titov, A.A. Rusakov, C.v. Wüllen Ab initio study of element 113–gold interactions Chemical Physics Letters, 508, 4, 329-331 (2011) N.S. Mosyagin, A. Zaitsevskii, A.V. Titov Shape-consistent Relativistic Effective Potentials of Small Atomic Cores International Review of Atomic and Molecular Physics, 1, 1, 63-72 (2010) N.S. Mosyagin, I.I. Tupitsyn, A.V. 
Titov Precision calculation of the low-lying excited states of the Rf atom Russian version: Радиохимия, 52, 4, 335-338 (2010) K.I. Baklanov, A.N. Petrov, A.V. Titov, M.G. Kozlov Progress toward the electron electric-dipole-moment search: Theoretical study of the PbF molecule A. Zaitsevskii, C. van Wüllen, A.V. Titov Communications: Adsorption of element 112 on the gold surface: Many-body wave function versus density functional theory A. Zaitsevskii, C. van Wüllen, E.A. Rykova, A.V. Titov Two-component relativistic density functional theory modeling of the adsorption of element 114(eka-lead) on gold A.N. Petrov, N.S. Mosyagin, A.V. Titov Theoretical study of low-lying electronic terms and transition moments for $\mathrm{Hf}{\mathrm{F}}^{+}$ for the electron electric-dipole-moment search A.N. Petrov, N.S. Mosyagin, A.V. Titov, A.V. Zaitsevskii, E.A. Rykova Ab initio study of Hg-Hg and E112-E112 van der Waals interactions Physics of Atomic Nuclei, 72, 3, 396-400 (2009) Russian version: Ядерная физика, 72, 3, 429-433 (2009) L.V. Skripnikov, A.N. Petrov, A.V. Titov, N.S. Mosyagin Electron electric dipole moment: Relativistic correlation calculations of the $P,T$-violation effect in the ${^{3}\Delta}_{3}$ state of ${\text{PtH}}^{+}$ O.V. Sizova, L.V. Skripnikov, A.Yu. Sokolov, V.V. Sizov Atomic-orbital-symmetry based σ-, π-, and δ-decomposition analysis of bond orders L.V. Skripnikov, A.N. Petrov, N.S. Mosyagin, V.F. Ezhov, A.V. Titov Ab initio calculation of the spectroscopic properties of TlF− A.V. Zaitsevskii, C.v. Wüllen, A.V. Titov Relativistic pseudopotential model for superheavy elements: applications to chemistry of eka-Hg and eka-Pb Russian version: Успехи химии, 78, 12, 1263-1272 (2009) R.A. Evarestov, M.V. Losev, A.I. Panin, N.S. Mosyagin, A.V. Titov Electronic structure of crystalline uranium nitride: LCAO DFT calculations physica status solidi (b), 245, 1, 114-122 (2008) O.V. Sizova, L.V. Skripnikov, A.Yu. Sokolov Symmetry decomposition of quantum chemical bond orders Journal of Molecular Structure: THEOCHEM, 870, 1, 1-9 (2008) L.V. Skripnikov, N.S. Mosyagin, A.N. Petrov, A.V. Titov On the search for time variation in the fine-structure constant: Ab initio calculation of HfF+ JETP Letters, 88, 9, 578-581 (2008) Russian version: Письма в ЖЭТФ, 88, 9, 668-672 (2008) Calculation of $\sigma$-, $\pi$-, and $\delta$-components of quantum-chemical bond orders A.V. Zaitsevskii, E.A. Rykova, A.V. Titov Theoretical studies on the structures and properties of superheavy element compounds Russian Chemical Reviews, 77, 3, 205 (2008) Russian version: Успехи химии, 77, 3, 211-226 (2008) T.A. Isaev, A.N. Petrov, N.S. Mosyagin, A.V. Titov Search for the nuclear Schiff moment in liquid xenon A.N. Petrov, N.S. Mosyagin, T.A. Isaev, A.V. Titov Theoretical study of $\mathrm{Hf}{\mathrm{F}}^{+}$ in search of the electron electric dipole moment O.V. Sizova, A.Yu. Sokolov, L.V. Skripnikov Quantum-chemical study of donor-acceptor interactions in chelate dicarbonyl complexes of rhodium(I) Russian Journal of Coordination Chemistry, 33, 11, 800-808 (2007) O.V. Sizova, A.Yu. Sokolov, L.V. Skripnikov, V.I. Baranovski Quantum chemical study of the bond orders in the ruthenium, diruthenium and dirhodium nitrosyl complexes Polyhedron, 26, 16, 4680-4690 (2007) O.V. Sizova, L.V. Skripnikov, A.Yu. Sokolov, N.V. Ivanova Rhodium and ruthenium tetracarboxylate nitrosyl complexes: Electronic structure and metal-metal bond Russian Journal of Coordination Chemistry, 33, 8, 588-593 (2007) O.V. Sizova, Yu.S. Varshavskii, L.V. 
Skripnikov Quantum-chemical study of donor-acceptor interactions in rhodium(I) carbonyl carboxylate complexes with phosphine ligands O.V. Sizova, L.V. Skripnikov, A.Yu. Sokolov, O.O. Lyubimova Features of the electronic structure of ruthenium tetracarboxylates with axially coordinated nitric oxide (II) N.S. Mosyagin, T.A. Isaev, A.V. Titov Is E112 a relatively inert element? Benchmark relativistic correlation study of spectroscopic constants in E112H and its cation N.S. Mosyagin, A.N. Petrov, A.V. Titov, I.I. Tupitsyn Generalized RECPs accounting for Breit effects: uranium, plutonium and superheavy elements 112,113,114. In: Recent Advances in the Theory of Chemical and Physical Systems, Ed: J.-P. Julien, J.Maruani, D.Mayou, S.Wilson and G.Delgado-Barrio, 229-251 (2006) A.V. Titov, N.S. Mosyagin, A.N. Petrov, T.A. Isaev, D. DeMille P,T-parity violation effects in polar heavy-atom molecules A. Zaitsevskii, E. Rykova, N.S. Mosyagin, A.V. Titov Towards relativistic ECP / DFT description of chemical bonding in E112 compounds: spin-orbit and correlation effects in E112X versus HgX (X=H, Au) Central European Journal of Physics, 4, 4, 448-460 (2006) E.A. Rykova, A. Zaitsevskii, N.S. Mosyagin, T.A. Isaev, A.V. Titov Relativistic effective core potential calculations of Hg and eka-Hg (E112) interactions with gold: Spin-orbit density functional theory modeling of Hg–Aun and E112–Aun systems In Search of the Electron Electric Dipole Moment: Relativistic Correlation Calculations of the $P,T$-Violating Effect in the Ground State of ${\mathrm{HI}}^{+}$ Phys. Rev. Lett., 95, 163004 (2005) N.S. Mosyagin, A.V. Titov Accounting for correlations with core electrons by means of the generalized relativistic effective core potentials: Atoms Hg and Pb and their compounds A.N. Petrov, A.V. Titov, T.A. Isaev, N.S. Mosyagin, D. DeMille Configuration-interaction calculation of hyperfine and $P,T$-odd constants on $^{207}\mathrm{Pb}\mathrm{O}$ excited states for electron electric-dipole-moment experiments A.V. Titov, N.S. Mosyagin, A.N. Petrov, T.A. Isaev Two-step method for precise calculation of core properties in molecules Benchmark ab initio study of heavy- and superheavy-element systems. PNPI Progress Report (2005) T.A. Isaev, A.N. Petrov, N.S. Mosyagin, A.V. Titov, E. Eliav, U. Kaldor In search of the electron dipole moment: Ab initio calculations on $^{207}\mathrm{PbO}$ excited states Ab initio calculations of the electronic structure of the lead atom and its compounds with the chemical accuracy PhD thesis (on Theoretical Physics), PNPI (2004) A.N. Petrov, N.S. Mosyagin, A.V. Titov, I.I. Tupitsyn Accounting for the Breit interaction in relativistic effective core potential calculations of actinides Journal of Physics B: Atomic, Molecular and Optical Physics, 37, 23, 4621 (2004) A.V. Titov, N.S. Mosyagin, T.A. Isaev, A.N. Petrov Accuracy and efficiency of modern methods for electronic structure calculation on heavy-and superheavy-element compounds Physics of Atomic Nuclei, 66, 6, 1152-1162 (2003) T.A. Isaev, N.S. Mosyagin, A.V. Titov, A.B. Alekseyev, R.J. Buenker GRECP/5e-MRD-CI calculation of the electronic structure of PbH International Journal of Quantum Chemistry, 88, 5, 687-690 (2002) N.S. Mosyagin, A.V. Titov, R.J. Buenker, H.-P. Liebermann, A.B. Alekseyev GRECP/MRD-CI calculations on the Hg atom and HgH molecule A.N. Petrov, N.S. Mosyagin, T.A. Isaev, A.V. Titov, V.F. Ezhov, E. Eliav, U. Kaldor Calculation of $\mathit{P},\mathit{T}$-Odd Effects in $^{205}T\mathrm{lF}$ Including Electron Correlation A.V. 
Titov Generalized Relativistic Effective Potential and restoration of the electronic structure in heavy atom cores and molecules ScD thesis (Doctor of Science degree on Theoretical Physics), PNPI (2002) V.F. Ezhov, M.G. Groshev, T.A. Isaev, V.V. Knjazkov, M.G. Kozlov, G.B. Krygin, N.S. Mosyagin, A.N. Petrov, S.G. Porsev, V.L. Ryabov, A.V. Titov Study of the weak interactions by means of atomic and molecular spectroscopy. Article in PNPI scientific report book (2001) N.S. Mosyagin, A.V. Titov, E. Eliav, U. Kaldor Generalized relativistic effective core potential and relativistic coupled cluster calculation of the spectroscopic constants for the HgH molecule and its cation The Journal of Chemical Physics, 115, 5, 2007-2013 (2001) N.S. Mosyagin, E. Eliav, U. Kaldor Convergence improvement for coupled-cluster calculations Journal of Physics B: Atomic, Molecular and Optical Physics, 34, 3, 339 (2001) A.N. Petrov, A.I. Panin Electronic structure and fluorescence spectrum of the HeO+ cation Optics and Spectroscopy, 90, 3, 367-370 (2001) A.V. Titov, N.S. Mosyagin, A.B. Alekseyev, R.J. Buenker GRECP/MRD-CI calculations of spin-orbit splitting in ground state of Tl and of spectroscopic properties of TlH T.A. Isaev, N.S. Mosyagin, M.G. Kozlov, A.V. Titov, E. Eliav, U. Kaldor Accuracy of RCC-SD and PT2/CI methods in all-electron and RECP calculations on Pb and Pb 2+ N.S. Mosyagin, E. Eliav, A.V. Titov, U. Kaldor Comparison of relativistic effective core potential and all-electron Dirac-Coulomb calculations of mercury transition energies by the relativistic coupled-cluster method Quantum-chemical study of possibility of generating laser radiation on the excimer cations HeNa+, NeNa+, and HeO+ PhD thesis, SPBU (2000) A.V. Titov, N.S. Mosyagin Comments on "Effective Core Potentials" [M.Dolg, Modern Methods and Algorithms of Quantum Chemistry (Ed. by J.Grotendorst, John von Neumann Institute for Computing, Jülich, NIC Series, Vol.1, ISBN 3-00-005618-1, pp.479-508, 2000)] ArXiv Physics e-prints (2000) Generalized Relativistic Effective Core Potential Method: Theory and calculations Rus.J.Phys.Chem., 74, Suppl.2, S376-S387 (2000) A.I. Panin, A.N. Petrov, Y.G. Khait Theoretical study of low-lying electronic states and emission spectra of the excimer ions NaHe+ and NaNe+ Journal of Molecular Structure: THEOCHEM, 490, 1, 189-200 (1999) A. N Petrov, A. Panin Electronic Structure of the Excimer Emitting Systems of the Type Inert Gas-Alkali Metal: The HeNa$^+$ Cation Optics and Spectroscopy, 86, 377-382 (1999) Generalized relativistic effective core potential: Theoretical grounds V.G. Kuznetsov, I.V. Abarenkov, V.A. Batuev, A.V. Titov, I.I. Tupitsyn, N.S. Mosyagin Multiconfigurational calculations of electronic structure Ag2, Ag2$^+$ with effective core potential. II. Spectroscopic constants and low-lying electronic states Russian version: Оптика и спектроскопия, 87, 6, 963-973 (1999) N.S. Mosyagin, M.G. Kozlov, A.V. Titov Electric dipole moment of the electron in the YbF molecule Journal of Physics B: Atomic, Molecular and Optical Physics, 31, 19, L763 (1998) N. Mosyagin, M. Kozlov, A. Titov All-electron Dirac-Coulomb and RECP calculations of excitation energies for mercury atom with combined CI/MBPT2 method N. Mosyagin, A. Titov Comment on "Accurate relativistic effective potentials for the sixth-row main group elements" [J.Chem.Phys. 107, 9975 (1997)] Multiconfigurational calculations of electronic structure Ag2, Ag2$^+$ with effective core potential. I. 
Atomic calculations and generation of effective core potential for Ag N.S. Mosyagin, A.V. Titov, Z. Latajka Generalized relativistic effective core potential: Gaussian expansions of potentials and pseudospinors for atoms Hg through Rn International Journal of Quantum Chemistry, 63, 6, 1107-1122 (1997) M.G. Kozlov, A.V. Titov, N.S. Mosyagin, P.V. Souchko Enhancement of the electric dipole moment of the electron in the BaF molecule Phys. Rev. A, 56, R3326-R3329 (1997) I.V. Abarenkov, V.L. Bulatov, R. Godby, V. Heine, M.C. Payne, P.V. Souchko, A.V. Titov, I.I. Tupitsyn Electronic-structure multiconfiguration calculation of a small cluster embedded in a local-density approximation host Phys. Rev. B, 56, 1743-1750 (1997) A.V. Titov, N.S. Mosyagin, V.F. Ezhov $\mathit{P},\mathit{T}$-Odd Spin-Rotational Hamiltonian for YbF Molecule Phys. Rev. Lett., 77, 5346-5349 (1996) A two-step method of calculation of the electronic structure of molecules with heavy atoms: Theoretical aspect V.A. Batuev, V.G. Kuznetsov, A.V. Titov, I.I. Tupitsyn, N.S. Mosyagin, I.V. Abarenkov Ab initio calculations of Ag2, Ag2+ with Effective Core Potential: Spectroscopic constants and low-lying electronic states Preprint PNPI, 2095, 26 (1996) Self-consistent relativistic effective core potentials for transition metal atoms: Cu, Ag, and Au Structural Chemistry, 6, 4, 317-321 (1995) I.I. Tupitsyn, N.S. Mosyagin, A.V. Titov Generalized relativistic effective core potential. I. Numerical calculations for atoms Hg through Bi The Journal of Chemical Physics, 103, 15, 6548-6555 (1995) N.S. Mosyagin, A.V. Titov, A.V. Tulub Generalized effective-core-potential method: Potentials for the atoms Xe, Pd, and Ag Phys. Rev. A, 50, 2239-2247 (1994) Variational principle for transition matrix International Journal of Quantum Chemistry, 45, 1, 71-85 (1993) Matrix elements of the U(2n) generators in the spin-orbit basis Yu.Yu. Dmitriev, Yu.G. Khait, M.G. Kozlov, L.N. Labzovsky, A.O. Mitrushenkov, A.V. Shtoff, A.V. Titov Calculation of the spin-rotational Hamiltonian including P- and P, T-odd weak interaction terms for HgF and PbF molecules Physics Letters A, 167, 3, 280-286 (1992) A.V. Titov, A.O. Mitrushenkov, I.I. Tupitsyn Effective core potential for pseudo-orbitals with nodes Yu.Yu. Dmitriev, A.V. Titov, A.V. Shtoff Generalized Brillouin Theorem and Calculation of Transition Matrices and Energies for Atoms and Molecules In: Mnogochastichnie effecti v Atomah, Ed: U.Safronova, 90-109 (1988) M.G. Kozlov, V.I. Fomichev, Y.Y. Dmitriev, L.N. Labzovsky, A.V. Titov Calculation of the P- and T-odd spin-rotational Hamiltonian of the PbF molecule Journal of Physics B: Atomic and Molecular Physics, 20, 19, 4939 (1987) Yu.Yu. Dmitriev, A.V. Titov Generalized Brillouin theorem and self-consistent approximation for transition density matrix Vestnic LGU (Leningrad University, Leningrad), 18, 15-21 (1985) Yu.Yu. Dmitriev, M.G. Kozlov, L.N. Labzovsky, A.V. Titov EDM of the PbF molecule induced by P,T-nonconcerving neutral current Preprint LNPI, 1046, 18 (1985)
Check dams and storages beyond trapping sediment, carbon sequestration for climate change mitigation, Northwest Ethiopia
Solomon Addisu1 & Mulatie Mekonnen2

Abstract
Global warming caused by the increased concentration of greenhouse gases (GHGs) in the atmosphere is threatening the existence of life on earth. Reducing the concentration of such gases through sequestration mechanisms on the land surface helps to address the problem. One such method is trapping carbon in the form of soil organic carbon (SOC) together with sediment by implementing sediment trapping practices. Direct field measurements, calculations and laboratory analysis were used. The results show that sediment storage dams (SSDs) trapped ~60.97 × 10³ t of sediment with SOC contents ranging from 14 to 87 g kg⁻¹, and check dams (CDs) trapped 7.8 × 10³ t of sediment with SOC contents ranging from 20 to 290 g kg⁻¹. In total, the studied SSDs and CDs sequestered ~44.68 × 10⁵ kg of SOC together with ~68.8 × 10⁶ kg of sediment. In this study, SSDs and CDs were found to be important practices for sequestering SOC together with sediment. It is therefore concluded that soil and water conservation structures can be used as carbon sequestering methods that reduce the concentration of GHGs in the atmosphere in addition to reducing soil erosion.

Background
Global warming will remain a critical problem in the twenty-first century due to the increased concentration of carbon dioxide and other greenhouse gases (GHGs) in the atmosphere. The atmospheric concentrations of the greenhouse gases carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are increasing over time. In 2011, the concentrations of these GHGs were 391 ppm, 1803 ppb and 324 ppb, exceeding pre-industrial levels by about 40%, 150% and 20%, respectively (IPCC, 2013).
The exchange of carbon (C) between soils and the atmosphere is a significant part of the carbon cycle, since carbon is a major component of soils in the form of organic matter. Globally, the soil organic carbon (SOC) pool stores ~1500 Pg C in the first one meter of soil depth, which is more carbon than is contained in the atmosphere (~800 Pg C) or in terrestrial vegetation (~500 Pg C) (FAO and ITPS, 2015), although its distribution is spatially and temporally variable. The 2300 Gt of C stored in global soils is 3 times the size of the atmospheric C pool and 4.1 times the biotic C pool (Lal, 2003). SOC is the main component of soil organic matter (SOM). According to FAO and ITPS (2015), SOM contains roughly 55–60% C by mass. A high SOM content provides nutrients to plants and improves water availability, which enhances soil fertility and improves crop productivity. Moreover, SOC improves soil structural stability and porosity, which ensures sufficient aeration and water infiltration to support plant growth.
Historically, the carbon flux from land use change was a dominant component of anthropogenic emissions: between 1750 and 2011, about one-third of all anthropogenic GHG emissions derived from land use changes (IPCC, 2014). On a long-term basis, atmospheric CO2 increased from about 180 to 280 ppm since the last glacial period, adding about 220 Pg C to the atmosphere over a 10,000 year period at a rate of ~4.4 Pg C yr⁻¹ (Baldocchi et al., 2016). The global soil water erosion process has been described both as a net C source of around 1 Gt yr⁻¹ (Lal, 2003) and as a net C sink of up to 1.5 Gt yr⁻¹ (Stallard, 1998).
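For readers who want to check the pool figures quoted above, the short Python sketch below (not part of the original article) recomputes the pool ratios from those numbers; the slight disagreement in the second ratio simply reflects that the pool sizes come from two different sources.

```python
# Cross-check of the carbon pool figures quoted above (Lal, 2003; FAO and ITPS, 2015).
# All values in petagrams of carbon (1 Pg C = 1 Gt C).
soil_c = 2300.0        # global soil C pool (Lal, 2003)
atmosphere_c = 800.0   # atmospheric C pool (FAO and ITPS, 2015)
vegetation_c = 500.0   # terrestrial vegetation C pool (FAO and ITPS, 2015)

print(f"soil/atmosphere: {soil_c / atmosphere_c:.1f}x")  # ~2.9, i.e. the "3 times" above
print(f"soil/vegetation: {soil_c / vegetation_c:.1f}x")  # ~4.6; Lal (2003) reports 4.1
# The mismatch in the second ratio arises because the pool sizes are taken
# from two different sources.
```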
The carbon-based GHGs emitted by soil are CO2 and methane (CH4), two of the leading anthropogenically emitted GHGs (IPCC, 2014). This means that dead organic material incorporated into the soil can be a potential source of carbon through its transformation by heterotrophic microorganisms. SOC sequestration is the process by which carbon is fixed from the atmosphere via plants or organic residues and stored in the soil. When dealing with CO2, SOC sequestration involves three stages: (i) the removal of CO2 from the atmosphere via plant photosynthesis; (ii) the transfer of carbon from CO2 to plant biomass; and (iii) the transfer of carbon from plant biomass to the soil, where it is stored in the form of SOC (FAO, 2017).
As a result of soil erosion and deposition processes, the sediments trapped by sediment trapping dams or structures can potentially serve as natural carbon sinks (Poch et al. 2006; Harden et al. 2008; Van Oost et al. 2008; Cao et al. 2009, 2010). The organic carbon content in deposited sediments and eroded material was over twice that of the original soils (Jacinthe et al., 2004). Globally, sedimentation resulting from soil erosion can sequester ~1 Pg C yr⁻¹ (Stallard 1998; Smith et al. 2001). Dean and Gorham (1998) estimated the organic carbon sequestration rate of reservoir sediments at 0.16 × 10¹⁵ g yr⁻¹. Floodplain sediments have also been documented as potential organic carbon stores of up to 0.22 kg C m⁻² yr⁻¹ (DeLaune and White, 2012; Kayranli et al., 2010).
SOC sequestration by check dams (CDs) was evaluated in China (Bao, 2008; Wang et al. 2011) based on sediment volumes and average SOC contents. The results showed that CDs trapped over 21 × 10⁹ m³ of sediment with an average SOC content of ~3.31 g kg⁻¹ (Bao, 2008) and 0.952 Gt (1 Gt = 10⁹ t = 10¹⁵ g) with an average SOC content of 3.4 g kg⁻¹ (Wang et al., 2011).
In Ethiopia, CDs and sediment storage dams (SSDs) have been implemented over large areas as sediment trapping measures and have stored large amounts of sediment (Mekonnen et al., 2015; MERET, 2008). Together with the sediment, these structures trap and store large amounts of organic carbon. However, the amount of carbon sequestered together with the sediment and its climate change mitigation role has not yet been evaluated (MERET, 2008). According to Li et al. (2007) and Cao (2008), although CDs, which are widely used to trap sediments in areas with high soil erosion, act as carbon sinks, only a few assessments of their carbon sequestration have been performed. Therefore, the objectives of this study were to (i) quantify the amount of sediment trapped and stored by check dams (CDs) and sediment storage dams (SSDs) in northwest Ethiopia, (ii) determine the amount of soil organic carbon (SOC) and soil organic matter (SOM) sequestered by CDs and SSDs together with the sediment, and (iii) assess the climate change mitigation role of such sediment trapping structures.

Study area description
The study was conducted in the Amhara National Regional State, northwest Ethiopia (Fig. 1), at eight sediment storage dams constructed at the outlets of micro-watersheds with areas ranging from 35 to 105 ha (Segno Gebeya, Woybila, Shehena Borkena, Enchet Kab, Tigrie Mender, Worka Wotu, Dodota, Wuha Chale) and at six check dams constructed within gullies for the purpose of gully treatment (Rim/Debre Yakob, Minizr/Adibera, Gosheye, Debre Mewi, Debre Tabor and Bure).
Farmland is the dominant land use type in each sub-catchment, covering about 80% of the area, while the remaining ~20% is used as grazing land, eucalyptus plantation and/or bush land. The slopes in the sub-catchments ranged from 0.4 to 31%, with dominant average slopes of 11.6–24%.
Fig. 1 Location map of the study areas
During site selection, differences in soil type, elevation and rainfall amount were considered. Five structures were selected at elevations below 2200 m a.s.l., five structures between 2200 and 2700 m a.s.l., and four structures above 2700 m a.s.l. Differences in soil type were also taken into account: Nitosols, Cambisols, Leptosols, Regosols and Vertisols. Table 1 summarizes the geographic location (X, Y coordinates), mean annual rainfall, soil type and elevation of each study site.
Table 1 Study sites' geographic location, soil type, rainfall and elevation

Measuring trapped sediment
In this study, eight sediment storage dams (SSDs) and six check dams (CDs) were selected, and the amount of sediment trapped behind each structure was measured based on the geometric nature of the gullies, the SSD/CD dimensions and the area of sediment deposition, using GPS and measuring tape. Some of the structures have trapezoidal cross sections and others rectangular ones. Figure 2 shows two of the investigated SSDs.
Fig. 2 Example SSDs constructed to sequester sediment and SOC: two years old (left, Segno Gebeya watershed) and five years old (right, Woybila watershed), northwest highlands of Ethiopia
To calculate the volume (V, m³) of the sediment accumulated behind trapezoidal SSDs/CDs, the cross-sectional area of the deposit (A, m²) was multiplied by its length (L, m), measured from the SSD/CD to the upstream end of the sedimentation (Eq. 1). The cross-sectional area (A) of the trapped sediment is the average of its top and bottom widths (b1 and b2, m) multiplied by its height (h, m), measured from the base of the dam to the sediment surface (Eq. 2). For rectangular SSDs/CDs, the length (L, m) times the width (W, m) times the depth (D, m) of the trapped sediment was used.
$$ V = A \ast L $$ (1)
$$ A = \frac{1}{2}\left(b_{1} + b_{2}\right) \ast h $$ (2)

Dry sediment mass calculation
To convert the sediment volume, which was measured directly in the field, to dry sediment mass, the bulk density of the trapped sediment was estimated using the cylindrical core method (McKenzie et al., 2002; Mekonnen et al. 2015). In the middle of the deposited sediment, a pit 1 to 1.5 m deep (depending on the depth of the deposited sediment) was dug vertically downward, and cylindrical core samples were taken at three locations along the side walls of the pit (upper, middle and lower) by inserting the cylindrical core sampler (100 cm³) into the side wall at the desired depth. The collected samples were oven dried at 105 °C in the laboratory for 24 h, and the dry bulk density was calculated from the oven-dry sediment mass and the known core volume; the difference between the wet and oven-dry masses gives the water content of the sample.
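As a minimal sketch of the volume-to-mass workflow just described (Eqs. 1 and 2 plus the core-derived bulk density), the following Python fragment can be used; the field dimensions in the example are hypothetical, and only the formulas follow the text.

```python
def trapezoidal_volume(b1_m: float, b2_m: float, h_m: float, length_m: float) -> float:
    """Eqs. 1 and 2: deposit volume behind a trapezoidal SSD/CD (m^3)."""
    area_m2 = 0.5 * (b1_m + b2_m) * h_m   # Eq. 2: mean width times height
    return area_m2 * length_m             # Eq. 1

def rectangular_volume(length_m: float, width_m: float, depth_m: float) -> float:
    """Deposit volume behind a rectangular SSD/CD (m^3)."""
    return length_m * width_m * depth_m

def dry_mass_t(volume_m3: float, bulk_density_g_cm3: float) -> float:
    """Dry sediment mass in tonnes; 1 g cm^-3 equals 1 t m^-3, so this is a simple product."""
    return volume_m3 * bulk_density_g_cm3

# Hypothetical field dimensions for one trapezoidal deposit:
v = trapezoidal_volume(b1_m=18.0, b2_m=10.0, h_m=2.5, length_m=60.0)  # 2100 m^3
print(dry_mass_t(v, bulk_density_g_cm3=1.36))  # ~2856 t at the average SSD bulk density
```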
Quantifying sediment organic carbon and organic matter
To quantify the amount of sediment organic carbon (SOC) and sediment organic matter (SOM) sequestered by the SSDs and CDs together with the trapped sediment, pits 1 to 1.5 m deep were dug vertically downward in the middle of the deposited sediment behind the SSDs and CDs, depending on the depth of the trapped sediment. Sampling was done at three locations (upper, middle and lower) along the side walls of each pit. The samples collected from the upper, middle and lower locations were thoroughly mixed, and a 1 kg composite sediment sample was taken for laboratory analysis for each structure, i.e., 14 kg of composite sediment samples were used for laboratory analysis in total. The Walkley-Black titration method (Nelson and Sommers, 1982), one of the cheapest and most rapid methods for the analysis of organic carbon (OC) in soils and sediments, was used. Soil organic matter was calculated by multiplying soil organic carbon by a factor of 1.724.

Trapped sediment
The eight SSDs and six CDs investigated, built from gabions and stone, trapped a total of ~50.5 × 10³ m³ or ~68.8 × 10³ t of sediment (44 × 10³ m³ or 60.97 × 10³ t by the SSDs, and 6.5 × 10³ m³ or 7.8 × 10³ t by the CDs). Sediment bulk density values ranged from 1.18 to 1.53 g cm⁻³ with an average of 1.36 g cm⁻³ for SSD sediments, and from 1.04 to 1.32 g cm⁻³ with an average of 1.23 g cm⁻³ for CD sediments. Bulk density was lower in heavy clay sediment deposits and higher in sandy loam dominated sediments. Overall, the bulk density of the sediment trapped in the SSDs and CDs ranged from 1.04 to 1.53 g cm⁻³ with an average of 1.3 g cm⁻³. Table 2 shows the volume, mass and bulk density of the sediment trapped behind each SSD and CD.
Table 2 Volume and mass of sediment trapped by SSDs and CDs

Sediment organic carbon and organic matter
Through laboratory analysis, the deposited masses of SOC and SOM were determined (Table 3). SOC trapped by SSDs ranged from 14 to 87 g kg⁻¹ of sediment, and SOM trapped by SSDs ranged from 24 to 147 g kg⁻¹ of sediment. SOC trapped by CDs ranged from 20 to 290 g kg⁻¹ of sediment, and SOM trapped by CDs ranged from 35 to 530 g kg⁻¹ of sediment.
Table 3 SOC and SOM trapped by SSDs and CDs
The eight evaluated SSDs trapped ~28 × 10⁵ kg of SOC and 56.6 × 10⁵ kg of SOM together with 61 × 10⁶ kg of sediment, and the six CDs trapped 16.68 × 10⁵ kg of SOC and 29.2 × 10⁵ kg of SOM together with 7.8 × 10⁶ kg of sediment. A kilogram of sediment contains 47–59% more SOM than SOC (Table 3 and Fig. 3). The SOC and SOM contents of a kilogram of sediment showed a direct correlation with R² = 0.99 (Fig. 4), as expected, since SOM was derived from SOC by a constant conversion factor. In general, the studied SSDs and CDs sequestered ~44.68 × 10⁵ kg of SOC and 85.8 × 10⁵ kg of SOM together with ~68.8 × 10⁶ kg of sediment.
Fig. 3 Amount of SOC and SOM contained in a kilogram of sediment collected from the fourteen locations
Fig. 4 Correlation between SOC and SOM contained in a kilogram of sediment collected from the fourteen locations
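Before turning to the discussion, the sketch below illustrates the SOC-to-SOM conversion with the 1.724 factor and back-calculates the average SOC content implied by the totals reported above; it is an illustrative consistency check on this study's figures, not part of the original analysis.

```python
VAN_BEMMELEN = 1.724  # SOM/SOC conversion factor used in the text

def som_from_soc(soc_g_per_kg: float) -> float:
    """SOM content (g per kg of sediment) from a Walkley-Black SOC content."""
    return soc_g_per_kg * VAN_BEMMELEN

# Implied average SOC contents from the totals reported above:
# (kg SOC / kg sediment) * 1000 gives g SOC per kg of sediment.
for name, soc_kg, sediment_kg in [("SSDs", 28e5, 61e6), ("CDs", 16.68e5, 7.8e6)]:
    avg_g_per_kg = soc_kg / sediment_kg * 1000.0
    print(f"{name}: ~{avg_g_per_kg:.0f} g SOC/kg -> ~{som_from_soc(avg_g_per_kg):.0f} g SOM/kg")
# SSDs: ~46 g SOC per kg (within the measured 14-87 g/kg range)
# CDs:  ~214 g SOC per kg (within the measured 20-290 g/kg range)
```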
Role of SSDs and CDs in trapping sediment
Rising soil erosion emphasises the need to trap sediment along sediment transfer pathways. The construction of dams, both large and small, to trap sediment can reduce soil erosion, downstream sedimentation, flooding and other environmental problems. Sediment storage dams (SSDs) and check dams (CDs) are soil and water conservation practices constructed over large areas of Ethiopia by governmental offices and non-governmental organizations with the objective of trapping sediment and reducing soil erosion (Mekonnen et al. 2014; Mekonnen et al. 2015; MERET, 2008). SSDs are physical structures or barriers made of stone or gabions, mostly constructed at the outlets of catchments, while CDs are similar physical structures mostly constructed within gullies. The sediment trapping role of eight SSDs and six CDs was investigated in this study; the results show that the SSDs trapped 44 × 10³ m³ or 60.97 × 10³ t and the CDs trapped 6.5 × 10³ m³ or 7.8 × 10³ t of sediment. Together, the SSDs and CDs trapped a total of ~50.5 × 10³ m³ or ~68.8 × 10³ t of sediment.
Dams are important practices for reducing downstream sedimentation problems. In addition to reducing the sedimentation of downstream reservoirs, the SSDs contributed to conserving soil within the larger catchment and to re-filling and stabilizing gullies. An SSD constructed in the Woybila catchment within a gully, which serves as a temporary drainage channel during the rainy seasons, trapped ~22 × 10³ t of sediment and refilled an 8 m deep and 20 m wide gully in 5 years, reducing the slope gradient by 12% on average, which can slow down runoff and allow time for infiltration and sediment deposition. The sediment trapping role of dams of different sizes has been evaluated by previous studies, which confirmed their contribution. For example, Vorosmarty et al. (2003) noted that the world's 45,000 registered large dams can trap 4–5 billion t yr⁻¹ of sediment. In China, more than 100,000 smaller check dams trapped 21 billion m³ of sediment (Wang et al., 2011). Sougnez et al. (2011) estimated the sediment volume trapped by 20 check dams in southern Spain as ranging from 4 to 920 m³. Sediment trapping dams not only trap sediment but can also be used to estimate the sediment yield of the contributing catchments above the dams, to refill gullies with sediment and to reduce gully channel gradients.

Sediment as a sink for organic carbon
Sediments trapped by sediment trapping dams or structures can potentially serve as carbon sinks (Poch et al. 2006; Harden et al. 2008; Van Oost et al. 2008; Cao et al. 2009, 2010; Zougmoré and Mando, 2010). Globally, sedimentation resulting from soil erosion can sequester ~1 Pg C yr⁻¹ (Stallard 1998; Smith et al. 2001). Bao (2008), Wang et al. (2011) and Li et al. (2007) evaluated the SOC sequestration role of CDs: CDs trapped over 21 × 10⁹ m³ of sediment with an average SOC content of ~3.31 g kg⁻¹ (Bao, 2008), 0.952 Gt of sediment with an average SOC content of 3.4 g kg⁻¹ (Wang et al. 2011), and sediment with SOC contents from 2 to 43 g kg⁻¹ (Li et al., 2007).
In this study, the SSDs and CDs played a substantial role in sequestering carbon in the form of SOC. The SSDs trapped ~60.97 × 10³ t of sediment with SOC contents ranging from 14 to 87 g kg⁻¹, and the CDs trapped 7.8 × 10³ t of sediment with SOC contents ranging from 20 to 290 g kg⁻¹. The SOC contents obtained within the trapped sediments are higher than those of the other studies mentioned above. A potential reason is that the mass of SOC varies spatially because the sources of carbon differ spatially. The possible sources of this SOC are decomposed crop stubble, plant leaves and fertilizers applied to fields by farmers, in addition to the nutrient content of the soil in the area. Moreover, the SOC content of sediments will increase when soil erosion intensifies and decrease when soil erosion is reduced. This means that the quantity of SOC stored in sediments is controlled by the amount and type of organic residues that enter the soil (i.e., the input of organic C to the soil system) (FAO and ITPS, 2015).
The SSDs and CDs also played an important role in trapping SOM. The SOM contents of sediments trapped by SSDs and CDs ranged from 24 to 147 g kg⁻¹ and from 35 to 531 g kg⁻¹ of sediment, respectively. The SOM content of the sediments was higher than their SOC content: a kilogram of sediment contains 47–59% more SOM than SOC, which agrees well with the finding of FAO and ITPS (2015) that SOM contains roughly 55–60% C by mass.
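As a rough illustration of the mitigation framing, the snippet below converts the total trapped SOC to a CO2-equivalent using the standard 44/12 molar mass ratio, under the strong assumption (not made in the original study) that all of this carbon would otherwise have been mineralized and emitted as CO2; it is an upper bound, not a measured flux.

```python
# Upper-bound CO2-equivalent of the SOC retained by the studied structures.
# Hypothetical assumption: all retained SOC would otherwise have been
# mineralized to CO2. The 44/12 factor is the molar mass ratio of CO2 to C;
# the SOC total is the figure reported in this study.
soc_total_kg = 44.68e5
co2_eq_kg = soc_total_kg * 44.0 / 12.0
print(f"~{co2_eq_kg / 1000:.0f} t CO2-equivalent")  # ~16,383 t CO2e
```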
The masses of SOC and SOM contained in a kilogram of sediment also showed a direct correlation with R² = 0.99. The eight evaluated SSDs trapped ~56.6 × 10⁵ kg of SOM together with 61 × 10⁶ kg of sediment, and the six CDs trapped 29.2 × 10⁵ kg of SOM together with 78 × 10⁵ kg of sediment. This shows that SSDs and CDs are also important C sinks. In general, the studied SSDs and CDs sequestered a total of 85.8 × 10⁵ kg of SOM together with ~68.8 × 10⁶ kg of sediment, which is a substantial contribution to reducing the GHG concentration in the atmosphere. It also helps to have good knowledge of the current SOC and SOM sinks, their spatial distribution and their sinking mechanisms in order to inform various stakeholders (e.g., farmers, policy makers, land users) and provide the best opportunities to mitigate climate change.

Conclusion
After carbon enters the soil in the form of organic material from soil fauna and flora, it can persist in the soil for decades, centuries or even millennia. When soil carbon is released into the atmosphere, it becomes an important source of GHGs. Soils are major carbon reservoirs/sinks containing more carbon than the atmosphere and terrestrial vegetation (Lal, 2003; FAO and ITPS, 2015; FAO, 2017). Sediment storage dams (SSDs) and check dams (CDs) were found to be important structural sediment trapping measures, trapping large amounts of carbon at the outlets of small catchments and within gullies. The eight SSDs and six CDs investigated trapped a total of ~50.5 × 10³ m³ or ~68.8 × 10³ t of sediment. In addition to trapping sediment and reducing soil erosion, the SSDs and CDs played a promising role in sequestering soil organic carbon (SOC) and soil organic matter (SOM). The results show that the SSDs and CDs trapped from 14 to 87 and from 20 to 290 g of SOC per kilogram of sediment, respectively. In general, the studied SSDs and CDs sequestered ~44.68 × 10⁵ kg of SOC together with ~68.8 × 10⁶ kg of sediment. This means that sediments are important reservoirs of SOC and play an important role in reducing the amount of organic carbon that could be released to the atmosphere as a GHG. In conclusion, SSDs and CDs have retained a substantial amount of carbon that could otherwise be released to the atmosphere and contribute to global warming; thus, SSDs and CDs can be used as climate change mitigation measures in addition to trapping sediment as soil and water conservation practices.

Abbreviations
CDs: check dams; CH4: methane; FAO: Food and Agricultural Organization; GHGs: greenhouse gases; IPCC: Intergovernmental Panel on Climate Change; SOC: soil organic carbon; SOM: soil organic matter; SSDs: sediment storage dams

References
Baldocchi, D., Y. Ryu, and T. Keenan. 2016. Terrestrial carbon cycle variability, version 1, Issue 5.
Bao, Y.X. 2008. The characteristics and evolution of soil nitrogen in damland and terrace in loess hilly region. Xi'an, China: Northwest Agriculture Forestry University Press.
Cao, S.X. 2008. Impact of spatial and temporal scales on afforestation effects: Response to comment on "Why large-scale afforestation efforts in China have failed to solve the desertification problem". Environmental Science and Technology 42: 7724–7725.
Cao, S.X., L. Chen, and X.X. Yu. 2009. Impact of China's Grain for Green project on the landscape of vulnerable arid and semi-arid agricultural regions: A case study in northern Shaanxi Province. Journal of Applied Ecology 46: 536–543.
Cao, S.X., G.S. Wang, and L. Chen. 2010. Assessing effects of afforestation projects in China. Nature 466: 315.
Dean, W.E., and E. Gorham. 1998. Magnitude and significance of carbon burial in lakes, reservoirs, and peatlands. Geology 26: 535–538.
DeLaune, R.D., and J.R. White. 2012. Will coastal wetlands continue to sequester carbon in response to an increase in global sea level? A case study of the rapidly subsiding Mississippi river deltaic plain. Climatic Change 110: 297–314.
FAO. 2017. Soil organic carbon: The hidden potential. Rome, Italy: Food and Agriculture Organization of the United Nations.
FAO and ITPS. 2015. Status of the world's soil resources. Rome, Italy.
Harden, J.W., A.A. Berhe, M. Torn, J. Harte, S. Liu, and R.F. Stallard. 2008. Soil erosion: Data say C sink. Science 320: 178–179.
IPCC. 2013. Summary for policymakers. In Climate change 2013: The physical science basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, ed. T.F. Stocker, D. Qin, G.K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P.M. Midgley. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.
IPCC. 2014. Climate change 2014: Synthesis report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Geneva: IPCC.
Jacinthe, P.A., R. Lal, L.B. Owens, and D.L. Hothem. 2004. Transport of labile carbon in runoff as affected by land use and rainfall characteristics. Soil and Tillage Research 77: 111–123.
Kayranli, B., M. Scholz, A. Mustafa, and Å. Hedmark. 2010. Carbon storage and fluxes within freshwater wetlands: A critical review. Wetlands 30: 111–124.
Lal, R. 2003. Soil erosion and the global carbon budget. Environment International 29: 437–450.
Li, G.X., Z.B. Li, and X. Wei. 2007. Two key physical characteristics indexes of farmland sediment for check dams in Loess Plateau. Research of Soil and Water Conservation 14: 218–221.
McKenzie, N., K.J. Coughlan, and H. Cresswell. 2002. Soil physical measurement and interpretation for land evaluation. Collingwood, Victoria: CSIRO Publishing.
Mekonnen, M., S.D. Keesstra, J.E.M. Baartman, C.J. Ritsema, and A.M. Melesse. 2015. Evaluating sediment storage dams: Structural off-site sediment trapping measures in Northwest Ethiopia. CIG 41: 7–22.
Mekonnen, M., S.D. Keesstra, L. Stroosnijder, J.E.M. Baartman, and J. Maroulis. 2014. Soil conservation through sediment trapping: A review. Land Degradation & Development 26: 544–556.
MERET. 2008. MERET NEWS: A quarterly newsletter published by the Ministry of Agriculture and Rural Development, MERET project coordination office, Bahir Dar, Ethiopia. Report No. 5: 9–10.
Nelson, D.W., and L.E. Sommers. 1982. Total carbon, organic carbon and organic matter. In Methods of soil analysis. Part 2: Chemical and microbiological properties. Madison, Wisconsin.
Poch, R.M., J.W. Hopmans, J.W. Six, D.E. Rolston, and J.L. McIntyre. 2006. Considerations of a field-scale soil carbon budget for furrow irrigation. Agriculture, Ecosystems & Environment 113: 391–398.
Smith, S.V., W.H. Renwick, R.W. Buddemeier, and C.J. Crossland. 2001. Methane oxidation in a peatland core. Global Biogeochemical Cycles 15: 697–707.
Sougnez, N., B. van Wesemael, and V. Vanacker. 2011. Low erosion rates measured for steep, sparsely vegetated catchments in Southeast Spain. Catena 84: 1–11.
Stallard, R.F. 1998. Terrestrial sedimentation and the carbon cycle: Coupling weathering and erosion to carbon burial. Global Biogeochemical Cycles 12: 231–257.
Van Oost, K., J. Six, G. Govers, T.A. Quine, and S. Gryze. 2008. Response to "Soil erosion: A carbon sink or source?". Science 319: 1042.
Vorosmarty, C.J., M. Meybeck, B. Fekete, K. Sharma, P. Green, and J.P.M. Syvitski. 2003. Anthropogenic sediment retention: Major global impact from registered river impoundments. Global and Planetary Change 39: 169–190.
Wang, Y., B. Fu, L. Chen, Y. Lü, and Y. Gao. 2011. Check dam in the Loess Plateau of China: Engineering for environmental services and food security. Environmental Science and Technology 45: 10298–10299.
Zougmoré, R., and A. Mando. 2010. Benefits of integrated soil fertility and water management in semi-arid West Africa: An example study in Burkina Faso. Nutrient Cycling in Agroecosystems 88: 17–27.

Acknowledgements
This study would never have been completed without the contribution of many people, to whom we would like to express our gratitude. The administrative kebeles' development agents, district agricultural officials, local guides, committee leaders and respondent households in each of the sampled kebeles were indispensable for the successful completion of the field work. We would also like to acknowledge the people who contributed their knowledge and time to the data collection and entry processes.

Funding
Self-funded.

Availability of data and materials
The dataset supporting the conclusions of this article is included within the article.

Authors' declaration
I, Solomon Addisu, holder of the ORCID, hereby declare that this research article was written by the authors whose names are appropriately indicated.

Author information
Bahir Dar University, College of Agriculture and Environmental Sciences, P.O. Box 5501, Bahir Dar, Ethiopia
Solomon Addisu, Mulatie Mekonnen

Authors' contributions
MM made substantial contributions to the conception and design, the acquisition of data and the interpretation of results, and led the overall activities of the research. FA was involved in data collection, entry, coding and analysis. SA contributed to writing and drafting the manuscript and revising it critically for important intellectual content; he also gave the final approval of the version to be published. All authors read and approved the final manuscript.

Correspondence to Solomon Addisu.

Competing interests
The authors hereby declare that this manuscript has not been published and is not under consideration for publication elsewhere. All authors read the manuscript and agree to its publication.

Citation: Addisu, S., and M. Mekonnen. Check dams and storages beyond trapping sediment, carbon sequestration for climate change mitigation, Northwest Ethiopia. Geoenvironmental Disasters 6, 4 (2019). doi:10.1186/s40677-019-0120-1

Keywords: Soil and water conservation structures
Seog Oh
Professor of Physics
Prof. Oh's research utilizes the CDF and ATLAS experiments. Using CDF data, he is presently searching for high-mass resonances and SUSY particles. For the resonance search, he is looking at decay channels involving lepton pairs and WW or WZ pairs; the candidates for these resonances include Z' (a gauge boson similar to the Z), W' and the graviton. For the SUSY particle search, events with a W, a Z and large missing transverse momentum are used. The LHC is expected to deliver proton-proton collisions in 2009, and he will continue similar studies, including the Higgs search, using ATLAS data. For the ATLAS experiment, he was responsible for constructing a major part of the inner detector called the TRT (Transition Radiation Tracker; see http://atlas.phy.duke.edu).
Professor of Physics, Physics, Trinity College of Arts & Sciences 2000
279 Physics Bldg, Durham, NC 27708
Box 90305, Durham, NC 27708-0305
[email protected]
(919) 660-2579

Education, Training, & Certifications
Ph.D., Massachusetts Institute of Technology 1981

Previous Appointments & Affiliations
Director, Undergraduate Studies, Physics, Trinity College of Arts & Sciences 2009 - 2010
Associate Professor with Tenure, Physics, Trinity College of Arts & Sciences 1990 - 2000
Assistant Professor, Physics, Trinity College of Arts & Sciences 1984 - 1990

Global Scholarship
Switzerland (Country)

Selected Grants
Research in High Energy Physics at Duke University awarded by Department of Energy 2013 - 2025
Mu2e Tracker Panel Processing at Duke University awarded by Fermilab 2020 - 2022
Mu2e Tracker Panel Processing awarded by Fermilab 2018 - 2020
Mu2e Tracker X-ray Wire Scanning Apparatus awarded by Fermilab 2016 - 2019
Mu2e Tracker X-ray Wire awarded by Fermilab 2013 - 2015
High Energy Physics - Supplementary Proposal for Computer and Travel awarded by Department of Energy 1991 - 2000
Research in High Energy Physics -- ATLAS TRT Detector Development awarded by Department of Energy 1991 - 2000
Research in High Energy Physics (ATLAS project) awarded by Office of Energy Research 1995 - 1997
Straw Tube Tracking Detector Construction for SDC/Proposal to DOE awarded by Department of Energy 1991 - 1994
"Measurement of W+W-production in association with one jet in proton-proton collisions at √s = 8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 763 (December 10, 2016): 114–33. https://doi.org/10.1016/j.physletb.2016.10.014. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, O. S. AbouZeid, et al. "Transverse momentum, rapidity, and centrality dependence of inclusive charged-particle production in sNN=5.02 TeV p + Pb collisions measured by the ATLAS experiment." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 763 (December 10, 2016): 313–36. https://doi.org/10.1016/j.physletb.2016.10.053. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of top quark pair differential cross sections in the dilepton channel in pp collisions at √s =7 and 8 TeV with ATLAS." Physical Review D 94, no. 9 (November 11, 2016). https://doi.org/10.1103/PhysRevD.94.092003. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the W±Z boson pair-production cross section in pp collisions at s=13 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 762 (November 10, 2016): 1–22. https://doi.org/10.1016/j.physletb.2016.08.052. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for new resonances in events with one lepton and missing transverse momentum in pp collisions at √s=13 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 762 (November 10, 2016): 334–52. https://doi.org/10.1016/j.physletb.2016.09.040. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for the Standard Model Higgs boson produced by vector-boson fusion and decaying to bottom quarks in √s=8 TeV pp collisions with the ATLAS detector." Journal of High Energy Physics 2016, no. 11 (November 1, 2016). https://doi.org/10.1007/JHEP11(2016)112. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Study of hard double-parton scattering in four-jet events in pp collisions at √s=7 TeV with the ATLAS experiment." Journal of High Energy Physics 2016, no. 11 (November 1, 2016). https://doi.org/10.1007/JHEP11(2016)110. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "A measurement of material in the ATLAS tracker using secondary hadronic interactions in 7TeV pp collisions." Journal of Instrumentation 11, no. 11 (November 1, 2016). https://doi.org/10.1088/1748-0221/11/11/P11020. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the Inelastic Proton-Proton Cross Section at sqrt[s]=13 TeV with the ATLAS Detector at the LHC." Phys Rev Lett 117, no. 18 (October 28, 2016): 182002. https://doi.org/10.1103/PhysRevLett.117.182002. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the tt¯ production cross-section using eμ events with b-tagged jets in pp collisions at √s=13TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 761 (October 10, 2016): 136–57. https://doi.org/10.1016/j.physletb.2016.08.019. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. 
"Measurement of the top quark mass in the tt¯→dilepton channel from √s=8 TeV ATLAS data." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 761 (October 10, 2016): 350–71. https://doi.org/10.1016/j.physletb.2016.08.042. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for high-mass new phenomena in the dilepton final state using proton–proton collisions at s=13TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 761 (October 10, 2016): 372–92. https://doi.org/10.1016/j.physletb.2016.08.055. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in s =13 TeV pp collisions with the ATLAS detector." Physical Review D 94, no. 5 (September 19, 2016). https://doi.org/10.1103/PhysRevD.94.052009. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at √s=13 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 760 (September 10, 2016): 520–37. https://doi.org/10.1016/j.physletb.2016.07.030. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for heavy long-lived charged R-hadrons with the ATLAS detector in 3.2 fb−1 of proton–proton collision data at √s=13 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 760 (September 10, 2016): 647–65. https://doi.org/10.1016/j.physletb.2016.07.042. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for pair production of Higgs bosons in the bbbb final state using proton-proton collisions at √s =13 TeV with the ATLAS detector." Physical Review D 94, no. 5 (September 2, 2016). https://doi.org/10.1103/PhysRevD.94.052002. Adam, J., D. Adamová, M. M. Aggarwal, G. Aglieri Rinella, M. Agnello, N. Agrawal, Z. Ahammed, et al. "Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at √sNN=2.76 TeV." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). https://doi.org/10.1007/JHEP09(2016)164. Adam, J., D. Adamová, M. M. Aggarwal, G. Aglieri Rinella, M. Agnello, N. Agrawal, Z. Ahammed, et al. "Elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity in Pb-Pb collisions at √sNN=2.76 TeV." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). https://doi.org/10.1007/JHEP09(2016)028. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for Higgs and Z Boson Decays to ϕγ with the ATLAS Detector." Physical Review Letters 117, no. 11 (September 2016): 111802. https://doi.org/10.1103/physrevlett.117.111802. The ATLAS collaboration, L., M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. "Measurement of jet activity in top quark events using the eμ final state with two b-tagged jets in pp collisions at √s=8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). https://doi.org/10.1007/JHEP09(2016)074. The ATLAS collaboration, L., M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. "Searches for heavy diboson resonances in pp collisions at √s=13 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). 
https://doi.org/10.1007/JHEP09(2016)173. The ATLAS collaboration, L., M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. "Dark matter interpretations of ATLAS searches for the electroweak production of supersymmetric particles in √s=8 TeV proton-proton collisions." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). https://doi.org/10.1007/JHEP09(2016)175. The ATLAS collaboration, Manon, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Measurement of total and differential W + W − production cross sections in proton-proton collisions at √s=8 TeV with the ATLAS detector and limits on anomalous triple-gauge-boson couplings." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). https://doi.org/10.1007/JHEP09(2016)029. The ATLAS collaboration, Mark E., M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. "Search for resonances in diphoton events at √s=13 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 9 (September 1, 2016). https://doi.org/10.1007/JHEP09(2016)001. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of exclusive γγ →w+W- production and search for exclusive Higgs boson production in pp collisions at s =8 TeV using the ATLAS detector." Physical Review D 94, no. 3 (August 31, 2016). https://doi.org/10.1103/PhysRevD.94.032011. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the WW and WZ production cross section using final states with a charged lepton and heavy-flavor jets in the full CDF Run II data set." Physical Review D 94, no. 3 (August 23, 2016). https://doi.org/10.1103/PhysRevD.94.032008. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at s =13 TeV using the ATLAS detector." Physical Review D 94, no. 3 (August 22, 2016). https://doi.org/10.1103/PhysRevD.94.032005. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Measurements of the charge asymmetry in top-quark pair production in the dilepton final state at s =8 TeV with the ATLAS detector." Physical Review D 94, no. 3 (August 22, 2016). https://doi.org/10.1103/PhysRevD.94.032006. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for charged Higgs bosons produced in association with a top quark and decaying via H± → τν using pp collision data recorded at √s=13 TeV by the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 759 (August 10, 2016): 555–74. https://doi.org/10.1016/j.physletb.2016.06.017. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for resonances in the mass distribution of jet pairs with one or two jets identified as b-jets in proton–proton collisions at s=13 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 759 (August 10, 2016): 229–46. https://doi.org/10.1016/j.physletb.2016.05.064. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, O. S. AbouZeid, et al. "Measurement of W± and Z-boson production cross sections in pp collisions at √s=13 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 759 (August 10, 2016): 601–21. 
https://doi.org/10.1016/j.physletb.2016.06.023. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, O. S. Abouzeid, et al. "Search for pair production of gluinos decaying via stop and sbottom in events with b -jets and large missing transverse momentum in pp collisions at s =13 TeV with the ATLAS detector." Physical Review D 94, no. 3 (August 9, 2016). https://doi.org/10.1103/PhysRevD.94.032003. Adam, J., D. Adamová, M. M. Aggarwal, G. Aglieri Rinella, M. Agnello, N. Agrawal, Z. Ahammed, et al. "Measurement of D-meson production versus multiplicity in p-Pb collisions at √sNN=5.02 TeV." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)078. The ATLAS collaboration, Andrea L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of fiducial differential cross sections of gluon-fusion production of Higgs bosons decaying to WW ∗→eνμν with the ATLAS detector at √s=8 TeV." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)104. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the double-differential high-mass Drell-Yan cross section in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)009. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the inclusive isolated prompt photon cross section in pp collisions at s=8TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)005. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the angular coefficients in Z-boson events using electron and muon pairs from data taken at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)159. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Measurement of the CP-violating phase ϕsand the Bs0meson decay width difference with Bs0→ J/ψϕ decays in ATLAS." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)147. The ATLAS collaboration, Naomi J., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurements of the Higgs boson production and decay rates and constraints on its couplings from a combined ATLAS and CMS analysis of the LHC pp collision data at √s = 7 and 8 TeV." Journal of High Energy Physics 2016, no. 8 (August 1, 2016). https://doi.org/10.1007/JHEP08(2016)045. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Charged-particle distributions in √s=13 TeV pp interactions measured with the ATLAS detector at the LHC." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 758 (July 10, 2016): 67–88. https://doi.org/10.1016/j.physletb.2016.04.050. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Search for single production of a vector-like quark via a heavy gluon in the 4b final state with the ATLAS detector in pp collisions at s=8 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 758 (July 10, 2016): 249–68. https://doi.org/10.1016/j.physletb.2016.04.061. Aad, G., B. Abbott, J. 
Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "A search for an excited muon decaying to a muon and two jets in pp collisions at √ s = 8 TeV with the ATLAS detector." New Journal of Physics 18, no. 7 (July 1, 2016). https://doi.org/10.1088/1367-2630/18/7/073021. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of sin2 θefflept using e+e- pairs from γ∗/Z bosons produced in pp collisions at a center-of-momentum energy of 1.96 TeV." Physical Review D 93, no. 11 (June 28, 2016). https://doi.org/10.1103/PhysRevD.93.112016. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for metastable heavy charged particles with large ionization energy loss in pp collisions at s =13 TeV using the ATLAS experiment." Physical Review D 93, no. 11 (June 28, 2016). https://doi.org/10.1103/PhysRevD.93.112015. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Search for a low-mass neutral Higgs boson with suppressed couplings to fermions using events with multiphoton final states." Physical Review D 93, no. 11 (June 20, 2016). https://doi.org/10.1103/PhysRevD.93.112010. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Search for new phenomena in final states with large jet multiplicities and missing transverse momentum with ATLAS using √s=13 TeV proton–proton collisions." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 757 (June 10, 2016): 334–55. https://doi.org/10.1016/j.physletb.2016.04.005. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the forward-backward asymmetry of top-quark and antiquark pairs using the full CDF Run II data set." Physical Review D 93, no. 11 (June 3, 2016). https://doi.org/10.1103/PhysRevD.93.112005. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the forward-backward asymmetry in low-mass bottom-quark pairs produced in proton-antiproton collisions." Physical Review D 93, no. 11 (June 2, 2016). https://doi.org/10.1103/PhysRevD.93.112003. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Measurements of Zγ and Zγγ production in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D 93, no. 11 (June 2, 2016). https://doi.org/10.1103/PhysRevD.93.112002. Adam, J., D. Adamová, M. M. Aggarwal, G. Aglieri Rinella, M. Agnello, N. Agrawal, Z. Ahammed, et al. "Centrality dependence of ψ(2S) suppression in p-Pb collisions at √sNN= 5.02 TeV." Journal of High Energy Physics 2016, no. 6 (June 1, 2016): 1–23. https://doi.org/10.1007/JHEP06(2016)050. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of D⁎±, D± and Ds± meson production cross sections in pp collisions at √s=7 TeV with the ATLAS detector." Nuclear Physics B 907 (June 1, 2016): 717–63. https://doi.org/10.1016/j.nuclphysb.2016.04.032. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Identification of high transverse momentum top quarks in pp collisions at √s= 8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 6 (June 1, 2016). https://doi.org/10.1007/JHEP06(2016)093. The ATLAS collaboration, L., M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. 
"Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at √s = 13 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 6 (June 1, 2016): 1–41. https://doi.org/10.1007/JHEP06(2016)059. The ATLAS collaboration, L., M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. "Measurement of the relative width difference of the B0‐ B¯ 0 system with the ATLAS detector." Journal of High Energy Physics 2016, no. 6 (June 1, 2016). https://doi.org/10.1007/JHEP06(2016)081. The ATLAS collaboration, Manesh R., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "A search for top squarks with R-parity-violating decays to all-hadronic final states with the ATLAS detector in √s = 8 TeV proton-proton collisions." Journal of High Energy Physics 2016, no. 6 (June 1, 2016): 1–49. https://doi.org/10.1007/JHEP06(2016)067. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Beam-induced and cosmic-ray backgrounds observed in the ATLAS detector during the LHC 2012 proton-proton running period." Journal of Instrumentation 11, no. 5 (May 20, 2016). https://doi.org/10.1088/1748-0221/11/05/P05013. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for the standard model Higgs boson produced in association with a vector boson and decaying into a tau pair in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 93, no. 9 (May 17, 2016). https://doi.org/10.1103/PhysRevD.93.092005. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Measurements of W±Z production cross sections in pp collisions at s =8 TeV with the ATLAS detector and limits on anomalous gauge boson self-couplings." Physical Review D Particles, Fields, Gravitation and Cosmology 93, no. 9 (May 13, 2016). https://doi.org/10.1103/PhysRevD.93.092004. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Measurement of the charge asymmetry in highly boosted top-quark pair production in √s=8 TeV pp collision data collected by the ATLAS experiment." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 756 (May 10, 2016): 52–71. https://doi.org/10.1016/j.physletb.2016.02.055. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Evidence for single top-quark production in the s-channel in proton–proton collisions at √s=8 TeV with the ATLAS detector using the Matrix Element Method." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 756 (May 10, 2016): 228–46. https://doi.org/10.1016/j.physletb.2016.03.017. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of the dependence of transverse energy production at large pseudorapidity on the hard-scattering kinematics of proton–proton collisions at √s=2.76 TeV with ATLAS." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 756 (May 10, 2016): 10–28. https://doi.org/10.1016/j.physletb.2016.02.056. Adam, J., D. Adamová, M. M. Aggarwal, G. Aglieri Rinella, M. Agnello, N. Agrawal, Z. Ahammed, et al. "Particle identification in ALICE: a Bayesian approach." European Physical Journal Plus 131, no. 5 (May 1, 2016). https://doi.org/10.1140/epjp/i2016-16168-5. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. 
"Search for the Standard Model Higgs boson decaying into bb¯ produced in association with top quarks decaying hadronically in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 5 (May 1, 2016). https://doi.org/10.1007/JHEP05(2016)160. Mindur, B., T. P. A. Åkesson, F. Anghinolfi, A. Antonov, O. Arslan, O. K. Baker, E. Banas, et al. "Gas gain stabilisation in the ATLAS TRT detector." Journal of Instrumentation 11, no. 4 (April 29, 2016). https://doi.org/10.1088/1748-0221/11/04/P04027. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D 93, no. 7 (April 18, 2016). https://doi.org/10.1103/PhysRevD.93.072007. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Combination of searches for WW, WZ, and ZZ resonances in pp collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 755 (April 10, 2016): 285–305. https://doi.org/10.1016/j.physletb.2016.02.015. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Performance of b-jet identification in the ATLAS experiment." Journal of Instrumentation 11, no. 4 (April 4, 2016). https://doi.org/10.1088/1748-0221/11/04/P04008. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Observation of Long-Range Elliptic Azimuthal Anisotropies in sqrt[s]=13 and 2.76 TeV pp Collisions with the ATLAS Detector." Physical Review Letters 116, no. 17 (April 2016): 172301. https://doi.org/10.1103/physrevlett.116.172301. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for anomalous couplings in the W tb vertex from the measurement of double differential angular decay rates of single top quarks produced in the t-channel with the ATLAS detector." Journal of High Energy Physics 2016, no. 4 (April 1, 2016). https://doi.org/10.1007/JHEP04(2016)023. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Centrality, rapidity, and transverse momentum dependence of isolated prompt photon production in lead-lead collisions at sNN =2.76 TeV measured with the ATLAS detector." Physical Review C 93, no. 3 (March 28, 2016). https://doi.org/10.1103/PhysRevC.93.034914. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for magnetic monopoles and stable particles with high electric charges in 8 TeV pp collisions with the ATLAS detector." Physical Review D 93, no. 5 (March 18, 2016). https://doi.org/10.1103/PhysRevD.93.052009. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Search for new phenomena in dijet mass and angular distributions from pp collisions at √s=13 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 754 (March 10, 2016): 302–22. https://doi.org/10.1016/j.physletb.2016.01.032. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Dijet production in √s=7 TeV pp collisions with large rapidity gaps at the ATLAS experiment." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 754 (March 10, 2016): 214–34. https://doi.org/10.1016/j.physletb.2016.01.028. 
The ATLAS collaboration, Tania J., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for new phenomena with photon+jet events in proton-proton collisions at √s = 13 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 3 (March 8, 2016). https://doi.org/10.1007/JHEP03(2016)041. The ATLAS collaboration, Donald E Jr, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for strong gravity in multijet final states produced in pp collisions at √s = 13 TeV using the ATLAS detector at the LHC." Journal of High Energy Physics 2016, no. 3 (March 7, 2016). https://doi.org/10.1007/JHEP03(2016)026. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for the electroweak production of supersymmetric particles in √s = 8 TeV pp collisions with the ATLAS detector." Physical Review D 93, no. 5 (March 4, 2016). https://doi.org/10.1103/PhysRevD.93.052002. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Measurement of jet charge in dijet events from s =8 TeV pp collisions with the ATLAS detector." Physical Review D 93, no. 5 (March 2, 2016). https://doi.org/10.1103/PhysRevD.93.052003. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Erratum to: Study of the spin and parity of the Higgs boson in diboson decays with the ATLAS detector." European Physical Journal C 76, no. 3 (March 1, 2016). https://doi.org/10.1140/epjc/s10052-016-3934-y. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Erratum to ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider." European Physical Journal C 76, no. 3 (March 1, 2016). https://doi.org/10.1140/epjc/s10052-016-3935-x. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Measurement of the differential cross-section of highly boosted top quarks as a function of their transverse momentum in s =8 TeV proton-proton collisions using the ATLAS detector." Physical Review D 93, no. 3 (February 26, 2016). https://doi.org/10.1103/PhysRevD.93.032009. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurements of four-lepton production in pp collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 753 (February 10, 2016): 552–72. https://doi.org/10.1016/j.physletb.2015.12.048. The ATLAS collaboration, A. T. L. A. S., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for the production of single vector-like and excited quarks in the Wt final state in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 2 (February 1, 2016): 1–46. https://doi.org/10.1007/JHEP02(2016)110. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "A search for prompt lepton-jets in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 2 (February 1, 2016): 1–51. https://doi.org/10.1007/JHEP02(2016)062. The ATLAS collaboration, Xiao-Fan, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s=8 TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 1 (January 28, 2016): 1–44. 
https://doi.org/10.1007/JHEP01(2016)172. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Measurement of the correlation between the polar angles of leptons from top quark decays in the helicity basis at s =7 T e V using the ATLAS detector." Physical Review D 93, no. 1 (January 13, 2016). https://doi.org/10.1103/PhysRevD.93.012002. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Searches for scalar leptoquarks in pp collisions at [Formula: see text] = 8 TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76 (January 2016): 5. https://doi.org/10.1140/epjc/s10052-015-3823-9. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for supersymmetry in a final state containing two photons and missing transverse momentum in [Formula: see text] = 13 TeV [Formula: see text] collisions at the LHC using the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 9 (January 2016): 517. https://doi.org/10.1140/epjc/s10052-016-4344-x. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Study of the rare decays of [Formula: see text] and [Formula: see text] into muon pairs from data collected during the LHC Run 1 with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 9 (January 2016): 513. https://doi.org/10.1140/epjc/s10052-016-4338-8. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Charged-particle distributions at low transverse momentum in [Formula: see text] TeV pp interactions measured with the ATLAS detector at the LHC." The European Physical Journal. C, Particles and Fields 76, no. 9 (January 2016): 502. https://doi.org/10.1140/epjc/s10052-016-4335-y. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for scalar leptoquarks in pp collisions at √S = 13TeV with the ATLAS experiment." New Journal of Physics 18, no. 9 (January 1, 2016): 1–25. https://doi.org/10.1088/1367-2630/18/9/093016. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for bottom squark pair production in proton-proton collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 10 (January 2016): 547. https://doi.org/10.1140/epjc/s10052-016-4382-4. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for new phenomena in different-flavour high-mass dilepton final states in pp collisions at [Formula: see text] Tev with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 10 (January 2016): 541. https://doi.org/10.1140/epjc/s10052-016-4385-1. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for minimal supersymmetric standard model Higgs Bosons H / A and for a [Formula: see text] boson in the [Formula: see text] final state produced in pp collisions at [Formula: see text] TeV with the ATLAS detector." Eur Phys J C Part Fields 76, no. 11 (2016): 585. https://doi.org/10.1140/epjc/s10052-016-4400-6. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Luminosity determination in pp collisions at [Formula: see text] = 8 TeV using the ATLAS detector at the LHC." Eur Phys J C Part Fields 76, no. 12 (2016): 653. https://doi.org/10.1140/epjc/s10052-016-4466-1. 
Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the total cross section from elastic scattering in pp collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 761 (January 1, 2016): 158–78. https://doi.org/10.1016/j.physletb.2016.08.020. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the photon identification efficiencies with the ATLAS detector using LHC Run-1 data." Eur Phys J C Part Fields 76, no. 12 (2016): 666. https://doi.org/10.1140/epjc/s10052-016-4507-9. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the [Formula: see text] dijet cross section in pp collisions at [Formula: see text] TeV with the ATLAS detector." Eur Phys J C Part Fields 76, no. 12 (2016): 670. https://doi.org/10.1140/epjc/s10052-016-4521-y. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for squarks and gluinos in events with hadronically decaying tau leptons, jets and missing transverse momentum in proton-proton collisions at [Formula: see text] TeV recorded with the ATLAS detector." Eur Phys J C Part Fields 76, no. 12 (2016): 683. https://doi.org/10.1140/epjc/s10052-016-4481-2. Aaboud, M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Search for the Higgs boson produced in association with a W boson and decaying to four b-quarks via two spin-zero particles in pp collisions at 13 TeV with the ATLAS detector." Eur Phys J C Part Fields 76, no. 11 (2016): 605. https://doi.org/10.1140/epjc/s10052-016-4418-9. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Search for supersymmetry at [Formula: see text] TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 259. https://doi.org/10.1140/epjc/s10052-016-4095-8. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Charged-particle distributions in pp interactions at [Formula: see text] measured with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 7 (January 2016): 403. https://doi.org/10.1140/epjc/s10052-016-4203-9. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Search for single production of vector-like quarks decaying into Wb in pp collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 8 (January 2016): 442. https://doi.org/10.1140/epjc/s10052-016-4281-8. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "The performance of the jet trigger for the ATLAS detector during 2011 data taking." The European Physical Journal. C, Particles and Fields 76, no. 10 (January 2016): 526. https://doi.org/10.1140/epjc/s10052-016-4325-0. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, M. Abolins, et al. "Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at [Formula: see text] = 13 Te V with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 10 (January 2016): 565. https://doi.org/10.1140/epjc/s10052-016-4397-x. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. 
"Measurements of fiducial cross-sections for [Formula: see text] production with one or two additional b-jets in pp collisions at [Formula: see text]=8 TeV using the ATLAS detector." The European Physical Journal. C, Particles and Fields 76 (January 2016): 11. https://doi.org/10.1140/epjc/s10052-015-3852-4. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurements of the Higgs boson production and decay rates and coupling strengths using pp collision data at [Formula: see text] and 8 TeV in the ATLAS experiment." The European Physical Journal. C, Particles and Fields 76 (January 2016): 6. https://doi.org/10.1140/epjc/s10052-015-3769-y. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for flavour-changing neutral current top-quark decays to [Formula: see text] in [Formula: see text] collision data collected with the ATLAS detector at [Formula: see text] TeV." The European Physical Journal. C, Particles and Fields 76 (January 2016): 12. https://doi.org/10.1140/epjc/s10052-015-3851-5. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for single top-quark production via flavour-changing neutral currents at 8 TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76 (January 2016): 55. https://doi.org/10.1140/epjc/s10052-016-3876-4. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for new phenomena in events with at least three photons collected in pp collisions at [Formula: see text] = 8 TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 4 (January 2016): 210. https://doi.org/10.1140/epjc/s10052-016-4034-8. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of the differential cross-sections of prompt and non-prompt production of [Formula: see text] and [Formula: see text] in pp collisions at [Formula: see text] and 8 TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 283. https://doi.org/10.1140/epjc/s10052-016-4050-8. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurements of top-quark pair differential cross-sections in the lepton+jets channel in pp collisions at [Formula: see text] using the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 10 (January 2016): 538. https://doi.org/10.1140/epjc/s10052-016-4366-4. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Performance of pile-up mitigation techniques for jets in [Formula: see text] collisions at [Formula: see text] TeV using the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 11 (January 2016): 581. https://doi.org/10.1140/epjc/s10052-016-4395-z. Aad, G., B. Abbott, O. Abdinov, J. Abdallah, B. Abeloos, R. Aben, M. Abolins, et al. "Test of CP invariance in vector-boson fusion production of the Higgs boson using the Optimal Observable method in the ditau decay channel with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 12 (January 2016): 658. https://doi.org/10.1140/epjc/s10052-016-4499-5. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. 
"Measurement of the centrality dependence of the charged-particle pseudorapidity distribution in proton-lead collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 4 (January 2016): 199. https://doi.org/10.1140/epjc/s10052-016-4002-3. Atlas Collaboration, B. M., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for direct top squark pair production in final states with two tau leptons in pp collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76 (January 2016): 81. https://doi.org/10.1140/epjc/s10052-016-3897-z. Atlas Collaboration, Javed, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in pp collision data at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76 (January 2016): 87. https://doi.org/10.1140/epjc/s10052-016-3910-6. Atlas Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Muon reconstruction performance of the ATLAS detector in proton-proton collision data at [Formula: see text]=13 TeV." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 292. https://doi.org/10.1140/epjc/s10052-016-4120-y. Atlas Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of event-shape observables in [Formula: see text] events in pp collisions at [Formula: see text] [Formula: see text] with the ATLAS detector at the LHC." The European Physical Journal. C, Particles and Fields 76, no. 7 (January 2016): 375. https://doi.org/10.1140/epjc/s10052-016-4176-8. Atlas Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "A new method to distinguish hadronically decaying boosted Z bosons from W bosons using the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 238. https://doi.org/10.1140/epjc/s10052-016-4065-1. Atlas Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Measurement of the transverse momentum and [Formula: see text] distributions of Drell-Yan lepton pairs in proton-proton collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 291. https://doi.org/10.1140/epjc/s10052-016-4070-4. Atlas Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Reconstruction of hadronic decay products of tau leptons with the ATLAS experiment." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 295. https://doi.org/10.1140/epjc/s10052-016-4110-0. Atlas Collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Probing lepton flavour violation via neutrinoless [Formula: see text] decays with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 5 (January 2016): 232. https://doi.org/10.1140/epjc/s10052-016-4041-9. Atlas Collaboration, Ralph G., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, R. Aben, et al. "Measurement of the charged-particle multiplicity inside jets from [Formula: see text][Formula: see text] pp collisions with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 
6 (January 2016): 322. https://doi.org/10.1140/epjc/s10052-016-4126-5. Atlas Collaboration, Shenglan, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for an additional, heavy Higgs boson in the [Formula: see text] decay channel at [Formula: see text] in [Formula: see text] collision data with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76 (January 2016): 45. https://doi.org/10.1140/epjc/s10052-015-3820-z. Atlas Collaboration, Yo, M. Aaboud, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, B. Abeloos, et al. "Search for squarks and gluinos in final states with jets and missing transverse momentum at [Formula: see text] =13 [Formula: see text]with the ATLAS detector." The European Physical Journal. C, Particles and Fields 76, no. 7 (January 2016): 392. https://doi.org/10.1140/epjc/s10052-016-4184-8. The ATLAS collaboration, Patrick, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Measurement of the production cross-section of a single top quark in association with a W boson at 8 TeV with the ATLAS experiment." Journal of High Energy Physics 2016, no. 1 (January 1, 2016): 1–48. https://doi.org/10.1007/JHEP01(2016)064. The ATLAS collaboration, V Joseph, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for a high-mass Higgs boson decaying to a W boson pair in pp collisions at (Formula presented.) TeV with the ATLAS detector." Journal of High Energy Physics 2016, no. 1 (January 1, 2016): 1–66. https://doi.org/10.1007/JHEP01(2016)032. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for pair production of a new heavy quark that decays into a W boson and a light quark in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 11 (December 22, 2015). https://doi.org/10.1103/PhysRevD.92.112007. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of the branching ratio Γ(Λb0→ψ(2S)Λ0)/Γ(Λb0→J/ψΛ0) with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 751 (December 17, 2015): 63–80. https://doi.org/10.1016/j.physletb.2015.10.009. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Determination of the Ratio of b-Quark Fragmentation Fractions f(s)/f(d) in pp Collisions at √s=7 TeV with the ATLAS Detector." Physical Review Letters 115, no. 26 (December 2015): 262001. https://doi.org/10.1103/physrevlett.115.262001. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector SEARCH for NONPOINTING and DELAYED PHOTONS in ⋯ G. AAD et al." Zeitschrift Fur Antikes Christentum 19, no. 3 (December 1, 2015): 1DUMMY. https://doi.org/10.1103/PhysRevD.90.112005. The ATLAS collaboration, C. L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for high-mass diboson resonances with boson-tagged jets in proton-proton collisions at √s = 8TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 12 (December 1, 2015): 1–39. https://doi.org/10.1007/JHEP12(2015)055. The ATLAS collaboration, Deborah A., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. 
"Measurement of four-jet differential cross sections in √s=8 TeV proton-proton collisions using the ATLAS detector." Journal of High Energy Physics 2015, no. 12 (December 1, 2015): 1–76. https://doi.org/10.1007/JHEP12(2015)105. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for flavour-changing neutral current top quark decays t → Hq in pp collisions at √s=8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 12 (December 1, 2015): 1–65. https://doi.org/10.1007/JHEP12(2015)061. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of colour flow with the jet pull angle in tt- events using the ATLAS detector at √s=8 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 750 (November 12, 2015): 475–93. https://doi.org/10.1016/j.physletb.2015.09.051. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of transverse energy-energy correlations in multi-jet events in pp collisions at s=7 TeV using the ATLAS detector and determination of the strong coupling constant as(mZ)." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 750 (November 12, 2015): 427–47. https://doi.org/10.1016/j.physletb.2015.09.050. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of differential J/ψ production cross sections and forward-backward ratios in p + Pb collisions with the ATLAS detector." Physical Review C Nuclear Physics 92, no. 3 (October 14, 2015). https://doi.org/10.1103/PhysRevC.92.034904. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of exclusive γγ→ℓ+ℓ− production in proton–proton collisions at √s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 749 (October 7, 2015): 242–61. https://doi.org/10.1016/j.physletb.2015.07.069. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for the associated production of the Higgs boson with a top quark pair in multilepton final states with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 749 (October 7, 2015): 519–41. https://doi.org/10.1016/j.physletb.2015.07.079. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Erratum to: Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s=8 TeV with the ATLAS detector (Eur. Phys. J. C (2015) 75:299, 10.1140/epjc/s10052-015-3517-3)." European Physical Journal C 75, no. 9 (September 8, 2015). https://doi.org/10.1140/epjc/s10052-015-3639-7. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Centrality and rapidity dependence of inclusive jet production in √sNN =5 .02 TeV proton–lead collisions with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 748 (September 2, 2015): 392–413. https://doi.org/10.1016/j.physletb.2015.07.023. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for Dark Matter in Events with Missing Transverse Momentum and a Higgs Boson Decaying to Two Photons in pp Collisions at sqrt[s]=8 TeV with the ATLAS Detector." Physical Review Letters 115, no. 13 (September 2015): 131801. 
https://doi.org/10.1103/physrevlett.115.131801. The ATLAS collaboration, Thomas, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for production of vector-like quark pairs and of four top quarks in the lepton-plus-jets final state in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 8 (August 31, 2015). https://doi.org/10.1007/JHEP08(2015)105. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "First measurement of the forward-backward asymmetry in bottom-quark pair production at high mass." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 3 (August 18, 2015). https://doi.org/10.1103/PhysRevD.92.032006. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for high-mass diphoton resonances in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 3 (August 14, 2015). https://doi.org/10.1103/PhysRevD.92.032004. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Study of (W/Z)H production and Higgs boson couplings using H→ W W∗ decays with the ATLAS detector." Journal of High Energy Physics 2015, no. 8 (August 14, 2015). https://doi.org/10.1007/JHEP08(2015)137. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for new phenomena in events with three or more charged leptons in pp collisions at √s=8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 8 (August 14, 2015). https://doi.org/10.1007/JHEP08(2015)138. The ATLAS collaboration, Maria, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "A search for tt resonances using lepton-plus-jets events in proton-proton collisions at √s=8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 8 (August 14, 2015). https://doi.org/10.1007/JHEP08(2015)148. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the top-quark mass in the tt¯ dilepton channel using the full CDF Run II data set." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 3 (August 6, 2015). https://doi.org/10.1103/PhysRevD.92.032003. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for type-III seesaw heavy leptons in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 3 (August 3, 2015). https://doi.org/10.1103/PhysRevD.92.032001. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurements of the Total and Differential Higgs Boson Production Cross Sections Combining the H→γγ and H→ZZ^{*}→4ℓ Decay Channels at sqrt[s]=8 TeV with the ATLAS Detector." Physical Review Letters 115, no. 9 (August 2015): 091801. https://doi.org/10.1103/physrevlett.115.091801. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, F. Anzà, et al. "Search for Resonances Decaying to Top and Bottom Quarks with the CDF Experiment." Physical Review Letters 115, no. 6 (August 2015): 061801. https://doi.org/10.1103/physrevlett.115.061801. Oh, S. H., B. Tepera, and C. Wang. "Performance of a multianode straw tube detector." 
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 797 (July 27, 2015): 285–89. https://doi.org/10.1016/j.nima.2015.06.052. The ATLAS collaboration, Paul, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for low-scale gravity signatures in multi-jet final states with the ATLAS detector at √s = 8TeV." Journal of High Energy Physics 2015, no. 7 (July 23, 2015). https://doi.org/10.1007/JHEP07(2015)032. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for long-lived, weakly interacting particles that decay to displaced hadronic jets in proton-proton collisions at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 1 (July 17, 2015). https://doi.org/10.1103/PhysRevD.92.012010. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Observation and measurement of Higgs boson decays to WW∗with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 1 (July 16, 2015). https://doi.org/10.1103/PhysRevD.92.012006. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "A search for high-mass resonances decaying to τ+τ− in pp collisions at √s =8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 7 (July 6, 2015). https://doi.org/10.1007/JHEP07(2015)157. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for heavy Majorana neutrinos with the ATLAS detector in pp collisions at √s = 8 TeV." Journal of High Energy Physics 2015, no. 7 (July 5, 2015). https://doi.org/10.1007/JHEP07(2015)162. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for a Heavy Neutral Particle Decaying to eμ, eτ, or μτ in pp Collisions at sqrt[s]=8 TeV with the ATLAS Detector." Physical Review Letters 115, no. 3 (July 2015): 031801. https://doi.org/10.1103/physrevlett.115.031801. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Evidence of Wγγ Production in pp Collisions at sqrt[s]=8 TeV and Limits on Anomalous Quartic Gauge Couplings with the ATLAS Detector." Physical Review Letters 115, no. 3 (July 2015): 031802. https://doi.org/10.1103/physrevlett.115.031802. The ATLAS collaboration, Steve A., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Differential top-antitop cross-section measurements as a function of observables constructed from final-state particles using pp collisions at √s = 7 TeV in the ATLAS detector." Journal of High Energy Physics 2015, no. 6 (June 29, 2015). https://doi.org/10.1007/JHEP06(2015)100. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the top pair production cross section in 8 TeV proton-proton collisions using kinematic information in the lepton + jets final state with ATLAS." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 11 (June 24, 2015). https://doi.org/10.1103/PhysRevD.91.112013. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the production and differential cross sections of W+W- bosons in association with jets in pp¯ collisions at √s = 1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 11 (June 23, 2015). 
https://doi.org/10.1103/PhysRevD.91.111101. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. Abouzeid, et al. "Search for vectorlike B quarks in events with one isolated lepton, missing transverse momentum, and jets at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 11 (June 19, 2015). https://doi.org/10.1103/PhysRevD.91.112011. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for New Phenomena in Dijet Angular Distributions in Proton-Proton Collisions at sqrt[s]=8 TeV Measured with the ATLAS Detector." Physical Review Letters 114, no. 22 (June 2015): 221802. https://doi.org/10.1103/physrevlett.114.221802. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for a Charged Higgs Boson Produced in the Vector-Boson Fusion Mode with Decay H(±)→W(±)Z using pp Collisions at √s=8 TeV with the ATLAS Experiment." Physical Review Letters 114, no. 23 (June 2015): 231801. https://doi.org/10.1103/physrevlett.114.231801. Aaltonen, T., M. G. Albrow, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurement of central exclusive π+π- production in p p ¯ collisions at s =0.9 and 1.96 TeV at CDF MEASUREMENT of CENTRAL EXCLUSIVE π+π- ... T. AALTONEN et al." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 9 (May 29, 2015). https://doi.org/10.1103/PhysRevD.91.091101. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the charge asymmetry in dileptonic decays of top quark pairs in pp collisions at √s = 7 TeV using the ATLAS detector." Journal of High Energy Physics 2015, no. 5 (May 22, 2015). https://doi.org/10.1007/JHEP05(2015)061. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for a CP-odd Higgs boson decaying to Zh in pp collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 744 (May 11, 2015): 163–83. https://doi.org/10.1016/j.physletb.2015.03.054. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Combined Measurement of the Higgs Boson Mass in pp Collisions at sqrt[s]=7 and 8 TeV with the ATLAS and CMS Experiments." Physical Review Letters 114, no. 19 (May 2015): 191803. https://doi.org/10.1103/physrevlett.114.191803. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Observation of top-quark pair production in association with a photon and measurement of the t t ¯ γ production cross section in pp collisions at √s =7 TeV using the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 7 (April 28, 2015). https://doi.org/10.1103/PhysRevD.91.072007. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for W ′ → t b ¯ in the lepton plus jets final state in proton–proton collisions at a centre-of-mass energy of s = 8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 743 (April 9, 2015): 235–55. https://doi.org/10.1016/j.physletb.2015.02.051. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for pair-produced long-lived neutral particles decaying to jets in the ATLAS hadronic calorimeter in pp collisions at s=8TeV." 
Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 743 (April 9, 2015): 15–34. https://doi.org/10.1016/j.physletb.2015.02.015. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for Scalar Charm Quark Pair Production in pp Collisions at sqrt[s]=8 TeV with the ATLAS Detector." Physical Review Letters 114, no. 16 (April 2015): 161801. https://doi.org/10.1103/physrevlett.114.161801. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of spin correlation in top-antitop quark events and search for top squark pair production in pp collisions at √s=8 TeV using the ATLAS detector." Physical Review Letters 114, no. 14 (April 2015): 142001. https://doi.org/10.1103/physrevlett.114.142001. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Constraints on models of the Higgs boson with exotic spin and parity using decays to bottom-antibottom quarks in the full CDF data set." Physical Review Letters 114, no. 14 (April 2015): 141802. https://doi.org/10.1103/physrevlett.114.141802. Aaltonen, T., V. M. Abazov, B. Abbott, B. S. Acharya, M. Adams, T. Adams, J. P. Agnew, et al. "Tevatron constraints on models of the Higgs boson with exotic spin and parity using decays to bottom-antibottom quark pairs." Physical Review Letters 114, no. 15 (April 2015): 151802. https://doi.org/10.1103/physrevlett.114.151802. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for squarks and gluinos in events with isolated leptons, jets and missing transverse momentum at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 4 (April 1, 2015). https://doi.org/10.1007/JHEP04(2015)116. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector." Journal of High Energy Physics 2015, no. 4 (April 1, 2015): 1–74. https://doi.org/10.1007/JHEP04(2015)117. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for charged Higgs bosons decaying via H± → τ±ν in fully hadronic final states using pp collision data at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2015, no. 3 (March 17, 2015). https://doi.org/10.1007/JHEP03(2015)088. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for new phenomena in the dijet mass distribution using pp collision data at √s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 5 (March 9, 2015). https://doi.org/10.1103/PhysRevD.91.052007. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Simultaneous measurements of the tt¯, W+W-, and Z/γ∗ →ττ production cross-sections in pp collisions at √s =7 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 5 (March 6, 2015). https://doi.org/10.1103/PhysRevD.91.052005. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for Higgs and Z Boson Decays to J/ψγ and ϒ(nS)γ with the ATLAS Detector." Physical Review Letters 114, no. 12 (March 2015): 121801. https://doi.org/10.1103/physrevlett.114.121801. Aaltonen, T., R. Alon, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. 
"Studies of high-transverse momentum jet substructure and top quarks produced in 1.96 TeV proton-antiproton collisions." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 3 (February 19, 2015). https://doi.org/10.1103/PhysRevD.91.032006. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the transverse polarization of Λ and Λ ¯ hyperons produced in proton-proton collisions at √s = 7 TeV using the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 3 (February 9, 2015). https://doi.org/10.1103/PhysRevD.91.032004. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurements of the Nuclear Modification Factor for Jets in Pb+Pb Collisions at √(s)NN]=2.76 TeV with the ATLAS detector." Physical Review Letters 114, no. 7 (February 2015): 072302. https://doi.org/10.1103/physrevlett.114.072302. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for Higgs boson pair production in the γγbb[over ¯] final state using pp collision data at sqrt[s]=8 TeV from the ATLAS detector." Physical Review Letters 114, no. 8 (February 2015): 081802. https://doi.org/10.1103/physrevlett.114.081802. The ATLAS collaboration, Amy P., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the inclusive jet cross-section in proton-proton collisions at √s=7 TeV using 4.5 fb−1 of data with the ATLAS detector." Journal of High Energy Physics 2015, no. 2 (February 1, 2015): 1–54. https://doi.org/10.1007/JHEP02(2015)153. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at √s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 1 (January 27, 2015). https://doi.org/10.1103/PhysRevD.91.012008. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurements of Higgs boson production and couplings in the four-lepton channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 1 (January 16, 2015). https://doi.org/10.1103/PhysRevD.91.012006. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Searches for heavy long-lived charged particles with the ATLAS detector in proton-proton collisions at √s = 8 Tev." Journal of High Energy Physics 2015, no. 1 (January 14, 2015). https://doi.org/10.1007/JHEP01(2015)068. The ATLAS collaboration, Adam P., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the tt¯ production cross-section as a function of jet multiplicity and jet transverse momentum in 7 TeV proton-proton collisions with the ATLAS detector." Journal of High Energy Physics 2015, no. 1 (January 8, 2015). https://doi.org/10.1007/JHEP01(2015)020. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of differential production cross sections for Z /γ∗ bosons in association with jets in p p ¯ collisions at s =1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 1 (January 6, 2015). https://doi.org/10.1103/PhysRevD.91.012002. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. 
"Search for s-channel single top-quark production in proton-proton collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 740 (January 5, 2015): 118–36. https://doi.org/10.1016/j.physletb.2014.11.042. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for the Xb and other hidden-beauty states in the π+π−ϒ(1S) channel at ATLAS." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 740 (January 5, 2015): 199–217. https://doi.org/10.1016/j.physletb.2014.11.055. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for H→γγ produced in association with top quarks and constraints on the Yukawa coupling between the top quark and the Higgs boson using data taken at 7 TeV and 8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 740 (January 5, 2015): 222–42. https://doi.org/10.1016/j.physletb.2014.11.049. ATLAS Collaboration, Andrea, G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Search for metastable heavy charged particles with large ionisation energy loss in pp collisions at [Formula: see text] TeV using the ATLAS experiment." The European Physical Journal. C, Particles and Fields 75, no. 9 (January 2015): 407. https://doi.org/10.1140/epjc/s10052-015-3609-0. ATLAS Collaboration, John H., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Performance of the ATLAS muon trigger in pp collisions at [Formula: see text] TeV." The European Physical Journal. C, Particles and Fields 75, no. 3 (January 2015): 120. https://doi.org/10.1140/epjc/s10052-015-3325-9. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the production and lepton charge asymmetry of [Formula: see text] bosons in Pb+Pb collisions at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 1 (January 2015): 23. https://doi.org/10.1140/epjc/s10052-014-3231-6. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for dark matter in events with heavy quarks and missing transverse momentum in [Formula: see text] collisions with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 2 (January 2015): 92. https://doi.org/10.1140/epjc/s10052-015-3306-z. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurements of the [Formula: see text] production cross sections in association with jets with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 2 (January 2015): 82. https://doi.org/10.1140/epjc/s10052-015-3262-7. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for invisible particles produced in association with single-top-quarks in proton-proton collisions at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 2 (January 2015): 79. https://doi.org/10.1140/epjc/s10052-014-3233-4. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for [Formula: see text] decays in [Formula: see text] collisions at [Formula: see text] = 8 TeV with the ATLAS detector." The European Physical Journal. 
C, Particles and Fields 75, no. 4 (January 2015): 165. https://doi.org/10.1140/epjc/s10052-015-3372-2. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of three-jet production cross-sections in [Formula: see text] collisions at 7 [Formula: see text] centre-of-mass energy using the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 5 (January 2015): 228. https://doi.org/10.1140/epjc/s10052-015-3363-3. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the top-quark mass in the fully hadronic decay channel from ATLAS data at [Formula: see text]." The European Physical Journal. C, Particles and Fields 75, no. 4 (January 2015): 158. https://doi.org/10.1140/epjc/s10052-015-3373-1. ATLAS Collaboration, Martin G., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Measurement of the top quark mass in the [Formula: see text] and [Formula: see text] channels using [Formula: see text] [Formula: see text] ATLAS data." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 330. https://doi.org/10.1140/epjc/s10052-015-3544-0. ATLAS collaboration, Alan R., G. Aad, B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, et al. "Determination of spin and parity of the Higgs boson in the [Formula: see text] decay channel with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 5 (January 2015): 231. https://doi.org/10.1140/epjc/s10052-015-3436-3. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for a new resonance decaying to a W or Z boson and a Higgs boson in the [Formula: see text] final states with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 6 (January 2015): 263. https://doi.org/10.1140/epjc/s10052-015-3474-x. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for supersymmetry in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in [Formula: see text] TeV pp collisions with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 318. https://doi.org/10.1140/epjc/s10052-015-3518-2. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 335. https://doi.org/10.1140/epjc/s10052-015-3542-2. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for invisible decays of the Higgs boson produced in association with a hadronically decaying vector boson in pp collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 337. https://doi.org/10.1140/epjc/s10052-015-3551-1. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for the Standard Model Higgs boson produced in association with top quarks and decaying into [Formula: see text] in [Formula: see text] collisions at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 349. 
https://doi.org/10.1140/epjc/s10052-015-3543-1. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for heavy long-lived multi-charged particles in pp collisions at [Formula: see text] TeV using the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 8 (January 2015): 362. https://doi.org/10.1140/epjc/s10052-015-3534-2. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for higgs bosons decaying to aa in the µµττ final state in pp collisions at √s = 8 TeV with the ATLAS experiment." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 5 (January 1, 2015): 052002-1-052002–24. https://doi.org/10.1103/PhysRevD.92.052002. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurement of the correlation between flow harmonics of different order in lead-lead collisions at √sNN = 2.76 TeV with the ATLAS detector." Physical Review C Nuclear Physics 92, no. 3 (January 1, 2015). https://doi.org/10.1103/PhysRevC.92.034903. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for photonic signatures of gauge-mediated supersymmetry in 8 TeV pp collisions with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 7 (January 1, 2015). https://doi.org/10.1103/PhysRevD.92.072001. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Search for Higgs boson pair production in the [Formula: see text] final state from pp collisions at [Formula: see text] TeVwith the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 9 (January 2015): 412. https://doi.org/10.1140/epjc/s10052-015-3628-x. Aad, G., B. Abbott, J. Abdallah, O. Abdinov, R. Aben, M. Abolins, O. S. AbouZeid, et al. "Measurements of the top quark branching ratios into channels with leptons and quarks with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 92, no. 7 (January 1, 2015). https://doi.org/10.1103/PhysRevD.92.072005. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for production of [Formula: see text] resonances decaying to a lepton, neutrino and jets in [Formula: see text] collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 5 (January 2015): 209. https://doi.org/10.1140/epjc/s10052-015-3425-6. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in [Formula: see text] TeV [Formula: see text] collisions with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 5 (January 2015): 208. https://doi.org/10.1140/epjc/s10052-015-3408-7. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Observation and measurements of the production of prompt and non-prompt [Formula: see text] mesons in association with a [Formula: see text] boson in [Formula: see text] collisions at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 5 (January 2015): 229. https://doi.org/10.1140/epjc/s10052-015-3406-9. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. 
"Search for massive supersymmetric particles decaying to many jets using the ATLAS detector in pp collisions at s =8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 91, no. 11 (January 1, 2015). https://doi.org/10.1103/PhysRevD.91.112016. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at [Formula: see text]TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 299. https://doi.org/10.1140/epjc/s10052-015-3517-3. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Identification and energy calibration of hadronically decaying tau leptons with the ATLAS experiment in pp collisions at [Formula: see text][Formula: see text]." The European Physical Journal. C, Particles and Fields 75, no. 7 (January 2015): 303. https://doi.org/10.1140/epjc/s10052-015-3500-z. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Erratum: Search for production of WW/WZ resonances decaying to a lepton, neutrino and jets in pp collisions at √s = 8 TeV with the ATLAS detector (The European Physical Journal C (2015) 75 (209) DOI: 10.1140/epjc/s10052-015-3425-6)." European Physical Journal C 75, no. 8 (January 1, 2015). https://doi.org/10.1140/epjc/s10052-015-3593-4. Atlas Collaboration, John, G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, et al. "Jet energy measurement and its systematic uncertainty in proton-proton collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 75 (January 2015): 17. https://doi.org/10.1140/epjc/s10052-014-3190-y. The ATLAS collaboration, Charles L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for anomalous production of prompt same-sign lepton pairs and pair-produced doubly charged Higgs bosons with s =8 TeV pp collisions using the ATLAS detector." Journal of High Energy Physics 2015, no. 3 (January 1, 2015). https://doi.org/10.1007/JHEP03(2015)041. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for the b¯b decay of the Standard Model Higgs boson in associated (W/Z)H production with the ATLAS detector." Journal of High Energy Physics 2015, no. 1 (January 1, 2015). https://doi.org/10.1007/JHEP01(2015)069. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the W W + W Z cross section and limits on anomalous triple gauge couplings using final states with one lepton, missing transverse momentum, and two jets with the ATLAS detector at √s = 7 TeV." Journal of High Energy Physics 2015, no. 1 (January 1, 2015): 1–42. https://doi.org/10.1007/JHEP01(2015)049. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of indirect CP -violating asymmetries in D0 →k+K- and D0 →π+π- decays at CDF." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 11 (December 30, 2014). https://doi.org/10.1103/PhysRevD.90.111103. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurements of spin correlation in top-antitop quark events from proton-proton collisions at √s =7 TeV using the ATLAS detector." 
Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 11 (December 24, 2014). https://doi.org/10.1103/PhysRevD.90.112016. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of Higgs boson production in the diphoton decay channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 11 (December 24, 2014). https://doi.org/10.1103/PhysRevD.90.112015. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of inclusive jet charged-particle fragmentation functions in Pb+Pb collisions at √sNN=2.76 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 739 (December 12, 2014): 320–42. https://doi.org/10.1016/j.physletb.2014.10.065. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Comprehensive measurements of t-channel single top-quark production cross sections at √s=7 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 11 (December 11, 2014). https://doi.org/10.1103/PhysRevD.90.112006. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the total cross section from elastic scattering in pp collisions at √s=7 TeV with the ATLAS detector." Nuclear Physics B 889 (December 1, 2014): 486–548. https://doi.org/10.1016/j.nuclphysb.2014.10.019. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "ATLAS Collaboration." Nuclear Physics A 932 (December 1, 2014): 572–94. https://doi.org/10.1016/S0375-9474(14)00601-0. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the single top quark production cross section and |Vtb| in events with one charged lepton, large missing transverse energy, and jets at CDF." Physical Review Letters 113, no. 26 (December 2014): 261804. https://doi.org/10.1103/physrevlett.113.261804. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for pair and single production of new heavy quarks that decay to a Z boson and a third-generation quark in pp collisions at √ S = 8TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 11 (November 19, 2014). https://doi.org/10.1007/JHEP11(2014)104. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the top-quark mass in the all-hadronic channel using the full CDF data set." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 9 (November 18, 2014). https://doi.org/10.1103/PhysRevD.90.091101. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for neutral Higgs bosons of the minimal supersymmetric standard model in pp collisions at vs= 8TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 11 (November 11, 2014): 1–47. https://doi.org/10.1007/JHEP11(2014)056. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for new resonances in Wγ and Zγ final states in pp collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 738 (November 10, 2014): 428–47. https://doi.org/10.1016/j.physletb.2014.10.002. Aad, G., B. 
Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the cross section of high transverse momentum Z→bb¯ production in proton–proton collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 738 (November 10, 2014): 25–43. https://doi.org/10.1016/j.physletb.2014.09.020. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for the Standard Model Higgs boson decay to μ+μ− with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 738 (November 10, 2014): 68–86. https://doi.org/10.1016/j.physletb.2014.09.008. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, et al. "Measurement of the cross-section of high transverse momentum vector bosons reconstructed as single jets and studies of jet substructure in pp collisions at √s = 7 TeV with the ATLAS detector." New Journal of Physics 16 (November 4, 2014). https://doi.org/10.1088/1367-2630/16/11/113013. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Observation of an excited Bc(±) meson state with the ATLAS detector." Physical Review Letters 113, no. 21 (November 2014): 212004. https://doi.org/10.1103/physrevlett.113.212004. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in √ s = 8 TeV pp collisions with the ATLAS detector." Journal of High Energy Physics 2014, no. 11 (November 1, 2014). https://doi.org/10.1007/JHEP11(2014)118. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for the lepton flavor violating decay Z → eμ in pp collisions at √s = 8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 7 (October 23, 2014). https://doi.org/10.1103/PhysRevD.90.072010. The ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Electron and photon energy calibration with the ATLAS detector using LHC Run 1 data." European Physical Journal C 74, no. 10 (October 21, 2014): 1–48. https://doi.org/10.1140/epjc/s10052-014-3071-4. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of long-range pseudorapidity correlations and azimuthal harmonics in √sNN =5.02 TeV proton-lead collisions with the ATLAS detector." Physical Review C Nuclear Physics 90, no. 4 (October 9, 2014). https://doi.org/10.1103/PhysRevC.90.044906. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for WZ resonances in the fully leptonic channel using pp collisions at s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 737 (October 7, 2014): 223–43. https://doi.org/10.1016/j.physletb.2014.08.039. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Evidence for electroweak production of W±W±jj in pp collisions at sqrt[s] = 8 TeV with the ATLAS detector." Physical Review Letters 113, no. 14 (October 2014): 141803. https://doi.org/10.1103/physrevlett.113.141803. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. 
"Search for scalar diphoton resonances in the mass range 65-600 GeV with the ATLAS detector in pp collision data at √s=8 TeV." Physical Review Letters 113, no. 17 (October 2014): 171801. https://doi.org/10.1103/physrevlett.113.171801. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for squarks and gluinos with the ATLAS detector in final states with jets and missing transverse momentum using √s = 8 TeV proton-proton collision data." Journal of High Energy Physics 2014, no. 9 (September 30, 2014). https://doi.org/10.1007/JHEP09(2014)176. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for pair-produced third-generation squarks decaying via charm quarks or in compressed supersymmetric scenarios in pp collisions at √s =8TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 5 (September 24, 2014). https://doi.org/10.1103/PhysRevD.90.052008. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Flavor tagged time-dependent angular analysis of the Bs0 →J/ψφ decay and extraction of ΔΓs and the weak phase φs in ATLAS." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 5 (September 23, 2014). https://doi.org/10.1103/PhysRevD.90.052007. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for high-mass dilepton resonances in pp collisions at s =8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 5 (September 19, 2014). https://doi.org/10.1103/PhysRevD.90.052005. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurements of fiducial and differential cross sections for Higgs boson production in the diphoton decay channel at √s = 8 TeV with ATLAS." Journal of High Energy Physics 2014, no. 9 (September 19, 2014). https://doi.org/10.1007/JHEP09(2014)112. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the Higgs boson mass from the H → γγ and H → ZZ∗- 4ℓ channels inpp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 5 (September 9, 2014). https://doi.org/10.1103/PhysRevD.90.052004. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for supersymmetry in events with four or more leptons in √s =8 TeV pp collisions with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 5 (September 4, 2014). https://doi.org/10.1103/PhysRevD.90.052001. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, B. Abi, et al. "A neural network clustering algorithm for the ATLAS silicon pixel detector." Journal of Instrumentation 9, no. 9 (September 1, 2014). https://doi.org/10.1088/1748-0221/9/09/P09009. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for direct pair production of the top squark in all-hadronic final states in proton-proton collisions at √s = 8TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 9 (September 1, 2014): 1–51. https://doi.org/10.1007/JHEP09(2014)015. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. 
"Search for new particles in events with one lepton and missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 9 (September 1, 2014). https://doi.org/10.1007/JHEP09(2014)037. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the Z/γ* boson transverse momentum distribution in pp collisions at √s = 7 TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 9 (September 1, 2014). https://doi.org/10.1007/JHEP09(2014)145. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of event-plane correlations in √sNN = 2.76 TeV lead-lead collisions with the ATLAS detector." Physical Review C Nuclear Physics 90, no. 2 (August 12, 2014). https://doi.org/10.1103/PhysRevC.90.024905. The ATLAS Collaboration, L., G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, et al. "Measurement of the underlying event in jet events from 7 TeV proton–proton collisions with the ATLAS detector." European Physical Journal C 74, no. 8 (August 1, 2014): 1–29. https://doi.org/10.1140/epjc/s10052-014-2965-5. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of χc1 and ψχc2 production with √s = 7 TeV pp collisions at ATLAS." Journal of High Energy Physics 2014, no. 7 (July 30, 2014): 1–51. https://doi.org/10.1007/JHEP07(2014)154. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Study of orbitally excited B mesons and evidence for a new Bπ resonance." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 1 (July 28, 2014). https://doi.org/10.1103/PhysRevD.90.012013. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Search for new physics in trilepton events and limits on the associated chargino-neutralino production at CDF." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 1 (July 23, 2014). https://doi.org/10.1103/PhysRevD.90.012011. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for dark matter in events with a Z boson and missing transverse momentum in pp collisions at s =8TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 90, no. 1 (July 10, 2014). https://doi.org/10.1103/PhysRevD.90.012004. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Monitoring and data quality assessment of the ATLAS liquid argon calorimeter." Journal of Instrumentation 9, no. 7 (July 1, 2014). https://doi.org/10.1088/1748-0221/9/07/P07024. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the inclusive leptonic asymmetry in top-quark pairs that decay to two charged leptons at CDF." Physical Review Letters 113, no. 4 (July 2014): 042001. https://doi.org/10.1103/physrevlett.113.042001. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the ZZ production cross section using the full CDF II data set." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 11 (June 3, 2014). https://doi.org/10.1103/PhysRevD.89.112001. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurements of four-lepton production at the Z resonance in pp collisions at sqrt[s] = 7 and 8 TeV with ATLAS." 
Physical Review Letters 112, no. 23 (June 2014): 231806. https://doi.org/10.1103/physrevlett.112.231806. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Evidence for s-channel single-top-quark production in events with one charged lepton and two jets at CDF." Physical Review Letters 112, no. 23 (June 2014): 231804. https://doi.org/10.1103/physrevlett.112.231804. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Search for s-channel single-top-quark production in events with missing energy plus jets in pp collisions at sqrt[s] = 1.96 TeV." Physical Review Letters 112, no. 23 (June 2014): 231805. https://doi.org/10.1103/physrevlett.112.231805. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of B(t→Wb)/B(t→Wq) in top-quark-pair decays using dilepton events and the full CDF Run II data set." Physical Review Letters 112, no. 22 (June 2014): 221801. https://doi.org/10.1103/physrevlett.112.221801. Aaltonen, T., V. M. Abazov, B. Abbott, B. S. Acharya, M. Adams, T. Adams, J. P. Agnew, et al. "Observation of s-channel production of single top quarks at the Tevatron." Physical Review Letters 112, no. 23 (June 2014): 231803. https://doi.org/10.1103/physrevlett.112.231803. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the parity-violating asymmetry parameter αb and the helicity amplitudes for the decay Λb0 →j /ψ Λ0 with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 9 (May 27, 2014). https://doi.org/10.1103/PhysRevD.89.092009. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for direct production of charginos, neutralinos and sleptons in final states with two leptons and missing transverse momentum in pp collisions at √s = 8TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 5 (May 16, 2014). https://doi.org/10.1007/JHEP05(2014)071. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Study of top quark production and decays involving a tau lepton at CDF and limits on a charged Higgs boson contribution." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 9 (May 13, 2014). https://doi.org/10.1103/PhysRevD.89.091101. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Invariant-mass distribution of jet pairs produced in association with a W boson in p p ̄ collisions at s =1.96TeV using the full CDF Run II data set invariant-mass distribution of ⋯ T. Aaltonen et al." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 9 (May 5, 2014). https://doi.org/10.1103/PhysRevD.89.092001. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for Higgs boson decays to a photon and a Z boson in pp collisions at s=7 and 8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 732 (May 1, 2014): 8–27. https://doi.org/10.1016/j.physletb.2014.03.015. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the production of a W boson in association with a charm quark in pp collisions at √s = 7TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 5 (May 1, 2014): 1–67. https://doi.org/10.1007/JHEP05(2014)068. 
Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Mass and lifetime measurements of bottom and charm baryons in p p ̄ collisions at s =1.96TeV MASS and LIFETIME MEASUREMENTS of BOTTOM and ⋯ T. AALTONEN." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 7 (April 22, 2014). https://doi.org/10.1103/PhysRevD.89.072014. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. "Study of heavy-flavor quarks produced in association with top-quark pairs at s =7TeV using the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 7 (April 21, 2014). https://doi.org/10.1103/PhysRevD.89.072012. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Indirect measurement of sin2θW (or MW) using μ+μ- pairs from γ* /Z bosons produced in p p ̄ collisions at a center-of-momentum energy of 1.96 TeV INDIRECT MEASUREMENT of sin2θW (OR ... T. AALTONEN et al." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 7 (April 4, 2014). https://doi.org/10.1103/PhysRevD.89.072005. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Precise measurement of the W -boson mass with the Collider Detector at Fermilab." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 7 (April 3, 2014). https://doi.org/10.1103/PhysRevD.89.072003. Aaltonen, T., V. M. Abazov, B. Abbott, B. S. Acharya, M. Adams, T. Adams, J. P. Agnew, et al. "Combination of measurements of the top-quark pair production cross section from the Tevatron Collider." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 7 (April 1, 2014). https://doi.org/10.1103/PhysRevD.89.072001. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the production cross section of prompt J/ψ mesons in association with a W± boson in pp collisions at √s = 7 TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 4 (April 1, 2014). https://doi.org/10.1007/JHEP04(2014)172. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. "Measurement of the inclusive isolated prompt photons cross section in pp collisions at √s =7 TeV with the ATLAS detector using 4.6 fb-1." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 5 (March 24, 2014). https://doi.org/10.1103/PhysRevD.89.052004. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for quantum black hole production in high-invariant-mass lepton+jet final states using pp collisions at √s=8 TeV and the ATLAS detector." Physical Review Letters 112, no. 9 (March 2014): 091804. https://doi.org/10.1103/physrevlett.112.091804. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "First search for exotic Z Boson decays into photons and neutral pions in hadron collisions." Physical Review Letters 112, no. 11 (March 2014): 111803. https://doi.org/10.1103/physrevlett.112.111803. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for a multi-Higgs-boson cascade in W+W -bb̄ events with the ATLAS detector in pp collisions at √s = 8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 89, no. 3 (February 19, 2014). https://doi.org/10.1103/PhysRevD.89.032002. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. 
Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the mass difference between top and anti-top quarks in pp collisions at √s=7 TeV using the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 728 (January 20, 2014): 363–79. https://doi.org/10.1016/j.physletb.2013.12.010. ATLAS Collaboration, Joshua L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurements of jet vetoes and azimuthal decorrelations in dijet events produced in [Formula: see text] collisions at [Formula: see text] using the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 11 (January 2014): 3117. https://doi.org/10.1140/epjc/s10052-014-3117-7. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for direct top squark pair production in events with a [Formula: see text] boson, [Formula: see text]-jets and missing transverse momentum in [Formula: see text] TeV [Formula: see text] collisions with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 6 (January 2014): 2883. https://doi.org/10.1140/epjc/s10052-014-2883-6. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the centrality and pseudorapidity dependence of the integrated elliptic flow in lead-lead collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 8 (January 2014): 2982. https://doi.org/10.1140/epjc/s10052-014-2982-4. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the muon reconstruction performance of the ATLAS detector using 2011 and 2012 LHC proton-proton collision data." The European Physical Journal. C, Particles and Fields 74, no. 11 (January 2014): 3130. https://doi.org/10.1140/epjc/s10052-014-3130-x. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "A measurement of the ratio of the production cross sections for [Formula: see text] and [Formula: see text] bosons in association with jets with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 12 (January 2014): 3168. https://doi.org/10.1140/epjc/s10052-014-3168-9. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for contact interactions and large extra dimensions in the dilepton channel using proton-proton collisions at [Formula: see text] 8 TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 12 (January 2014): 3134. https://doi.org/10.1140/epjc/s10052-014-3134-6. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of flow harmonics with multi-particle cumulants in Pb+Pb collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 11 (January 2014): 3157. https://doi.org/10.1140/epjc/s10052-014-3157-z. ATLAS Collaboration, L., G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S Abdel Khalek, O. Abdinov, et al. "Electron reconstruction and identification efficiency measurements with the ATLAS detector using the 2011 LHC proton-proton collision data." The European Physical Journal. C, Particles and Fields 74, no. 7 (January 2014): 2941. https://doi.org/10.1140/epjc/s10052-014-2941-0. 
ATLAS Collaboration, L., G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, et al. "Muon reconstruction efficiency and momentum resolution of the ATLAS experiment in proton-proton collisions at [Formula: see text] TeV in 2010." The European Physical Journal. C, Particles and Fields 74, no. 9 (January 2014): 3034. https://doi.org/10.1140/epjc/s10052-014-3034-9. ATLAS Collaboration, Larry B., G. Aad, T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, et al. "The differential production cross section of the [Formula: see text](1020) meson in [Formula: see text] = 7 TeV [Formula: see text] collisions measured with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 7 (January 2014): 2895. https://doi.org/10.1140/epjc/s10052-014-2895-2. ATLAS Collaboration, R. J., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of distributions sensitive to the underlying event in inclusive Z-boson production in [Formula: see text] collisions at [Formula: see text] TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 12 (January 2014): 3195. https://doi.org/10.1140/epjc/s10052-014-3195-6. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for supersymmetry at √s= 8 TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector." Journal of High Energy Physics 2014, no. 6 (January 1, 2014). https://doi.org/10.1007/JHEP06(2014)035. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for microscopic black holes and string balls in final states with leptons and jets with the ATLAS detector at √s = 8 TeV." Journal of High Energy Physics 2014, no. 8 (January 1, 2014). https://doi.org/10.1007/JHEP08(2014)103. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Search for top quark decays t → qH with H → γγ using the ATLAS detector." Journal of High Energy Physics 2014, no. 6 (January 1, 2014). https://doi.org/10.1007/JHEP06(2014)008. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Light-quark and gluon jet discrimination in [Formula: see text] collisions at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 8 (January 2014): 3023. https://doi.org/10.1140/epjc/s10052-014-3023-z. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Operation and performance of the ATLAS semiconductor tracker." Journal of Instrumentation 9, no. 8 (January 1, 2014). https://doi.org/10.1088/1748-0221/9/08/P08009. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in pp collisions at √s=8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 738 (January 1, 2014): 234–53. https://doi.org/10.1016/j.physletb.2014.09.054. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, B. Abi, et al. "Measurement of the [Formula: see text] production cross-section using [Formula: see text] events with [Formula: see text]-tagged jets in [Formula: see text] collisions at [Formula: see text] and 8 TeV with the ATLAS detector." The European Physical Journal. C, Particles and Fields 74, no. 
10 (January 2014): 3109. https://doi.org/10.1140/epjc/s10052-014-3109-7. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, et al. "Measurement of the electroweak production of dijets in association with a Z-boson and distributions sensitive to vector boson fusion in proton-proton collisions at √s= 8 TeV using the ATLAS detector." Journal of High Energy Physics 2014, no. 4 (January 1, 2014). https://doi.org/10.1007/JHEP04(2014)031. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, et al. "Measurement of the low-mass Drell-Yan differential cross section at √s = 7 TeV using the ATLAS detector." Journal of High Energy Physics 2014, no. 6 (January 1, 2014). https://doi.org/10.1007/JHEP06(2014)112. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Erratum: Search for new phenomena in final states with large jet multiplicities and missing transverse momentum at √s = 8 TeV proton-proton collisions using the ATLAS experiment (Journal of High Energy Physics (2013) 10 (130))." Journal of High Energy Physics 2014, no. 1 (January 1, 2014). https://doi.org/10.1007/JHEP01(2014)109. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for dark matter in events with a hadronically decaying W or Z boson and missing transverse momentum in pp collisions at √s=8 TeV with the ATLAS detector." Physical Review Letters 112, no. 4 (January 2014): 041802. https://doi.org/10.1103/physrevlett.112.041802. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for new phenomena in photon + jet events collected in proton-proton collisions at √s = 8 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 728, no. 1 (January 1, 2014): 562–78. https://doi.org/10.1016/j.physletb.2013.12.029. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of the top quark pair production charge asymmetry in proton-proton collisions at √s = 7 TeV using the ATLAS detector." Journal of High Energy Physics 2014, no. 2 (January 1, 2014). https://doi.org/10.1007/JHEP02(2014)107. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Standalone vertex finding in the ATLAS muon spectrometer." Journal of Instrumentation 9, no. 2 (January 1, 2014). https://doi.org/10.1088/1748-0221/9/02/P02001. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in pv s = 8 TeV pp collisions with the ATLAS detector." Journal of High Energy Physics 2014, no. 4 (January 1, 2014). https://doi.org/10.1007/JHEP04(2014)169. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of dijet cross-sections in pp collisions at 7 TeV centre-of-mass energy using the ATLAS detector." Journal of High Energy Physics 2014, no. 5 (January 1, 2014). https://doi.org/10.1007/JHEP05(2014)059. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for direct top-squark pair production in final states with two leptons in pp collisions at √s = 8 TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 6 (January 1, 2014). https://doi.org/10.1007/JHEP06(2014)124. The ATLAS collaboration, A., G. Aad, B. 
Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 20 fb−1 of √s= 8 TeV proton-proton collision data with the ATLAS detector." Social Psychiatry and Psychiatric Epidemiology 2014, no. 9 (2014). https://doi.org/10.1007/JHEP09(2014)103. The ATLAS collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for long-lived neutral particles decaying into lepton jets in proton-proton collisions at √s= 8TeV with the ATLAS detector." Journal of High Energy Physics 2014, no. 11 (January 1, 2014). https://doi.org/10.1007/JHEP11(2014)088. The, ATLAS collaboration, G. Aad, B. Abbott, J. Abdallah, S. A. Khalek, O. Abdinov, R. Aben, et al. "Search for the direct production of charginos, neutralinos and staus in final states with at least two hadronically decaying taus and missing transverse momentum in pp collisions at √ $$ \sqrt{s}=8 $$ TeV with the ATLAS detector." Critical Ultrasound Journal, January 1, 2014. https://doi.org/10.1007/JHEP10(2014)096. The, ATLAS collaboration, G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Search for strong production of supersymmetric particles in final states with missing transverse momentum and at least three b-jets at (formula presented) TeV proton-proton collisions with the ATLAS detector." Critical Ultrasound Journal, January 1, 2014. https://doi.org/10.1007/JHEP10(2014)024. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, O. Abdinov, R. Aben, et al. "Measurement of top quark polarization in top-antitop events from proton-proton collisions at √s=7 TeV using the ATLAS detector." Physical Review Letters 111, no. 23 (December 2013): 232002. https://doi.org/10.1103/physrevlett.111.232002. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Observation of D⁰-D¯⁰ mixing using the CDF II detector." Physical Review Letters 111, no. 23 (December 2013): 231802. https://doi.org/10.1103/physrevlett.111.231802. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Production of KS0, K*(892) and φ0(1020) in minimum bias events and KS0 and Λ0 in jets in pp̄ collisions at √s=1.96 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 88, no. 9 (November 26, 2013). https://doi.org/10.1103/PhysRevD.88.092005. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Search for a dijet resonance in events with jets and missing transverse energy in pp̄ collisions at √s=1.96 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 88, no. 9 (November 26, 2013). https://doi.org/10.1103/PhysRevD.88.092004. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Search for the production of ZW and ZZ boson pairs decaying into charged leptons and jets in pp̄ collisions at √s=1.96 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 88, no. 9 (November 13, 2013). https://doi.org/10.1103/PhysRevD.88.092002. Aaltonen, T., S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, et al. "Measurement of the top-quark pair-production cross section in events with two leptons and bottom-quark jets using the full CDF data set." Physical Review D Particles, Fields, Gravitation and Cosmology 88, no. 9 (November 11, 2013). https://doi.org/10.1103/PhysRevD.88.091103. 
"Measurement of the top quark pair production cross section in pp collisions at √s=7TeV in dilepton final states with ATLAS." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 707, no. 5 (February 7, 2012): 459–77. https://doi.org/10.1016/j.physletb.2011.12.055. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for displaced vertices arising from decays of new heavy particles in 7 TeV pp collisions at ATLAS." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 707, no. 5 (February 7, 2012): 478–96. https://doi.org/10.1016/j.physletb.2011.12.057. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Search for the rare radiative decay W→πγ in pp̄ collisions at √s=1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 3 (February 2, 2012). https://doi.org/10.1103/PhysRevD.85.032001. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the cross section for the production of a W boson in association with b-jets in pp collisions at √s = 7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 707, no. 5 (February 1, 2012): 418–37. https://doi.org/10.1016/j.physletb.2011.12.046. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the pseudorapidity and transverse momentum dependence of the elliptic flow of charged particles in lead-lead collisions at sNN=2.76 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 707, no. 3–4 (February 1, 2012): 330–48. https://doi.org/10.1016/j.physletb.2011.12.056. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurements of the angular distributions in the decays B→K(*)μ(+)μ(-) at CDF." Physical Review Letters 108, no. 8 (February 2012): 081807. https://doi.org/10.1103/physrevlett.108.081807. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurement of CP-violating asymmetries in D0→π +π - and D0→K +K - decays at CDF." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 27, 2012). https://doi.org/10.1103/PhysRevD.85.012009. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Search for heavy metastable particles decaying to jet pairs in pp̄ collisions at √s=1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 24, 2012). https://doi.org/10.1103/PhysRevD.85.012007. Aaltonen, T., B. Álvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Search for high-mass resonances decaying into ZZ in pp̄ collisions at √s=1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 24, 2012). https://doi.org/10.1103/PhysRevD.85.012008. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the transverse momentum distribution of W bosons in pp collisions at √s=7TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 18, 2012). https://doi.org/10.1103/PhysRevD.85.012005. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. 
"Search for supersymmetry in final states with jets, missing transverse momentum and one isolated lepton in √s=7TeV pp collisions using 1fb -1 of ATLAS data." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 18, 2012). https://doi.org/10.1103/PhysRevD.85.012006. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for a heavy Standard Model Higgs boson in the channel H→ZZ→ℓ+ℓ-qq̄ using the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 707, no. 1 (January 16, 2012): 27–45. https://doi.org/10.1016/j.physletb.2011.11.056. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the isolated diphoton cross section in pp collisions at √s=7TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 11, 2012). https://doi.org/10.1103/PhysRevD.85.012003. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Search for WZ+ZZ production with missing transverse energy+jets with b enhancement at √s=1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 6, 2012). https://doi.org/10.1103/PhysRevD.85.012002. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Ks0 and Λ production in pp interactions at √s=0.9 and 7 TeV measured with the ATLAS detector at the LHC." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 1 (January 6, 2012). https://doi.org/10.1103/PhysRevD.85.012001. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the W→τντ cross section in pp collisions at s=7 TeV with the ATLAS experiment." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 706, no. 4–5 (January 5, 2012): 276–94. https://doi.org/10.1016/j.physletb.2011.11.057. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the cross-section for b-jets produced in association with a Z boson at s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 706, no. 4–5 (January 5, 2012): 295–313. https://doi.org/10.1016/j.physletb.2011.11.059. Aaltonen, T., B. A. González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Production of Λ0, Λ ̄0, Ξ ±, and Ω± hyperons in pp̄ collisions at √s=1.96TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 86, no. 1 (2012). https://doi.org/10.1103/PhysRevD.86.012002. Aaltonen, T., B. Á. González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurement of the branching fraction B(Λb0→Λc+π -π +π -) at CDF." Physical Review D Particles, Fields, Gravitation and Cosmology 85, no. 3 (2012). https://doi.org/10.1103/PhysRevD.85.032003. Abelev, B., J. Adam, D. Adamova, A. Marshall Adare, M. Aggarwal, G. Aglieri Rinella, A. Gabor Agocs, et al. "Measurement of prompt J/ψand beauty hadron production cross sections at mid-rapidity in pp collisions at √s = 7 TeV." Journal of High Energy Physics 2012, no. 11 (January 1, 2012). https://doi.org/10.1007/JHEP11(2012)065. Cho, B. K., J. H. Lee, C. C. Crellin, K. L. Olson, D. L. Hilden, M. K. Kim, P. S. Kim, I. Heo, S. H. Oh, and I. S. Nam. "Selective catalytic reduction of NO x by diesel fuel: Plasma-assisted HC/SCR system." Catalysis Today 191, no. 
1 (2012): 20–24. https://doi.org/10.1016/j.cattod.2012.03.044. Kang, S. B., S. J. Han, S. B. Nam, I. S. Nam, B. K. Cho, C. H. Kim, and S. H. Oh. "Activity function describing the effect of Pd loading on the catalytic performance of modern commercial TWC." Chemical Engineering Journal 207–208 (2012): 117–21. https://doi.org/10.1016/j.cej.2012.06.003. Kim, M. K., P. S. Kim, B. K. Cho, I. S. Nam, and S. H. Oh. "Enhanced NOx reduction and byproduct removal by (HC + OHC)/SCR over multifunctional dual-bed monolith catalyst." Catalysis Today 184, no. 1 (2012): 95–106. https://doi.org/10.1016/j.cattod.2011.11.010. Kim, M. K., P. S. Kim, H. J. Kwon, I. S. Nam, B. K. Cho, and S. H. Oh. "Simulation of OHC/SCR process over Ag/Al2O3 catalyst for removing NOx from diesel engine." Chemical Engineering Journal 209 (2012): 280–92. https://doi.org/10.1016/j.cej.2012.08.002. Kotwal, A. V., and C. D. F. Collaboration. "Search for new phenomena in events with two Z bosons and missing transverse momentum in ppbar collisions at sqrt(s)=1.96 TeV." Phys. Rev. D 85 (January 2012): 011104(R). Wiebenga, M. H., C. H. Kim, S. J. Schmieg, S. H. Oh, D. B. Brown, D. H. Kim, J. H. Lee, and C. H. F. Peden. "Deactivation mechanisms of Pt/Pd-based diesel oxidation catalysts." Catalysis Today 184, no. 1 (2012): 197–204. https://doi.org/10.1016/j.cattod.2011.11.014. ATLAS Collaboration, Atlas. "Search for light top squark pair production in final states with leptons and b-jets with the ATLAS detector in sqrt(s) = 7 TeV proton-proton collisions (Submitted)." Physics Letters B, 2012. ATLAS Collaboration, L. "A search for ttbar resonances with the ATLAS detector in 2.05 fb^-1 of proton-proton collisions at sqrt(s) = 7 TeV (Submitted)." The European Physical Journal C, 2012. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Search for lepton flavour violation in the eμ continuum with the ATLAS detector in [Formula: see text]pp collisions at the LHC." The European Physical Journal. C, Particles and Fields 72, no. 6 (January 2012): 2040. https://doi.org/10.1140/epjc/s10052-012-2040-z. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Measurement of the top quark mass with the template method in the [Formula: see text] channel using ATLAS data." The European Physical Journal. C, Particles and Fields 72, no. 6 (January 2012): 2046. https://doi.org/10.1140/epjc/s10052-012-2046-6. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Measurement of the charge asymmetry in top quark pair production in pp collisions at [Formula: see text] using the ATLAS detector." The European Physical Journal. C, Particles and Fields 72, no. 6 (January 2012): 2039. https://doi.org/10.1140/epjc/s10052-012-2039-5. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, A. Abdesselam, et al. "Measurement of τ polarization in W→τν decays with the ATLAS detector in pp collisions at [Formula: see text]." The European Physical Journal. C, Particles and Fields 72, no. 7 (January 2012): 2062. https://doi.org/10.1140/epjc/s10052-012-2062-6. ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, A. Abdesselam, et al. "Measurement of [Formula: see text] production with a veto on additional central jet activity in pp collisions at [Formula: see text] TeV using the ATLAS detector." The European Physical Journal. 
C, Particles and Fields 72, no. 6 (January 2012): 2043. https://doi.org/10.1140/epjc/s10052-012-2043-9. ATLAS Collaboration, V. "Search for a heavy top-quark partner in final states with two leptons with the ATLAS detector at the LHC (Accepted)." Jhep 1211 (2012): 094. https://doi.org/10.1007/JHEP11(2012)094. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the top quark pair production cross-section with ATLAS in the single lepton channel." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 711, no. 3–4 (2012): 244–63. https://doi.org/10.1016/j.physletb.2012.03.083. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for TeV-scale gravity signatures in final states with leptons and jets with the ATLAS detector at √ s=7 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 716, no. 1 (2012): 122–41. https://doi.org/10.1016/j.physletb.2012.08.009. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for new particles decaying to ZZ using final states with leptons and jets with the ATLAS detector in √s=7 TeV proton-proton collisions." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 712, no. 4–5 (2012): 331–50. https://doi.org/10.1016/j.physletb.2012.05.020. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for diphoton events with large missing transverse momentum in 1 fb -1 of 7 TeV proton-proton collision data with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 710, no. 4–5 (2012): 519–37. https://doi.org/10.1016/j.physletb.2012.02.054. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for extra dimensions using diphoton events in 7 TeV proton-proton collisions with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 710, no. 4–5 (2012): 538–56. https://doi.org/10.1016/j.physletb.2012.03.022. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for contact interactions in dilepton events from pp collisions at √s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 712, no. 1–2 (2012): 40–58. https://doi.org/10.1016/j.physletb.2012.04.026. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Forward-backward correlations and charged-particle azimuthal distributions in pp interactions using the ATLAS detector." Journal of High Energy Physics 2012, no. 7 (2012). Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at √s =7 TeV with the ATLAS experiment." European Physical Journal C 72, no. 5 (January 1, 2012): 1–30. https://doi.org/10.1140/epjc/s10052-012-2001-6. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for decays of stopped, long-lived particles from 7 TeV pp collisions with the ATLAS detector." European Physical Journal C 72, no. 4 (January 1, 2012): 1–21. https://doi.org/10.1140/epjc/s10052-012-1965-6. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. 
"Search for new phenomena in tt events with large missing transverse momentum in proton-proton collisions at sqrt[s] = 7 TeV with the ATLAS detector." Physical Review Letters 108, no. 4 (January 2012): 041805. https://doi.org/10.1103/physrevlett.108.041805. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the ZZ production cross section and limits on anomalous neutral triple gauge couplings in proton-proton collisions at sqrt[s] = 7 TeV with the ATLAS detector." Physical Review Letters 108, no. 4 (January 2012): 041804. https://doi.org/10.1103/physrevlett.108.041804. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "A study of the material in the ATLAS inner detector using secondary hadronic interactions." Journal of Instrumentation 7, no. 1 (January 1, 2012). https://doi.org/10.1088/1748-0221/7/01/P01013. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Performance of missing transverse momentum reconstruction in proton-proton collisions at√s = 7 TeV with atlas." European Physical Journal C 72, no. 1 (January 1, 2012): 1–35. https://doi.org/10.1140/epjc/s10052-011-1844-6. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Performance of the ATLAS trigger system in 2010." European Physical Journal C 72, no. 1 (January 1, 2012): 1–61. https://doi.org/10.1140/epjc/s10052-011-1849-1. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Forward-backward correlations and charged-particle azimuthal distributions in pp interactions using the ATLAS detector." Journal of High Energy Physics 2012, no. 7 (January 1, 2012). https://doi.org/10.1007/JHEP07(2012)019. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of inclusive two-particle angular correlations in pp collisions with the atlas detector at the LHC." Journal of High Energy Physics 2012, no. 5 (January 1, 2012). https://doi.org/10.1007/JHEP05(2012)157. Aad, G., B. Abbott, J. Abdallah, S Abdel Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Search for the Standard Model Higgs boson produced in association with a vector boson and decaying to a b-quark pair with the ATLAS detector." Physics Letters B 718, no. 2 (2012): 369–90. https://doi.org/10.1016/j.physletb.2012.10.061. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Search for heavy vector-like quarks coupling to light quarks in proton-proton collisions at √s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 712, no. 1–2 (2012): 22–39. https://doi.org/10.1016/j.physletb.2012.03.082. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Measurement of the WW cross section in √s=7 TeV pp collisions with the ATLAS detector and limits on anomalous gauge couplings." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 712, no. 4–5 (2012): 289–308. https://doi.org/10.1016/j.physletb.2012.05.003. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Search for the Standard Model Higgs boson in the decay channel H→ZZ(*)→4ℓ with 4.8 fb-1 of pp collision data at √s=7 TeV with ATLAS." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 710, no. 
3 (2012): 383–402. https://doi.org/10.1016/j.physletb.2012.03.005. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Search for events with large missing transverse momentum, jets, and at least two tau leptons in 7 TeV proton-proton collision data with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 714, no. 2–5 (2012): 180–96. https://doi.org/10.1016/j.physletb.2012.06.055. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Search for a Standard Model Higgs boson in the mass range 200-600GeV in the H→ZZ→ℓ+ℓ-qq̄ decay channel with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 717, no. 1–3 (2012): 70–88. https://doi.org/10.1016/j.physletb.2012.09.020. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Measurement of the top quark pair cross section with ATLAS in pp collisions at s=7 TeV using final states with an electron or a muon and a hadronically decaying τ lepton." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 717, no. 1–3 (2012): 89–108. https://doi.org/10.1016/j.physletb.2012.09.032. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Measurement of the b-hadron production cross section using decays to D *+μ -X final states in pp collisions at √s = 7 TeV with the ATLAS detector." Nuclear Physics B 864, no. 3 (2012): 341–81. https://doi.org/10.1016/j.nuclphysb.2012.07.009. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Search for the decay Bs0→μ+μ- with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 713, no. 4–5 (2012): 387–407. https://doi.org/10.1016/j.physletb.2012.06.013. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Search for the Standard Model Higgs boson in the H→WW({star operator})→ℓνℓν decay mode with 4.7 fb-1 of ATLAS data at √s=7 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 716, no. 1 (2012): 62–81. https://doi.org/10.1016/j.physletb.2012.08.010. Aad, G., B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Search for scalar top quark pair production in natural gauge mediated supersymmetry models with the ATLAS detector in pp collisions at s=7TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 715, no. 1–3 (2012): 44–60. https://doi.org/10.1016/j.physletb.2012.07.010. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Measurement of the azimuthal anisotropy for charged particle production in √SNN = 2.76 TeV lead-lead collisions with the ATLAS detector." Physical Review C Nuclear Physics 86, no. 1 (January 1, 2012): 014907-1-014907–41. https://doi.org/10.1103/PhysRevC.86.014907. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Search for a fermiophobic Higgs boson in the diphoton decay channel with the ATLAS detector." European Physical Journal C 72, no. 9 (January 1, 2012). https://doi.org/10.1140/epjc/s10052-012-2157-0. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. 
"Search for the Standard Model Higgs boson in the H → τ+τ- decay mode in √s = 7TeV pp collisions with ATLAS." Journal of High Energy Physics 2012, no. 9 (January 1, 2012). https://doi.org/10.1007/JHEP09(2012)070. Aad, G., B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, B. Abi, et al. "Measurement of the W boson polarization in top quark decays with the ATLAS detector." Journal of High Energy Physics 2012, no. 6 (January 1, 2012). https://doi.org/10.1007/JHEP06(2012)088. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. "Time-dependent angular analysis of the decay B-s(0) -> J/psi phi and extraction of Delta Gamma(s) and the CP-violating weak phase phi(s) by ATLAS." Journal of High Energy Physics, no. 12 (2012). https://doi.org/10.1007/JHEP12(2012)072. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, et al. "Search for anomalous production of prompt like-sign lepton pairs at √s =7 TeV with the ATLAS detector." Journal of High Energy Physics 2012, no. 12 (January 1, 2012). https://doi.org/10.1007/JHEP12(2012)007. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. A. Khalek, A. A. Abdelalim, O. Abdinov, et al. "Search for pair production of massive particles decaying into three quarks with the ATLAS detector in √s = 7 TeV pp collisions at the LHC." Journal of High Energy Physics 2012, no. 12 (January 1, 2012). https://doi.org/10.1007/JHEP12(2012)086. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. "A search for tt̄ resonances in lepton+jets events with highly boosted top quarks collected in pp collisions at √s = 7TeV with the ATLAS detector." Journal of High Energy Physics 2012, no. 9 (January 1, 2012). https://doi.org/10.1007/JHEP09(2012)041. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. "Measurements of the pseudorapidity dependence of the total transverse energy in proton-proton collisions at √s = 7TeV with ATLAS." Journal of High Energy Physics 2012, no. 11 (January 1, 2012): 1–53. https://doi.org/10.1007/JHEP11(2012)033. Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. "ATLAS search for a heavy gauge boson decaying to a charged lepton and a neutrino in pp collisions at √s =7 TeV." European Physical Journal C 72, no. 12 (January 1, 2012). https://doi.org/10.1140/epjc/s10052-012-2241-5. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Search for a Higgs boson in the diphoton final state in pp collisions at sqrt[s]=1.96 TeV." Physical Review Letters 108, no. 1 (January 2012): 011801. https://doi.org/10.1103/physrevlett.108.011801. The ATLAS Collaboration, L., G. Aad, B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, et al. "Search for heavy neutrinos and right-handed W bosons in events with two leptons and jets in pp collisions at [Formula: see text] with the ATLAS detector." The European Physical Journal. C, Particles and Fields 72, no. 7 (January 2012): 2056. https://doi.org/10.1140/epjc/s10052-012-2056-4. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the jet fragmentation function and transverse profile in proton–proton collisions at a center-of-mass energy of 7 TeV with the ATLAS detector." European Physical Journal C 71, no. 11 (December 21, 2011). https://doi.org/10.1140/epjc/s10052-011-1795-y. 
Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the Z→ττ cross section with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 84, no. 11 (December 14, 2011). https://doi.org/10.1103/PhysRevD.84.112006. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for a heavy neutral particle decaying into an electron and a muon using 1 fb-1 of atlas data." European Physical Journal C 71, no. 12 (December 7, 2011). https://doi.org/10.1140/epjc/s10052-011-1809-9. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the inclusive isolated prompt photon cross-section in pp collisions at √s=7 TeV using 35 pb-1 of ATLAS data." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 706, no. 2–3 (December 6, 2011): 150–67. https://doi.org/10.1016/j.physletb.2011.11.010. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for the Higgs boson in the H→WW→lνjj decay channel in pp collisions at √s=7 TeV with the ATLAS detector." Physical Review Letters 107, no. 23 (December 2011): 231801. https://doi.org/10.1103/physrevlett.107.231801. Aaltonen, T., B. Alvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurement of the B(s)0 lifetime in fully and partially reconstructed B(s)0→D(s)(-)(ϕπ(-))X decays in p¯p collisions at √s=1.96 TeV." Physical Review Letters 107, no. 27 (December 2011): 272001. https://doi.org/10.1103/physrevlett.107.272001. Aaltonen, T., B. Álvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Search for a heavy toplike quark in pp collisions at √s=1.96 TeV." Physical Review Letters 107, no. 26 (December 2011): 261801. https://doi.org/10.1103/physrevlett.107.261801. Aaltonen, T., B. Álvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurement of polarization and search for CP violation in B(s)0→φφ decays." Physical Review Letters 107, no. 26 (December 2011): 261802. https://doi.org/10.1103/physrevlett.107.261802. Collaboration, AV Kotwal with G Aad et al A. T. L. A. S. "Search for dilepton resonances in pp collisions at 7 TeV with the ATLAS detector." Physical Review Letters 107 (December 2011): 272002. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for the Standard Model Higgs boson in the two photon decay channel with the ATLAS detector at the LHC." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 705, no. 5 (November 24, 2011): 452–70. https://doi.org/10.1016/j.physletb.2011.10.051. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for the Standard Model Higgs boson in the decay channel H→ZZ(*)→4ℓ with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 705, no. 5 (November 24, 2011): 435–51. https://doi.org/10.1016/j.physletb.2011.10.034. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the transverse momentum distribution of Z/γ* bosons in proton-proton collisions at s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 705, no. 5 (November 24, 2011): 415–34. https://doi.org/10.1016/j.physletb.2011.10.018. Aad, G., B. 
Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for new phenomena with the monojet and missing transverse momentum signature using the ATLAS detector in s=7TeV proton-proton collisions." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 705, no. 4 (November 17, 2011): 294–312. https://doi.org/10.1016/j.physletb.2011.10.006. Aaltonen, T et al. "Top-Quark Mass Measurement Using Events with Missing Transverse Energy and Jets at CDF." Phys. Rev. Lett. 107 (November 2011): 232002. https://doi.org/10.1103/PhysRevLett.107.232002. Aaltonen, T et al. "Observation of the Baryonic Flavor-Changing Neutral Current Decay $\Lambda_{b}^{0}\rightarrow\Lambda\mu^{+}\mu^{-}$." Phys. Rev. Lett. 107 (November 2011): 201802. https://doi.org/10.1103/PhysRevLett.107.201802. Aaltonen, T et al. "Search for New ${T}^\prime$ Particles in Final States with Large Jet Multiplicities and Missing Transverse Energy in $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 107 (November 2011): 191803. https://doi.org/10.1103/PhysRevLett.107.191803. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for a standard model Higgs boson in the H→ZZ→ℓ(+)ℓ(-)νν decay channel with the ATLAS detector." Physical Review Letters 107, no. 22 (November 2011): 221802. https://doi.org/10.1103/physrevlett.107.221802. Aaltonen, R. M., R. M. T, and R. M. others. "Measurements of branching fraction ratios and $CP$-asymmetries in suppressed ${B}^{-}\rightarrow D(\rightarrow{K}^{+}{\pi}^{-}){K}^{-}$ and ${B}^{-}\rightarrow D(\rightarrow{K}^{+} \pi^{-}){\pi}^{-}$ decays." Phys. Rev. D 84 (November 2011): 091504. https://doi.org/10.1103/PhysRevD.84.091504. Aaltonen, T et al. "Measurement of the top-quark mass in the lepton+jets channel using a matrix element technique with the {CDF II} detector." Phys. Rev. D 84 (October 2011): 071105. https://doi.org/10.1103/PhysRevD.84.071105. Aaltonen, T et al. "Search for resonant production of $t\overline{t}$ pairs in $4.8{\,}{\,}{\mathrm{fb}}^{\mathbf{-}1}$ of integrated luminosity of $p\overline{p}$ collisions at $\sqrt{s}\mathbf{=}1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. D 84 (October 2011): 072004. https://doi.org/10.1103/PhysRevD.84.072004. Aaltonen, T et al. "Search for New Physics in High ${p}_{T}$ Like-Sign Dilepton Events at {CDF II}." Phys. Rev. Lett. 107 (October 2011): 181801. https://doi.org/10.1103/PhysRevLett.107.181801. Aaltonen, T et al. "Search for resonant production of $t\overline{t}$ decaying to jets in $p\overline{p}$ collisions at $\sqrt{s}\mathbf{=}1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. D 84 (October 2011): 072003. https://doi.org/10.1103/PhysRevD.84.072003. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Inclusive search for same-sign dilepton signatures in pp collisions at √s = 7 TeV with the ATLAS detector." Journal of High Energy Physics 2011, no. 10 (October 1, 2011): 1–47. https://doi.org/10.1007/JHEP10(2011)107. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Properties of jets measured from tracks in proton-proton collisions at center-of-mass energy √s=7TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 84, no. 5 (September 20, 2011). https://doi.org/10.1103/PhysRevD.84.054001. Aaltonen, T et al. 
"Measurement of branching ratio and ${B}_{s}^{0}$ lifetime in the decay ${B}_{s}^{0}\rightarrow{J}/\psi{f}_{0}(980)$ at {CDF}." Phys. Rev. D 84 (September 2011): 052012. https://doi.org/10.1103/PhysRevD.84.052012. Aaltonen, T et al. "Search for the {H}iggs boson in the all-hadronic final state using the {CDF II} detector." Phys. Rev. D 84 (September 2011): 052010. https://doi.org/10.1103/PhysRevD.84.052010. Aaltonen, T et al. "Measurement of the Cross Section for Prompt Isolated Diphoton Production in $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 107 (September 2011): 102003. https://doi.org/10.1103/PhysRevLett.107.102003. ATLAS Collaboration, L. "Measurement of the inelastic proton-proton cross-section at √s=7 TeV with the ATLAS detector." Nature Communications 2 (September 2011): 463. https://doi.org/10.1038/ncomms1472. Aaltonen, G., G. T, and et al. "Measurement of the top pair production cross section in the $\mathrm{{lepton}}+\mathrm{{jets}}$ channel using a jet flavor discriminant." Phys. Rev. D 84 (August 2011): 031101. https://doi.org/10.1103/PhysRevD.84.031101. Aaltonen, S., S. T, and et al. "Measurement of the $t\overline{t}$ production cross section in $p\overline{p}$ collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$ using events with large missing transverse energy and jets." Phys. Rev. D 84 (August 2011): 032003. https://doi.org/10.1103/PhysRevD.84.032003. Aaltonen, S., S. T, and et al. "Observation of the $\Xi_{b}^{0}$ Baryon." Phys. Rev. Lett. 107 (August 2011): 102001. https://doi.org/10.1103/PhysRevLett.107.102001. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the W+ W- cross section in sqrt(s) = 7 TeV pp collisions with ATLAS." Physical Review Letters 107, no. 4 (July 2011): 041802. https://doi.org/10.1103/physrevlett.107.041802. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for contact interactions in dimuon events from pp collisions at √s=7TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 84, no. 1 (July 1, 2011). https://doi.org/10.1103/PhysRevD.84.011101. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for supersymmetric particles in events with lepton pairs and large missing transverse momentum in √s = 7 TeV proton-proton collisions with the ATLAS experiment." European Physical Journal C 71, no. 7 (July 1, 2011): 18–19. https://doi.org/10.1140/epjc/s10052-011-1682-6. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for an excess of events with an identical flavour lepton pair and significant missing transverse momentum in √s = TeV proton-proton collisions with the ATLAS detector." European Physical Journal C 71, no. 7 (July 1, 2011): 1–18. https://doi.org/10.1140/epjc/s10052-011-1647-9. Aaltonen, R. J., R. J. T, and et al. "Search for New Dielectron Resonances and {Randall-Sundrum} Gravitons at the {Collider Detector at Fermilab}." Phys. Rev. Lett. 107 (July 2011): 051801. https://doi.org/10.1103/PhysRevLett.107.051801. Aaltonen, S., S. T, and et al. "Limits on Anomalous Trilinear Gauge Couplings in ${Z}\gamma$ Events from $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 107 (July 2011): 051802. https://doi.org/10.1103/PhysRevLett.107.051802. Aaltonen, S., S. T, and et al. 
"First Search for Multijet Resonances in $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$ $p\overline{p}$ Collisions." Phys. Rev. Lett. 107 (July 2011): 042001. https://doi.org/10.1103/PhysRevLett.107.042001. Aaltonen, S., S. T, and et al. "Search for a Very Light ${CP}$-Odd {H}iggs Boson in Top Quark Decays from $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 107 (July 2011): 031801. https://doi.org/10.1103/PhysRevLett.107.031801. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the W charge asymmetry in the W→μν decay mode in pp collisions at √s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 701, no. 1 (June 27, 2011): 31–49. https://doi.org/10.1016/j.physletb.2011.05.024. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for high-mass states with one lepton plus missing transverse momentum in proton–proton collisions at s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 701, no. 1 (June 27, 2011): 50–69. https://doi.org/10.1016/j.physletb.2011.05.043. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for stable hadronising squarks and gluinos with the ATLAS experiment at the LHC." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 701, no. 1 (June 27, 2011): 1–19. https://doi.org/10.1016/j.physletb.2011.05.010. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for pair production of first or second generation leptoquarks in proton-proton collisions at √s=7TeV using the ATLAS detector at the LHC." Physical Review D Particles, Fields, Gravitation and Cosmology 83, no. 11 (June 15, 2011). https://doi.org/10.1103/PhysRevD.83.112006. Abat, E., J. M. Abdallah, T. N. Addy, P. Adragna, M. Aharrouche, A. Ahmad, T. P. A. Akesson, et al. "A layer correlation technique for pion energy calibration at the 2004 ATLAS Combined Beam Test." Journal of Instrumentation 6, no. 6 (June 1, 2011). https://doi.org/10.1088/1748-0221/6/06/P06001. Oh, S. H., W. L. Ebenstein, and C. W. Wang. "Multi-anode wire straw tube tracker." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 640, no. 1 (June 1, 2011): 160–63. https://doi.org/10.1016/j.nima.2011.02.105. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for a heavy particle decaying into an electron and a muon with the ATLAS detector in sqrt[s] = 7 TeV pp collisions at the LHC." Physical Review Letters 106, no. 25 (June 2011): 251801. https://doi.org/10.1103/physrevlett.106.251801. Aaltonen, A. M., A. M. T, and et al. "First Measurement of the Angular Coefficients of {Drell-Yan} ${e}^{+}{e}^{-}$ Pairs in the ${Z}$ Mass Region from $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 106 (June 2011): 241801. https://doi.org/10.1103/PhysRevLett.106.241801. Aaltonen, D., D. T, and et al. "Top quark mass measurement using the template method at {CDF}." Phys. Rev. D 83 (June 2011): 111101. https://doi.org/10.1103/PhysRevD.83.111101. Aaltonen, R. A., R. A. T, and et al. "Measurement of event shapes in $p\overline{p}$ collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. 
D 83 (June 2011): 112007. https://doi.org/10.1103/PhysRevD.83.112007. Aaltonen, S., S. T, and et al. "Evidence for a mass dependent forward-backward asymmetry in top quark pair production." Phys. Rev. D 83 (June 2011): 112003. https://doi.org/10.1103/PhysRevD.83.112003. Aaltonen, S., S. T, and et al. "Search for new heavy particles decaying to ${ZZ}\rightarrow llll$, $lljj$ in $p\overline{p}$ collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. D 83 (June 2011): 112008. https://doi.org/10.1103/PhysRevD.83.112008. Collaboration, AV Kotwal with G Aad et al A. T. L. A. S. "Search for high mass dilepton resonances in pp collisions at 7 TeV with the ATLAS experiment." Physics Letters B 700, no. 3–4 (June 2011): 163–80. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of underlying event characteristics using charged particles in pp collisions at √s=900GeV and 7 TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 83, no. 11 (May 31, 2011). https://doi.org/10.1103/PhysRevD.83.112001. Aaltonen, S., S. T, and et al. "Measurements of Direct ${CP}$ Violating Asymmetries in Charmless Decays of Strange Bottom Mesons and Bottom Baryons." Phys. Rev. Lett. 106 (May 2011): 181802. https://doi.org/10.1103/PhysRevLett.106.181802. Aaltonen, S., S. T, and et al. "Search for Production of Heavy Particles Decaying to Top Quarks and Invisible Particles in $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 106 (May 2011): 191801. https://doi.org/10.1103/PhysRevLett.106.191801. Aaltonen, T., B. Álvarez González, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, et al. "Measurement of the top quark mass in the lepton+jets channel using the lepton transverse momentum." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 698, no. 5 (April 25, 2011): 371–79. https://doi.org/10.1016/j.physletb.2011.03.041. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for massive long-lived highly ionising particles with the ATLAS detector at the LHC." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 698, no. 5 (April 25, 2011): 353–70. https://doi.org/10.1016/j.physletb.2011.03.033. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the production cross section for W-bosons in association with jets in pp collisions at √s=7 TeV with the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 698, no. 5 (April 25, 2011): 325–45. https://doi.org/10.1016/j.physletb.2011.03.012. Abat, E., M. Abdallah, N. Addy, P. Adragna, M. Aharrouche, A. Ahmad, A. Akesson, et al. "Photon reconstruction in the ATLAS inner detector and liquid argon barrel calorimeter at the 2004 combined test beam." Journal of Instrumentation 6, no. 4 (April 1, 2011). https://doi.org/10.1088/1748-0221/6/04/P04001. Aad, D., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Luminosity determination in pp collisions at √s = 7 TeV using the ATLAS detector at the LHC." European Physical Journal C 71, no. 4 (April 1, 2011): 1–37. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of dijet azimuthal decorrelations in pp collisions at sqrt(s)=7 TeV." Physical Review Letters 106, no. 17 (April 2011): 172002. 
https://doi.org/10.1103/physrevlett.106.172002. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for supersymmetry using final states with one lepton, jets, and missing transverse momentum with the ATLAS detector in √s=7 TeV pp collisions." Physical Review Letters 106, no. 13 (April 2011): 131802. https://doi.org/10.1103/physrevlett.106.131802. Aaltonen, C. A., C. A. T, and et al. "Measurement of the Forward-Backward Asymmetry in the ${B}\rightarrow{K}^{(*)}\mu^{+}\mu^{-}$ Decay and First Observation of the ${B}_{s}^{0}$\rightarrow${}$\phi${}{$\mu${}}^{+}{$\mu${}}^{-}$ Decay." Phys. Rev. Lett. 106 (April 2011): 161801. https://doi.org/10.1103/PhysRevLett.106.161801. Aaltonen, S., S. T, and et al. "Measurement of the Mass Difference between $t$ and $\overline{t}$ Quarks." Phys. Rev. Lett. 106 (April 2011): 152001. https://doi.org/10.1103/PhysRevLett.106.152001. Aaltonen, S., S. T, and et al. "Search for Heavy Bottomlike Quarks Decaying to an Electron or Muon and Jets in $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 106 (April 2011): 141803. https://doi.org/10.1103/PhysRevLett.106.141803. Aaltonen, S., S. T, and et al. "Invariant Mass Distribution of Jet Pairs Produced in Association with a ${W}$ Boson in $p\overline{p}$ Collisions at $\sqrt{s}=1.96\text{\,}\text{\,}\mathrm{TeV}$." Phys. Rev. Lett. 106 (April 2011): 171801. https://doi.org/10.1103/PhysRevLett.106.171801. Aaltonen, S., T. and, and et al. "Measurement of the $t\overline{t}$ production cross section with an \textit{in~situ} calibration of $b$-jet identification efficiency." Phys. Rev. D 83 (April 2011): 071102. https://doi.org/10.1103/PhysRevD.83.071102. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the inclusive isolated prompt photon cross section in pp collisions at √s=7TeV with the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 83, no. 5 (March 18, 2011). https://doi.org/10.1103/PhysRevD.83.052005. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Measurement of the centrality dependence of J/Ψ yields and observation of Z production in lead-lead collisions with the ATLAS detector at the LHC." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 697, no. 4 (March 14, 2011): 294–312. https://doi.org/10.1016/j.physletb.2011.02.006. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for a heavy gauge boson decaying to a charged lepton and a neutrino in 1 fb-1 of pp collisions at √s = 7 TeV using the ATLAS detector." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 705, no. 1–2 (March 11, 2011): 28–46. https://doi.org/10.1016/j.physletb.2011.09.093. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Study of jet shapes in inclusive jet production in pp collisions at √s=7TeV using the ATLAS detector." Physical Review D Particles, Fields, Gravitation and Cosmology 83, no. 5 (March 8, 2011). https://doi.org/10.1103/PhysRevD.83.052003. Aad, G., B. Abbott, J. Abdallah, A. A. Abdelalim, A. Abdesselam, O. Abdinov, B. Abi, et al. "Search for diphoton events with large missing transverse energy in 7 TeV proton-proton collisions with the ATLAS detector." Physical Review Letters 106, no. 12 (March 2011): 121803. 
Acosta, P. J., P. J. D, and P. J. others. "First measurements of inclusive W and Z cross sections from Run II of the Tevatron collider." Phys. Rev. Lett. 94 (2005): 091803. Acosta, S., S. D, and S. others. "Study of jet shapes in inclusive jet production in p anti-p collisions at s**(1/2) = 1.96-TeV." Phys. Rev. D71 (2005): 112002. Acosta, S., S. D, and S. others. "Measurement of the cross section for t anti-t production in p anti-p collisions using the kinematics of lepton + jets events." Phys. Rev. D72 (2005): 052003. Acosta, S., S. D, and S. others. "Measurement of the t anti-t production cross section in p anti-p collisions at sqrt[s]=1.96 TeV using lepton + jets events with secondary vertex b-tagging." Phys. Rev. D71 (2005): 052003. Acosta, S., S. D, and S. others. "Measurement of the cross section for prompt diphoton production in p anti-p collisions at sqrt[s]=1.96 TeV." Phys. Rev. Lett. 95 (2005): 022003. Acosta, S., S. D, and S. others. "Search for first-generation scalar leptoquarks in pp-bar collisions at sqrt[s]=1.96 TeV." Phys. Rev. D72 (2005): 051107. al, D Acosta et. "Inclusive Double-Pomeron Exchange at the Fermilab Tevatron p-barp Collide." Physical Review Letters 93 (October 2004): 141601. al, D Acosta et. "Direct photon cross section with conversions at CDF." Physical Review D70 (October 2004): 074008. al, D Acosta et. "The Underlying event in hard interactions at the Fermilab Tevatron p-barp collider." Physical Review D70 (October 2004): 072002. al, D Acosta et. "Inclusive Search for Anomalous Production of High-pT Like-Sign Lepton Pairs in pp-bar Collisions at sqrt[s]=1.8 TeV." Physical Review Letters 93 (August 2004): 061802. al, D Acosta et. "Search for B0s-->µ+µ- and B0d-->µ+µ- Decays in pp-bar Collisions at sqrt[s]=1.96 TeV." Physical Review Letters 93 (July 2004): 032001. Acousta, D., T. Affolder, T. Akimoto, M. G. Albrow, D. Ambrose, S. Amerio, D. Amidei, et al. "Search for B0s-->micro+micro- and B0d-->micro+micro- decays in pp collisions at square root s = 1.96 TeV." Physical Review Letters 93, no. 3 (July 2004): 032001. https://doi.org/10.1103/physrevlett.93.032001. Capeans, M., T. Åkesson, F. Anghinolfi, E. Arik, O. K. Baker, S. Baron, D. Benjamin, et al. "Recent aging studies for the ATLAS transition radiation tracker." Ieee Transactions on Nuclear Science 51, no. 3 III (June 1, 2004): 960–67. https://doi.org/10.1109/TNS.2004.829496. Akesson, T., E. Arik, K. Baker, S. Baron, D. Benjamin, H. Bertelsen, V. Bondarenko, et al. "Operation of the ATLAS Transition Radiation Tracker under very high irradiation at the CERN LHC." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 522, no. 1–2 (April 11, 2004): 25–32. https://doi.org/10.1016/j.nima.2004.01.013. Akesson, T., E. Arik, K. Baker, S. Baron, D. Benjamin, H. Bertelsen, V. Bondarenko, et al. "ATLAS Transition Radiation Tracker test-beam results." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 522, no. 1–2 (April 11, 2004): 50–55. https://doi.org/10.1016/j.nima.2004.01.017. Akesson, T., F. Anghinolfi, E. Arik, O. K. Baker, S. Baron, D. Benjamin, H. Bertelsen, et al. "Status of design and construction of the Transition Radiation Tracker (TRT) for the ATLAS experiment at the LHC." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 522, no. 1–2 (April 11, 2004): 131–45. 
https://doi.org/10.1016/j.nima.2004.01.033. Acosta, D., and et al. "Heavy flavor properties of jets produced in p anti-p interactions at s**(1/2) = 1.8-TeV." Phys. Rev. D69 (April 2004): 072004. Collaboration, D Acosta et al C. D. F. I. I. "Search for Kaluza-Klein Graviton Emission in pp-bar Collisions at sqrt[s]=1.8 TeV Using the Missing Energy Signature." Physical Review Letters 92, no. 12 (March 2004): 121802. al, D Acosta et. "Optimized search for single top quark production at the Fermilab Tevatron." Phys. Rev. D69 (March 2004): 052003. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Search for pair production of scalar top quarks in R-parity violating decay modes in pp collisions at square root of s=1.8 TeV." Physical Review Letters 92, no. 5 (February 2004): 051803. https://doi.org/10.1103/physrevlett.92.051803. Abazov, V. M., B. Abbott, A. Abdesselam, M. Abolins, V. Abramov, B. S. Acharya, D. Acosta, et al. "Combination of CDF and D0 results on the [Formula Presented] boson mass and width." Physical Review D Particles, Fields, Gravitation and Cosmology 70, no. 9 (January 1, 2004). https://doi.org/10.1103/PhysRevD.70.092008. Acosta, D., T. Affolder, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, J. Antos, et al. "Measurement of the polar-angle distribution of leptons from W boson decay as a function of the W transverse momentum in [Formula Presented] collisions at [Formula Presented]." Physical Review D Particles, Fields, Gravitation and Cosmology 70, no. 3 (January 1, 2004). https://doi.org/10.1103/PhysRevD.70.032004. Acosta, D., and et al. "Measurement of the average time-integrated mixing probability of b-flavored hadrons produced at the Tevatron." Phys. Rev. D69 (January 2004): 012002. Oh, Seog H., and T Akesson et al. "Design and construction of the TRT for the ATLAS experiment at the LHC." Nucear Instrucments and Methods, 2004. Acosta, D., T. Affolder, M. H. Ahn, T. Akimoto, M. G. Albrow, D. Ambrose, S. Amerio, et al. "Observation of the narrow state X(3872) → J/ψπ+π- in p̄p collisions at √s = 1.96 TeV." Physical Review Letters 93, no. 7 (2004): 072001-1-072001–6. https://doi.org/10.1103/PhysRevLett.93.072001. Akesson, T., E. Barberio, V. Bondarenko, M. Capeans, A. Catinaccio, P. Cwetanski, H. Danielsson, et al. "Aging studies for the ATLAS Transition Radiation Tracker (TRT)." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 515, no. 1–2 (December 1, 2003): 166–79. https://doi.org/10.1016/j.nima.2003.08.145. Collaboration, D Acosta et al C. D. F. I. I. "Measurement of Prompt Charm Meson Production Cross Sections in pp-bar Collisions at sqrt[s]=1.96 TeV." Physical Review Letters 91 (December 2003): 241804. Collaboration, D Acosta et al C. D. F. "Search for the flavor-changing neutral current decay D0-->µ+µ- in pp-bar collisions at sqrt[s]=1.96 TeV." Physical Review D68 (November 2003): 091101. Collaboration, D Acosta et al C. D. F. "Measurement of the mass difference m(D+s)-m(D+) at CDF II." Physical Review D68 (October 2003): 072004. Collaboration, D Acosta et al The C. D. F. "Search for Lepton Flavor Violating Decays of a Heavy Neutral Particle in pp-bar Collisions at sqrt[s]=1.8 TeV." Physical Review Letters 91 (October 2003): 171602. Akesson, T., E. Arik, K. Assamagan, K. Baker, D. Benjamin, H. Bertelsen, V. Bytchkov, et al. "An X-ray scanner for wire chambers." 
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 507, no. 3 (July 21, 2003): 622–35. https://doi.org/10.1016/S0168-9002(03)01389-5. Collaboration, D Acosta et al C. D. F. "Central Pseudorapidity Gaps in Events with a Leading Antiproton at the Fermilab Tevatron p-barp Collider." Physical Review Letters 91 (July 2003): 011802. Collaboration, D Acosta et al C. D. F. "Momentum distribution of charged particles in jets in dijet events in pp-bar collisions at sqrt[s]=1.8 TeV and comparisons to perturbative QCD predictions." Physical Review D68 (July 2003): 012003. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Search for the supersymmetric partner of the top quark in dilepton events from pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 90, no. 25 I (June 27, 2003): 2518011–17. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Erratum: Measurement of the tt̄ production cross in pp̄ collisions at √s= 1.8 Tev (Physical Review D (2001) 64 (32002))." Physical Review D Particles, Fields, Gravitation and Cosmology 67, no. 11 (June 1, 2003): 1199011–12. Collaboration, D Acosta et al C. D. F. "Search for the Supersymmetric Partner of the Top Quark in Dilepton Events from pp-bar Collisions at sqrt[s]=1.8 TeV." Physical Review Letters 90 (June 2003): 251801. Collaboration, D Acosta et al C. D. F. "Search for Associated Production of Upsilon and Vector Boson in pp-bar Collisions at sqrt[s]=1.8 TeV." Physical Review Letters 90 (June 2003): 221803. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Search for associated production of Y and vector boson in pp̄ Collisions at √s = 1.8 TeV." Physical Review Letters 90, no. 22 (May 22, 2003): 2218031–37. Collaboration, D Acosta et al C. D. F. "Search for Long-Lived Charged Massive Particles in pp-bar Collisions at sqrt[s]=1.8 TeV." Physical Review Letters 90 (April 2003): 131801. Collaboration, D Acosta et al C. D. F. "Search for a W' Boson Decaying to a Top and Bottom Quark Pair in 1.8 TeV pp-bar Collisions." Physical Review Letters 90 (February 2003): 081802. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Search for Lepton Flavor Violating Decays of a Heavy Neutral Particle in p p̄ Collisions at √s = 1.8 TeV." Physical Review Letters 91, no. 17 (2003): 1716021–26. al, D Acosta et. "Limits on Extra Dimensions and New Particle Production in the Exclusive Photon and Missing Energy Signature in pp-bar Collisions at sqrt[s]=1.8 TeV." Physical Review Letters 89 (December 2002): 281801. Akesson, T., K. Baker, V. Bondarenko, V. Bytchkov, H. Carling, H. Danielsson, F. Dittus, et al. "Tracking performance of the transition radiation tracker prototype for the ATLAS experiment." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 485, no. 3 (June 11, 2002): 298–310. https://doi.org/10.1016/S0168-9002(01)02030-7. Alexopoulos, T., E. W. Anderson, A. T. Bujak, D. D. Carmony, A. R. Erwin, L. J. Gutay, A. S. Hirsch, et al. "Evidence for hadronic deconfinement in p̄-p collisions at 1.8 TeV." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 528, no. 1–2 (February 28, 2002): 43–48. https://doi.org/10.1016/S0370-2693(02)01213-3. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. 
Amendolia, D. Amidei, et al. "Erratum: Measurement of the two-jet differential cross section in pp collisions at √s=1800 GeV (Physical Review D- Particles, Fields, Gravitation and Cosmology (2001) 64 (012001)." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 3 (February 1, 2002): 399021. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Erratum: Measurement of the inclusive jet cross section in pp collisions at √s= 1.8 Tev (Physical Review D- Particles, Fields, Gravitation and Cosmology (2001) 64 (032001)." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 3 (February 1, 2002): 399031. Acosta, D., T. Affolder, H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, et al. "Search for new physics in photon-lepton events in [Formula Presented] collisions at [Formula Presented]." Physical Review D Particles, Fields, Gravitation and Cosmology 66, no. 1 (January 1, 2002). https://doi.org/10.1103/PhysRevD.66.012004. Acosta, D., T. Affolder, H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, et al. "Search for the decay Bs→μ+μ-φ in pp̄ collisions at √s = 1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 11 (2002): 1111011–16. Acosta, D., T. Affolder, H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, et al. "Search for the decay [Formula Presented] in [Formula Presented] collisions at [Formula Presented] TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 11 (January 1, 2002). https://doi.org/10.1103/PhysRevD.65.111101. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Measurement of [Formula Presented]-meson lifetimes using fully reconstructed [Formula Presented] decays produced in [Formula Presented] collisions at [Formula Presented]." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 9 (January 1, 2002): 7. https://doi.org/10.1103/PhysRevD.65.092009. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Comparison of the isolated direct photon cross sections in [Formula Presented] collisions at [Formula Presented] and [Formula Presented]." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 11 (January 1, 2002): 10. https://doi.org/10.1103/PhysRevD.65.112003. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, D. Ambrose, D. Amidei, K. Anikeev, et al. "Search for radiative [Formula Presented]-hadron decays in [Formula Presented] collisions at [Formula Presented]." Physical Review D Particles, Fields, Gravitation and Cosmology 66, no. 11 (January 1, 2002). https://doi.org/10.1103/PhysRevD.66.112002. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, P. Amaral, D. Ambrose, D. Amidei, et al. "Search for single-top-quark production in [Formula Presented] collisions at [Formula Presented]." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 9 (January 1, 2002): 6. https://doi.org/10.1103/PhysRevD.65.091102. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, P. Amaral, D. Ambrose, D. Amidei, et al. "Soft and hard interactions in [Formula Presented] collisions at [Formula Presented] and 630 GeV." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 7 (January 1, 2002): 12. https://doi.org/10.1103/PhysRevD.65.072005. Acosta, D., T. Affolder, H. Akimoto, M. G. Albrow, P. Amaral, D. Ambrose, D. Amidei, et al. 
"Diffractive dijet production at √s = 630 and 1800 GeV at the Fermilab Tevatron." Physical Review Letters 88, no. 15 (2002): 1518021–26. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Charged jet evolution and the underlying event in proton-antiproton collisions at 1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 65, no. 9 (January 1, 2002). https://doi.org/10.1103/PhysRevD.65.092002. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Study of B0 → J/ψK(*)0π+π- decays with the collider detector at Fermilab." Physical Review Letters 88, no. 7 (2002): 718011–16. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Search for new heavy particles in the WZ0 final state in pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 88, no. 7 (2002): 718061–66. al, D Acosta et. "``Cross section for forward J/$ production in pp collisions at $\sqrt{s} = 1.8 TeV,''." Physical Review D66 (2002): 092001. al, D Acosta et. "``Branching ratio measurements of exclusive B+ decays to charmonium with the Collider Detector at Fermilab." Physical Review D66 (2002): 052005. al, D Acosta et. "Measurement of the ratio of b quark production cross sections in pp collisions at $\sqrt{s}=630$ GeV and $\sqrt{s} = 1800 GeV." Physical Review, no. D66 (2002): 032002. al, D Acosta et. "Search for New Physics in Photon-Lepton Events in pp Collisions at $\sqrt{s} = 1.8TeV,." Physical Review Letters 89 (2002): 041802. al, D Acosta et. "Upsilon Production and Polarization in pp Collisions at $\sqrt{s} = 1.8TeV." Physical Review Letters 88 (2002): 161802. al, D Acosta et. "Diffractive Dijet Production at $\sqrt{s} = 630 and 1800 GeV at the Fermilab Tevatron." Physical Review Letters 88 (2002): 151802. al, D Acosta et. "Measurement of the B+ total cross section and B+ differential cross section do/dpT in pp collisions at $\sqrt{s} = 1.8 TeV." Physical Review D65 (2002): 052005. al, D Acosta et. "Study of the heavy flavor content of jets produced in association with W bosons in pp collisions at $\sqrt{s} = 1.8 TeV." Physical Review D65 (2002): 052006. al, T Affolder et. "Searches for new physics in events with a photon and b-quark jet at CDF." Physical Review D65 (2002): 052006. al, T Affolder et. "Search for New Heavy Particles in the W Z0 Final State in pp Collisions at $\sqrt{s} = 1.8 TeV." Physical Review Letters 88 (2002): 071806. al, T Affolder et. "Study of B0 J/K(*)0pi^+ pi Decays with the Collider Detector at Fermilab." Physical Review Letters 88 (2002): 071801. al, T Affolder et. "Measurement of the Strong Coupling Constant from Inclusive Jet Production at the Tevatron pb Collider." Physical Review Letters 88 (2002): 042001. al, T Affolder et. "Search for Gluinos and Scalar Quarks in pp Collisionsat $\sqrt{s} = 1.8 TeV Using the Missing Energy plus Multijets Signature." Physical Review Letters 88 (2002): 041801. al, T Affolder et. "Cross section and heavy quark composition of gamma+ mu events produced in pp collisions." Physical Review D65 (2002): 012003. Akesson, T., E. Arik, K. Assamagan, K. Baker, E. Barberio, D. Barberis, H. Bertelsen, et al. "Particle identification using the time-over-threshold method in the ATLAS transition radiation tracker." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 474, no. 2 (December 1, 2001): 172–87. https://doi.org/10.1016/S0168-9002(01)00878-6. 
Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Measurement of dσ/dM and forward-backward charge asymmetry for high-mass Drell-Yan e+e- pairs from pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 87, no. 13 (September 24, 2001). Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Search for the supersymmetric partner of the top quark in pp̄ collisions at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 63, no. 9 (August 16, 2001): 911011–16. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Production of χc1 and χc2 in pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 86, no. 18 (April 30, 2001): 3963–68. https://doi.org/10.1103/PhysRevLett.86.3963. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Observation of diffractive J/ψ production at the Fermilab Tevatron." Physical Review Letters 87, no. 24 (2001): 2418021–26. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Charged-particle multiplicity p p̄ collisions at √s = 1.8 TeV." Physical Review Letters 87, no. 21 (2001): 2118041–46. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Search for quark-lepton compositeness and a heavy W′ Boson using the eν channel in pp̄ Collisions at √s = 1.8 TeV." Physical Review Letters 87, no. 23 (2001): 2318031–36. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Tests of enhanced leading order QCD in W boson plus jets events from 1.8 TeV p̄p collisions." Physical Review D Particles, Fields, Gravitation and Cosmology 63, no. 7 (2001): 720031–329. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of [Formula Presented] for high mass Drell-Yan [Formula Presented] pairs from [Formula Presented] collisions at [Formula Presented] TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 63, no. 1 (January 1, 2001). https://doi.org/10.1103/PhysRevD.63.011101. Schmieg, S. J., B. K. Cho, and S. H. Oh. "Hydrocarbon reactivity in a plasma-catalyst system: Thermal versus plasma-assisted lean NOx reduction." Sae Technical Papers, 2001. https://doi.org/10.4271/2001-01-3565. al, T Affolder et. "``Observation of Diffractive $J/\psi$ Production at the Fermilab Tevatron,''." Physical Review Letters 87 (2001): 241802. al, T Affolder et. "``Charged-Particle Multiplicity \ppb Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 87 (2001): 211804. al, T Affolder et. "``Observation of orbitally excited $B$ mesons in \ppb collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review D64 (2001): 072002. al, T Affolder et. "``Double Diffraction Dissociation at the Fermilab Tevatron Collider,''." Physical Review Letters 87 (2001): 141802. al, T Affolder et. "``Measurement of the Top Quark $p_T$ Distribution,''." Physical Review Letters 87 (2001): 102001. al, T Affolder et. "``Search for Neutral Supersymmetric Higgs Bosons in \ppb Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 86 (2001): 4472. al, T Affolder et. "``First Measurement of the Ratio $B(t \ra Wb)/B(t\ra Wq)$ and Associated Limit on the Cabibbo-Kobayashi-Maskawa Element $|V_{tb}|$,''." Physical Review Letters 86 (2001): 3233. al, T Affolder et. 
"``Tests of enhanced leading order QCD in $W$ boson plus jets events from 1.8 TeV \pbp collisions,''." Physical Review D63 (2001): 072003. al, T Affolder et. "``Search for the supersymmetric partner of the top quark in \pbp collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review D63 (2001): 091101. al, T Affolder et. "``Measurement of the two-jet differential cross section in \ppb collisions at $\sqrt{s} = 1800$ GeV,''." Physical Review D64 (2001): 012001. al, T Affolder et. "``Search for Quark-Lepton Compositeness and a Heavy $W^\prime$ Boson Using the $e \nu$ Channel in \ppb Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 87 (2001): 231803. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, D. Amidei, K. Anikeev, et al. "Search for narrow diphoton resonances and for γγ+W/Z signatures in pp̄ collisions at √s = 1.8 TeV." Physical Review D 64, no. 9 (2001). Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of the W boson mass with the collider detector at Fermilab." Physical Review D 64, no. 5 (2001). Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of the decay amplitudes of B0 → J/ψK*0 and Bs0 → J/ψφ decays." Physical Review Letters 85, no. 22 (November 27, 2000): 4668–73. https://doi.org/10.1103/PhysRevLett.85.4668. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Direct measurement of the W Boson width in pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 85, no. 16 (October 16, 2000): 3347–52. https://doi.org/10.1103/PhysRevLett.85.3347. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Search for new particles decaying to tt̄ in pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 85, no. 10 (September 4, 2000): 2062–67. https://doi.org/10.1103/PhysRevLett.85.2062. Akesson, T., A. Antonov, V. Bondarenko, V. Bytchkov, H. Carling, F. Dittus, B. Dolgoshein, et al. "Straw tube drift-time properties and electronics parameters for the ATLAS TRT detector." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 449, no. 3 (July 21, 2000): 446–60. https://doi.org/10.1016/S0168-9002(99)01470-9. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, S. R. Amendolia, D. Amidei, J. Antos, et al. "Search for a W' boson via the decay mode W'-->munumu in 1.8 TeV pp collisions." Physical Review Letters 84, no. 25 (June 2000): 5716–21. https://doi.org/10.1103/physrevlett.84.5716. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of bb̄ rapidity correlations in pp̄ collisions at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 61, no. 3 (2000): 1–18. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of J / ψ and ψ (2S) polarization in pp̄ collisions at √s = 1.8 TeV." Physical Review Letters 85, no. 14 (2000): 2886–91. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Search for the charged Higgs boson in the decays of top quark pairs in the eτ and μτ channels at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 62, no. 1 (2000): 1–7. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. 
"Measurement of the differential dijet mass cross section in pp̄ collisions at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 61, no. 9 (2000): 1–6. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of sin2β from B→J/ψKS0 with the CDF detector." Physical Review D Particles, Fields, Gravitation and Cosmology 61, no. 7 (2000): 1–16. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Transverse Momentum and Total Cross Section of e+e- Pairs in the Z-Boson Region from pp̄ Collisions at √s = 1.8 TeV." Physical Review Letters 84, no. 5 (2000): 845–50. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of sin 2β from B→J/φKs0 with the CDF detector." Physical Review D 61, no. 7 (2000). Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of [formula presented] from [formula presented] with the CDF detector." Physical Review D Particles, Fields, Gravitation and Cosmology 61, no. 7 (January 1, 2000). https://doi.org/10.1103/PhysRevD.61.072005. Alexopoulos, T., E. W. Anderson, N. N. Biswas, A. Bujak, D. D. Carmony, A. R. Erwin, C. Findeisen, et al. "Cross sections for deuterium, tritium, and helium production in [formula presented] collisions at [formula presented] TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 62, no. 7 (January 1, 2000): 8. https://doi.org/10.1103/PhysRevD.62.072004. Brooks, T. C., M. E. Convery, W. L. Davis, K. W. Del Signore, T. L. Jenkins, E. Kangas, M. G. Knepley, K. L. Kowalski, and C. C. Taylor. "Search for disoriented chiral condensate at the Fermilab Tevatron." Physical Review D Particles, Fields, Gravitation and Cosmology 61, no. 3 (January 1, 2000). https://doi.org/10.1103/PhysRevD.61.032003. Collaboration, C. D. F. "Search for a Fourth-Generation Quark More Massive than the Z0 Boson in pp Collisions at = 1.8 TeV." Phys. Rev. Lett. 84 (2000): 835. al, T Affolder et. "``Limits on gravitino production and new processes with large missing transverse energy in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 85 (2000): 1378. al, T Affolder et. "``Search for scalar top quark production in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 84 (2000): 5273. al, T Affolder et. "``Diffractive dijets with a leading antiproton in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 84 (2000): 5043. al, T Affolder et. "``Search for the charged Higgs boson in the decays of top quark pairs in the $e \tau$ and $\mu \tau$ channels at $\sqrt{s} = 1.8$ TeV,''." Physical Review D62 (2000): 012004. al, T Affolder et. "``Production of $\Upsilon(1S)$ Mesons from $\chi_b$ Decays in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 84 (2000): 2094. al, T Affolder et. "``The Transverse Momentum and Total Cross Section of $e^+ e^-$ Pairs in the $Z$ Boson Region from $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review Letters 84 (2000): 845. al, T Affolder et. "``Search for Color Singlet Technicolor Particles in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review Letters 84 (2000): 1110. al, T Affolder et. "``Measurement of $b$-Quark Fragmentation Fractions in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review Letters 84 (2000): 1663. al, T Affolder et. 
"``Search for second and third generation leptoquarks including production via technicolor interactions in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 85 (2000): 2056. al, T Affolder et. "``A Measurement of the Differential Dijet Mass Cross Section in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review D61 (2000): 091101. al, T Affolder et. "``Measurement of $J/\psi$ and $\psi(2S)$ polarization in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 85 (2000): 2886. al, T Affolder et. "``Direct measurement of the $W$ boson width in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 85 (2000): 3347. collaboration, C. D. F. "Measurement of the Helicity of W Bosons in Top Quark Decays." Phys.Rev.Lett. 84 (January 2000): 216–21. Ukegawa, F., J. Valls, S. Vejcik, G. Velev, R. Vidal, R. Vilar, I. Vologouev, et al. "Measurement of the helicity of W bosons in top quark decays." Physical Review Letters 84, no. 2 (January 2000): 216–21. https://doi.org/10.1103/physrevlett.84.216. al, T Affolder et, and The C. D. F. Collaboration. "A Measurement of sin 2 from B with the CDF Detector." Phys. Rev. D61 (2000): 072005. al, T Affolder et. "``Dijet production by double pomeron exchange at the Fermilab Tevatron,''." Physical Review Letters 85 (2000): 4215. al, T Affolder et. "``Observation of Diffractive $b$-Quark Production at the Fermilab Tevatron,"." Physical Review Letters 84 (2000): 232. Oh, S. H., C. H. Wang, and W. L. Ebenstein. "Super high rate straw drift chamber." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 425, no. 1 (April 1, 1999): 75–83. https://doi.org/10.1016/S0168-9002(98)01376-X. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Kinematics of tt̄ events at CDF." Physical Review D Particles, Fields, Gravitation and Cosmology 59, no. 9 (1999): 1–20. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Searches for new physics in diphoton events in pp̄ collisions at √s = 1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 59, no. 9 (1999): 1–29. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Search for Bs0-B̄s0 Oscillations Using the Semileptonic Decay Bs0 → φℓ+Xν." Physical Review Letters 82, no. 18 (1999): 3576–80. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Search for new particles decaying to b¯b in p¯p collisions at √s=1.8TeV." Physical Review Letters 82, no. 10 (January 1, 1999): 2038–43. https://doi.org/10.1103/PhysRevLett.82.2038. Adamczyk, A. A., C. P. Hubbard, F. Ament, S. H. Oh, M. J. Brady, and M. C. Yee. "Experimental and modeling evaluations of a vacuum-insulated catalytic converter." Sae Technical Papers, 1999. https://doi.org/10.4271/1999-01-3678. Collaboration, C. D. F. "Measurement of the Associated ƒ× + ƒÝ"b Production Cross Section in pp Collisions at = 1.8 TeV." Phys. Rev. D60 (1999): 092003. Collaboration, C. D. F. "A Search for B - B Oscillations Using the Semileptonic Decay B -> o;=Xu." Phys. Rev. Lett. 82 (1999): 3576. al, F Abe et. "``Measurement of $B^0$-$\overline{B^0}$ Flavor Oscillations using Jet-Charge and Lepton Flavor Tagging in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D60 (1999): 072003. al, F Abe et. 
"``Measurement of the $B_d^0\overline{B_{d}^0}$ Oscillation Frequency using Dimuon Data in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D60 (1999): 051101. al, F Abe et. "``Search for New Physics in Diphoton Events in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D59 (1999): 092002. al, F Abe et. "``Measurement of the $B_{S}^{0}$ Meson Lifetime using Semileptonic Decays,"." Physical Review D59 (1999): 032004. al, F Abe et. "``Measurement of the $B_{d}^{0}$-$\overline {B_{d}^{0}}$ Flavor Oscillation Frequency and Study of Same Side Flavor Tagging of $B$ Mesons in $p\overline p$ Collisions,"." Physical Review D59 (1999): 032001. al, F Abe et. "``Measurement of $Z^{0}$ and Drell-Yan Production Cross Sections using Dimuons in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D59 (1999): 052002. al, F Abe et. "``Search for Third-Generation Leptoquarks from Technicolor Models in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review Letters 82 (1999): 3206. al, F Abe et. "``Measurement of $b$ Quark Fragmentation Fractions in the Production of Strange and Light $B$ Mesons in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D60 (1999): 092005. al, F Abe et. "``Measurement of the Associated $\gamma + \mu^{\pm}$ Production Cross Section in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D60 (1999): 092003. al, T Affolder et. "``Measurement of the $B^0 \overline{B^0}$ Oscillation Frequency using $lÐD^{*+}$ Pairs and Lepton Flavor Tags,"." Physical Review D60 (1999): 112004. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of the BS0 meson lifetime using semileptonic decays." Physical Review D Particles, Fields, Gravitation and Cosmology 59, no. 3 (1999): 1–14. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of Z0 and Drell-Yan production cross sections using dimuons in p̄p collisions at √s=1.8TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 59, no. 5 (1999): 1–15. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of the Bd0-B̄d0 flavor oscillation frequency and study of same side flavor tagging of B mesons in pp̄ collisions." Physical Review D Particles, Fields, Gravitation and Cosmology 59, no. 3 (1999): 1–41. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, S. R. Amendolia, D. Amidei, J. Antos, et al. "Measurement of b quark fragmentation fractions in the production of strange and light B mesons in pp̄ collisions at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 60, no. 9 (1999): 1–14. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, S. R. Amendolia, D. Amidei, J. Antos, et al. "Measurement of B0-B̄0 flavor oscillations using jet-charge and lepton flavor tagging in pp̄ collisions at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 60, no. 7 (1999): 1–22. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, S. R. Amendolia, D. Amidei, J. Antos, et al. "Measurement of the Bd0B̄d0 oscillation frequency using dimuon data in pp̄ collisions at √s = 1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 60, no. 5 (1999): 1–6. Affolder, T., H. Akimoto, A. Akopian, M. G. Albrow, P. Amaral, S. R. Amendolia, D. Amidei, et al. "Measurement of the B0B̄0 oscillation frequency using l-D*+ pairs and lepton flavor tags." 
Physical Review D Particles, Fields, Gravitation and Cosmology 60, no. 11 (1999): 1–12. Alexopoulos, T., E. W. Anderson, N. N. Biswas, A. Bujak, D. D. Carmony, A. R. Erwin, L. J. Gutay, et al. "The role of double parton collisions in soft hadron interactions." Physics Letters, Section B: Nuclear, Elementary Particle and High Energy Physics 435, no. 3–4 (September 10, 1998): 453–57. https://doi.org/10.1016/S0370-2693(98)00921-6. Akesson, T., A. Antonov, V. Bondarenko, V. Bytchkov, H. Carling, K. Commichau, H. Danielsson, et al. "Electron identification with a prototype of the Transition Radiation Tracker for the ATLAS experiment." Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 412, no. 2–3 (August 1, 1998): 200–215. https://doi.org/10.1016/S0168-9002(98)00457-4. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Events with a rapidity gap between jets in p̄p collisions at √s = 630 GeV." Physical Review Letters 81, no. 24 (1998): 5278–83. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Search for long-lived parents of Z° bosons inpp collisions at √s = 1.8 TeV." Physical Review D 58, no. 5 (January 1, 1998). Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Observation of Bc mesons in pp̄ collisions at √s = 1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 58, no. 11 (1998): 1120041+11200429. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Observation of B+ → ψ(2S)K+ and B0 → ψ(2S)K*(892)0 decays and measurements of B-meson branching fractions into J/ ψ and ψ(2S) final states." Physical Review D Particles, Fields, Gravitation and Cosmology 58, no. 7 (1998): 720011-720012+7200115. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Search for the decays B0d→μ+μ- and B0s→μ+μ- in pp̄ collisions at √s=1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 57, no. 7 (1998): R3811–16. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Search for the rare decay W± → Dsplusmnγ in pp̄ collisions at √s = 1.8 TeV." Physical Review D Particles, Fields, Gravitation and Cosmology 58, no. 9 (1998): 911011–15. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of the B- and B̄0meson lifetimes using semileptonic decays." Physical Review D Particles, Fields, Gravitation and Cosmology 58, no. 9 (1998): 920021–212. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of the top quark mass and t¯t production cross section from dilepton events at the collider detector at fermilab." Physical Review Letters 80, no. 13 (January 1, 1998): 2779–84. https://doi.org/10.1103/PhysRevLett.80.2779. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of the CP-violation parameter sin(2β) in Bd0/B̄d0 → J/ψKS0 decays." Physical Review Letters 81, no. 25 (1998): 5513–18. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Search for the decays Bs0, Bd0 → e±μ∓ and Pati-Salam leptoquarks." Physical Review Letters 81, no. 26 (1998): 5742–47. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. 
"Measurement of the t¯t Production Cross Section in p¯p Collisions at √s = 1.8TeV." Physical Review Letters 80, no. 13 (January 1, 1998): 2773–78. https://doi.org/10.1103/PhysRevLett.80.2773. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, A. Amadon, S. R. Amendolia, D. Amidei, et al. "Measurement of the B0-B̄0 oscillation frequency using π-B meson charge-flavor correlations in pp̄ collisions at √s= 1.8 TeV." Physical Review Letters 80, no. 10 (1998): 2057–62. Collaboration, C. D. F. "Search for Chargino-Neutralino Associated Production at the Fermilab Tevatron Collider." Phys. Rev. Lett. 80 (1998): 5275. Collaboration, C. D. F. "Search for the Rare Decay W+ ->ƒà+ + ƒ× in Proton-Antiproton Collisions at = 1.8 TeV." Phys. Rev. Rapid Communications D58 (1998): 031101. Collaboration, C. D. F. "Observation of the Bc Meson in pp Collisions at "© s = 1.8 TeV." Phys. Rev. Lett. 81 (1998): 2432. Collaboration, C. D. F. "Search for the Decays B^0->____ and B^0-> ___ in pp Collisions at "©s = 1.8 TeV." Phys. Rev. Rapid Communications D57 (1998): R3811. Collaboration, C. D. F. "Searches for New Physics in Diphoton Events in pp at "©s = 1.8 TeV." Phys. Rev. Lett. 81 (1998): 1791. Collaboration, C. D. F. "Search for the Rare Decay W+,->D+ pp in Collisions at "© s= 1.8 TeV." Phys. Rev. D58 (1998). Collboration, C. D. F. "Measurement of the _(W +> 1 Jet)/_ (W) Cross Section Ratio from pp Collisions at "© s = 1.8 TeV." Phys. Rev. Lett. 81 (1998): 1367. Oh, S. H., R. M. Sinkevitch, J. A. Baker, and G. E. Nichols. "Use of catalytic monoliths for on-road ozone destruction." Sae Technical Papers, 1998. https://doi.org/10.4271/980677. al, F Abe et. "``Observation of $B^{+}\rightarrow\psi$(2S)$K^{+}$ and $B^{0}\rightarrow\psi(2S)K^{*}(892)^{0}$ Decays and Measurements of $B$-Meson Branching Fractions into $J/\psi$ and $\psi$(2S) Final States,"." Physical Review D58 (1998): 112004. al, F Abe et. "``Events with a Rapidity Gap Between Jets in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review Letters 81 (1998): 5278. al, F Abe et. "``Observation of $B_{c}$ Mesons in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D58 (1998): 112004. al, F Abe et. "``Measurement of the $B^0$-$\overline B^0$ Oscillation Frequency Using $\pi$-$B$ Meson Charge-Flavor Correlations in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 80 (1998): 2057. al, F Abe et. "``Properties of Photon Plus Two-Jet Events in $\overline pp$ Collisions at $\sqrt s$ =1.8 TeV,''." Physical Review D57 (1998): 67. al, F Abe et. "``Measurement of the Top Quark Mass,''." Physical Review Letters 80 (1998): 2767. al, F Abe et. "``Measurement of $B$ Hadron Lifetimes Using J/$\psi$ final States at CDF,''." Physical Review D57 (1998): 5382. al, F Abe et. "``Measurement of the $B^-$ and $\overline B^0$ Meson Lifetimes using Semileptonic Decays,"." Physical Review D58 (1998): 092002. al, F Abe et. "``Search for Second Generation Leptoquarks in the Dimuon plus Dijet Channel of $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review Letters 81 (1998): 4806. al, F Abe et. "Measurement of the Lepton Charge Asymmetry in W-Boson Decays Produced in pp Collisions." Physical Review Letters 81 (1998): 5754. al, F Abe et. "Search for the Decays B^0s,B^0d -> e+u+and Pati-Salam Leptoquarks." Physical Review Letters 81 (1998): 5742. al, F Abe et. "Measurement of the CP-Violation Parameter sin(2B) in B0d/B0d -> J/PsiK0s Decays." Physical Review Letters 81 (1998): 5513. al, F Abe et. 
"``Search for Flavor-Changing Neutral Current Decays of the Top Quark in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 80 (1998): 2525. al, F Abe et. "Dijet Production by Color-Singlet Exchange at the Fermilab Tevatron." Phys. Rev. Lett. 80 (1998): 1156. al, F Abe et. "``Measurement of the Differential Cross Section for Events with Large Total Transverse Energy in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,''." Physical Review Letters 80 (1998): 3461. al, F Abe et. "``Search for Long-Lived Parents of $Z^{0}$ Bosons in $p\overline p$ Collisions at $\sqrt{s} = 1.8$ TeV,"." Physical Review D58 (1998): 051102. al, F Abe et. "Search for Higgs Bosons Produced in Association with a Vector Boson in pp collisions at $\sqrt{s} = 1.8 TeV." Physical Review Letters 81 (1998): 5748. al, F Abe et. "``Jet Pseudorapidity Distribution in Direct Photon Events in $\overline pp$ Collisions at $\sqrt s$ =1.8 TeV,''." Physical Review D57 (1998): 1359. al, F Abe et. "Search for first generation leptoquark pair production in p(p)over-bar collisions at root s=1.8 TeV." Phys. Rev. Lett. 79, no. 22 (December 1997): 4327–32. al, F Abe et. "Properties of jets in W boson events from 1.8 TeV (p)over-bar-p collisions." Phys. Rev. Lett. 79, no. 24 (December 1997): 4760–65. al, F Abe et. "Search for new particles decaying into b(b)over-bar and produced in association with W bosons decaying into e nu or mu nu at the Fermilab Tevatron." Phys. Rev. Lett. 79, no. 20 (November 1997): 3819–24. al, F Abe et. "Measurement of diffractive dijet production at the Fermilab Tevatron." Phys. Rev. Lett. 79, no. 14 (October 1997): 2636–41. al, F Abe et. "Double parton scattering in (p)over-bar-p collisions at root s=1.8 TeV." Phys. Rev. D 56, no. 7 (October 1997): 3811–32. al, F Abe et. "Limits on quark-lepton compositeness scales from dileptons produced in 1.8 TeV p(p)over-bar collisions." Phys. Rev. Lett. 79, no. 12 (September 1997): 2198–2203. al, F Abe et. "First observation of the all-hadronic decay of t(t)over-bar pairs." Phys. Rev. Lett. 79, no. 11 (September 1997): 1992–97. al, F Abe et. "Search for new gauge bosons decaying into dileptons in (p)over-bar-p collisions at root s=1.8 TeV." Phys. Rev. Lett. 79, no. 12 (September 1997): 2192–97. al, F Abe et. "Properties of six-jet events with large six-jet mass at the Fermilab proton-antiproton collider." Phys. Rev. D 56, no. 5 (September 1997): 2532–43. al, F Abe et. "Search for gluinos and squarks at the Fermilab tevatron collider." Phys. Rev. D 56, no. 3 (August 1997): R1357–62. al, F Abe et. "Measurement of double parton scattering in (p)over-bar-p collisions at root s=1.8 TeV." Phys. Rev. Lett. 79, no. 4 (July 1997): 584–89. al, F Abe et. "Search for charged Higgs boson decays of the top quark using hadronic decays of the tau lepton." Phys. Rev. Lett. 79, no. 3 (July 1997): 357–62. al, F Abe et. "Search for new particles decaying to dijets at CDF." Phys. Rev. D 55, no. 9 (May 1997): R5263–68. al, F Abe et. "Observation of diffractive W-boson production at the Fermilab tevatron." Phys. Rev. Lett. 78, no. 14 (April 1997): 2698–2703. al, F Abe et. "Search for third generation leptoquarks in (p)over-bar-p collisions at root s=1.8 TeV." Phys. Rev. Lett. 78, no. 15 (April 1997): 2906–11. Abe, F., H. Akimoto, A. Akopian, M. G. Albrow, S. R. Amendolia, D. Amidei, J. Antos, et al. "Measurement of bb̄ production correlations, B0B̄0 mixing, and a limit on ∈B in pp̄ collisions at √s=1.8 TeV." 
Mathematics / 4th Grade / Unit 6: Fraction Operations

Students start to operate on fractions, learning how to add fractions with like denominators and multiply a whole number by any fraction.

Unit Summary

In this unit, students begin operating on fractions by understanding them as a sum of unit fractions or as a product of a whole number and a unit fraction. Students will then add fractions with like denominators and multiply a whole number by any fraction. Students will apply this knowledge to word problems and line plots.

In Grade 3, students developed their understanding of the meaning of fractions, especially using the number line to make sense of fractions as numbers themselves. They also did some rudimentary work with equivalent fractions and comparison of fractions. In Grade 4 Unit 5, they deepened this understanding of equivalence and comparison, learning the fundamental property that "multiplying the numerator and denominator of a fraction by the same non-zero whole number results in a fraction that represents the same number as the original fraction" (NF Progression, p. 6). Thus, in this unit, armed with a deep understanding of fractions and their value, students start to operate on them for the first time.

The unit is structured so that students build their understanding of fraction operations gradually, first working with the simplest case where the total is a fraction less than 1, then the case where the total is a fraction between 1 and 2 (to understand regrouping when operating in simple cases), and finally the case where the total is a fraction greater than 2. With each of these numerical cases, students first develop an understanding of non-unit fractions as sums and multiples of unit fractions. Next, they learn to add and subtract fractions. Finally, they apply these understandings to complex cases, such as word problems or fraction addition involving fractions where one denominator is a divisor of the other, which helps prepare students for similar work with decimal fractions in Unit 7. After working with all three numerical cases in the context of fraction addition and subtraction, they work with fraction multiplication, learning strategies for multiplying a whole number by a fraction and by a mixed number and using those skills in the context of word problems. Finally, students apply this unit's work to the context of line plots. Students will solve problems by using information presented in line plots, requiring them to use their recently acquired skills of fraction addition, subtraction, and even multiplication, creating a contextual way for this supporting cluster content to support the major work of the grade. The unit provides many opportunities for students to reason abstractly and quantitatively (MP.2) and to construct viable arguments and critique the reasoning of others (MP.3).
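As a quick worked illustration of the three kinds of computation named in this summary (decomposing a fraction into unit fractions, adding fractions with like denominators, and multiplying a whole number by a fraction), with numbers chosen by us purely for illustration:

$$\frac{5}{8}=\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}=5\times\frac{1}{8},\qquad \frac{3}{8}+\frac{4}{8}=\frac{7}{8},\qquad 3\times\frac{2}{5}=\frac{6}{5}=1\frac{1}{5}$$

The last equality also previews the conversion between fractions greater than 1 and mixed numbers taken up later in the unit.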
Students' understanding of fractions is developed further in Unit 7, in which students explore decimal numbers via their relationship to decimal fractions, expressing a given quantity in both fraction and decimal forms (4.NF.5–7). Then, in Grade 5, students extend their understanding of and ability with operations with fractions (5.NF.1–7), working on all cases of fraction addition, subtraction, and multiplication and the simple cases of division of a unit fraction by a whole number or vice versa. Students then develop a comprehensive understanding of and ability to compute fraction division problems in all cases in Grade 6 (6.NS.1). Beyond these next few units and years, it is easy to find the application of this learning in nearly any mathematical subject in middle school and high school, from ratios and proportions in the middle grades to functional understanding in algebra.

Pacing: 25 instructional days (22 lessons, 2 flex days, 1 assessment day). For guidance on adjusting the pacing for the 2020-2021 school year due to school closures, see our 4th Grade Scope and Sequence Recommended Adjustments.

Assessment

This assessment accompanies Unit 6 and should be given on the suggested assessment day or after completing the unit.

Intellectual Prep

- Read and annotate the "Unit Summary" and "Essential Understandings" portions of the unit plan.
- Do all the Target Tasks and annotate them with the "Unit Summary" and "Essential Understandings" in mind.
- Take the unit assessment.
- Read pp. 7–9 of the Progressions for the Common Core State Standards in Mathematics, 3-5 Numbers and Operations - Fractions. Note: stop reading at the section header "Decimals."
- Review the models used in this unit:
  - Area model. Example: use an area model to solve $$\frac{3}{6}+\frac{2}{6}$$.
  - Number line. Example: use a number line to solve $$\frac{7}{8}-\frac{3}{8}$$.
  - Tape diagram. Example: use a tape diagram to solve $$2\times \frac{3}{10}$$.
  - Line plot.

Essential Understandings

"The meaning of addition is the same for both fractions and whole numbers, even though algorithms for calculating their sums can be different. Just as the sum of $$4$$ and $$7$$ can be seen as the length of the segment obtained by joining together two segments of lengths 4 and 7, so the sum of $$\frac{2}{3}$$ and $$\frac{8}{5}$$ can be seen as the length of the segment obtained by joining together two segments of length $$\frac{2}{3}$$ and $$\frac{8}{5}$$" (Progressions for the Common Core State Standards in Mathematics, 3-5 Numbers and Operations - Fractions, p. 7).

Quantities cannot be added or subtracted if they do not have like units. Just as one cannot add 4 pencils and 3 bananas to have 7 of anything meaningful (unless one changes the unit of both to "objects"), the same applies to the units of fractions (their denominators). This explains why one must find a common denominator when adding or subtracting fractions with unlike denominators. Further, when you add or subtract quantities with like units, their units do not change.
Just as one adds 5 bananas and 2 bananas and gets 7 bananas, one adds 5 eighths and 2 eighths and gets 7 eighths.

"Converting a mixed number to a fraction should not be viewed as a separate technique to be learned by rote, but simply as a case of fraction addition. Similarly, converting an improper fraction to a mixed number is a matter of decomposing the fraction into a sum of a whole number and a number less than 1" (Progressions for the Common Core State Standards in Mathematics, 3-5 Numbers and Operations - Fractions, p. 8).

"It is possible to over-emphasize the importance of simplifying fractions. There is no mathematical reason why fractions must be written in simplified form, although it may be convenient to do so in some cases" (Progressions for the Common Core State Standards in Mathematics, 3-5 Numbers and Operations - Fractions, p. 6). Thus, students should not be expected to simplify fractions in all cases where it is possible to do so.

Vocabulary: mixed number; fraction greater than one. (Related Teacher Tools: 4th Grade Vocabulary Glossary.)

Unit Materials, Representations and Tools: Fraction Strips (or 5 white, 5 red, 5 light green, and 5 purple Cuisenaire rods); area models; buttons (with various diameters).

Lesson Map

Topic A: Building, Adding, and Subtracting Fractions Less Than or Equal to 1
- 4.NF.B.3.B — Decompose fractions as a sum of unit fractions and as a sum of smaller fractions.
- 4.NF.B.4.A — Decompose non-unit fractions and represent them as a whole number times a unit fraction.
- Add and subtract fractions within 1 with the same units.
- 4.NF.B.3.D — Solve word problems that involve the addition and subtraction of fractions where the total is less than or equal to one.

Topic B: Building, Adding, and Subtracting Fractions Less Than or Equal to 2
- Decompose non-unit fractions less than or equal to 2 as a sum of unit fractions, as a sum of non-unit fractions, and as a whole number times a unit fraction.
- Add and subtract fractions that require regrouping where the total is less than or equal to two.
- Add two fractions where one denominator is a divisor of the other using the denominators 2, 3, 4, 5, 6, 8, 10, and 12.

Topic C: Building, Adding, and Subtracting Fractions More Than 2
- Decompose and compose non-unit fractions greater than two as a sum of unit fractions, as a sum of non-unit fractions, and as a whole number times a unit fraction.
- 4.NF.B.3.C — Convert fractions greater than 1 to mixed numbers.
- Convert mixed numbers to fractions greater than 1.
- Compare and order fractions greater than 1 using various methods.
- Add a mixed number and a fraction.
- Add mixed numbers.
- Subtract a fraction from a mixed number.
- Subtract a mixed number from a mixed number.
- Solve word problems involving addition and subtraction of fractions.

Topic D: Multiplication of Fractions
- Multiply a whole number by a non-unit fraction.
- Multiply a whole number by a mixed number.
- Solve word problems involving multiplication of fractions.
- Solve word problems involving addition, subtraction, and multiplication of fractions.

Topic E: Line Plots
- 4.MD.B.4 — Make a line plot (dot plot) representation to display a data set of measurements in fractions of a unit.
- Solve problems by using information presented in line plots.
Core Standards

4.MD.B.4 — Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.

Number and Operations—Fractions
4.NF.B.3 — Understand a fraction a/b with a > 1 as a sum of fractions 1/b.
4.NF.B.3.A — Understand addition and subtraction of fractions as joining and separating parts referring to the same whole.
4.NF.B.3.B — Decompose a fraction into a sum of fractions with the same denominator in more than one way, recording each decomposition by an equation. Justify decompositions, e.g., by using a visual fraction model. Examples: 3/8 = 1/8 + 1/8 + 1/8; 3/8 = 1/8 + 2/8; 2 1/8 = 1 + 1 + 1/8 = 8/8 + 8/8 + 1/8.
4.NF.B.3.C — Add and subtract mixed numbers with like denominators, e.g., by replacing each mixed number with an equivalent fraction, and/or by using properties of operations and the relationship between addition and subtraction.
4.NF.B.3.D — Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using visual fraction models and equations to represent the problem.
4.NF.B.4 — Apply and extend previous understandings of multiplication to multiply a fraction by a whole number.
4.NF.B.4.A — Understand a fraction a/b as a multiple of 1/b. For example, use a visual fraction model to represent 5/4 as the product 5 × (1/4), recording the conclusion by the equation 5/4 = 5 × (1/4).
4.NF.B.4.B — Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 × (2/5) as 6 × (1/5), recognizing this product as 6/5. (In general, n × (a/b) = (n × a)/b.)
4.NF.B.4.C — Solve word problems involving multiplication of a fraction by a whole number, e.g., by using visual fraction models and equations to represent the problem. For example, if each person at a party will eat 3/8 of a pound of roast beef, and there will be 5 people at the party, how many pounds of roast beef will be needed? Between what two whole numbers does your answer lie?

Foundational Standards

3.MD.B.4 — Generate measurement data by measuring lengths using rulers marked with halves and fourths of an inch. Show the data by making a line plot, where the horizontal scale is marked off in appropriate units— whole numbers, halves, or quarters.
3.NF.A.1 — Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b.
3.NF.A.2 — Understand a fraction as a number on the number line; represent fractions on a number line diagram.
4.NF.A.1 — Explain why a fraction a/b is equivalent to a fraction (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to recognize and generate equivalent fractions.
4.NF.A.2 — Compare two fractions with different numerators and different denominators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2. Recognize that comparisons are valid only when the two fractions refer to the same whole.
Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.

Future Standards

4.MD.A.2 — Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.
5.MD.B.2 — Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Use operations on fractions for this grade to solve problems involving information presented in line plots. For example, given different measurements of liquid in identical beakers, find the amount of liquid each beaker would contain if the total amount in all the beakers were redistributed equally.
4.NF.C.5 — Express a fraction with denominator 10 as an equivalent fraction with denominator 100, and use this technique to add two fractions with respective denominators 10 and 100. Students who can generate equivalent fractions can develop strategies for adding fractions with unlike denominators in general. But addition and subtraction with unlike denominators in general is not a requirement at this grade. For example, express 3/10 as 30/100, and add 3/10 + 4/100 = 34/100.
5.NF.A.1 — Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators. For example, 2/3 + 5/4 = 8/12 + 15/12 = 23/12. (In general, a/b + c/d = (ad + bc)/bd.)
5.NF.B.4 — Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction.
5.NF.B.7 — Apply and extend previous understandings of division to divide unit fractions by whole numbers and whole numbers by unit fractions. Students able to multiply fractions in general can develop strategies to divide fractions in general, by reasoning about the relationship between multiplication and division. But division of a fraction by a fraction is not a requirement at this grade.

Standards for Mathematical Practice

CCSS.MATH.PRACTICE.MP1 — Make sense of problems and persevere in solving them.
CCSS.MATH.PRACTICE.MP2 — Reason abstractly and quantitatively.
CCSS.MATH.PRACTICE.MP3 — Construct viable arguments and critique the reasoning of others.
CCSS.MATH.PRACTICE.MP4 — Model with mathematics.
CCSS.MATH.PRACTICE.MP5 — Use appropriate tools strategically.
CCSS.MATH.PRACTICE.MP6 — Attend to precision.
CCSS.MATH.PRACTICE.MP7 — Look for and make use of structure.
CCSS.MATH.PRACTICE.MP8 — Look for and express regularity in repeated reasoning.
Classifying Reviewers by Experience: Modeling the Evolution of Reviewer Experience

Arturo Leon and Jason Cohen (Founder & Lead Data Scientist)

Identifying experienced tasters within a quality control team is important to our clients, as their opinions are valuable. For example, a brewery considering a change to its brewhouse equipment would seek out the most experienced reviewer and ask him/her how the new equipment will affect their products' flavor profiles. This paper will outline how we model the latent classification of reviewer experience over time.

At the heart of the issue is creating a model to identify which individuals on a panel are the most experienced tasters - and weight their reviews accordingly. The obvious approach is to ask the reviewer with the most reviews; let's call this reviewer X. Reviewer X may have the most reviews, but the majority are of only two products. Another reviewer, named Y, has reviewed the widest variety of products but has fewer reviews than X. Management could survey the tasting team's opinion regarding the new equipment, but due to time constraints, this is typically not the case. In turn, management would have to blindly pick a taster or decide without input from a tasting team; both are very poor options. With the development of Gastrograph, Analytical Flavor Systems can identify the most experienced reviewer on a team.

Various methods were applied in an attempt to classify reviews with respect to experience. The Partitioning Around the Medoids (PAM) algorithm, under various metrics and several cluster numbers, was applied but did not prove successful. For PAM to be successful, a transformation needs to be determined, or a metric must be learned from our data to properly weight our reviews. Linear and support vector regression techniques were applied in an attempt to predict Review Number; however, these methods proved ineffective. K Nearest Neighbors (kNN) was successfully applied to our data. The next section will discuss how kNN is carried out before discussing its application to our reviews.

Background - kNN

Consider two sets of data, \(Classified\) and \(Unclassified\). We wish to assign a class to all members of the set \(Unclassified\). In other words, we want to discover the requirements for membership of each class in \(Classified\), then classify the elements in \(Unclassified\) accordingly. kNN addresses the problem indirectly by first considering \(u \in Unclassified\) and the k nearest elements to \(u\) that belong to \(Classified\). Nearness can be determined either by distance or by a context-specific similarity metric. In this case, the Manhattan distance will be used as our metric. For the non-technical reader, the Manhattan distance between two points a, b in "four dimensional space" is:

\[\sum_{k=1}^4 \left| a_k - b_k \right| \]

Then u is classified as the mode of the classes of the k nearest elements in \(Classified\). The intuition behind the algorithm is that elements that are "close" to each other should be of the same class. The process can be visualized in the image below.

In the image above, the green circle is an unlabeled data point. If k = 3, the circle would be labeled as a red triangle because that is the mode class of the 3 nearest labeled data points. If k = 5, the circle would be labeled as a blue square because three of the five nearest points are blue squares.
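To make the voting rule concrete, here is a minimal sketch of kNN with the Manhattan distance in R (the post does not name an implementation language, so R, and all object names below, are our illustrative assumptions; train and test are numeric matrices of reviews, and labels holds the known classes):

manhattan_knn <- function(train, labels, test, k = 7) {
  apply(test, 1, function(u) {
    # Manhattan distance from the unclassified point u to every classified point
    d <- rowSums(abs(sweep(train, 2, u)))
    # majority vote (the mode) among the k nearest labeled neighbours
    nn <- labels[order(d)[seq_len(k)]]
    names(which.max(table(nn)))
  })
}

An odd k avoids most ties in the vote; when a tie does occur, which.max simply returns the first of the tied classes.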
Results and Analysis

In order to apply kNN to all of our coffee reviews, we must first classify the training/validation set (80% of all our data; the remaining 20% will be the testing set) by Review Number into three mutually exclusive classes. Review Number is used so that the classification carried out by kNN is a reflection of experience. The classes are plotted below.

Our flavor variables range from 0 to 5, while Review Number ranges from 1 to 600. If kNN were applied to the data without scaling Review Number, the results would be similar to our previous labeling. Instead, Review Number has been scaled so that it now ranges from 0 to 5. The results of the cross-validated kNN model applied to the scaled data are displayed below.

Clearly, the results are not very good. Exploratory analysis was therefore conducted, and it turned out that the data had many unique products with fewer than 5 reviews. These accounted for about 15% of the data, so these reviews were removed. Consequently, the results were much better. (Red dots are the outliers in our data.)

In order to understand why some reviews are seemingly mislabeled, reviewer X's submissions will be examined. Reviewer X has 5 years of sensory quality control experience but is new to the system (a common scenario amongst clients). Reviewer X's reviews are plotted below. X would have been labeled as average in terms of experience by the naive classification. The new results do not suffer from this shortcoming. Further analysis reveals that the majority of Reviewer X's submissions are in group 3, and after the 90th review, nearly all reviews are labeled into classes 2 and 3. In other words, Reviewer X was reviewing comparably to people with a higher Review Number because of experience. Thus, kNN is able to properly classify reviews submitted by tasters with prior experience, as well as tasters whose reviews are similar to those of inexperienced tasters. It should be noted that certain reviews from an experienced taster can be classified as lower experience due to other factors such as distraction or palate fatigue, such as the reviews in the 250 region marked at the lowest experience score of 1.

Given the fashion in which the data was labeled and the nature of the results, we can now view these groups as: 1 := least experienced (lambda), 2 := average experience (alpha), and 3 := most experienced (mu). Next, the test set was labeled by the seven nearest neighbors from the training set. The average Review Number is compared for the experience groups in both the test and training sets.

$$ \begin{array}{l|cc} \text{Avg. Review Number} & \text{Test} & \text{Train} \\ \hline \text{Class 1} & 161 & 158 \\ \text{Class 2} & 246 & 230 \\ \text{Class 3} & 298 & 284 \end{array} $$
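Concretely, the split, rescale, and label steps described above could look like the following sketch (using the manhattan_knn helper from earlier; the reviews data frame and its column names are again illustrative assumptions, not our production code):

set.seed(42)
n     <- nrow(reviews)
idx   <- sample(n, floor(0.8 * n))              # 80/20 train/test split
train <- reviews[idx, ]
test  <- reviews[-idx, ]

# rescale Review Number (1-600) onto the 0-5 range of the flavor variables
to05 <- function(x) 5 * (x - min(x)) / (max(x) - min(x))
train$ReviewNumber <- to05(train$ReviewNumber)
test$ReviewNumber  <- to05(test$ReviewNumber)   # a real pipeline would reuse the training min/max

# label the test set by its seven nearest training neighbours
feats <- setdiff(names(reviews), "Experience")
pred  <- manhattan_knn(as.matrix(train[, feats]), train$Experience,
                       as.matrix(test[, feats]), k = 7)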
Conclusion and Applications

From our analysis it can be concluded that a satisfactory method of labeling reviews by experience, and by extension the reviewers who submitted them, has been found. This new method can now be used to improve and streamline other tools that have been developed. For example, we can now compare the sensitivities and biases of experienced and novice reviewers, thereby studying the evolution of flavor sensitivity over time. We have updated our objectives for a model of consistency by requiring that the average consistency of experienced reviewers should be high. We will build on our experience classifier by developing an experience coefficient as a way to quantify experience.
Inception makes non-malleable codes shorter as well!

Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs in ICS 2010, have emerged in the last few years as a fundamental object at the intersection of cryptography and coding theory. Non-malleable codes provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message or a completely "unrelated value". Although such codes do not exist if the family of "tampering functions" $\mathcal{F}$ allowed to modify the original codeword is completely unrestricted, they are known to exist for many broad tampering families $\mathcal{F}$. The family which has received the most attention is the family of tampering functions in the so-called (2-part) split-state model: here the message x is encoded into two shares L and R, and the attacker is allowed to tamper arbitrarily with each of L and R individually.

Dodis, Kazana, and the authors in STOC 2015 developed a generalization of non-malleable codes, the concept of a non-malleable reduction, where a non-malleable code for a tampering family $\mathcal{F}$ can be seen as a non-malleable reduction from $\mathcal{F}$ to a family NM of functions comprising the identity function and constant functions. They also gave a constant-rate reduction from the split-state tampering family to a tampering family $\mathcal{G}$ containing so-called $2$-lookahead functions and forgetful functions. In this work, we give a constant-rate non-malleable reduction from the family $\mathcal{G}$ to NM, thereby giving the first constant-rate non-malleable code in the split-state model.

Central to our work is a technique called inception coding, introduced by Aggarwal, Kazana and Obremski in TCC 2017, in which a string that detects tampering on a part of the codeword is concatenated to the message that is being encoded.
Perspectives on modelling the distribution of ticks for large areas: so far so good?

Agustín Estrada-Peña, Neil Alexander and G. R. William Wint
© Estrada-Peña et al. 2016

This paper aims to illustrate the steps needed to produce reliable correlative modelling for arthropod vectors when process-driven models are unavailable. We use ticks as examples because of the (re)emerging interest in the pathogens they transmit. We argue that many scientific publications on the topic focus on: (i) the use of explanatory variables that do not adequately describe tick habitats; (ii) the automatic removal of variables causing internal (statistical) problems in the models, without considering their ecological significance; and (iii) spatial pattern matching rather than niche mapping, therefore losing information that could be used in projections.

We focus on extracting information derived from modelling the environmental niche of ticks, as opposed to pattern matching exercises, as a first step in the process of identifying the ecological determinants of tick distributions. We build models for widely reported species of ticks in the Western Palaearctic to derive a set of covariates describing the climate niche, reconstructed from a Fourier transformation of remotely sensed information. We demonstrate the importance of assembling the ecological information that drives the distribution of ticks before undertaking any mapping exercise, from which this kind of information is lost. We also show that customised covariates are more relevant to tick ecology than the widely used set of "Bioclimatic Indicators" ("Biovars") derived from interpolated datasets, and we provide programming scripts to easily calculate them. We demonstrate that standard pre-tailored vegetation categories also fail to describe tick habitats and are best used to describe absence rather than presence of ticks, but could be used in conjunction with the climate-based suitability models. We stress the better performance of climatic covariates obtained from remotely sensed information, as opposed to interpolated explanatory variables derived from ground measurements, which suffer from internal issues affecting modelling performance.

Extracting ecological conclusions from modelling projections is necessary to gain information about the variables driving the distribution of arthropod vectors. Mapping exercises should be a secondary aim in the study of the distribution of health-threatening arthropods.

Keywords: Correlative distribution modelling; MODIS; Fourier transformation

Background

Environmental suitability varies within the range of virtually all species, causing patchy rather than homogeneous distributions, with a presumed lower abundance in the less suitable areas [1]. It is this environmentally driven heterogeneity that underpins species distribution modelling (i.e. habitat modelling; ecological, environmental, or climate niche modelling), all of which identify places suitable for the survival of populations by identifying their environmental requirements [2]. These models commonly use associations between environmental variables and known species occurrence and/or absence records to identify conditions within which populations can be maintained [3, 4]. The niche concept was popularized in 1957 by Hutchinson [5]. The Hutchinsonian niche is an n-dimensional hypervolume, where the dimensions are environmental conditions and resources that define the requirements for a population to persist.
The niche can thus be mathematically quantified and the "position" of a species in the n-dimensional volume plotted and analysed. An "environmental niche" can be defined as the combination of a series of (often climatic or vegetation-related) variables that influence survival and reproductive rates, thereby "driving" population growth [6] and limiting a species' range. A broader definition of the niche can incorporate topographic features (like slope or aspect), as well as demographic, social and agricultural features of an area. In the case of parasitic arthropod vectors, "biotic" features of their life-cycles, such as the availability, number and abundance of the host species used, can also play an important role in sustaining or restricting population growth, in addition to the climate conditions [7].

There is an increasing interest in predicting the possible effects of changing climate on the distributions of parasitic arthropods. Studies have been carried out on many organisms, including mosquito, sandfly or tick vectors [i.e. 8–10], which capture, at various spatial resolutions, the factors driving their distribution, and attempt to understand how variation in these factors might shape future distributions. The (often mapped) results of these modelling exercises are used to disseminate information and help decision makers produce strategies of preparedness, adaptation or response.

We acknowledge that the best methods for modelling the impact of environmental variables on the tick life-cycle are process-driven models, because they describe each development or mortality process [11]. The lack of knowledge of the drivers of physiological processes for many species of ticks precludes their use on a wide scale, however, and correlative or statistical distribution modelling is therefore extensively used instead. The validity of these methods is sometimes undermined by several methodological issues, which can arise from an insufficient understanding of (a) the biology of the organism to be modelled, (b) the statistical rules that drive the concept of niche modelling, or (c) which variables should be used to build the models. For example, an association with a covariate does not prove cause and effect [12], and so extrapolating such associations to future climate scenarios may not be appropriate, as our current knowledge of ticks is often insufficient to reliably predict the impact of changing environments on tick biology. Instead, the scientific landscape of tick modelling has focused on relatively few methods, and has very often relied on a single set of explanatory variables derived from interpolated climatic datasets [13], which have not always proven suitable for correlative modelling of tick distributions [14].

This paper explores the process of correlative tick distribution modelling based on a large and updated dataset of tick records from the Palaearctic region [10], which is made available in the supplementary information of this paper. We emphasize that this paper is neither a review of statistical methods nor a comparison of the performance of different techniques. Our aim is rather to use the results of ad hoc analyses, in the context of recently published developments, to show how best to implement the models, focussing on several discrete stages.
The first is producing the habitat suitability models, including: (i) choosing raw covariates; (ii) producing derivatives that are relevant to tick ecology and incorporating them into models of tick suitability; and (iii) mapping the outputs in a way that allows flexibility of presentation. The second stage illustrates how to use the suitability models to identify which of several types of environmental characteristics are associated with high suitability, including: (i) combinations of the predictor covariates; (ii) additional customised, ecologically meaningful derivatives of covariates; and (iii) land use/land cover. In selected cases, we provide scripts written in the widely used R programming environment [15] to improve the understanding of the methods explained here and to enable readers to do their own analyses. We will not explicitly address the issues of the presentation of results, which are usually delivered in the form of maps, but will provide arguments for using a plot of the niche in the environmental rather than the spatial dimensions, arguing the importance of defining the drivers of tick distributions, something rarely explored in this field of research.

The choice between interpolated or remotely sensed environmental covariates

Correlative spatial modelling aims to establish how environmental covariates (or predictors) are associated with distribution patterns. This section focusses on the use of eco-climatic variables and deliberately ignores the other possible drivers of distribution, such as host presence, socio-economic factors, transport, trade in animals, or geographic barriers that may affect the spread of ticks [16]. There is a growing tendency to use interpolated weather station datasets [13] to identify the geographical range in which arthropod vectors may survive, and then to project these trained models onto future scenarios. These data are enormously useful for describing large-scale patterns in climate if the stations are close together, but if they are far apart, as in many remote areas, interpolations are increasingly inaccurate. In addition, because they are interpolated, monthly summaries suffer from significant co-linearity and auto-correlation [14, 17, 18]. As a result, using consecutive (highly auto-correlated) monthly summaries can lead to over-performing and often unreliable models [17, 18].

Some authors have attempted to avoid these problems by using the so-called Bioclimatic Indicators or "Biovars" (e.g. maximum temperature of the warmest quarter), which are intended to represent ecological descriptors of an abiotic niche and are assumed to suffer less from co-linearity [19]. Given that tick life-cycles are largely driven by two variables - temperature and water losses - the latter driven to a large degree by a combination of temperature and humidity [20], some authors claim that combinations of "Biovars" [i.e. 21–24] can be used as reliable covariates. Whilst they are an improvement on simple monthly summaries, we assert that "Biovars" are general indicators and that (i) they are not tailored for any species; (ii) they include interpolated rainfall data, but these are not necessarily related to the measures of humidity that are most relevant to ticks [25]; (iii) "Biovars" are simple values and at best have seasonality or timing values with a resolution of 1 month or more; and finally, (iv) "Biovars" are based on the raw monthly summaries and so are also affected by co-linearity [14, 17].
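As a concrete illustration of the internal statistical problems referred to above, the short R sketch below (covs is a hypothetical data frame of candidate covariates, e.g. twelve interpolated monthly temperature summaries for a study area) flags pairs of covariates whose pairwise correlation exceeds a chosen threshold, before any model is trained:

r   <- cor(covs, use = "complete.obs")            # pairwise correlations
bad <- which(abs(r) > 0.9 & upper.tri(r), arr.ind = TRUE)
data.frame(var1 = rownames(r)[bad[, 1]],          # highly co-linear pairs
           var2 = colnames(r)[bad[, 2]],
           r    = round(r[bad], 2))

Simply dropping the flagged variables is the automated fix criticized in the next section; the alternative pursued in this paper is to replace the redundant monthly series with a few ecologically interpretable Fourier coefficients.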
A common procedure in these cases is simply to drop the affected variable(s) from the model because they are considered "disturbing covariates", in other words, variables that do not adequately train the modelling algorithms and therefore affect their outcomes. In this sense, Araújo & Guisan [26] stated that the "use of automated solutions to predictor selection … should not be seen as a substitution for preselecting sound eco-physiological predictors based on deep knowledge of the bio-geographical and ecological theory" (see also reference [27] for comments on the arbitrary selection of explanatory covariates). Studies that automatically drop covariates from a model are focused on statistical purity rather than on ecological explanation: such models will probably gain in statistical "correctness" but may lose biological relevance. We question why generalist variables affected by statistical issues should be used at all when we can tailor our own variables for large areas and for the particular species to be modelled.

Satellite-derived information has a long tradition as a descriptor of the environment affecting parasitic arthropods [e.g. 28–30]. Most remotely sensed time series of temperature and vegetation are available as 8- or 16-day composites covering periods of many years. Selecting a particular parameter as a covariate (which year, which month, which measure: mean, minimum, maximum, etc.) becomes something of a lottery, and using too many variables, e.g. weekly values, will produce overfitted, over-performing models. It is therefore desirable to perform some sort of data reduction to produce relatively few variables to choose from. Because remotely sensed data form continuous time series, they can be reduced to their most basic temporal components, reducing redundancy (which should be avoided in any correlative modelling) while retaining ecological meaning. One such data reduction method uses Temporal Fourier Transformations (TFT) to convert a time series into a mean and a number of fixed components of different periods (annual, biannual, triannual, etc.), each described by a phase and an amplitude. It has been reported elsewhere [31] that TFT can be used to decompose time series of satellite image data into harmonic components that are less prone to co-linearity and so more suitable for building and training correlative models. The harmonic regression that computes these terms produces a series of coefficients with considerable potential as descriptors of the climate or of vegetation seasonality, and TFT derivatives of MODIS imagery have been reported as effective and ecologically meaningful predictors for several tick species with regional or worldwide distributions [14]. A linear regression has the form y = a + bx; in a harmonic regression, the predictors are sine and cosine transformations of time, which capture the periodic behaviour of the values, and the fitted coefficients can then be used as covariates.
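As a concrete illustration, the following minimal R sketch fits such a harmonic regression to a monthly series by ordinary least squares; the input values and the use of two harmonics are assumptions made for this example, not the exact procedure of the script in Additional file 1:

# y: mean monthly values (e.g. of LSTD) for one pixel; t: time as a
# fraction of the annual cycle. The values of y are invented.
y <- c(2, 4, 9, 14, 19, 24, 27, 26, 21, 14, 8, 3)
t <- (1:12) / 12

# Harmonic regression with annual (2*pi*t) and biannual (4*pi*t) terms:
fit <- lm(y ~ sin(2 * pi * t) + cos(2 * pi * t) +
              sin(4 * pi * t) + cos(4 * pi * t))

# The intercept (a1) is the mean of the series; the remaining four
# coefficients (a2..a5) are the candidate covariates for modelling.
coef(fit)

# The coefficients reconstruct the smoothed seasonal curve:
plot(t, y)
lines(t, fitted(fit))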
Written out in full, the regression has the form:

$$ y = a_1 + a_2\sin(2\pi t) + a_3\cos(2\pi t) + a_4\sin(4\pi t) + a_5\cos(4\pi t) + \dots + a_{2k}\sin(2k\pi t) + a_{2k+1}\cos(2k\pi t) $$

In this regression, the a's are the coefficients, numbered consecutively; t is the time (in days, weeks, or any chosen interval); and y is the value of the variable at that time. The first term (a1) is the average of the time series, while each subsequent pair of coefficients describes the slope and the duration of a seasonal change: a2 and a3 describe the slope and the duration of spring, while a4 and a5 describe the negative slope and the duration of autumn. By substituting the values of the coefficients back into the equation, the original series can be reproduced at will, so the coefficients summarize the series without recourse to redundant variables [14, 32]. A script that calculates these coefficients is provided in Additional file 1. Coefficients can be obtained for as many components of this equation as necessary, but in practice three or four components (plus the independent term, a1) have been shown to be sufficient for reliable correlative modelling [14, 32].

Building models of distribution for species of ticks in the Western Palaearctic

The following examples are based on multiple logistic regression models for seven species of ticks recorded in the Western Palaearctic: Dermacentor marginatus, D. reticulatus, Hyalomma marginatum, Haemaphysalis punctata, Ixodes ricinus, Rhipicephalus annulatus and R. bursa. The results in the main text focus on I. ricinus and H. marginatum, which are used to illustrate further features of the correlative modelling of tick distributions. The explanatory variables are the Fourier coefficients of MODIS satellite data for the daytime land surface temperature (LSTD) and the Normalized Difference Vegetation Index (NDVI), a measure of photosynthetic activity. NDVI has been used as a proxy for vegetation stress [31, 33] and can be used to derive the relative humidity within the vegetation layer [31]. Both LSTD and NDVI were obtained at 1 km spatial resolution, every 8 or 16 days respectively, for the period 2001 to 2014, from the MODIS website (http://modis.gsfc.nasa.gov/data/dataprod, accessed December 2014). Note that the MODIS datasets also include night-time land surface temperature, which could be used instead, although this is unlikely to affect the models [34]. After calculating monthly averaged values of both LSTD and NDVI over the complete period, we obtained five coefficients of the harmonic regression for each parameter. Note that any potential correlations in the raw LSTD and NDVI series [17] are removed by the calculation of the TFT variables. The resulting ten explanatory variables (LSTD1 to LSTD5 and NDVI1 to NDVI5) were the only covariates included in the models. The tick occurrence dataset used in this exercise has been described elsewhere [10] and was updated for this exercise with records from the literature published up to December 2014.
This dataset is available as Additional file 2. To reduce the impact of variability in the geo-referencing of the tick dataset, and to produce a common output system, we used a grid of hexagonal polygons as mapping units, at a spatial resolution of 0.1°, covering the region of interest. The use of a hexagonal grid also allows the data to be aggregated and presented for the administrative divisions preferred by planners or, indeed, for any other polygon areas (see Fig. 1).

Fig. 1 The geographical distribution of suitable environmental conditions for the tick Ixodes ricinus in the western Palaearctic, as obtained with the coefficients of the logistic regression shown in Table 1 and applying a grid of 0.05° to the complete target territory (a). The method has the potential not only to map such suitable conditions but also to overlay them on administrative divisions, allowing the planning of active surveys in territories where the tick is as yet undetected but suitability is positive (b, from the square in a). It also allows decision makers to apply effective impact measures in defined territories. Because the grid covers the whole territory, weather trends can also be evaluated, together with the variables shaping the distribution of a given species of tick

The presence of each tick species was extracted for each hexagon, as was the median value of each of the ten explanatory covariates. The model outputs are the probabilities of occurrence of each tick species, converted to values ranging from 0 (unsuitable) to 100 (completely suitable). Figure 1 displays the predicted suitability for I. ricinus. Results for four additional species for which enough distribution points were available are provided in Additional files 3, 4, 5 and 6. Additional file 7 shows the model parameter estimates and the importance of the explanatory covariates for each of the seven models (including those for which the low number of records precludes further analysis).

Identifying the factor(s) that limit the occurrence of ticks

It has been shown [35] that maps displaying the probability of occurrence of any organism are essentially an exercise in pattern matching and gap filling of a known distribution. They have an obvious use in showing distributions, but they do not explain the processes affecting the ecology of the modelled species. This limits the epidemiological conclusions that can be drawn from them and precludes assessing the ecological consequences of changing covariates on the species' distribution. An indication of the ecological significance of the driving variables can, however, be achieved by plotting the modelled occurrence in environmental space to identify the impact of the predictors, and then looking at the geographical distributions of the predictors so identified. The former is necessary to understand the ecological determinants that affect the modelled distribution of the organism; the latter is simply a translation into a more easily visualised format. The following paragraphs provide an overview of this process.

Since ten variables were used for model building, it is desirable to reduce the dimensionality of the niche to improve the readability of the resulting charts. We therefore reduced the number of dimensions by applying a Principal Components Analysis (PCA) to the results above, using the R programming environment. Additional files 8 and 9 show the results of the PCA derived from the explanatory covariates.
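The modelling and ordination steps described above could be sketched in R roughly as follows; the data frame, its column names and the simulated values are placeholders for illustration only, not the paper's actual data:

# hex: one row per hexagonal cell, with tick presence (0/1) and the
# median values of the ten TFT covariates (simulated placeholders).
covars <- c(paste0("LSTD", 1:5), paste0("NDVI", 1:5))
set.seed(1)
hex <- as.data.frame(matrix(rnorm(2000), ncol = 10))
names(hex) <- covars
hex$presence <- rbinom(200, 1, plogis(hex$LSTD1 - hex$NDVI1))

# Multiple logistic regression of presence on the ten covariates:
m <- glm(reformulate(covars, response = "presence"),
         family = binomial, data = hex)

# Predicted probability of occurrence, rescaled to 0 (unsuitable)
# to 100 (completely suitable):
hex$suitability <- round(100 * predict(m, type = "response"))

# PCA on the scaled covariates to reduce the niche to two axes:
pca <- prcomp(hex[, covars], center = TRUE, scale. = TRUE)
summary(pca)        # variance explained by each component
head(pca$x[, 1:2])  # cell scores on the first two components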
For the sake of simplicity, we show these results only for Ixodes ricinus and Hyalomma marginatum, because they have widely diverging ecological requirements and their plots clearly illustrate our rationale. The PCA outputs illustrate the contribution of the different explanatory variables to the probability of occurrence of each species. The cells with the highest predicted probability of occurrence (warm colours) for I. ricinus (Additional file 8) have high mean values of both NDVI (NDVI1) and LSTD5, which implies a small difference in temperature between autumn and winter. The picture for H. marginatum is different (Additional file 9): the most suitable environments are sites with high mean temperatures (LSTD1) and high seasonality of the vegetation in both spring and autumn (NDVI2, NDVI5). Even this limited example clearly shows how a few explanatory covariates can be used to identify the factors influencing a species' distribution, conclusions that cannot be drawn from a simple species distribution map.

Deriving covariates with ecological meaning

The drivers identified using the PCA above illustrate the role of the explanatory covariates in determining the distribution of the modelled species. Whilst the reduction represented by the PCA space is both synthetic and statistically robust, its ecological meaning is sometimes difficult to capture [36]. A more precise understanding of the drivers of tick distributions requires ecological parameters with more widely acknowledged ecological impact, such as cumulative spring temperatures, rates of vegetation increase, or the temperature deficit below a given threshold in winter [14, 20]. These are similar to the Bioclimatic Indicators provided with interpolated datasets, but much more precisely defined so as to be relevant to the modelled organism. As mentioned previously, the original time series on which the TFT was performed can be reconstructed at any temporal resolution, even daily, and tailored variables can then be built that incorporate an ecological context into the calculated probability of occurrence. Once the daily time series is reconstructed, it is straightforward to prepare specific combinations of weather traits. A script in R is provided in Additional file 10, which imports a series of TFT coefficients derived from remotely sensed images and computes a large set of variables that are biologically relevant for ticks [20]. The script can easily be tailored to produce other sets of derived variables, according to the organism's requirements or the assumptions used in modelling. Whilst we illustrate the use of such variables below, we emphasise that there are no "silver bullets" in choosing the parameters to investigate in this way. Rather, we acknowledge that the variables of significance in the climate niche may differ according to the species of tick and even the geographical context, and that selecting which to investigate is likely to be an iterative process. These variables are not intended as potential covariates for modelling, but aim to provide simple descriptions of the factors matching the predicted probability of occurrence. They are also "data driven" in that, for example, variables with a seasonal component (e.g. the sum of temperatures in spring) are based on the actual temperature changes and their slope, rather than on a pre-defined date [37], and are therefore not tied to the astronomical change of season, which is meaningless for the physiology of the modelled organisms.
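To make the reconstruction step concrete, here is a small R sketch that rebuilds a daily temperature curve from five harmonic coefficients and derives two tailored variables; the coefficient values and the data-driven "spring" window are invented for illustration:

# Five TFT coefficients for one pixel: mean, plus annual and biannual
# sine/cosine terms (invented values).
a <- c(12, -9, -4, 1.5, 0.8)

d <- (1:365) / 365  # time as a fraction of the year
lstd.daily <- a[1] + a[2] * sin(2 * pi * d) + a[3] * cos(2 * pi * d) +
                     a[4] * sin(4 * pi * d) + a[5] * cos(4 * pi * d)

# A data-driven "spring": the period over which the reconstructed
# curve rises from its annual minimum to its annual maximum.
rise <- which.min(lstd.daily):which.max(lstd.daily)

# Tailored covariates: cumulative temperature above 0 degrees C over
# that window, and the mean daily slope of the rise.
sum(pmax(lstd.daily[rise], 0))
mean(diff(lstd.daily[rise]))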
We encourage researchers to investigate the possibilities of producing such tailored data for any part of the Earth's surface and for any species of tick, using the backbone of the script provided and the TFT coefficients supplied (see examples in Additional file 11). As with the PCA plots above, these tailored covariates can be plotted against the axes of the environmental niche. Examples for I. ricinus and H. marginatum are presented in Additional files 12 and 13, respectively. The aim is illustrative: to show how different variables are associated with the predicted occurrence of ticks, and the wide range of composites with ecological meaning that can be specifically tailored for this purpose. For I. ricinus, we used the sum of NDVI in spring, the accumulated temperature over 0 °C in winter, the 90 % quantile of NDVI values, and the amplitude of temperature in spring. For H. marginatum, we used the cumulative temperature over 0 °C in autumn, the number of days over 0 °C in winter, the 75 % quantile of NDVI values, and the number of days over 0 °C in autumn.

The plots show that the predicted occurrence of I. ricinus follows a pattern where (i) the 90 % quantile of the NDVI is high, implying a dense vegetation layer; and (ii) the cumulative NDVI in spring is high, implying a dense vegetation layer following the winter. There is, however, a weaker relation between the slope of the NDVI in spring and the predicted probability of occurrence (Additional file 12). Some of these variables may of course be correlated, but this is less relevant here, because our purpose is not modelling but quantifying ecologically relevant descriptions of suitable habitats. The plots also show that I. ricinus has a high probability of occurrence along a narrow range of high temperatures (measured by the 90 % quantile of annual temperatures and the cumulative spring temperatures) and has a strong negative association with the slope of temperature in spring. The highest values of predicted occurrence are found where temperatures increase slowly in spring and where NDVI is high. Sites with adequate relative humidity (indicated by high values of the NDVI-related variables, which are indicators of relative humidity) are likely to support permanent populations of this tick only if the cumulative winter temperature is within the defined upper and lower thresholds (measured either as the number of days above 0 °C or as the cumulative temperature).

The example of H. marginatum identifies a different set of limiting variables (Additional file 13). High probability of occurrence is predicted where (i) the vegetation is relatively poor; (ii) there are intermediate values of cumulative temperature in autumn; and (iii) there is a wide range of maximum annual temperatures. These findings are consistent with previous reports on the regulation of the life cycle of H. marginatum [38], in which the temperatures in autumn and winter are important factors limiting the distribution of year-round populations. The analysis also suggests that this species favours areas of comparatively low relative humidity (interpreted from the low NDVI). We stress that this process is not modelling the distribution of these two species of ticks, but is an analysis of the ecological factors that are associated with, and so may drive, their modelled distributions. We do not state that the covariates identified for I. ricinus and H. marginatum are the only factors restricting their distributions, but they are illustrative of the factors governing them: in this way we have added an ecological dimension to the modelling.
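Charts of the kind shown in Additional files 12 and 13 could be drawn in R along the following lines; the object names, variable names and simulated values are hypothetical placeholders:

# Hypothetical data: derived variables and modelled suitability
# (0-100) for one tick species, one row per grid cell.
set.seed(2)
cells <- data.frame(ndvi.q90    = runif(200),
                    ndvi.spring = runif(200),
                    ndvi.slope  = runif(200),
                    suitability = runif(200, 0, 100))

# Two tailored covariates on the axes, a third as symbol size, and
# the predicted occurrence as the colour scale:
pal <- colorRampPalette(c("steelblue", "yellow", "red"))(101)
plot(cells$ndvi.q90, cells$ndvi.spring,
     cex = 0.5 + 2 * cells$ndvi.slope,
     col = pal[round(cells$suitability) + 1],
     pch = 19,
     xlab = "90 % quantile of annual NDVI",
     ylab = "Cumulative NDVI in spring")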
There are some criticisms of this approach. First, it is based on satellite imagery, which cannot measure the microclimate that actually affects the development, mortality and questing activity of ticks. Secondly, it is still based on regressions, using covariates assumed to have greater explanatory power for the life processes of the ticks. As stated at the beginning of this paper, it is well established that process-driven models would perform better [36], but we believe that when, as is generally the case, not enough is known to build area-wide process-based models, this type of analysis provides a viable alternative for identifying the ecological drivers of tick distributions. As the previous example shows, the rich environmental information derived from simple datasets gives researchers an initial look at the factors that restrict distributions or shape the predicted probability of occurrence. Such information cannot be extracted from a map produced by an algorithm processing static (and so possibly unreliable) explanatory variables.

The use of categories of vegetation as descriptors for tick presence/absence

There have been a number of attempts to define habitat or environmental suitability for tick vectors and their hosts using land use or land cover rather than climate variables or vegetation indices. Examples focusing on ticks include (i) using vegetation categories derived from classified satellite imagery to map the habitats of the invasive tick Amblyomma variegatum in the Caribbean [39] and of H. marginatum in the UK [40]; and (ii) using plant species alliances to map the distribution of I. ricinus for a territory in France [41, 42] or to map the reported distribution of a tick-borne virus at a national level [43]. This is a topic of potential interest, because vegetation type has a straightforward interpretation, can be updated regularly, is driven by climate, may reflect anthropogenic influences and change, and has the potential to describe tick habitats. Vegetation is commonly mapped at high "resolution" (in terms of species, coverage, height, etc.) at the local or regional scale. At national or larger scales, the smaller number of predefined vegetation categories in the standard datasets may not be suitable for defining tick habitats. This is simply a matter of the number of categories commonly used in maps of potential vegetation, not an issue of the discriminative power of the vegetation itself. In an attempt to identify associations between tick distributions and vegetation categories, we cross-tabulated the observed presence of ticks against two widely acknowledged categorical descriptions of the vegetation, namely CORINE-3 for Europe (http://www.eea.europa.eu/data-and-maps/data/corine-land-cover-2006-raster-3, accessed February 2015) and the GlobCover 2009 Scheme of Vegetation Classification (http://due.esrin.esa.int/page_globcover.php, accessed February 2015). Both are standardized descriptors of vegetation at continental or global scales, at a relatively high spatial resolution (around 90–300 m). We cross-tabulated the presence of each tick species against the dominant vegetation (calculated as the majority vegetation class within each cell of the grid described before).
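Such a cross-tabulation could be sketched in R as follows; the species records and dominant land-cover classes per grid cell are simulated placeholders, not the real dataset:

# One row per occupied grid cell: the tick species recorded there and
# the dominant (majority) land-cover class of the cell.
set.seed(3)
species <- sample(c("I. ricinus", "H. marginatum", "D. marginatus"),
                  500, replace = TRUE)
landcov <- sample(c("Broad-leaved forest", "Sclerophyllous vegetation",
                    "Non-irrigated arable land", "Moors and heathland"),
                  500, replace = TRUE)

# Counts, then the percentage of each species' records per class
# (column percentages, as in Tables 1 and 2):
tab <- table(landcov, species)
round(100 * prop.table(tab, margin = 2), 1)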
The results (Tables 1 and 2) show that whilst tick presence is not consistently associated with particular categories of vegetation, tick absence may be ascribed to a set of categorical descriptors. This suggests that, unless they are tailored to better reflect tick niches, the current schemes of vegetation categories at national or continental scales may not be effective descriptors of suitable tick habitats, though they could perhaps be used to describe unsuitable ones. In addition, we believe that combining vegetation with climatic (or other) limiting factors of the sort discussed in the previous sections could improve the resolution of purely climate-based environmental niche maps, which could better guide the planning of local surveys to confirm the presence of a tick species.

Table 1 The percentage of tick records reported in the western Palaearctic, obtained through a systematic literature search [10], tabulated against the CORINE-3 land cover classification scheme (the percentage values themselves are not reproduced here). The categories are: Agro-forestry areas; Annual crops associated with permanent crops; Broad-leaved forest; Complex cultivation patterns; Coniferous forest; Continuous urban fabric; Discontinuous urban fabric; Fruit trees and berry plantations; Green urban areas; Industrial or commercial units; Inland marshes; Land principally occupied by agriculture, with significant areas of natural vegetation; Moors and heathland; Natural grasslands; Non-irrigated arable land; Olive groves; Peat bogs; Permanently irrigated land; Rice fields; Sclerophyllous vegetation; Sparsely vegetated areas; Transitional woodland-shrub. Tabulation was done using the records in the grid against the majority vegetation class in the vegetation layer. Abbreviations for the species of ticks: DM, D. marginatus; DR, D. reticulatus; HM, H. marginatum; HP, H. punctata; IR, I. ricinus; RA, R. annulatus; RB, R. bursa

Table 2 The percentage of tick records reported in the western Palaearctic, obtained through a systematic literature search [10], tabulated against the GlobCover land cover classification scheme. The categories are: Artificial surfaces and associated areas (urban areas > 50 %); Bare areas; Closed (> 40 %) broadleaved deciduous forest (> 5 m); Closed (> 40 %) needleleaved evergreen forest (> 5 m); Closed to open (> 15 %) (broadleaved or needleleaved, evergreen or deciduous) shrubland (< 5 m); Closed to open (> 15 %) grassland or woody vegetation on regularly flooded or waterlogged soil - fresh, brackish or saline water; Closed to open (> 15 %) herbaceous vegetation (grassland, savannas or lichens/mosses); Closed to open (> 15 %) mixed broadleaved and needleleaved forest (> 5 m); Mosaic cropland (50–70 %)/vegetation (grassland/shrubland/forest) (20–50 %); Mosaic forest or shrubland (50–70 %)/grassland (20–50 %); Mosaic grassland (50–70 %)/forest or shrubland (20–50 %); Mosaic vegetation (grassland/shrubland/forest) (50–70 %)/cropland (20–50 %); Open (15–40 %) needleleaved deciduous or evergreen forest (> 5 m); Post-flooding or irrigated croplands (or aquatic); Rainfed croplands; Sparse (< 15 %) vegetation

Conclusions

Our aim has not been to compare the performance of modelling algorithms for the environmental suitability of an organism, as there is already a rich literature on the topic [e.g. 44, 45].
Rather, our objectives have been to illustrate how to obtain reliable estimations of environmental suitability for vectors by highlighting the importance of (i) a good set of explanatory covariates; (ii) understanding the ecological requirements of the target species; and (iii) deriving ecologically sound information from a set of customised explanatory covariates. Our focus on ticks is especially relevant in the context of the increasing interest in the (re)emerging pathogens they transmit. We have built our examples in the context of the growing tendency simply to "map" the predicted distribution of ticks using algorithms that operate on sets of interpolated datasets that do not provide adequate ecological descriptions of the observed patterns of distribution. We suggest that using the ubiquitous interpolated climatic covariates in the spatial modelling of ticks can produce flawed outputs, because the parameters themselves (i) may not be effective proxies for the variables that drive tick distributions; (ii) have variable accuracy depending on weather station density; and, as importantly, (iii) by their very nature can be statistically inappropriate because of spatial auto-correlation and co-linearity.

We have provided examples of spatial models of environmental suitability that use variables derived from harmonic regressions of remotely sensed proxies of the climatic covariates that drive the distributions of ticks with widely diverging environmental constraints. Not only do these parameters have readily definable biological meaning, they are also less prone to statistical flaws and can produce reliable models. We have provided examples and evidence of how gridded modelling at a continental scale can extract information about the environmental suitability for ticks, at a relatively coarse scale, which can be used to design surveillance programmes or be converted to administrative-level outputs for use by public health decision makers to improve preparedness and response strategies. We have also shown that tailored covariates, created from the coefficients of the harmonic regressions, can improve upon the widely used set of bioclimatic indicators derived from interpolated datasets. The association between these variables and the modelled environmental suitability helps to identify which parameters are related to tick distributions, something that commonly cannot be achieved with pre-tailored covariates. In theory, this approach could produce a simple classification of tick habitats based on remotely sensed surrogates. We have also demonstrated that the standardized vegetation categories provided by land use/land cover datasets are best used to describe absence rather than presence; they could, however, be used in conjunction with the climate-based suitability models to enhance spatial resolution. Such classifications (which can be obtained at other resolutions and over different regions) could then be monitored using the wide array of satellite platforms, to evaluate environmental changes over large regions and their impact on the ecology of ticks. These thematic zones, together with an explicit evaluation of the reservoir capacity of vertebrate species over large regions, could eventually produce an actual estimation of the "hazard from ticks and pathogens" over wide areas.
We intend these examples to be a road map to guide researchers in producing statistically robust and biologically sound distribution models for arthropod vectors when process-driven models are unavailable, and in using those models to derive ecologically meaningful conclusions.

Acknowledgements

Parts of this work were carried out under (a) VectorNet, a European network for sharing data on the geographic distribution of arthropod vectors transmitting human and animal disease agents (framework contract OC/EFSA/AHAW/2013/02-FWC1), funded by the European Food Safety Authority (EFSA) and the European Centre for Disease Prevention and Control (ECDC), and (b) the EU FP7 613996 Emerging Viral Vector-Borne Diseases project (VMERGE); the work is catalogued by the VMERGE Steering Committee as VMERGE-0007. The statements in this paper are not the official opinions of ECDC, EFSA or the European Commission.

Additional file 1: A script in the R environment for programming that loads MODIS images, transformed to monthly averages, and calculates the coefficients of a harmonic regression, proposed in this paper as explanatory variables. The example in the script loads the monthly images of temperature for a period of several years and saves the coefficients. (R 2 kb)

Additional file 2: A set of 6213 pairs of coordinates of ticks and hosts, as compiled from the literature. The dataset is in csv format and includes (i) the specific binomial name of the tick, (ii) the genus of the tick, (iii) the specific binomial name of the host, (iv) higher systematic details (genus, family and order) of the host, (v) the country of collection, with province (if not in Europe) or NUTS3 code (the standard denomination of European administrative units), and (vi) latitude and longitude in decimal degrees for 5630 records. Records that are not georeferenced but refer to administrative divisions are also included for completeness. (CSV 573 kb)

Additional file 3: Geographic projection of the predicted probability of occurrence of Hyalomma marginatum. (PDF 1506 kb)

Additional file 4: Geographic projection of the predicted probability of occurrence of Rhipicephalus bursa. (PDF 1440 kb)

Additional file 5: Geographic projection of the predicted probability of occurrence of Dermacentor marginatus. (PDF 1542 kb)

Additional file 6: Geographic projection of the predicted probability of occurrence of Rhipicephalus annulatus. (PDF 1629 kb)

Additional file 7: Parameter estimates and metrics for the best models based on multiple logistic regression between the coefficients of the Fourier harmonic regression on climate time series and seven species of ticks. There are five coefficients for the diurnal land surface temperature (LSTD) and five for the Normalized Difference Vegetation Index (NDVI); together they describe the phenology of the climate (temperature and vegetation) in the period 2001–2014. The column "Prob" displays the significance of each coefficient in the multiple regression for each species of tick, with an asterisk for those that are highly significant. (PDF 56 kb)

Additional file 8: Principal Components Analysis decomposition of the influence of the coefficients of a Fourier regression, used to obtain the climate suitability for the tick Ixodes ricinus. The plot shows how the different variables used in the logistic regression are related to the modelled occurrence of the tick. In A, the predicted occurrence of the tick is plotted against the first two principal components.
In B, the length and the direction of the arrows indicate how the environmental variables drive the predicted tick occurrence. (PDF 742 kb)

Additional file 9: Principal Components Analysis decomposition of the influence of the coefficients of a Fourier regression, used to obtain the climate suitability for the tick Hyalomma marginatum. The plot shows how the different variables used in the logistic regression shape the potential occurrence of the tick. In A, the predicted occurrence of the tick is plotted against the first two principal components. In B, the length and the direction of the arrows indicate how the environmental variables drive the predicted tick occurrence. (PDF 735 kb)

Additional file 10: A script in the R environment for programming that loads the coefficients of a harmonic regression and calculates derived variables that give a more ecological picture of the effects of climate on ticks. The example in the script calculates the derived variables for the day-time temperature, as listed in Additional file 11; data calculated by this script were used to plot the charts in Additional files 12 and 13. (R 6 kb)

Additional file 11: The list of variables derived from the coefficients of the Fourier harmonic regression, intended to capture the environmental variables that restrict the distribution of ticks. The table shows the example for temperature; the same list of variables was calculated from NDVI. Some of these variables are auto-correlated: they are not meant for developing distribution models, but should be used to understand which factors shape the predicted distribution of the focal species of tick. (PDF 24 kb)

Additional file 12: Plot of the Fourier-derived variables (see Additional file 11 for a list) that partially delineate the predicted probability of occurrence of Ixodes ricinus. The purpose of the chart is to show the potential use of a set of traits derived from the main Fourier coefficients. The chart in A plots the 90 % quantile of the NDVI over the year against the sum of NDVI in spring. The size of the dots is proportional to the slope of NDVI in spring, and the colour indicates the predicted probability of occurrence of the tick. The chart in B plots the 90 % quantile of average annual temperature against the sum of temperature in spring, with the size of the dots proportional to the slope of the temperature in spring. (PDF 3811 kb)

Additional file 13: Plot of the Fourier-derived variables (see Additional file 11 for a list) that partially delineate the predicted probability of occurrence of Hyalomma marginatum. The chart in A plots the 75 % quantile of the NDVI over the year against the annual amplitude of temperature. The size of the dots is proportional to the sum of temperature in autumn, and the colour indicates the predicted probability of occurrence of the tick. The chart in B plots the 90 % quantile of average annual temperature against the sum of temperature in spring, with the size of the dots proportional to the slope of the temperature in spring. (PDF 2754 kb)

Authors' contributions

AEP and GRWW participated in the design of the study. AEP developed the scripts and performed the computations for the examples provided in the paper. AEP, NA and GRWW discussed the results, prepared the charts and wrote the paper. All authors read and approved the final version of the manuscript.

Department of Animal Pathology, Faculty of Veterinary Medicine, Miguel Servet 177, 50013 Zaragoza, Spain
Environmental Research Group Oxford, Department of Zoology, South Parks Road, Oxford, OX1 3PS, UK

References

1. Peterson AT, Soberón J, Sánchez-Cordero V.
Conservatism of ecological niches in evolutionary time. Science. 1999;285:1265–7.
2. Allison PD. Multiple regression: a primer (Research Methods and Statistics). Thousand Oaks, California, USA: Pine Forge Press; 1999. ISBN-13: 978-0761985334.
3. Soberón J, Nakamura M. Niches and distributional areas: concepts, methods, and assumptions. Proc Natl Acad Sci U S A. 2009;106:19644–50.
4. Pearson RG, Dawson TP, Berry PM, Harrison PA. SPECIES: a spatial evaluation of climate impact on the envelope of species. Ecol Model. 2002;154:289–300.
5. Hutchinson GE. Concluding remarks. Cold Spring Harb Symp. 1957;22:415–27.
6. Leibold M. The niche concept revisited: mechanistic models and community context. Ecology. 1996;76:1371–82.
7. Cumming GS. Comparing climate and vegetation as limiting factors for species ranges of African ticks. Ecology. 2002;83:255–68.
8. Kearney M, Porter WP, Williams C, Ritchie S, Hoffmann AA. Integrating biophysical models and evolutionary theory to predict climatic impacts on species' ranges: the dengue mosquito Aedes aegypti in Australia. Funct Ecol. 2009;23:528–38.
9. Gálvez R, Descalzo MA, Guerrero I, Miró G, Molina R. Mapping the current distribution and predicted spread of the leishmaniosis sand fly vector in the Madrid region (Spain) based on environmental variables and expected climate change. Vector-Borne Zoon Dis. 2011;11:799–806.
10. Estrada-Peña A, Farkas R, Jaenson TG, Koenen F, Madder M, Pascucci I, et al. Association of environmental traits with the geographic ranges of ticks (Acari: Ixodidae) of medical and veterinary importance in the western Palearctic. A digital data set. Exp Appl Acarol. 2013;59:351–66.
11. Ogden NH, Bigras-Poulin M, O'Callaghan CJ, Barker IK, Lindsay LR, Maarouf A, et al. A dynamic population model to investigate effects of climate on geographic range and seasonality of the tick Ixodes scapularis. Int J Parasitol. 2005;35:375–89.
12. Allison PD. Missing data: quantitative applications in the social sciences. Br J Math Stat Psychol. 2002;55:193–6.
13. Hijmans RJ, Cameron SE, Parra JL. WorldClim global climate layers, version 1.4. 2006. http://www.worldclim.org. Accessed May 2015.
14. Estrada-Peña A, Estrada-Sánchez A, de la Fuente J. A global set of Fourier-transformed remotely sensed covariates for the description of abiotic niche in epidemiological studies of tick vector species. Parasit Vectors. 2014;7:302.
15. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org/. Accessed February 2016.
16. Estrada-Peña A, Jameson L, Medlock J, Vatansever Z, Tishkova F. Unraveling the ecological complexities of tick-associated Crimean-Congo hemorrhagic fever virus transmission: a gap analysis for the western Palearctic. Vector-Borne Zoon Dis. 2012;12:743–52.
17. Estrada-Peña A, Estrada-Sánchez A, Estrada-Sánchez D. Methodological caveats in the environmental modelling and projections of climate niche for ticks, with examples for Ixodes ricinus (Ixodidae). Vet Parasitol. 2015;208:14–25.
18. Dormann CF, Elith J, Bacher S, Buchmann C, Carl G, Carré G, et al.
Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography. 2013;36:27–46.
19. Porretta D, Mastrantonio V, Amendolia S, Gaiarsa S, Epis S, Genchi C, et al. Effects of global changes on the climatic niche of the tick Ixodes ricinus inferred by species distribution modelling. Parasit Vectors. 2013;6:271.
20. Estrada-Peña A, Gray JS, Kahl O, Lane RS, Nijhof AM. Research on the ecology of ticks and tick-borne pathogens - methodological principles and caveats. Front Cell Infect Microbiol. 2013;3:29.
21. Cavender-Bares J, Gonzalez-Rodriguez A, Pahlich A, Koehler K, Deacon N. Phylogeography and climatic niche evolution in live oaks (Quercus series Virentes) from the tropics to the temperate zone. J Biogeogr. 2011;38:962–81.
22. Chusco A, Phimmachak S, Sivongxay N, Stuart B. Predicting environmental suitability for a rare and threatened species (Lao newt, Laotriton laoensis) using validated species distribution models. PLoS One. 2013;8(3):e59853.
23. Miller MJ, Loaiza JR. Geographic expansion of the invasive mosquito Aedes albopictus across Panama - implications for control of dengue and chikungunya viruses. PLoS Negl Trop Dis. 2015;9:e3383.
24. Porretta D, Mastrantonio V, Bellini R, Somboon P, Urbanelli S. Glacial history of a modern invader: phylogeography and species distribution modelling of the Asian tiger mosquito Aedes albopictus. PLoS One. 2012;7:e44515.
25. Alonso-Carné J, García-Martín A, Estrada-Peña A. Assessing the statistical relationships among water-derived climate variables, rainfall, and remotely sensed features of vegetation: implications for evaluating the habitat of ticks. Exp Appl Acarol. 2015;65:107–24.
26. Araújo MB, Guisan A. Five (or so) challenges for species distribution modelling. J Biogeogr. 2006;33:1677–88.
27. Braunisch V, Coppes J, Arlettaz R, Suchant R, Schmid H, Bollmann K. Selecting from correlated climate variables: a major source of uncertainty for predicting species distributions under climate change. Ecography. 2013;36:1–13.
28. Rogers DJ, Hay SI, Packer J. Predicting the distribution of tsetse flies in west Africa using temporal Fourier processed meteorological satellite data. Ann Trop Med Parasitol. 1996;3:225–41.
29. Hendrickx G, Napala A, Slingenbergh JH, De Deken R, Rogers DJ. A contribution towards simplifying area-wide tsetse surveys using medium resolution meteorological satellite data. Bull Entomol Res. 2001;91:333–46.
30. Hay SI, Randolph SE, Rogers DJ. Remote sensing and geographical information systems in epidemiology. Adv Parasitol. 2000;47:353.
31. Benedetti R, Rossini P. On the use of NDVI profiles as a tool for agricultural statistics: the case study of wheat yield estimates and forecast in Emilia Romagna. Remote Sens Environ. 1993;45:311–26.
32. Scharlemann JPW, Benz D, Hay SI, Purse BV, Tatem AJ, Wint GRW, et al. Global data for ecology and epidemiology: a novel algorithm for temporal Fourier processing MODIS data. PLoS One. 2008;3:e1408.
33. Rogers DJ, Randolph SE. Mortality rates and population density of tsetse flies correlated with satellite imagery. Nature.
1991;351:739–41.
34. Alonso-Carné J, García-Martín A, Estrada-Peña A. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model. Geospat Health. 2013;8:1–12.
35. Wiens JA, Stralberg D, Jongsomjit D, Howell CA, Snyder MA. Niches, models, and climate change: assessing the assumptions and uncertainties. Proc Natl Acad Sci U S A. 2009;106:19729–36.
36. Dobson AD, Finnie TJ, Randolph SE. A modified matrix model to describe the seasonal population ecology of the European tick Ixodes ricinus. J Appl Ecol. 2011;48:1017–28.
37. Killick R, Fearnhead P, Eckley IA. Optimal detection of changepoints with a linear computational cost. J Am Stat Assoc. 2012;107:1590–8.
38. Estrada-Peña A, Martínez Avilés M, Muñoz Reoyo MJ. A population model to describe the distribution and seasonal dynamics of the tick Hyalomma marginatum in the Mediterranean basin. Transbound Emerg Dis. 2011;58:213–23.
39. Hugh-Jones M, Barre N, Nelson G, Wehnes K, Warner J, Garvin J, et al. Landsat-TM identification of Amblyomma variegatum (Acari: Ixodidae) habitats in Guadeloupe. Remote Sens Environ. 1992;40:43–55.
40. England ME. Understanding the risks and factors associated with the introduction of Crimean-Congo haemorrhagic fever into Great Britain. PhD thesis. UK: University of Southampton; 2013.
41. Gilot B, Guiguen C, Degeilh B, Doche B, Pichot J, Beaucournu JC. Phytoecological mapping of Ixodes ricinus as an approach to the distribution of Lyme borreliosis in France. In: Lyme borreliosis. US: Springer; 1994. p. 105–12.
42. Gilot B, Degeilh B, Pichot J, Doche B, Guiguen C. Prevalence of Borrelia burgdorferi (sensu lato) in Ixodes ricinus (L.) populations in France, according to a phytoecological zoning of the territory. Eur J Epidemiol. 1996;12:395–401.
43. Daniel M, Kolár J, Zeman P, Pavelka K, Sádlo J. Predictive map of Ixodes ricinus high-incidence habitats and a tick-borne encephalitis risk assessment using satellite data. Exp Appl Acarol. 1998;22:417–33.
44. Elith J, Phillips SJ, Hastie T, Dudík M, Chee YE, Yates CJ. A statistical explanation of MaxEnt for ecologists. Divers Distrib. 2011;17:43–57.
45. Elith J, Graham CH. Do they? How do they? WHY do they differ? On finding reasons for differing performances of species distribution models. Ecography. 2009;32:66–77.
CommonCrawl
How To Make An Array Of Prime Numbers In C In this section, we consider hashing, an extension of this simple method that handles more complicated types of keys. To print the prime numbers from an array, user has to. I found them in wikipedia searching "HP Prime" and then looking in Amazon store the newest serial numbers - 7263892. Follow all the topics you care about, and we'll deliver the best stories for you to your homepage and inbox. This R tutorial on loops will look into the constructs available in R for looping, when the constructs should be used, and how to make use of alternatives, such as R's vectorization feature, to perform your looping tasks more efficiently. A palindrome is a word, phrase, number or other sequence of units that has the property of reading the same in either direction. This program is being made by using the nested for loop statements and if statements. 5 allow for signed integral index types only. You compare each element to the one that comes after it. Prime number is a positive integer greater than 1 that is only divisible by 1 and itself. It's the same code, except it uses an array of booleans instead of ints for the sieve. For loop in C; A Prime number is a natural number greater than 1 that is only divisible by either 1 or itself. Prime Numbers A prime number is an integer greater than 1 that has exactly two divisors, 1 and itself. First create two more array to store odd and even numbers. For example: 2, 3 , 5, 7, 11 are the first five prime numbers. 2 Write a programmer in C# to check number is prime or not ? Answer : The following code snippet to check prime number or not. A prime number is a whole number that is greater than one and the only factors of a prime number should be one and itself. Online C array programs for computer science and information technology students pursuing BE, BTech, MCA, MTech, MCS, MSc, BCA, BSc. , where guests were served an array of salmon dishes — from Hawaiian poke to ceviche verde to sushi rolls — all made from. if you search these threads you will find several threads on this topic. Here's a simple implementation in C. In this Java Random Program, we have one argument for number and then we are generating random number using Math. It counts the primes below 10^10 in just 0. As you make your way home this evening there's a new worry on the roads. Hello, I'm trying to make a matlab code for an Integral controller where I can find the order (u). This is a simple c program to print prime numbers in output up to a given range. To do that, we will use a new variable arrangement called an array. What's wrong with the scrap of code in the question? The array is of size 5, but the loop is from 1 to 5, so an attempt will. This is used in Dictionary. For example, 5 is prime because the only ways of writing it as a product. In real life, you may need to read in the value from the user. 30, 1972 neither knew who had a future as prime minister. Instead of using the imported C function arc4random(), you can now use Swift's own native functions. A prime number (or a prime) is a natural number that has exactly two distinct natural number divisors: 1 and itself. (c) Calculate the number of nanoseconds taken to sort the same array, using each of the 3 algorithms. Create a structure to hold the array of prime numbers. Private Sub cmdDisplay_Click() Dim a() As Integer. Though that may seem silly, it's the basis for just about every computer game ever invented. 
By using two-dimensional array, write C# program to display a table of numbers as shown below:. First of all algorithm requires a bit array isComposite to store n - 1 numbers: isComposite[2. It'd be more useful to generate the prime factorization for any requested number. Print out the odd and even numbers arrays. How to Find Factors of a Given Number Using Python The following python program prints all the positive factors of a given input number. Having studied mathematics a bit (and realizing I could work for 35 years and never publish. And now find the difference between consecutive squares: 1 to 4 = 3 4 to 9 = 5 9 to 16 = 7 16 to 25 = 9 25 to 36 = 11 … Huh? The odd numbers are sandwiched between the squares? Strange, but true. Write a C Program to check if the number is prime number or not. Online C++ decision & looping programs and examples with solutions, explanation and output for computer science and information technology students pursuing BE, BTech, MCA, MTech, MCS, MSc, BCA, BSc. Answer: Use the PHP count() or sizeof() function You can simply use the PHP count() or sizeof() function to get the number of elements or values in an array. If you look for any C programs that are not listed her, kindly create a new topic and in C program Discussion Forum. py and make it executable: $ chmod +x is-prime-number. Cls End Sub. A prime number is one that is only divisible by 1 and itself. C Tutorial - A Star pyramid and String triangle using for loops In this C language tutorial we will look at a much-requested subject, printing a star pyramid and a string triangle using for loops. I need a way to identify if a number is a prime. Array is – 23 98 45 101 6. This way of making rectangles could lead to some very long, very skinny rectangles, but even an array many millions of tiles long would still have a width equal to one tile and be a valid rectangle. TheisPrime function should use the countDivisors function to determine if number is prime. Implement the following functions. It has 4 columns with 2 dots in each column. This means at least one more prime number exists beyond those in the list. The user will insert the elements into the array. C program to check whether a number is prime or not. There are multiple methods to find GCD , GDF or HCF of two numbers but Euclid's algorithm is very popular and easy to understand, of course, only if you understand how recursion works. The count() and sizeof() function returns 0 for a variable that has been initialized with an empty array, but it may also return 0 for a variable that isn't set. In real life, you may need to read in the value from the user. Integers that are not prime are called composite numbers. //the list that will store our numbers var ret = new List (); //This is a byte array - one per number. Print the array using the Arrays. With that said, let's think of a few possible ways to approach this problem. Related posts: C Program to print prime numbers up to the inputted number. Make program to print prime number. php?title=C_Source_Code/Sorting_array_in_ascending_and_descending_order&oldid=2058047". Using this function, we have to write a code fragment that fills up an integer array of size 10 with the first ten prime numbers. C++ code to print all odd and even numbers in given range C++ program to find HCF and LCM of two numbers Write a c++ program to find LCM of two numbers # include # include using namespace std ; int m. To make an array to show this, you could use pennies to make three rows of four. 
After sorting the strings, the array would look like {"abc", "abc", "abelrt", "act", "act", "aegt", "gorw"}. Prime number logic: a number is prime if it is divisible only by one and itself. The first index value in the array is zero, thus, the index with value four is used to access the fifth element in the array. You may have to convert from bytes to a built-in data type after you read bytes off the network, for example. For example, you might want to create a collection of five integers. Leap Year; Largest of three numbers; Second largest among three numbers; Adding two numbers using pointers; Sum of first and last digit. In mathematics, the factorial of a number (that cannot be negative and must be an integer) n, denoted by n!, is the product of all positive integers less than or equal to n. Given an array of integer elements and we have to check which prime numbers using C program are. nice program…. Lemoine's Conjecture : Any odd integer greater than 5 can be expressed as a sum of an odd prime (all primes other than 2 are odd) and an even semiprime. However, here the order of the LEDs is determined by their order in the array, not by their physical order. I know how to write the function that check if number is prime, but i dont know how to enter the numbers to an array. Find Largest and Smallest Number in an Array Example. The Who rocker Pete Townshend has admitted he fears Jeremy Corbyn will come after him for his wealth after he said the world should not have billionaires. For example: 2, 3, 5, 7, 11 are the first 5 prime numbers. We know that this question is often asked as part of homework (lab) assignments, but we got so much requests that we couldn't ignore it. Print out the odd and even numbers arrays. This program takes the input number and checks whether the number is prime number or not using a function. Write a program to initialize an array to the first 10 prime numbers: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29. for example if the user entered 53, it will return [2 ,3 ,5 ,7 ,11 ,13 ,17. If you are looking for sort the array in descending order program in C, here in this tutorial we will help you to learn how to write a c program to C program to arrange the given numbers in descending order. Here, user need to enter two numbers as the lower and upper limits for the iteration loop to find the prime number in between. ) Now, since the definition of a prime number is: a number that is divisible only by itself and one. How to Use Arrays with Arduino. C Program to Read Array Elements; C Program to Print Array Elements; C Program to Delete an element from the specified location from Array; C Program to Insert an element in an Array; C Program to Copy all elements of an array into Another array; C Program to Search an element in Array; C Program to Merge Two arrays in C Programming; C Program. Create Matrix Example Java Program Definition A matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns that is treated in certain prescribed ways. Find sum of n Numbers; Print first n Prime Numbers; Find Largest among n Numbers; Exponential without pow() method; Find whether number is int or float; Print Multiplication Table of input Number; Arrays. Example: C program to encrypt and decrypt the string using RSA algorithm. C++ code to print all odd and even numbers in given range C++ program to find HCF and LCM of two numbers Write a c++ program to find LCM of two numbers # include # include using namespace std ; int m. 
2 is the only even prime number. Initially the array contains zeros in all cells. This program takes the input number and checks whether the number is prime number or not using a function. Prime Number Theorem: The probability that a given, randomly chosen number n is prime is inversely proportional to its number of digits, or to the logarithm of n. Deciding about a prime number which is not very large is easy but if you are presented by a very large number to tell that weather it is a prime number or not is. If you've wondered what material effect Senator Elizabeth Warren's proposed tax increases for the wealthy would have, look no further than the estimates by two economists who advised her. balance[4] = 50. It'd be more useful to generate the prime factorization for any requested number. MSI files present in C:\AMD folder fails with. First, redo the examples from above. Private Sub cmdDisplay_Click() Dim a() As Integer. ) Perhaps we are storing our hash table in an array and use prime = 101. C++ program to check if the number is prime or composite. TF = isprime(X) returns a logical array the same size as X. The only thing left to explain, therefore, is the mysterious {ccc} which occurs immediately after \begin{array}. Furthermore, if b 1 and b 2 are both coprime with a, then so is their product b 1 b 2 (i. But when I get a chance to poke with randomness or numbers, I always lap it up with joy. This is used in Dictionary. polls stayed open until 11 p. There are several methods to find the all prime number but here, I will discuss the Trial Division method and Sieve of Eratosthenes algorithm. If 99 was input, we'd get this: 3, 3, 11. Write a Program in Java to fill a 2-D array with the first 'm*n' prime numbers, where 'm' is the number of rows and 'n' is the number of columns. Logic: This is advanced version of the previous program. Java Program to check whether number is prime or not Program Logic: We need to divide an input number, say 17 from values 2 to 17 and check the remainder. How to use C# ArrayList Class ArrayList is one of the most flexible data structure from CSharp Collections. Using the Concurrency Runtime found 107254 prime numbers. Definition and Usage. Make sure you also move the strings from the original array in order to maintain the mapping of the sorted array to the original array. A prime number is a natural number greater than one that has no positive divisors other than one and itself. #define CNAME (expression) CNAME The name of the constant. In this tutorial, you will learn how to find whether a number is prime in simple cases. is-prime-number. List is a generic implementation of ArrayList. There are a lot of different ways to do that. The second question is discussed on the page "How Big of an Infinity?. Barr to examine the origins of the FBI's probe of President Trump's 2016 campaign is conducting an investigation officials consider. Boston's. So, if you go till, 98, you will be checking (98, 100) and then stop. Generating Prime Numbers. A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. Do not worry if you don't know what these are yet; we will explain what they are when we need them later. Therefore, we can remove the first prime number from the set of primes which we test potential primes against. factorial of last number 5. Enter the formula that you want to use. C Program to Check whether the Given Number is a Prime - A prime number is a natural number that has only one and itself as factors. 
Given a collection of numbers, how can we efficiently find out if any of them are duplicates? The easiest approach is to make a hash table to keep track of the numbers we've seen so far; a simpler program just finds and removes any duplicate element present in the specified array. When initializing, the number of values between braces { } cannot be larger than the number of elements declared for the array between square brackets [ ]. A division array can easily be expressed as a repeated subtraction. In a Python sieve, elements 0 and 1 of the flag array f are set to 1 first; there [0:1] selects the indices to which the operation is applied. Checking for palindrome strings or numbers is another staple exercise.

In C#, the foreach statement can make the code more compact when traversing arrays or other collections:

foreach (string planet in planets) { Console.WriteLine(planet); }

Perfect numbers connect to primes: even perfect numbers can be generated from Mersenne primes, and there are only 42 or so of those which are known. The Sieve of Eratosthenes is the simplest way to find all the prime numbers up to an integer n: cross off the multiples of each prime in turn, and all the remaining numbers on the list are prime. The function pi(x) counts the number of primes less than or equal to x, for x a positive real number. A typical run of a Java sieve:

% java PrimeSieve 25
The number of primes <= 25 is 9

% java PrimeSieve 100
The number of primes <= 100 is 25

(With a larger heap, % java -Xmx100m PrimeSieve 100000000 handles a hundred million.)
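The same sieve is short in C. This sketch fixes the limit at 100 to mirror the run above (LIMIT and the array name are mine):

#include <stdio.h>
#include <string.h>

#define LIMIT 100

/* Sieve of Eratosthenes: mark the multiples of each prime as
   composite; indices still unmarked at the end are prime. */
int main(void)
{
    char composite[LIMIT + 1];
    memset(composite, 0, sizeof composite);

    for (int p = 2; p * p <= LIMIT; p++) {
        if (!composite[p]) {
            for (int m = p * p; m <= LIMIT; m += p)
                composite[m] = 1;
        }
    }

    int count = 0;
    for (int n = 2; n <= LIMIT; n++)
        if (!composite[n])
            count++;

    printf("The number of primes <= %d is %d\n", LIMIT, count);
    return 0;
}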
A common assignment: write a function that receives a number n from the user and returns an array with all the prime numbers up to n; for example, if the user entered 53, it would return 2, 3, 5, 7, 11, 13, 17 and so on up to 53. Note that 0 and 1 are not prime numbers. In JavaScript, Array.from() lets you create arrays from array-like objects (objects with a length property and indexed elements) or iterable objects (objects whose elements you can get, such as Map and Set).

A simple exchange sort compares elements pairwise: if the second is bigger, you swap them. With pointers, you use a pointer to go from the first element of the array to the second-to-last. Under bash, a range of numbers for a loop, say to run a command 100 or 500 times, can be generated with the GNU seq command. Nested loops find the prime numbers between 2 and 100: the outer loop generates candidates and the inner loop tests divisors. It is convenient to create a structure to hold the array of prime numbers along with its count. A spiral array is a square arrangement of the first N^2 natural numbers, where the numbers increase sequentially as you go around the edges of the array spiraling inwards.

There are infinitely many prime numbers; the list begins 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37. Python also knows how to manipulate complex numbers as well as octal (base 8) and hexadecimal (base 16) numbers, and C#'s ArrayList implements the IList interface using an array, so elements can very easily be added, inserted, deleted and viewed. For parallel work, the parallel_for algorithm and OpenMP 3.0 allow the index type to be a signed or an unsigned integral type, and parallel_for also makes sure that the specified range does not overflow a signed type. Though better algorithms exist today, the Sieve of Eratosthenes is a great example of the sieve approach.
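A sketch of the requested function, filling a caller-supplied array and returning the count (primes_up_to and out are illustrative names):

#include <stdio.h>

/* Store every prime up to n in out[] and return how many were stored.
   The caller must supply an array large enough to hold them all. */
int primes_up_to(int n, int out[])
{
    int count = 0;
    for (int k = 2; k <= n; k++) {
        int prime = 1;
        for (int d = 2; d * d <= k; d++) {
            if (k % d == 0) {
                prime = 0;
                break;
            }
        }
        if (prime)
            out[count++] = k;
    }
    return count;
}

int main(void)
{
    int primes[64];
    int count = primes_up_to(53, primes);   /* 2 3 5 7 11 13 17 ... 53 */
    for (int i = 0; i < count; i++)
        printf("%d ", primes[i]);
    printf("\n");
    return 0;
}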
Back to the anagram example: after sorting the strings, the array would look like {"abc", "abc", "abelrt", "act", "act", "aegt", "gorw"}. Make sure you also move the strings from the original array in order to maintain the mapping of the sorted array to the original array.

Other small exercises in the same vein: a script named is-prime-number.py can search for a prime number within the first 100 numbers (make it executable with chmod +x is-prime-number.py); converting an array to its mirror image uses two loops, one nested inside the other; a sum-of-n-numbers program adds n values entered by a user. In a primality test, if the remainder is 0 for some divisor, the number is not prime; prime numbers can't be divided by any numbers other than themselves and 1. When generating primes incrementally, the small primes will appear in the holder array anyway, so put them in directly. A sieve-based algorithm first requires a bit array isComposite to store the status of the numbers from 2 through n. An array used for multiplication must contain the same number of objects in each column as in the other columns, and the same number in each row as in the other rows. Stated as a computational problem, primality is: given a number N, determine whether it is a prime.
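A minimal sketch of the sorted-signature idea in C (are_anagrams and cmp_char are my names; each string is copied before qsort rearranges its characters):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* qsort comparator for single characters */
static int cmp_char(const void *a, const void *b)
{
    return *(const char *)a - *(const char *)b;
}

/* Two words are anagrams exactly when their sorted forms are equal. */
int are_anagrams(const char *a, const char *b)
{
    char sa[64], sb[64];
    if (strlen(a) != strlen(b) || strlen(a) >= sizeof sa)
        return 0;
    strcpy(sa, a);
    strcpy(sb, b);
    qsort(sa, strlen(sa), 1, cmp_char);
    qsort(sb, strlen(sb), 1, cmp_char);
    return strcmp(sa, sb) == 0;
}

int main(void)
{
    printf("%d\n", are_anagrams("cat", "act"));   /* 1: both sort to "act" */
    printf("%d\n", are_anagrams("grow", "gorw")); /* 1: both sort to "gorw" */
    return 0;
}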
A popular variation allows the user to enter minimum and maximum values and then finds the sum of the prime numbers between them using a for loop; since the list of prime numbers is not affected by program state, it can also be computed once and reused. Printing the first N prime numbers can be done with recursion as well. If the elements entered one by one are 4, 5, 6 and 3, the sum of the elements stored in the array will be 18. Composite numbers, unlike primes, have at least three factors. Scanning one sample array shows that all the prime numbers in it are 23 and 101. At its simplest, the size of a returned array can be mandated by the function, requiring the caller to pass an array of that size in order to get all the results.

A Mersenne prime must be reducible to the form 2^n - 1, where n is a prime number. For file I/O in C++, ifstream is the object for reading from a file and ofstream the object for writing to a file; make sure you always include that header when you use files. A careful primality check first makes sure that the number is greater than 1, since anything less cannot be a prime number. For counting at scale, prime_serial is a program which counts the number of primes between 1 and N and is intended as a starting point for a parallel version; one example uses both OpenMP and the Concurrency Runtime to compute the count of prime numbers in an array of random values (one run using the Concurrency Runtime found 107254 prime numbers). In C#, a for loop can likewise find the prime numbers from 1 to 100.
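A sketch of the min/max variant (the limits are fixed constants here for brevity; a real program would read them with scanf):

#include <stdio.h>

/* Sum the primes between a minimum and maximum value. */
int main(void)
{
    int lo = 1, hi = 100, sum = 0;

    for (int n = (lo < 2 ? 2 : lo); n <= hi; n++) {
        int prime = 1;
        for (int d = 2; d * d <= n; d++) {
            if (n % d == 0) {
                prime = 0;
                break;
            }
        }
        if (prime)
            sum += n;
    }
    printf("Sum of primes between %d and %d is %d\n", lo, hi, sum);
    /* for 1..100 this prints 1060 */
    return 0;
}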
Here we see it in action: 2 is prime, 3 is prime, 4 is composite (= 2 x 2), 5 is prime, and so on; all other even numbers can be divided by 2, which keeps 2 the only even prime. A Fibonacci sequence is defined as follows: the first and second terms in the sequence are 0 and 1, and each later term is the sum of the two before it. Iterating over all numbers and checking each one for primality takes O(n sqrt n) time, which is why sieves are preferred for bulk generation. Security raises the stakes: the RSA algorithm needs two large and distinct prime numbers p and q, and while many articles explain how RSA encrypts and decrypts messages, far fewer explain the algorithm used to generate p and q. The sieve shown earlier works just as well with an array of booleans instead of ints.

A number is not prime if it is divisible by some value from 2 up to one less than itself (n-1); otherwise it is prime. An array consisting of four columns and three rows could be used to represent the number sentence 3 x 4 = 12; to make such an array, you could use pennies to form three rows of four. Given an array of size n, a further exercise is to find the number of co-prime (mutually prime) pairs in the array. The arrays considered up to now are one-dimensional, a single line of elements; applications such as a spreadsheet need a two-dimensional array, which relates to the concept of dynamic memory allocation in C++. Writing a program to print all the prime numbers up to an inputted number is a staple of C language interview questions, and like any other programming problem, you need to build the logic before the code. Remember that the size of a static array must be known before execution.
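A short sketch of the co-prime pair count using Euclid's algorithm (gcd and coprime_pairs are my names):

#include <stdio.h>

/* Euclid's algorithm: two numbers are co-prime when their gcd is 1. */
int gcd(int a, int b)
{
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Count co-prime pairs (i, j) with i < j in an array of size n. */
int coprime_pairs(const int a[], int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (gcd(a[i], a[j]) == 1)
                count++;
    return count;
}

int main(void)
{
    int a[] = {4, 9, 15, 22};
    /* prints 4: the pairs (4,9), (4,15), (9,22), (15,22) */
    printf("%d\n", coprime_pairs(a, 4));
    return 0;
}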
Example. Input: the array elements are 100, 200, 31, 13, 97, 10, 20, 11. Output: 100 - Not Prime; 200 - Not Prime; 31 - Prime; 13 - Prime; 97 - Prime; 10 - Not Prime; 20 - Not Prime; 11 - Prime. Logic: declare an array (arr) with those elements and apply the primality test to each one in a for loop. Telling whether a number is prime is a very important question in mathematics and in security. Related exercises include counting the prime divisors of a number and writing a fast prime number list generator (a Python recipe by Wensheng Wang). In C#, the BitConverter class converts an array of bytes to an int and back to an array of bytes. The manual sieve works the same way as the coded one: once you have all of the numbers in nice rows, circle the first prime number you come across, cross out its multiples, and repeat. Note that the example programs use initialized arrays; in C++, template specialization and partial specialization, together with initialization lists, allow template code to work with both primitive and user-defined types.
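A compact program reproducing exactly that output (the array comes from the example above):

#include <stdio.h>

/* Classify each element of arr as Prime or Not Prime. */
int main(void)
{
    int arr[] = {100, 200, 31, 13, 97, 10, 20, 11};
    int n = sizeof arr / sizeof arr[0];

    for (int i = 0; i < n; i++) {
        int prime = (arr[i] >= 2);
        for (int d = 2; d * d <= arr[i]; d++) {
            if (arr[i] % d == 0) {
                prime = 0;
                break;
            }
        }
        printf("%d - %s\n", arr[i], prime ? "Prime" : "Not Prime");
    }
    return 0;
}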
In 2007 the Stratospheric Observatory for Infrared Astronomy finally took flight. SOFIA was designed to replace the Kuiper Airborne Observatory, a C-141 which stopped flying in 1995. Modifying a 747 SP into a flying telescope took nearly 10 years in the hangar. As recently as 2006 the project came perilously close to cancellation. On April 26 SOFIA made her first flight in this incarnation. After moving from Waco, Texas to Dryden Flight Research Center, she completed her initial flight test phase on November 15.

Though no books have been written yet, SOFIA could someday capture the public imagination. The name and stubby appearance make her quite personable. In her first life she was christened Clipper Lindbergh by Anne Morrow Lindbergh on the 50th anniversary of the historic flight. After service with Pan Am and United, she was destined for the boneyard before being rescued by NASA as an observatory.

SOFIA will search for nebulae, dust and Black Holes in galaxies. Every galaxy yet found contains at its centre a supermassive Black Hole. Once it was thought that giant Black Holes formed from collisions between smaller objects. Galaxies recently found by the Spitzer Space Telescope show no sign of being disturbed, indicating that collisions play no role in their formation. Galaxies have been found that formed less than 500 million years after the Big Bang, indicating that supermassive Black Holes are primordial.

Primordial Black Holes are thought to exist by Stephen Hawking and many other physicists. They are predicted to have formed from quantum fluctuations shortly after the Big Bang. The size of a PBH would be limited by a "horizon distance," the distance light could travel in a given time. Previously it was thought that the speed of light would force any PBHs to be tiny. If c was once much larger, Black Holes of enormous mass could have formed, big enough to seed the formation of galaxies. A hundred billion galaxies containing massive Black Holes say that the speed of light has changed.

SOFIA's stated observation goals also include star birth and death, formation of new solar systems, identification of complex molecules in space, and planets, comets and asteroids in our solar system. Black Holes may also play a role in the formation of these smaller objects. Our previous views of infant stars show twin jets like those produced by a Black Hole. The continued presence of such Black Holes would explain the puzzles of Earth's core heat and magnetic field. Study of asteroids is of great importance, as these objects can strike us. 2008 will be another year of seeking Black Holes in unexpected places. Happy New Year to everyone!

Labels: 747, black holes, sofia

Sometimes a scientist must speak up, and that time is here again. Though we often disagreed on politics, the assassination of Benazir Bhutto is a deeply felt tragedy. Whatever men are behind the murder, they are opposed to women having education or influence. They will not win. Today women like Peggy Whitson command space stations and physicists like Angela Merkel lead nations.

Bhutto's last interview with David Frost was aired on November 2. She knew, as this scientist has found, that Osama Bin Laden has been dead 6 years. He was killed on approximately December 15, 2001, during the US attack at Tora Bora. His death was hastened by diabetes and kidney failure, compounded by a location far from medical facilities. As noted here on September 11, the recent OBL videos are transparently fake, but the media treats them as real.
They are composed of old footage and stills with poorly dubbed voiceover. Their crudeness is a sign of desperation, as someone with proper equipment could make much more convincing video. The image of OBL is maintained by 1) his organisation attempting to control and keep the brand name alive, 2) a news media that uncritically parrots the message, 3) intelligence agencies with a vested interest in maintaining the threat, and 4) those wishing to deny the West and its leaders a victory.

People in New York and elsewhere should be celebrating: Osama is dead! They should do as the British once did with Guy Fawkes. Every year people could burn paper effigies of Osama. That is a useful way of facing up to fear.

The lesson of all this is to think critically while looking at the raw data. A physics education too often forces students to parrot data without question. Many physicists have accepted the existence of "dark energy" without examining the assumptions behind it. As with other times, the true history of our age will be written long after. The real theoretical discoveries are made with pencil and paper, working in isolation. These are the best of times and the worst of times, truly an exciting time to be alive.

UPDATE: As this is written, a "new" OBL audiotape has appeared. The method used to simulate his voice is the same as Kermit the Frog's.

Solar Power Satellites Closer

2007 was a year when Solar Power Satellites came closer to reality. Because of the lack of atmosphere, sunlight is about eight times more intense in orbit than on Earth's surface. SPS in geosynchronous orbit has been studied at least since Dr. Peter Glaser proposed the idea in 1968. This year it has been the subject of a serious study by the US National Security Space Office, and of a blog by Air Force Colonel "Coyote" Smith. In September at a conference in Hyderabad, businessman Peter Reed proposed a receiving antenna on an uninhabited island in the Palau chain. At December's UN Climate Change Conference in Bali, Reed's partners described the idea further. Their plan would place an experimental satellite at 300-mile altitude, avoiding the difficulty of reaching geosynchronous orbit.

If scientists in 1907 had lectured about the future of energy it would have been about coal and oil, yet someone had already written E = mc^2. 40 years later we had atomic reactors and an atomic bomb. First we must go beyond dead ends like "dark energy." The power of free thinking will lead to technological surprises. If tiny Black Holes can be contained in an orbiting laboratory, their energy could be tapped. Even nuclear fusion converts only 0.7% of its fuel into energy. A Black Hole converts matter into radiation with 2 orders of magnitude greater efficiency, approaching total conversion. The food that a human eats in a year could provide all the electricity needs of the United States! Any sort of mass could be used for fuel, even old AOL disks and issues of National Geographic.

Solar Power Satellites have not been deployed because of the immense construction costs. An SPS constellation powering the US would require 30-40 satellites, each with kilometers of solar arrays. Note how much trouble it has been constructing one space station in low Earth orbit. Black Hole energy would require just one satellite, without all those solar panels! The power of thought is far greater than anything humans have imagined.
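A rough check of the food-for-fuel claim, assuming total conversion, roughly 700 kg of food eaten per person per year, and US annual electricity use near 1.4 x 10^19 J (about 4 x 10^12 kWh); the last two figures are my own round numbers, not from the post:

$$E = mc^2 \approx (700\ \mathrm{kg})(3\times 10^{8}\ \mathrm{m/s})^2 \approx 6\times 10^{19}\ \mathrm{J},$$

several times the annual electricity figure, so the claim passes an order-of-magnitude test.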
Labels: energy, solar power

Christmas Moon

Tonight all the world can share a beautiful full Moon. December 24 Mars will be at opposition, spectacularly close to the Moon. Last year we celebrated Christmas with the Earthrise photo from Apollo 8. This year, thanks to Japan's Kaguya spacecraft, we have a new Earthrise in HD. 39 years ago Apollo 8 celebrated Christmas while rounding the Moon. Thanks to a permanent presence, humans will be celebrating Christmas in Space this year and hopefully long into the future. Tonight skywatchers can contemplate the Moon, Mars and Beyond. The Vision has a goal of extending our permanent presence outward. A time is in sight when humans will enjoy Christmas ON the Moon.

Thursday December 21 the Jim Benson-founded SpaceDev successfully tested a prototype lunar lander. SpaceDev's hybrid rocket technology is considered safer and more reliable. (The Lunar Lander Challenge earlier this year went uncollected when the only entrant blew up.) While failure is an orphan, success has many fathers and mothers. SpaceDev built the engines, key parts of SpaceShipOne. Since their divorce Scaled Composites has tried to build engines alone, with tragic results. Quietly, SpaceDev has been built into a profitable company. Their Dream Chaser spacecraft is a good design that may reach Space before Virgin Galactic. SpaceDev has partnered with the International Lunar Observatory to land a telescope on the Moon. Cost for ILO would be about 30 million US, the same amount as Google's Lunar X Prize. That would be a nice way to pay back the cost! (Winning the 10 million dollar X-Prize cost Scaled Composites 25 million.) Happy Christmas to Kea, Tommaso, nige, samh, Q9 and everyone!

Labels: moon

Big Trouble in Little Particles

The world of particle physics is getting small indeed. The UK budget is caught between Northern Rock and Southern Iraq. On December 11 the UK announced withdrawal from the International Linear Collider. One week later, December 18, the US budget was finally released. It ends funding for the ILC and US participation in the ITER fusion project. Fermilab is hit particularly hard, having already spent much of its budget for the year. Possibilities include laying off staff and shutting down the lab for a time. There could be very little raison d'etre for Fermilab after the Tevatron shuts down in 2009.

Remember FANTASTIC VOYAGE? Particle physics has discovered how to shrink an entire field of science! Soon physicists will be reduced to the size of electrons, and we will perceive them only as anonymous commenters. They will forever whirl around in a wave function, muttering that the speed of light is constant and "GM=tc^3" is too simple to be right.

The 30-year decline has been documented in Smolin's THE TROUBLE WITH PHYSICS and Woit's NOT EVEN WRONG. Almost no one in the mainstream press has taken notice. Perhaps the New York Times, which usually follows physics, will write about this. Most of the taxpaying public quit caring about particle physics long ago. The decline has been happening a long time, so long that physicists have sought jobs in fields like cosmology. Key members of the Supernova Cosmology Project have no training in astronomy, which may explain their strange conclusions.

Before they all cry into their milk, we should remember that working in science is a great privilege. Most of the world, no matter how hard they work in farms or factories, will never have the opportunities that these physicists take for granted. Physics enjoyed a free ride after WW2 because of nuclear power.
The people's tax dollars have lavishly supported high-energy physics for decades. After all the funding, what has particle physics given back to the public? "Dark energy?"

Quietly, out of the mainstream, some big advances in physics are being made. Predicting the speed of light may be just a first step. The product hc is a link between the Universe of Relativity and the small-scale world of quantum mechanics. Future observations may lead to new theories of planet formation, explaining Earth's core heat and magnetic field. This may lead indirectly to sources of energy that make nuclear fusion look crude. If all this were known, funding for physics would be no problem at all. Fermilab should think about that.

Labels: physics

Asteroid Coming, Ready Or Not

Happy Holidays, Mars! Asteroid 2007 WD5, which was just discovered November 20, will approach Mars on January 30 with a 1-in-75 chance of striking. If it hit Mars, the asteroid would release energy equivalent to a 15-megaton bomb and create a scar the size of Arizona's Meteor Crater. Asteroid collisions are more fun to watch on someone else's planet.

The Dawn spacecraft, which was launched September 27, has fired its ion engines for its journey to the asteroid belt. Dawn will rendezvous with asteroid Vesta in 2011 and Ceres in 2015. We are fortunate to have found meteorites from Vesta; otherwise very little would be known about the composition of asteroids. More than just big rocks, the asteroids are new worlds that could even harbour life.

Bode's Law suggests that another planet should orbit between Mars and Jupiter. Ceres, the largest object in the belt, was long classified as an asteroid but recently has been promoted to dwarf planet, the same class to which Pluto was demoted. Some theories suggest that Ceres is largely made of water, and could contain even more water than Earth. Other observations suggest that Ceres is differentiated into core and mantle, which would mean that it was melted early in its history. Ultraviolet observations have found water vapour near the North Pole. How such a small body could be heated is a complete mystery. Ceres' 10^{21} kg mass could easily have coalesced around a small Black Hole.

The Constellation system and Ares V booster will create many possibilities for spaceflight. Because Mars is such a big step, some scientists are promoting a manned asteroid mission. Because of the smaller gravity well, an asteroid mission may use even less energy than landing on the Moon. An Orion could rendezvous with a near-Earth asteroid in a 2-3 month mission. New presidents like to impose their own Vision, and an asteroid mission would be a Kennedy-like legacy. In addition to the adventure of landing on another world, the mission could easily be justified to Congress. As GHOSTBUSTERS said to the mayor, "You would be saving the lives of millions of registered voters." Are any New York City mayors listening?

Labels: asteroids, orion

The week's news reminds us that the universe can be both violent and mysterious. Chandra X-ray Observatory images of 3C321 show that it is composed of two galaxies orbiting one another, with one galaxy firing on the other. Every galaxy yet observed contains at its centre a massive Black Hole. In this image, X-ray data from Chandra is violet, optical data from Hubble is orange, and radio data is blue. The Black Hole at one galaxy's core is obscured by dust, giving it an hourglass shape. One of the twin jets blasts out to the upper right into the neighbour galaxy. Chandra Press Release.
One reason people cling to belief in "dark energy" is that 2/3 of the Universe's mass seems to be missing. The GM=tc^3 Theory predicts, and WMAP confirms, that the average density is the "critical" value $\Omega$ = 1. The visible galaxies, and even the dark mass surrounding them, only account for about 1/3 of this total. This has created a void in which theorists can insert fantasies of repulsive energy. Large scale maps like the Sloan Digital Sky Survey show that galaxies are arranged in sheets with enormous voids between them. A fish in the Barrier Reef knows to avoid dark holes, because something hiding in those holes could eat her! We can only hope that scientists are more intelligent than fish. Since the "voids" contain most of the Universe's volume, they could contain many massive unseen objects.

Gamma Ray Bursts are some of the most powerful explosions in the Universe. Earlier this year GRB 070125 suddenly flared where no object had appeared before. The location in the constellation Gemini is more than 88,000 light years from any known galaxy. Searches by the Palomar Observatory and telescopes atop Mauna Kea showed no signs of surrounding dust. Something, possibly a giant Black Hole, caused the GRB. There could be many more similar objects in the voids between galaxies. NASA press release.

The jet attack of 3C321 shows us how violent the Universe can be. About 2/3 of the Universe is hidden from our eyes. This dark Universe could contain massive Big Gulp Black Holes or other exotic objects. Sometimes this dark Universe erupts into visibility like GRB 070125. There is far more out there than meets the eye. More about 3C321 and the anniversary of Australia's first satellite at Carnival of Space!

Labels: galaxies

Moon Rising

More than 35 years ago, before most of us were born, humans walked on the Moon. With the US Vision, Japanese and other missions the Moon is again rising in public consciousness. To turn back now would be foolish. Last week NASA announced that the Lunar Surface Access Module, whose design is still undefined, will be named Altair. The original Lunar Module had no official name, but individual craft were christened Snoopy, Eagle, etc.

Many people are concerned about the "gap" between shuttle retirement in 2010 and introduction of Orion in (maybe) 2015. To keep servicing ISS, NASA would rely on the Russians or (maybe) private craft like SpaceX's Dragon. US Representative Dave Weldon, whose district includes Kennedy Space Center, has proposed to continue flying the shuttles until Orion is ready. In addition to safety concerns, his proposal would cost an additional 10 billion US.

Today we have Japanese and Chinese spacecraft orbiting the Moon, with more nations planning to join in. Next year we can look forward to the Lunar Reconnaissance Orbiter, which will search for possible landing sites. LRO will also search for resources that humans can use on the Moon. The Lunar Crater Observation and Sensing Satellite will impact the South Pole to aid in the search for water.

As reported here, Monday at AGU Associate Administrator Alan Stern announced selection of the Gravity Recovery and Interior Laboratory. GRAIL will consist of two small spacecraft orbiting in tandem to intimately measure the Moon's gravity field. This will provide data on the Moon's interior, which heretofore has been only speculation. In turn that will also provide clues to the formation of Earth and other worlds. The Principal Investigator will be Maria Zuber of MIT.
Sally Ride will be assisting with public outreach. NASA only wants PIs with spacecraft experience (sorry, Saul), so it is pleasing to have women in charge.

UPDATE: for those who weren't there, here are Alan Stern's answers to my other questions: 1) The Alpha Magnetic Spectrometer is permanently grounded for lack of Shuttle flights. 2) Though JWST is eating up the astrophysics budget, room was found to launch the NuSTAR Black Hole probe in 2011. 3) Since people may not walk on Mars until after 2030, a Mars sample return mission is on the table.

Earlier exploration of the Moon yielded benefits too valuable to count. No one can forget the first Earthrise photos taken by the crew of Apollo 8. Before the internet, the entire human race shared the experience via radio and television. The Moon inspired a whole generation to study science. Finally, an anomaly in the Moon's recession is one more verification of a changing speed of light.

Labels: moon, speed of light

Sign of c Change

On the next block from Moscone Center was an art gallery with this encouraging sign. More people saw the presentation on "c change" than saw Ed Witten last month. As Peter Woit mentioned there, the world of particle physics has grown very small. Even the thousands of scientists at the American Geophysical Union show little care for it. AGU was certainly more fun, with multiple parties every night. Tuesday at the Exploratorium museum the drinks had 3000-year-old ice brought from Antarctica.

The biggest barrier to "c change" has been getting the word out. Most physicists won't take an opinion on something they have not been exposed to, and the mainstream press follows "dark energy" and the concordance cosmology. The difficulty of publishing papers where c changes has not helped. Arxiv has become so corrupted that it serves little use. The Smithsonian/NASA Astrophysics Data System is far more thorough. Last week a large number of people, including NASA administrators, were very interested in a changing speed of light. Happy 90th birthday to Sir Arthur Clarke!

Labels: speed of light

News From Saturn

This week more news comes from Saturn's Rings. Perhaps humans are drawn to their beauty for a reason. They could hold secrets to how our planets formed, and may even point to future sources of energy. In the December issue of Nature scientists report discovery of an enormous ring current surrounding Saturn. Most of the plasma comes from the south pole of Enceladus. The clump of charged particles rotating in sync with Saturn is still considered a mystery. Charged particles circling the planet every 10 hours 47 minutes are like those that would be produced by an orbiting Black Hole.

Wednesday at AGU, Cassini scientists claimed that Saturn's Rings are nearly as old as the Solar System. Previously it was thought that the Rings would decay within 100 million years. We would then face the anthropic question of why the Rings exist at just the right time for humans to enjoy them. Later I had the good fortune to talk with Larry Esposito, who wrote the book on Ring observations. He believes the Rings are continually replenished and recycled by icy moonlets orbiting within. These unseen bodies are held together in spite of Roche's Limit by colliding and melting into each other. Normally bodies colliding at orbital velocities should not stick together. Perhaps something else is needed to seed their formation.

Friday C.D. Murray talked about F Ring objects and embedded moonlets. The "fans" in this Ring are evidence of embedded objects.
The shepherd moon Prometheus has been observed to interact with the F Ring, sometimes leaving strands or jets of material. The "jets" are interpreted as resulting from collisions. A big question remains why the F Ring precesses in the first place. The Rings would be another place to look for Black Holes.

Afterward M. Sremcevic talked about propeller features in the Rings. These are located in a narrow 3000 km belt at 130,000 km from Saturn. The objects that cause the propellers must be very small, for anything bigger than 1 km would open a gap in the Rings. Their behaviour is incompatible with an accretion origin, so they are considered as possible fragments of a shattered moon. I asked and Sremcevic confirmed that his computer models treated the objects as point masses (like Black Holes).

Many, many mysteries remain about the Rings. Some of these mysteries would be explained by very tiny but massive objects hidden within. These objects would also give off radiation, like the clump of charged particles. Saturn's Rings show conditions similar to those which formed our Solar System. Perhaps Black Holes are closer than we think. For more news, check out the new Carnival of Space!

Labels: rings, saturn

AGU's Expanding Universe

The American Geophysical Union meeting in San Francisco has attracted a record 15,000 scientists from all over the world. The Gauge Theory and Representation Theory conference in Princeton drew barely 100, and only for Ed Witten's talk. AGU has outgrown Moscone Center West and occupied the much larger Moscone South. Those huge concrete arches supporting the roof form catenaries, like an inverted Golden Gate Bridge. Subjects at AGU range from Earth's interior to climate to the Solar System. Every year more women scientists and students show up, for these fields have far more opportunities. There is far more here than 10 people could possibly see.

Monday morning in Moscone South Room 102 Carolyn Porco began a series of talks on Saturn moons. Jennifer Meyer made the surprise assertion that Enceladus' 6 GW of heat cannot be accounted for by tidal forces. The conventional estimate from tidal heating is only 0.12 GW. The old hypothesis of "radioactive decay" does not work for these icy moons. In desperation some researchers are conjecturing a meteorite strike at the South Pole, a true deus ex machina. Enceladus' core is an excellent place to consider a Black Hole.

Monday evening Moscone West Room 3005 was standing-room only for the NASA Science Mission Directorate. Though Berkeley is just a subway ride away, not a single person from SNAP or the Supernova Cosmology Project showed up at AGU. As the only astrophysicist in the room, I enjoyed a one-on-one talk with NASA Associate Administrator Alan Stern. He answered all my questions: 4) NASA doesn't want Principal Investigators with no spacecraft experience (sorry, Saul). 5) A new Discovery mission was announced, GRAIL.

This huge building is crowded because people are very concerned about Earth and its problems. Many fine scientists like Kea are concerned about Earth. Even if a "dark energy" existed, it would have no conceivable use. The speed of light affects everything including the climate.

Labels: enceladus, nasa

Change of Climate

The writer can't be there this week, but this was the Westin Bali in Nusa Dua. December 3-14 the attached convention centre is the site of the UN Climate Change Conference. It makes one want to be involved in Climate Science, at least for the week. The shop there sold a very skimpy swimsuit.
The adjacent Sheraton and Melia Bali hotels are even more luxurious. Next we will see many other scientists concerned with climate.

Labels: Bali

Our Cassini spacecraft continues to make big discoveries. New data shows that the moons Atlas and Pan are surrounded by huge "spare tires," giving them the shape of flying saucers. The CG image is based upon Cassini photos. Since the plane of these bulges coincides with the Ring plane, scientists have concluded that the moons are made of Ring material piling up along their equators. Their rotation could not stretch them into this shape, because each moon takes a 14-hour day to revolve.

The strange shape offers clues to how these worlds or our Earth formed. Atlas and Pan both orbit within Roche's Limit, a mathematical boundary within which moons are not supposed to form. Inside this limit, tidal forces from Saturn were thought to tear liquid objects apart. The ice crystals that make up the moons form loose "rubble piles" that behave as a liquid. Being made of ice crystals with many empty spaces in between, the moons have a density much less than liquid water. The spare tires show that they are continuing to attract particles. It is odd that objects with less density than liquid exist inside Roche's Limit, within which liquid objects are not supposed to form at all.

If Pan and other moons formed around singularities, these tiny holes would explain both their formation and shape. Pan's 10^{15} kg mass could easily contain a 10^{12} kg singularity without getting sucked up. Even if you were only 3 meters from such a tiny Black Hole, you would feel no more gravitational pull than you do in Manhattan. Pan orbits within the Encke Gap of the A Ring. The outer and inner boundaries of the gap correspond to Lagrangian points in the Pan-Saturn system. Inside the gap particles will be drawn toward Pan, eventually colliding to build the spare tire. Since Saturn's Rings contain conditions similar to those of our Solar System's formation, this offers clues as to how other worlds formed. The moon Iapetus displays a strange ridge around its equator, which may have a similar origin.

Scientists have suggested sending probes to tunnel through Europa's kilometres of ice. A similar probe could someday burrow into Atlas and possibly find a Black Hole. H.G. Wells's Invisible Man was finally discovered by tracks he left in snow. Saturn's Rings are literally a field of ice in which the tracks of invisible objects can be seen. If our Solar System contains tiny Black Holes, this is a good place to look. Singularities would explain how these small moons formed and stay together. Black Holes may be the missing link to how Earth and other worlds were created.
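The Manhattan comparison is easy to verify with Newton's law, using the post's own figures of a 10^{12} kg singularity seen from 3 meters:

$$g = \frac{GM}{r^2} = \frac{(6.67\times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}})(10^{12}\ \mathrm{kg})}{(3\ \mathrm{m})^2} \approx 7.4\ \mathrm{m/s^2},$$

slightly less than the 9.8 m/s^2 felt on a Manhattan sidewalk.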
Night at the Museum Pt. 2

Sometimes reality exceeds our dreams, and the Museum of Natural History is far larger than any movie set. In the opposite corner from the Rose Center for Earth and Space is the Hall of Meteorites. This room has not been placed closer because it can't be moved. The Cape York meteorite fragment here weighs 34 tons! This rock from Space is so heavy that it is mounted on supports descending into bedrock.

Touching a big meteorite is a priceless experience. They are dense because they are made of nickel and iron forged by heat into metal. Sliced sections show the silver-grey texture of steel. Banging on a meteorite with the naked fist produces a ringing sound. Sometime in their past these rocks were exposed to intense heat, forged into a metal that survived the lesser heating of impact. For many centuries, meteorites were humanity's only supply of iron. The Cape York meteorite was mined by local Inuit for metal.

In this room are travellers from the asteroids, Mars and Beyond. The Zagami meteorite (lower photo, top) comes to us from Mars. The Camel Donga meteorite displayed below came from asteroid Vesta. How a tiny body like Vesta could have been so hot is a complete mystery. The lack of olivine in meteorites is another mystery.

The Brenham meteorite was found in a Kansas field. Farmers there in the 1880's often bumped into mysterious metallic rocks. A homesteader named Eliza Kimberly recalled a meteorite she had been shown as a schoolgirl. For five years she collected samples and wrote letters to scientists, despite teasing by her husband and neighbours. (Her work wasn't accepted by the arxiv, either.) Finally a scientist was convinced to examine her meteorites and the woman was proved right. The meteorites she found were billions of years old, dating from a time near Earth's formation.

Elsewhere in the hall is a model of Arizona's Barringer Crater. For many years this hole in the Earth was thought to be a product of volcanoes. A Princeton graduate and lawyer named Daniel Barringer became bored with the office and headed West to be a mining engineer. He tried to take geology at Harvard, but dropped the class when an instructor called his questions "childish." Gaining success in his chosen field, Barringer became obsessively interested in the crater. He spent years and most of his fortune making excavations and trying to convince the science community. Within Barringer's lifetime the world realised he was right too.

There are lessons in this room for all scientists. There have been many times (like today) when the textbooks' explanation for the Universe is lacking. Good ideas can come from outside the mainstream of science, even from a Kansas farm. These ideas may be ignored at first, even ridiculed. Determination and years of work can lead to the truth, even within a lifetime.

The books claim that Earth's nickel-iron core remains hot due to "radioactive decay." We can't get samples of the core, but meteorites like Eliza Kimberly's date from the time of Earth's formation. From their composition and what is known about Earth's density, scientists have concluded that Earth's core is also made of nickel-iron. These meteorites may be considered as samples similar to the core. The hypothesis of "radioactive decay" may also be tested here.

Earth's core has temperatures exceeding thousands of degrees, hot enough to melt rock. The books claim that isotopes within Earth's core cause it to be hot. Since the Hall of Meteorites contains similar samples, are any of them about to melt? If they contained even a tiny amount of radioactive isotopes, it would not be safe to go near this room. If they contained any isotopes, those would have decayed to nothing long ago. Today these rocks are as cold as the New York winter, yet Earth's core continues to produce heat.

If Earth's core formed around a singularity, that tiny object would generate heat indefinitely. Today it would have the mass of a small moon and the diameter of a grain of sand, far too tiny to suck us up. The tiny amount of Earth that it eats is far less than the mass that arrives each year via these meteorites. Presence of a singularity would also explain Earth's magnetic field and how Earth formed from dust grains in the first place. Like Eliza Kimberly and Daniel Barringer, a scientist should not be afraid of bold steps.
This week Robot Guy hosts the new Carnival of Space!

Labels: asteroids, black holes, meteorites

Even on sunny afternoons, one can find stars in a planetarium. A tour of New York's bright lights has included Times Square, the Christmas tree at Rockefeller Center, and Grand Central Terminal. The American Museum of Natural History recently starred in NIGHT AT THE MUSEUM. The movie reminded us what an adventure this place is, for one could spend days here without seeing everything. Within the glass cube of its Rose Center for Earth and Space, a gigantic sphere marks Hayden Planetarium. The current show, COSMIC COLLISIONS, is narrated by Robert Redford and features breathtaking images of asteroids and meteors. Though New York lights drown out the stars, this place is a reminder of how Space can touch our lives.

The sphere's lower half is home to a circular Big Bang Theatre, with a show narrated by Maya Angelou. The winding exit ramp is a Big Bang Walkway with milestones of cosmic evolution marked along the tour. While "inflation" and "dark energy" are not on this guest list, the Rose Center has at great expense built a model of our spherical Space/Time. Complementing this space are models of the planets, further reminders of the beauty of spheres.

We learn about the Greek Pythagoras via the straight sides of triangles, but he was also interested in spheres. He encouraged men and women to have many interests, making contributions to music and astronomy. As a musician, he is credited with the idea that a "music of the spheres" described the planets. Reasoning that it was the most harmonious shape, he theorised that Earth was spherical. Today's science seeks to explain the Universe from such principles. To please his musician's ear, Pythagoras sought a "cosmic harmony." As a mathematician, Pythagoras was inspired to claim that "all is numbers," meaning that everything in the world could be described by equations. This idea is the basis of modern physics. Pythagorean ideas began a quest that would last thousands of years, to find equations describing the Universe.

Everyone should own a book called "RELATIVITY: A Clear Explanation that Anyone Can Understand" by Albert Einstein. For about 6.95 US you will get a better view of the subject than any imitator book. In Chapter 31 Einstein dares to imagine the entire Universe. He tries to follow a Cosmological Principle that the Universe looks the same from all locations. Like Pythagoras, he reasons that the most harmonious shape is a sphere. Using General Relativity, Einstein predicts that enough mass will cause Space/Time to be curved into a sphere of 4 dimensions, of which our 3 dimensions are just the surface.

Einstein also realised that gravity would cause this spherical Space/Time to collapse, unless it were supported by some repulsive force. Taking the Cosmological Principle into 4 dimensions, he desired a Universe that looks the same at all times. Here Einstein made the blunder of introducing a "cosmological constant" to prevent the sphere from collapsing. Later Alexander Friedmann and Georges Lemaitre independently found solutions to the Einstein equations indicating an expanding Universe. Edwin Hubble's observations of redshifts finally allowed Einstein to drop the cosmological constant.

The simple expression R = ct predicts that the Universe expanded from a tiny point. When t was tiny, c was enormous and the Universe expanded like a Bang. Maya Angelou and a preponderance of evidence support the "Big Bang."
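A worked step, combining the two relations this blog uses, R = ct and GM = tc^3 (the algebra is mine; both relations are the blog's own):

$$GM = tc^{3} \;\Rightarrow\; c = \left(\frac{GM}{t}\right)^{1/3}, \qquad R = ct = \left(GM\,t^{2}\right)^{1/3}.$$

As t approaches zero, c grows without bound, which is the precise sense in which, when t was tiny, c was enormous.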
According to one paradigm, faster-than-light inflation would have expanded the radius so much that its sphere would be, to our perception, flat. When pressed, even inflation theorists admit that on the largest scales the Universe must be spherical. It is not topologically possible for a tiny point to expand into flat Space. According to General Relativity, even the smallest mass will cause Space/Time to be curved.

Views of the Cosmic Microwave Background may also indicate a spherical Universe. By measuring distances between acoustic peaks, scientists hope to complete a triangle and determine curvature. When a changing speed of light is accounted for, the angles do not add up to 180 degrees and the triangle is not flat. Most telling, the scale of density fluctuations is nearly zero for angles greater than 60 degrees. Like a ship disappearing over Earth's horizon, the lack of large-angle fluctuations is smoking-gun evidence that the Universe is curved. Both lines of CMB data indicate that the curvature has radius R = ct.

For a time it was fashionable to believe that the Universe was flat, as it was once fashionable to believe the Earth was. Large objects under the influence of gravity, even raindrops, tend to form spheres. The Universe is very, very large. The flat vs. sphere debate once raged about Earth, and everyone knows which side won. Even the work of mathematician Gregory Perelman on the Poincare Conjecture points to a spherical Space. The Universe model at the American Museum of Natural History hints that the sphere will win again. The only missing ingredient is "GM=tc^3" written on the side.

Labels: cosmology, New York

Ceiling of the Main Concourse in Grand Central Terminal. Depicted on this vault are the ecliptic, the celestial equator, Pisces, Triangulum, Aries, Taurus, Orion, Gemini and Cancer. Brighter stars are marked with electric lights. Legend has it that once a motorised Sun moved across that ecliptic. In a city where electric lights drown out the stars, this is a very pleasant diversion.

Once upon a time people thought that stars were fixed to an enormous vault rotating overhead. In some ways this was a useful assumption. The stars are so distant that their parallax is not easily noticed. In our neighbourhood one needs only their right ascension and declination. Many scientists, including this one, keep on the shelf a clear ball with stars drawn on the surface. Even for navigation on Earth, it is safe to treat the stars as if they were fixed to a sphere.

Sharp-eyed astronomers will notice that the constellations are depicted inside out. Inspired by medieval depictions, the artist chose to depict a celestial sphere as if seen from the outside. This is quite inaccurate, for we know that the stars are at varying distances from Earth and one could never see such a view. Once again the clear ball on the shelf proves useful in deciphering this picture.

Holes in the grillwork over the South windows allow sunlight to pass through, showing images of the Sun at our feet. This acts as a camera obscura, allowing one to track sunspots on the floor. One can even track the Sun's rotation from the sunspots. Galileo's observations of sunspots contradicted old theories. It is a mistake to allow the assumption of a celestial sphere to become canon.

Once upon a time people thought that the speed of light was fixed, like the Earth. In some ways this was a useful assumption. The speed of light changes so slowly that the change is not easily noticed. In our neighbourhood in Space/Time one needs only the present value of c.
Many scientists, including this one, keep on the shelf a book where the speed of light is treated as a constant. Even for navigation in the solar system, it is safe to treat c as constant. Holes in present understanding allow light to pass through, showing a changing c. Observations of high-redshift supernovae point to a past where values like c may not have been the same. We have reached the Moon and brought back rocks billions of years old, from a time when c was slightly different. Observations of the distant Universe contradict old theories. It is a mistake to allow the assumption of constant c to become canon. Labels: astronomy, New York, speed of light

Witten Notes

For the benefit of Kea and other fine mathematicians who couldn't make the lecture, here are my 6 pages of notes from Ed Witten. NEXT: We'll see a more exciting gateway.

Between Princeton and New York, the past few days have been too exciting to describe. Saturday at 11:00 AM saw the premiere of Grand Central Terminal's Kaleidoscopic Light Show. As music played, stars and fireworks were projected onto the enormous vault and columns. More about this place soon.

I spoke with Peter Woit when he showed up in Princeton Wednesday. Gauge Theory and Representation Theory are small fields, and this post will show why. He was curious how many would show up, whether they would fill the 100 seats. Next week I'll be far away at a meeting with 11,000 scientists. Woit was curious where that was. As Woit expected, the biggest audience was Wednesday morning for Edward Witten and his collaborator Sergei Gukov. The latter's talk was more interesting, for it told what Witten is working on. Woit gives a far longer description in Not Even Wrong, which may not be long enough to please all mathematicians. An even shorter summary is recounted here, though this gets mathematical.

G is the compact Lie Group. The goal is to understand G_R representations in terms of D-branes. G_R is the real form of G_C, where G_C is the complexification of G. This leads to a 4-dimensional topological gauge theory. M_4 is a 4-dimensional manifold, a product of a 3-manifold W and an interval I: M_4 = W × I. To connect this to reality, a boundary condition is to preserve topological Supersymmetry. One should note that SUSY itself is a highly speculative idea. The many particles predicted have never been detected. If we remove one dimension, the manifold can be pictured as the ceiling of Grand Central Terminal with the additional dimension projected onto that manifold. This all leads to a 3-dimensional Quantum Field Theory on W, a Chern-Simons theory with gauge group G.

"Surface Operators" are operators in a 4-dimensional theory supported on a 2-dimensional surface D (like Grand Central's ceiling) which is a subset of M_4. It is considered "natural" to take D = γ × I, where γ is a curve in W and I is an interval. Next we take W = R × C, where C is a Riemann surface, R is time, and γ = R × x. The Hamiltonian approach leads to a Hilbert space H. C is replaced by a punctured disk, leading to a representation space H. In 4-dimensional gauge theory, M_4 = Σ × C, where Σ = R × I and R is time. We have seen before that R = t in Planck units. In MKS units R = ct, where t is time and c is the speed of light. A 4-dimensional gauge theory on M_4 = Σ × C is equivalent to a 2-dimensional topological model.
Σ leads to M_H(G, C). For applications to gauge and representation theory, C = D*, a punctured disk. Boundary conditions are specified only at the puncture. Some lively audience questions asked what happens at the puncture, whether it represents a singularity. Next we have solutions to Hitchin's equations: M_H = T*(G/T) = N = Θ_reg. Θ_reg takes the form W_i = α, W_j = β, W_k = γ, where α, β, and γ are members of L, the compact Lie algebra. Finally a Hilbert space is proposed, H = hom(Bcc, B'), which could be a space of open string states between two branes Bcc and B' on M_H = T*(G/T). B' is a brane supported on G/π and Bcc is the canonical coisotropic brane.

As all can see, this is not just complicated but highly speculative. "Branes" are theoretical surfaces existing in higher dimensions that intersect with ours. Strings enter the picture only as one possible way to connect the branes. While the maths are interesting, none of this leads to a single testable prediction. Projecting these speculations onto the ceiling of reality will be quite difficult. For years the string enterprise dominated theoretical physics, pushing other promising ideas and people out. Thanks to the hammering of critics like Peter Woit, strings are rapidly falling out of fashion. Though Edward Witten was once considered a priest of the enterprise, his latest work moves away from strings. The way is open for more useful theories that make testable predictions.

Bored yet? More interesting Space news is in the new Carnival of Space! Labels: New York, physics
Association between localized geohazards in West Texas and human activities, recognized by Sentinel-1A/B satellite radar imagery

Jin-Woo Kim (ORCID: orcid.org/0000-0002-9097-2465) & Zhong Lu (ORCID: orcid.org/0000-0001-9181-1818)

West Texas' Permian Basin, consisting of ancient marine rocks, is underlain by water-soluble rocks and multiple oil-rich formations. In the region, which is densely populated with oil-producing facilities, many localized geohazards, such as ground subsidence and micro-earthquakes, have gone unnoticed. Here we identify the localized geohazards in West Texas, using satellite radar interferometry from newly launched radar satellites that provide radar images freely to the public for the first time, and probe the causal mechanisms of ground deformation, encompassing oil/gas production activities and subsurface geological characteristics. Based on our observations and analyses, human activities of fluid (saltwater, CO2) injection for stimulation of hydrocarbon production, salt dissolution in abandoned oil facilities, and hydrocarbon extraction each have negative impacts on the ground surface and infrastructures, including possible induced seismicity. Proactive, continuous, and detailed monitoring of ground deformation from space over the currently operating and the previously operated oil/gas production facilities, as demonstrated by this research, is essential to securing the safety of humanity, preserving property, and sustaining the growth of the hydrocarbon production industry.

Geohazards pose a severe threat to humanity, civilian properties, infrastructures, and industries, possibly leading to loss of life and severe economic losses1. Monitoring areas prone to geohazards is invaluable for locating their precursory signals on the surface, alerting civilians to potential disasters, mitigating the catastrophic outcomes, and facilitating the decision-making processes on the construction and operation of infrastructures and industrial facilities. The United States mid-continent has long been considered geologically stable, with no large-scale tectonic movements, volcanism, or seismic activities2,3.
Therefore, unlike California with its dense GPS networks and frequent survey (aerial, spaceborne, field) campaigns, the mid-continent has garnered less attention from scientific communities and federal/state governments. However, recent studies have revealed that some of the mid-continent, especially the Gulf Coast of the United States including Texas, Louisiana, and Mississippi, is not immune to large-scale and/or localized geohazards4,5. The geohazards along the southern United States have been both naturally induced and stimulated by human activities1,3. Besides the occasional strong tropical storms and flooding in lowlands, natural geohazards include settlement due to sediment loading and glacial isostatic adjustment, which can make the coastline of the Gulf Coast vulnerable to sea-level changes6,7,8. However, the naturally occurring surface subsidence on the coast displays characteristics of a continuous, slow progression (millimeters per year) and a large spatial extent (~100 km wide)6. In contrast, human-induced geohazards are faster growing (up to tens of cm/yr) and encompass a varying but generally small area (up to a couple of km wide). The most prominent difference between natural and human-induced geohazards is the correlation between surface instability and anthropogenic activities (e.g., mining, groundwater extraction, hydrocarbon production)3,9. Although there can be a time delay of ground deformation after human activities, depending on the geological characteristics (porosity, elasticity, compressibility, pore pressure, permeability) of soils and rocks and the types of the operations, human-induced surface subsidence or uplift usually has a high proximal and temporal correlation with those activities10,11,12.

West Texas is somewhat distant from the Gulf coast, but was inundated by relatively shallow seas during the early part of the Paleozoic Era (approximately 600 to 350 million years ago). The sediments formed during this period contributed to the accumulation of sandstone, shale, and limestone. The seas constituting broad marine environments in West Texas gradually withdrew, and by the Permian Period (approximately 299 to 251 million years ago), thick evaporites (salt, gypsum) had accumulated in a hot arid land encompassing shallow basins and wide tidal flats. As a consequence of this geological history, the deposited carbonate (reef limestone) and marine evaporite sequences played an important role in the formation of oil reservoirs by helping seal the traps and preserving the hydrocarbons13. This resulted in the massive hydrocarbon reservoirs of West Texas' Permian Basin that became so lucrative to the oil and gas industry14.

In West Texas, human activities such as groundwater exploitation, fluid injection, and hydrocarbon extraction have resulted in surface instability, leading to geohazards such as surface heave/subsidence, fault reactivation4, induced seismicity15,16, and sinkhole formation17,18,19. The vastness of West Texas challenges our ability to identify and locate the relatively small spatial scale of the deformation corresponding to human activities, particularly for fluctuations over the course of a month or a year. Without concerted focus, the small-sized signal in a short time window can easily go undetected. There have been a few studies documenting the surge of surface uplift/subsidence, sinkhole formations, and induced seismicity in oil fields19,20,21,22.
However, the role of human activities in the surface and subsurface deformation has yet to be fully established, particularly regarding the identification of small-scale deformation signals over a vast region from big datasets spanning multiple years, and their analysis with supplementary information. Challenges to the effective study of the geohazards in West Texas include: identification of their locations in remote and vast regions, measurement of their long-term evolution, and characterization of the causal mechanism with accessible information. Satellite radar interferometry (InSAR) has proven capable of imaging ground surface deformation with a measurement accuracy of centimeters or better, at a spatial resolution of meters or better, over a large region covering tens of thousands of square kilometers23. However, satellite radar acquisitions over West Texas have previously been scarce.

Here we present the analysis of the ongoing ground deformations induced by various geohazards around Pecos, Monahans, Wink, and Kermit in West Texas (Fig. 1), using multi-temporal InSAR observations based on radar imagery from Sentinel-1A/B, the first radar satellites whose imagery is freely open to the public. The objective of our study is to probe the association between the ongoing localized geohazards in West Texas and anthropogenic activities. To achieve this goal, we focus on the localized, small-sized (200 m~2 km wide), and rapidly developing (cm/yr) geohazards in the region, which are categorized based on six possible causes: i) wastewater injection, ii) CO2 injection for enhanced oil recovery (EOR), iii) salt/limestone dissolution, iv) freshwater impoundment in abandoned wells, v) sinkhole formation in salt beds, and vi) hydrocarbon production. In addition, time-series measurements from two different imaging geometries are integrated to decipher the deformation phenomena. Furthermore, through comparative analysis of records of fluid injection, hydrocarbon production, and geological characteristics, we establish the relationship between the possible causes of human activities or natural perturbation and the observed localized geohazards in West Texas.

Locations of ground deformation in West Texas. Six major sites (red stars) in West Texas display the locations influenced by human activities identified based on Sentinel-1A/B multi-temporal interferometry (background image is from Sentinel-2). To estimate 2D (east-west and vertical) deformation, the ascending (path 78; black box) and descending (path 85; white box) track Sentinel-1A/B images were integrated over the overlapped regions. West Texas' Permian Basin contains two major aquifer systems under the influence of the Pecos River, the Pecos Valley aquifer and the Edwards-Trinity aquifer. The figure has been created using open-source software Generic Mapping Tools (GMT) 5.2.2_r15292 available at http://gmt.soest.hawaii.edu/projects/gmt/wiki/Download. The Sentinel-1A/B data used in this study were downloaded in 2017 through the Vertex online archive https://vertex.daac.asf.alaska.edu provided by Alaska Satellite Facility (ASF) and the Sentinel-2 data used as a background image for this figure were obtained in 2017 through the Copernicus open access hub https://scihub.copernicus.eu provided by the European Space Agency (ESA)'s Copernicus Programme.

Here we report local geohazards occurring in West Texas, most of which have not been noticed and reported before.
Knowledge of the presence of the ongoing geohazards in West Texas is a precursor to understanding the trigger and causality of ground deformation, the revelation of which is a focal point of our study. The localized geohazards presented below may have different characteristics in spatio-temporal progression and causality (e.g. wastewater injection, CO2 flooding, hydraulic fracturing, freshwater impoundment), but all are happening because West Texas contains a sequence of water-soluble (limestone, evaporite) and shale formations that are highly vulnerable to human activities.

Wastewater injection into formation and surface uplift

Wastewater 'flowback fluid', a byproduct of oil and gas production24, has been injected deep underground about 15 km west of Wink and Kermit, Texas (Fig. 1). The hydrocarbon production in the Bone Springs reservoir requires hydraulic fracturing, and wastewater (also called brine) containing high concentrations of total dissolved solids (TDS) is produced as a result of the operations. Two wells (API No. 49533675 and 49530150 in Fig. 2a) located near the county border between Winkler and Loving counties are classified as Class II injection wells for disposal of saltwater and non-hazardous fluids into the subsurface as a result of oil and gas production. The injection depth is from 1,590 to 1,670 m, where the Bell Canyon Formation in the Delaware Basin of the larger Permian Basin lies. The upper layer (~10 m thick) of the formation is composed of limestone that can confine the upward flow of injected fluids. Most wastewater is injected below the nearly impervious limestone units, into the Bell Canyon Formation sandstones (also called Ramsey sandstones); these sandstones have a porosity of ~20% of open pore-space for holding fluids, and a moderate-to-low permeability (a measure of how readily fluids can flow through the rock) of ~40 md (millidarcy)25.

Our InSAR analysis has detected surface upheaval approximately centered on the injection well No. 49533675 (Fig. 2a). The maximum (cumulative) uplift from late 2014 to April 2017 was ~5.5 cm, with the shape of a distorted ellipse, and the influential zone is within a 2 km radius of the peak deformation (white dot labeled 'point A'). Horizontal (east-west) deformation with a maximum of ~1.2 cm is also occurring on both the east and west sides of the peak uplift (inset in Fig. 2a), with the western region moving to the west (negative, blue color) and the eastern region moving to the east (positive, red color). The horizontal deformation around the injection wells represents <~20% of the vertical (up-down) deformation; we therefore concentrate on the vertical deformation in this study.

Ground uplift due to fluid (wastewater, CO2) injection. (a) Uplift in Winkler County, TX, induced by wastewater injection in nearby wells (API No. 49533675, 49530150). Inset illustrates cumulative east-west deformation in the box outlined by a dashed rectangle. (b) Time-series cumulative vertical deformation at point A (Fig. 2a) and the volume of injected wastewater (blue and gray bars) in the two injection wells. (c) Uplift in Ward County, TX, induced by CO2 injection in an EOR field (triangles). (d) Time-series cumulative vertical deformation at point B of Fig. 2c and the volume of injected CO2 (orange and gray bars) in the EOR injection wells of Fig. 2c. The figures including spatial information have been created using open-source software GMT 5.2.2_r15292 available at http://gmt.soest.hawaii.edu/projects/gmt/wiki/Download.
The National Agriculture Imagery Program (NAIP) images used as a background of the figures were downloaded in 2017 through Geospatial Data Gateway https://datagateway.nrcs.usda.gov provided by the United States Department of Agriculture (USDA).

Generally, surface uplift can be caused by the expansion of the geological formation where the fluid is injected, resulting in the upward movement of the ground surface. The injected formation experiences an increase in pore pressure as well as a decrease of the effective stress26, promoting the surface uplift as we observed27. At the point of maximum uplift (A in Fig. 2a), uplift was detected beginning around September 2015, with a sharp increase (at a rate of ~6 cm/yr) during the first half of 2016 (Fig. 2b), and the value after October 2016 remained near ~5 cm cumulative deformation in spite of some monthly fluctuations. The temporal changes in uplift seem to be in concert with the changes in injection volume, suggesting that a mechanical compaction of sands by means of poroelasticity is likely the primary cause of the deformation28. In addition, the small variations in the vertical deformation since mid-2016 can be depicted as the combined effects of poroelastic compaction and viscoelastic behavior of fine-grained formations surrounding the injected strata28,29. We can also infer that, relatively speaking, the upper (sandy) layer responds rapidly to wastewater injection, but the lower (shale/silt) formation reacts gradually to changes in overlying stress.

To unravel the causality of the surface uplift, we compared our deformation observations to the sequence of wastewater injection rates in the nearby wells. Based on the H-10 forms provided by the Texas Railroad Commission (RRC)24, the No. 49533675 injection well has been active since January 2016; the No. 49530150 injection well, which first became operational in 2009, experienced a period of disuse, and was then reactivated in September 2015 (Fig. 2b). The ratio of the uplift volume (i.e. the product of the uplift amplitude and areal extent) to the injection volume is about 0.05 m3/BBL (1 BBL ≈ 0.12 m3). Based on the onset of the uplift coinciding with the reactivation of 49530150 in September 2015, along with the increasing uplift rate aligning with the use of 49533675, it seems likely that both injection wells were affecting the surface uplift, but the effects of the two are not equivalent. It seems that the injection well No. 49533675, closest to the peak of vertical deformation, is likely situated in a geologically weak, critical formation, allowing the injection/disposal of wastewater to influence the surface deformation more dominantly.

The correlation between the vertical deformation and the wastewater injection suggests that the expansion of injected formations induced a localized, relatively small-sized (~2 km in dimension), and small-magnitude (~5 cm) surface uplift. Although the onset of ground uplift was most likely triggered by the wastewater injection, and the nearly instantaneous response of the ground surface results from the high elasticity of the underlying formations, the correlation between wastewater injection and ground surface deformation may not be as high as we expect. The stratigraphic response to the decreased or increased pore pressure and effective stress can be a complicated process.
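As a concrete illustration of the volume comparison above, the following minimal Python sketch (not the code used in this study) integrates a gridded cumulative-uplift map and divides it by the injected volume in barrels; the input file name, 20 m pixel spacing, and 5 mm noise threshold are hypothetical.

```python
# Minimal sketch, assuming a gridded cumulative vertical deformation map in meters.
# The file name, pixel spacing, and noise threshold below are hypothetical.
import numpy as np

def uplift_volume(uplift_m, pixel_size_m, threshold_m=0.005):
    """Integrate cumulative uplift (m) over all pixels above a noise threshold."""
    signal = np.where(uplift_m > threshold_m, uplift_m, 0.0)
    return float(signal.sum()) * pixel_size_m**2  # m x m^2 = m^3

uplift_map = np.load("cumulative_uplift.npy")          # hypothetical input grid
v_uplift = uplift_volume(uplift_map, pixel_size_m=20.0)

# Cumulative injection at the two wells, from the RRC records quoted in this paper
v_injected_bbl = 5_119_129 + 3_704_047
print(f"uplift volume per injected barrel: {v_uplift / v_injected_bbl:.3f} m^3/BBL")
```

A ratio near the reported 0.05 m3/BBL would indicate that a large fraction of the injected volume is expressed as surface uplift, while the remainder diffuses into the surrounding rocks.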
When the injected volume is in decline, the release of relative pore pressure allows for differing responses, namely an immediate downward movement of coarse-grained formations and a lagged upward movement of fine-grained formations. Such combined effects of the subsurface/surface processes result in only an intermediate correlation between the injection volume and ground uplift. The deformation has not invoked seismicity yet, but, if the injection continues, it has the potential to threaten the integrity of County Road 302, nearby oil/gas pipelines, and hydrocarbon production facilities.

Carbon dioxide (CO2) injection and surface uplift

Miscible CO2 flooding has been applied for decades as a tool for EOR in the depleted oil and gas reservoirs of the United States30,31. Unlike water and oil, supercritical CO2 and oil mix together well, forming a single homogenous, or 'miscible', fluid. (CO2 has different properties depending upon its physical state: at room temperature, CO2 is a gas, such as what we exhale; we use liquid CO2 as a coolant; and we refer to the solid form of CO2 as "dry ice". Supercritical CO2, achieved under specific pressure and temperature conditions, is between a gas and a liquid state, with some properties of each.) CO2 is injected into a disposal well within a reservoir after initial hydrocarbon production rates have declined, where the CO2 mixes with the hydrocarbons. The CO2 injection causes an increase in reservoir pressure, which forces the CO2-oil mixture out of the pores of the rock and towards one or more producing wells, allowing more oil to be recovered from the reservoir. CO2 injection for EOR has been economically efficient due to its low cost, aiding the Permian Basin's EOR boom in the 1970s and 1980s and in recent years32,33.

The North Ward Estes Field, west of Monahans, Texas and near Wickett, Texas, is one of the largest cumulative oil-producing fields in the Permian Basin34 (Fig. 2c). The oil and gas are produced from the Yates and Queen Formations, within the Midland Basin of the larger Permian Basin. In the CO2 (Class II) injection wells 11 km southwest of Monahans, Texas (Fig. 1), the CO2 is injected into both formations at depths between 750 and 810 m. Crude oil and gas can be produced from both the Yates and Queen formations, but the Yates, consisting of very-fine-grained sandstones to siltstones separated by dense dolomite beds, with ~16% porosity and ~37 md permeability, provides the more dominant production volume in the North Ward Estes Field33. Salt water injection is also used for EOR (either 'water flooding' by itself, or in alternation with CO2 flooding), but its use today in this region is very limited (1% of total injected fluids) and most injection for EOR relies on the miscibility of anthropogenic CO2 (99% of the total injected fluids).

Our multi-temporal InSAR analysis has detected an ellipse-shaped surface uplift (major and minor axis: 6 km and 4 km, respectively) in the immediate vicinity of the CO2 injection sites (Fig. 2c), with a cumulative uplift of ~3 cm from late 2014 to April 2017. At the point of maximum uplift (B in Fig. 2c), the cumulative uplift increased linearly (at a rate of ~3 cm/yr) until January 2016, after which the value stayed at ~3 cm cumulative uplift (Fig. 2d). Within 500 m of the maximum uplift, 11 CO2 injection wells (triangles in Fig. 2c) remain active as of April 2017. Although there has been variation in the injected volume (gray bars in Fig.
2d), most injections of CO2 occurred during 2015, with much lower injection volumes (below 5 million m3) since January of 2016 (Fig. 2d). The API No. 47530058 injection well (cyan triangle in Fig. 2c; orange bars in Fig. 2d) lies in approximately the same location as the maximum uplift.

The mechanism of the surface uplift caused by the CO2 injection is almost identical to the wastewater injection-induced uplift. The injected fluid, in this case supercritical CO2, increases the pore pressure in the rocks (sandstones in the Yates Formation for the CO2 EOR sites) and the release of the effective stress is followed by surface uplift26,27. The fluctuations in deformation after the injection slowed down or stalled can be due to the collective effects of poroelastic compaction and viscoelastic delayed uplift in the formations surrounding the injected layer28,29. The proximity between the maximum uplift and the No. 47530058 injection well implies that the CO2 flooding in that particular well is more influential on the movement of the ground surface than the other surrounding wells (black triangles in Fig. 2c). The high correlation between a large amount of uplift (~3 cm) and CO2 flooding during 2015 suggests that the instability of the ground surface in the southwest of the North Ward Estes Field was induced by the pressurized injection of CO2 into the Yates formation. Contrary to the surface uplift in the vicinity of No. 47530058, no significant deformation has been detected in the other portions of the North Ward Estes Field. Differences in rock strength, porosity, compressibility, and permeability can play a role in the occurrence of deformation28. CO2 flooding has revitalized, and continues to enhance, recovery of the mature oil fields of the Permian Basin, helping to produce significant volumes of oil without CO2 emission32. However, pressurized injection into a geologically unstable rock formation can destabilize the ground surface and risk the productivity of further oil operations35,36.

Dissolution of salt/limestone in Santa Rosa Spring

The Pecos County Water Improvement District No. 2 owns and operates a 2 km wide reservoir, known as the Imperial Reservoir (Fig. 3a), located about 6.4 km south of Grandfalls, Texas. Used both for irrigation of agricultural fields in Coyanosa, Texas and for recreational purposes, the Imperial Reservoir is filled with water pumped from the Pecos River. In addition to the pumped Pecos River water, the reservoir also receives artesian spring water from the Santa Rosa Spring, 13 km southwest of Grandfalls, Texas in Pecos County, through narrow canals and channels.

Ground subsidence in karst terrain underlain by limestone and salt. (a) Cumulative vertical deformation in Santa Rosa Spring. (b) Time-series cumulative vertical deformation at points C, D, and E of Fig. 3a. (c) Cumulative vertical deformation around abandoned wells in Imperial, Texas. Inset represents the averaged deformation rate in the boxed region, derived by stacking interferograms of less than 12 days. (d) Time-series cumulative vertical deformation at F, G, H, I, J, and K of Fig. 3c. (e) Vertical deformation rate around the Wink sinkholes. The figures including spatial information have been created using open-source software GMT 5.2.2_r15292 available at http://gmt.soest.hawaii.edu/projects/gmt/wiki/Download. The NAIP images used as a background of the figures were downloaded in 2017 through Geospatial Data Gateway https://datagateway.nrcs.usda.gov provided by USDA.
Our InSAR analysis has detected rapid subsidence occurring at Santa Rosa Spring (Fig. 3a) from late 2014 to April 2017, with a maximum cumulative subsidence of approximately 23 cm, or a rate of ~8.9 cm/yr (point D in Fig. 3a). The subsiding region is elliptical in shape, with dimensions of ~1.4 km by ~1.0 km. Time-series deformation measurements at three points (C, D, and E in Fig. 3a) show a strong linearity (Fig. 3b), regardless of other factors such as seasonal effects and irrigational uses. Around the Santa Rosa Spring, hydrogen sulfide has been produced from multiple wells. However, the hydrogen sulfide production can hardly be directly connected to the rapid subsidence, because all of the wells are located outside of the deforming zone that is centered on the Santa Rosa Spring. Hence, the hydrogen sulfide production should not have an apparent impact on the observed subsidence.

Historically, a limestone cavern formed around Santa Rosa Spring, and the runoff water occasionally flows from and into the cavern37. Stratigraphical data for the area's closest well, API No. 37137696 (Fig. 3a), indicates that the Bone Springs Limestone formation is present at depths between 2,065 m and 2,911 m. However, a connection between the dissolution of this deep-seated limestone formation and the surface subsidence is not realistic, as the extent of the subsidence area is less than 1.5 km (Fig. 3a). In addition, the dissolution rate of carbonate rocks like limestone is generally much smaller than that of evaporite rocks, and a limestone cavern in a natural state forms very slowly (mm/yr)38,39. Therefore, such a rapid subsidence rate (8~9 cm/yr) at Santa Rosa Spring is unlikely to be caused by the natural dissolution of limestone. Because of the rapidity of the subsidence, we believe the most likely cause of the observed subsidence is the dissolution of the Salado Formation at a depth of 300~450 m beneath the Permian Basin. Investigations of groundwater conditions in Pecos County indicated that the highest salinity over the region was found at Santa Rosa Spring (7,224 mg/l)40. In addition, it has been documented that the salinity at the Santa Rosa Spring increased by 4,894 mg/l from the 1940s to 198740. Therefore, we interpret the rapid subsidence in the Santa Rosa Spring area to be caused by the dissolution of salt deposits. Although the subsidence has not triggered the collapse of the surface, the continuous surface subsidence in the area can be hazardous to water management facilities and/or nearby oil/gas wells.

Freshwater impoundment in abandoned wells

The region near Imperial, Texas (Fig. 1), has been troubled by growing subsidence, ground fissures, and the emergence of sinking lakes41,42. Some abandoned water and oil wells were left unplugged and thus did not prevent freshwater impoundment through cracks in cement casing and/or corroded steel pipes, and this freshwater impoundment is known to be the primary cause of rapid subsidence in the area41. However, prior to this study, subsidence near many abandoned wells had gone unnoticed42, and the Texas Department of Transportation is expected to spend millions of dollars to identify and plug the abandoned wells41,42.

Our InSAR analysis has detected rapid subsidence around 7 km southwest of Imperial, Texas (Figs 1 and 3c). The region around Boehmer Lake (F and G in Fig. 3c) has sunk as much as 2~3 cm over the course of our InSAR acquisition period (2.5 years).
Boehmer Lake did not exist before 2003; the sinking ground surface, together with water rising from the subsurface, led to the formation of the lake, and the motion observed here is thus continued subsidence. Farm to Market (FM) Road 1053 (near H, I in Fig. 3c) is sinking so fast that we could only compare pairs of satellite data within 12 days in order to maintain coherence of the InSAR image (inset in Fig. 3c). Therefore, using InSAR pairs with small (6 or 12 days) temporal baselines, we were able to measure the round-shaped (500 m in diameter) subsidence rate (~10 cm/yr) along FM 1053. Due to safety concerns, use of the road was suspended in August of 2016, and the realignment of FM 1053 was discussed by the state transportation agency41,42. A third nearby area of rapid subsidence (~10 cm/yr over 2.5 years) was observed near oil well API No. 37137310 (cyan triangle in Fig. 3c, near J and K). The subsidence pattern (650 x 350 m in dimension) is stretched NW-SE, aligning with two wells: API No. 37172505 and 37137310.

Like the subsidence over the Santa Rosa Spring, vertical deformation measurements at points F, G, H, I, J, and K (Fig. 3c) show a strong linearity in time (Fig. 3d). The two points (F, G in Fig. 3c) in Boehmer Lake are experiencing 1.4 and 2.0 cm/yr subsidence. The points (H, I in Fig. 3c) near the outer edge of the deformation along FM 1053 show subsidence of 0.7 and 1.5 cm/yr, respectively. The areas near the two oil wells are also undergoing subsidence of as much as 3.9 and 2.5 cm/yr, respectively. A few oil wells to the south of Imperial, Texas are currently active (e.g., No. 37137310 in Fig. 3c), with moderate production (less than 400 BBLs/month) for most of the time.

The observed linear subsidence, relatively independent of oil/gas production and seasonal effects, has the characteristics of ground subsidence (a subsidence sinkhole) in karst terrain19. The high salinity of channels along the Pecos River near Imperial, Texas and Boehmer Lake43,44 suggests that the surface and underground water interact with the subsurface salt deposit. The deforming area (Fig. 3c) is located in the Central Basin Platform, close to the eastern Delaware Basin of the larger Permian Basin, and is underlain by the Salado Formation at a depth of 300~500 m. Through unplugged abandoned wells, corroded pipes, or cracks in the casing, freshwater flows down and/or artesian water rises to the Salado Formation, accelerating the dissolution of the evaporite, creating voids in the beds, and causing rapid subsidence on the surface17,39. Indeed, all three areas of subsidence are near wells. Boehmer Lake formed over an abandoned oil well (No. 37172656), which had stopped producing decades ago, and the subsidence along FM 1053 is occurring near an orphan well (red triangle in Fig. 3c) that was identified as an inactive, non-compliant well by Texas' petroleum regulatory agency (RRC)45. The oil production in No. 37137310 or related operations may influence the large rates of subsidence. However, the downward flow of freshwater into an unplugged oil well (i.e. No. 37172505) may play a more influential role in the subsidence, as the subsiding areas are all underlain by salt deposits. The dissolution of salt beds (evaporite) is typically more substantial than that of carbonate rocks (limestone), and often produces subsidence exceeding ~10 cm/yr38,39. Expansion of the cavity and the migration of voids toward the surface can possibly result in the collapse of the surface into sinkholes.
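The linear rates quoted above can be obtained with an ordinary least-squares fit to each point's displacement time series. A minimal Python sketch follows (not the code used in this study); the sample dates and displacements are hypothetical placeholders.

```python
# Minimal sketch: fit d(t) = a + b*t to an InSAR time series and report b in cm/yr.
import numpy as np

def linear_rate_cm_per_yr(t_days, d_cm):
    """Least-squares linear fit; returns (rate in cm/yr, RMS residual in cm)."""
    G = np.column_stack([np.ones_like(t_days), t_days / 365.25])  # intercept, years
    coef, *_ = np.linalg.lstsq(G, d_cm, rcond=None)
    rms = float(np.sqrt(np.mean((d_cm - G @ coef) ** 2)))
    return float(coef[1]), rms

# Hypothetical ~2 cm/yr subsidence sampled every 24 days over ~2.5 years
rng = np.random.default_rng(0)
t = np.arange(0.0, 900.0, 24.0)
d = -2.0 * t / 365.25 + rng.normal(0.0, 0.3, t.size)

rate, rms = linear_rate_cm_per_yr(t, d)
print(f"rate = {rate:.2f} cm/yr (RMS residual {rms:.2f} cm)")
```

A small RMS residual relative to the total displacement is one simple way to quantify the "strong linearity" noted for points C-E and F-K.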
Therefore, movements around the roads and oil facilities to the southwest of Imperial, Texas should be thoroughly monitored to mitigate potential catastrophes.

Dissolution of the salt bed near the Wink sinkholes

Ground subsidence is more widely recognized near the two Wink sinkholes, which collapsed in 1980 (Wink Sink #1 in Fig. 3e) and 2002 (Wink Sink #2 in Fig. 3e)18,19. The sinkholes, 4 km northeast of Wink, Texas and 9.5 km southwest of Kermit, Texas (Fig. 1), lie in the Delaware Basin part of the larger Permian Basin. The Salado Formation lies near a depth of ~500 m over this area46. The oil and gas in the region are mostly produced from the Yates Formation underneath the Salado Formation19,46. Both Wink sinkholes collapsed because of freshwater seeping down through unplugged boreholes and cracked cement casing in oil and water wells. The subsidence in the immediate vicinity of the collapsed sinkholes continues at a rate of ~3–4 cm/yr (Fig. 3e).

The most significant ongoing subsidence is occurring 1 km east of Wink Sink #2. There are two large subsidence bowls (Fig. 3e), and the maximum subsidence in the southern bowl (380 m by 280 m in dimension) exceeds 40 cm/yr. The large gradient of subsidence in a small region cannot be observed by C-band InSAR pairs with temporal baselines of 24 days or longer. Accordingly, only 5 InSAR pairs with 6- or 12-day temporal baselines were used to calculate the high linear deformation rate here. The peak subsidence is located at the intersection of County Roads 201 and 204, and there are no existing active wells around the region. Therefore, the rapid subsidence is likely induced by the freshwater impoundments from the nearby abandoned wells. During our field trip, we observed numerous recent ground fissures around the intersection of County Roads 201 and 204. These growing fissures can allow rainwater to swiftly flow down to the Salado Formation and promote the dissolution of the salt layers. Because the oil and gas production in the area has been inactive for years, the mechanism for both bowls is believed to be the same. Access to the region surrounding Wink Sinks #1 and #2 has been restricted out of safety concerns, but County Road 201 continues to be used to transport oil and gas products. Based on the observed rapid subsidence, the use of County Road 201 should be proactively monitored for safety. Additionally, the effect of the ongoing subsidence on the pipelines in the area needs to be reviewed as well.

Hydrocarbon production in Pecos and the associated seismic events

The Wolfbone field 9 km south of Pecos, Texas in Reeves County (Fig. 1) has been developed for oil and gas production since 2014. Compared to other oil wells in West Texas that have produced hydrocarbons for decades, the wells (API No. 38934300, 38933302, 38933668, 38934175 in Fig. 4a) in the region are recent, with significant production exceeding 10,000 BBLs (1 BBL ≈ 0.12 m3) starting in early 2015. The drilling depth of the wells is ~4 km below the surface, and most hydrocarbons are produced from the Bone Springs and Wolfcamp formations47, which lie at depths of 2.3~3 km and 3~3.7 km, respectively.

Ground deformation in Pecos, Texas, induced by hydrocarbon production. (a) Cumulative ground deformation in a hydrocarbon production field of Pecos, Texas. (b) Time-series cumulative vertical deformation at points (L, M, N, O) of Fig. 4a. Yellow stars represent seismic events (along with magnitude and depth) occurring less than 15 km from the subsidence area between late 2014 and April 2017.
Oil production volumes (blue and orange bars) in the surrounding wells correspond to the triangles in Fig. 4a. The figure has been created using open-source software GMT 5.2.2_r15292 available at http://gmt.soest.hawaii.edu/projects/gmt/wiki/Download. The NAIP images used as a background of the figures were downloaded in 2017 through Geospatial Data Gateway https://datagateway.nrcs.usda.gov provided by USDA.

Production from the wells has been enhanced by vertical and horizontal hydraulic fracturing of the sandstone and shale formation. Approximately 4.5 cm of subsidence around the four producing wells in the Wolfbone field can be observed from our InSAR analysis (Fig. 4a), while the horizontal deformation is negligible. From the time-series measurements at multiple points (Fig. 4b), the subsidence rate was constant and relatively slow (1.5 cm/yr) from 2015 to 2016. However, the subsidence accelerated from January to March 2017, and the two-month subsidence (O in Fig. 4b) reached up to 1.5 cm (a rate of ~9 cm/yr). Following the subsidence, the surface uplifted (Fig. 4b) with a maximum magnitude of ~0.5 cm between March and April 2017. We attribute the subsidence to the hydrocarbon production, as most of the subsidence is bounded by production wells in the deep formations28 and the extent of the subsidence area is consistent with a source depth of 2–4 km (Fig. 4a). Although the monthly hydrocarbon production exhibits variations, the detected ground subsidence is relatively linear in time. We can postulate that the formations in the subsiding areas behave viscously, differently from the other observed sites of wastewater injection and CO2 flooding. The removal of a huge mass of oil from the subsurface creates stress changes in the rock/soil layers, but the ground surface responds gradually to such stress changes in a stratigraphy containing abundant viscoelastic shale formations.

Although Pecos, Texas, is located in a geologically stable continental region with no recorded seismic events before the 2010s, there have been six small earthquakes in recent years (yellow stars in Fig. 4b). The magnitude of the earthquakes varies between M 1.8 and M 2.7, and all but two events occurred less than 15 km from the subsidence area. Both the timing of the April 2015 earthquake, shortly after the start of the massive increase in oil well production rates, and the latest changes in ground surface deformation, coinciding with the five recent earthquakes in 2017, suggest a close association among ground surface deformation, oil/gas production, and seismic events, similar to those observed elsewhere3,22. The underlying mechanism connecting oil and gas production with surface subsidence is that the extraction of oil or gas from underground decreases the pore pressure in formations, which in turn increases the effective stress, which might favor the slip of existing faults. Hydraulic fracturing along the horizontal section of a well (such as well No. 38934300) could be responsible for the lateral distribution of vertical displacement in the shale oil field. Moreover, the two-year deformation can accumulate stress on the basement faults near the deforming areas. Although pre-existing faults in Pecos, Texas have not been documented, the surge in seismic events suggests that faults may exist in the bedrock.
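For reference, the effective-stress bookkeeping invoked here is, in its simplest (Terzaghi26) form and with the Biot coefficient set to 1 (a one-line reminder, not a full poroelastic treatment):

$$\sigma_{\mathrm{eff}} = \sigma_{\mathrm{total}} - P_{\mathrm{pore}}$$

so that fluid extraction (lower pore pressure) raises the effective stress and compacts the reservoir, while injection (higher pore pressure) lowers the effective stress and promotes uplift, consistent with the observations at the injection sites discussed above.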
Ground subsidence can activate the fault(s)3,16,48, and the accelerated subsidence and subsequent uplift in early 2017 can be interpreted as the co-seismic deformation and the viscoelastic relaxation during the short-term post-seismic deformation resulting from the multiple earthquakes49. The focal depth of the earthquakes ranges between 3 and 5 km, just slightly below where the hydrocarbon was extracted. Unfortunately, due to the sparse number of seismic stations in Texas, the accuracy of the seismicity locations can be on the order of 10 km50,51. However, the absence of any previously reported regional earthquakes near Pecos, Texas, the shallow focal depths around the producing zone, and the proximity of the ground deformation to the epicenters together suggest a causative link between hydrocarbon production and the sequence of earthquakes (induced seismicity).

We have compiled multiple localized geohazards in West Texas (Table 1), most of which were induced by, or at least influenced by, human activities. The correlation between time-series vertical deformation and fluid injection/hydrocarbon production exhibits evident effects of human activities on the surface, but a modeling approach can also help explain the causal relationship. The inverse modeling with the observed cumulative vertical deformation (Fig. 5a) in the box outlined by a dashed rectangle (Fig. 2a) computes the best-fit model (Fig. 5b) with the least residual (Fig. 5c; root mean square error (RMSE) misfit: 0.10 cm). The modeled result, with a rectangular (3.5 km by 2.5 km) dislocation source at a depth of 1.63 km (the known average injection depth), indicates that the peak uplift is located near the wastewater injection well (API No. 49533675). During the observation period of about 2.5 years, the two wells (API No. 49533675 and 49530150 in Fig. 2a) had injected 5,119,129 BBLs (≈610,408 m3) and 3,704,047 BBLs (≈442,623 m3) of saltwater, respectively (1,053,031 m3 in total). Our computed volume change at the source is about 790,183 ± 8,750 m3, slightly lower than the total injected volume of saltwater. The difference between the two can be attributed to the diffusion of injected saltwater into the surrounding rocks without generating any measurable deformation. The comparable volume change calculated from our model reaffirms that the observed surface uplift was induced by disposing of a massive volume of saltwater. Although modeling can shed light on revealing the causality of the ongoing ground deformation, we have to realize that models are non-unique and dependent on hydrogeological parameters of the study site. In most general cases where surface uplift and human activities are highly correlated (Fig. 2d), the comparative analysis presented in this report is sufficient for assessing the effect of human activities in West Texas.

Table 1 List of the observed ground deformation in West Texas.

Modeled results of cumulative InSAR vertical deformation around wastewater injection wells. (a) observed cumulative vertical deformation in the box outlined by a dashed rectangle (Fig. 2a). (b) modeled vertical deformation. (c) residuals (observation – model). The figure has been created using MATLAB R2017a licensed by Southern Methodist University.

Our observations in West Texas can be separated into three groups: i) surface uplift induced by fluid injection, ii) rapid subsidence in karst terrain due to dissolution of the underlying salt deposit, and iii) ground subsidence and seismicity induced by hydrocarbon production.
The first category includes two geohazards: one west of Wink, Texas and another southwest of Monahans, Texas. Although both wastewater and CO2 injection for EOR are economically efficient ways to extract oil from reservoirs, the high injection pressure used to raise hydrocarbon production and the increased fluid volume in the rocks can promote surface uplift of as much as 3~5 cm during an injection period. The close correlation between the surface uplift and the injected fluid suggests a causal link between oil-producing activities and ground instability.

The second geohazards category includes salt (and perhaps limestone) dissolution in Santa Rosa Spring, Grandfalls, and Wink, with rapid subsidence in a karst terrain showing the characteristics of a strong linearity regardless of other factors (groundwater, precipitation, temperature, hydrocarbon production). Therefore, the rapid subsidence, once promoted by the freshwater impoundment and the interaction with brine water, is not slowed down by changes in external conditions. In addition, while oil and gas production does not directly impact the subsidence, poor management of the oil and gas facilities, boreholes, and pipelines allows freshwater or brine to interact with the Salado salt (and possibly limestone) formations. The subsidence rate can readily exceed 5 cm/yr, and the subsidence could lead to a collapse at the surface (collapse sinkhole52).

Finally, the third geohazards category includes subsidence at the recently developed hydrocarbon sites in Pecos, Texas. Subsidence of ~4 cm in 2.5 years may not be significant on the ground surface, but the continuous subsidence can exert stress on the deep-seated formations and possibly reactivate undocumented faults near the producing zone. West Texas has experienced unprecedented increases of seismicity in the last 5–6 years. Earthquakes are occurring in a geologically stable region, and the temporal and spatial association with hydrocarbon production suggests that these earthquakes are induced16. Based on the accelerated subsidence in 2017, we can hypothesize that the increased number of seismic events is a consequence of the onset of massive hydrocarbon production and the resulting ground subsidence following the increase of the effective stress.

Contrary to the induced seismicity near the hydrocarbon production in Pecos, Texas, none of the other ground deformations identified in this study was followed by seismic events. The ground surface undergoes significant subsidence of up to 40 cm/yr (e.g. the Wink sinkholes), suggesting that basement faulting near the producing/deforming zone might not exist and that the rapid subsidence can be accommodated by the underpinning rock formation. Another possibility is that the seismic network in West Texas is neither dense enough nor sensitive enough to capture the micro-earthquakes occurring around the deforming areas3,9. In that case, the small-magnitude seismic events could go undetected due to the current sparse placement of seismometers in the area. Regardless of the occurrence of induced seismicity, measuring the ground deformation from space in areas where wastewater and CO2 are injected, rocks are dissolving, or massive volumes of hydrocarbons are produced is possible using satellite radar interferometry from the new free-data satellites, as we demonstrated in our research. The ground deformation in West Texas is very responsive to anthropogenic activities, with little time delay after operations are initiated.
To avoid more severe geohazards in the future, such poroelastic movements in producing formations should be carefully considered. If we do not mitigate the possible geohazards with continuous monitoring of surface deformation, we can expect one or more possible outcomes: i) damage to infrastructures (roads, railroads, levees, dams), ii) environmental impacts (e.g. ground-water pollution), iii) risks to oil and gas pipelines (note: West Texas has one of the densest networks of oil and gas pipelines in the U.S.), iv) potential threats to residents in surrounding communities, v) economic costs in hydrocarbon production (improper well management and the resulting ground deformation can lead to large spending by oil companies and governmental agencies to prevent additional damage), and vi) induced seismicity. Micro-seismicity may not result in large, drastic hazards, but the ground deformation (subsidence/uplift) itself poses a more direct threat to industrial facilities, infrastructures, and residential areas. Measuring deformation can assist stakeholders as they examine the safety of the oil and gas operations and make important decisions for securing facilities and people from potential larger catastrophic events. The Texas petroleum regulators have required the submission of historical seismic events in order for an injection/disposal well permit to be approved, but additional, continuous monitoring of the ground deformation in oil producing areas (regardless of methods, including conventional oil production, water flooding, CO2 flooding, or hydraulic fracturing) can provide crucial, detailed information for the safe operations of oil and gas production and the sustainable growth of the energy industry.

Sentinel-1A/B imagery

Sentinel-1A/B, a constellation of two Synthetic Aperture Radar (SAR) satellites operated by the European Space Agency (ESA) within the Copernicus Program, represents the first satellite radar mission providing radar imagery freely to the public. Its radar sensor, using the interferometric wide swath (IW) mode as the background mode, provides C-band (5.4 GHz center frequency; 5.6 cm wavelength) imagery with intermediate spatial resolution (20 (azimuth) x 5 (range) m) and dense temporal acquisitions, with revisits of 6 days (over Europe, or 12 days outside Europe)53,54. For our study, Sentinel-1A/B imagery from November 2014 to April 2017 was processed. To estimate the deformation in two directions (vertical and east-west), it was necessary to utilize the SAR images from the ascending (path 78) and descending (path 85) tracks (Fig. 1), which have respective heading angles (clockwise from north) of 347.23° and 192.75°, with an incidence angle of ~33.8° at the image center. Adaptive multi-look factors were applied to all used SAR images to maintain appropriate spatial resolution in diagnosing the small-sized deformation phenomena.

Detection of deformation signal and estimation of 2D deformation with InSAR

Because Sentinel-1A/B acquires data with a swath of ~250 km, processing the full-sized SAR scenes for time-series measurements can be inefficient for detecting small-sized deformation in West Texas. Moreover, the north-south orientation of our study area required merging two or more frames. Filtering was applied to the interferograms to improve InSAR coherence and to make retrieval of the localized deformations in an oil field feasible.
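To make the two-track geometry concrete before the formal system presented below, here is a minimal Python sketch (not the authors' processing chain) that forms the line-of-sight (LOS) projection coefficients of a right-looking SAR from the heading and incidence angles quoted above and solves the per-pixel 2 x 2 system for the east-west and vertical motion. The north component is neglected, as discussed below; the sample LOS displacements are hypothetical, and the sign convention (positive toward the satellite) is one common choice that mirrors the east and vertical columns of the MSBAS system below.

```python
# Minimal sketch: two-track (ascending/descending) decomposition of LOS motion.
# Convention assumed: positive LOS displacement is motion toward the satellite,
# so d_LOS = -sin(inc)*cos(heading)*d_east + cos(inc)*d_up for a right-looking SAR.
import numpy as np

def los_coeffs(heading_deg, incidence_deg):
    """Return (east, up) coefficients of the LOS projection."""
    a = np.radians(heading_deg)   # heading: azimuth of flight, clockwise from north
    i = np.radians(incidence_deg)
    return np.array([-np.sin(i) * np.cos(a), np.cos(i)])

# Geometry quoted in the text: ascending path 78, descending path 85, inc ~33.8 deg
G = np.vstack([los_coeffs(347.23, 33.8),
               los_coeffs(192.75, 33.8)])

d_los = np.array([-0.012, 0.031])   # hypothetical LOS displacements (m), asc/desc
d_east, d_up = np.linalg.solve(G, d_los)
print(f"east: {d_east*100:+.1f} cm, up: {d_up*100:+.1f} cm")
```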
To detect numerous deformation signals without losing much spatial resolution, we adopted a stepwise approach of InSAR analysis from a broad to a fine scale. In the first run, all available Sentinel-1 images were coregistered based on the SAR image acquired on the first acquisition date. The precise coregistration of Sentinel-1, to avoid discontinuous phases between bursts and to improve coherence, was critical. The process of enhanced spectral diversity (ESD) was iterated until the azimuth coregistration precision became better than 0.001 pixel55; the previously resampled SAR image closest to the newly resampled SAR image can aid rapid and more accurate coregistration. All available interferograms with maximum temporal and spatial baselines of 1 year and 200 m, respectively, were generated from the resampled SAR images and were thoroughly examined through visual inspection. Upon discovering the localized signals in our study area, we cropped the interferograms around those deformations. The adaptive multi-look filtering was then applied to each interferogram to maintain high spatial resolution (close to the original resolution of Sentinel-1A/B).

After removing topographic signatures from the interferograms and completing phase unwrapping, we employed the multi-dimensional small baseline subset (MSBAS) method56,57,58 to estimate the vertical and the horizontal (east-west) deformation from the Sentinel-1A/B data acquired with the two different radar geometries of the ascending and descending tracks59. Because most SAR sensors adopt a near-polar orbit and a single (right) look direction, the deformation in the north-south direction cannot be resolved without multi-aperture interferometry, along-track interferometry, or offset tracking, which are not suitable for mapping small-sized signals. The governing matrix equation for calculating the 2D time-series deformation from multiple tracks is:

$$\begin{pmatrix} -\frac{4\pi}{\lambda}\cos\theta\sin\varphi\,A & \frac{4\pi}{\lambda}\cos\varphi\,A & -\frac{4\pi}{\lambda}\frac{1}{R\sin\varphi}B_{p} \\ & \beta I & \end{pmatrix} \begin{pmatrix} V_{E} \\ V_{v} \\ \Delta h \end{pmatrix} = \begin{pmatrix} \Phi \\ 0 \end{pmatrix} \qquad (1)$$

where R, λ, θ, and φ are the slant range from the satellite to the target (unit: m), the radar wavelength (~0.056 m), the azimuth angle, and the incidence angle, respectively.
When $M_k$ and $N_k$ are the numbers of interferograms and SAR acquisition dates from the kth SAR dataset, respectively (assuming that we have K SAR sets; here K = 2 because we used the ascending and descending tracks), A (unit: time; dimension: $\sum_{k=1}^{K}M_{k}\times(\sum_{k=1}^{K}N_{k}-1)$) is a matrix constructed from the time intervals between consecutive SAR acquisitions, β is a regularization parameter, I (dimension: $(2(\sum_{k=1}^{K}N_{k}-1)+1)\times(2(\sum_{k=1}^{K}N_{k}-1)+1)$) is an identity matrix, $V_E$ and $V_v$ (each of dimension $(\sum_{k=1}^{K}N_{k}-1)\times 1$) are the east-west and vertical components (unit: m/time) of the ground deformation rate vector during each time interval, $B_p$ (unit: m; dimension: $\sum_{k=1}^{K}M_{k}\times 1$) is the perpendicular baseline, Δh is the topography error (unit: m; not significant in a flat region), Φ (dimension: $\sum_{k=1}^{K}M_{k}\times 1$) is the observed (unwrapped) interferometric phase (unit: radian), and 0 is a zero vector of dimension $(2(\sum_{k=1}^{K}N_{k}-1)+1)\times 1$19,56. Thus, the left matrix, the unknown vector, and the right vector in (1) have dimensions of $(\sum_{k=1}^{K}M_{k}+2(\sum_{k=1}^{K}N_{k}-1)+1)\times(2(\sum_{k=1}^{K}N_{k}-1)+1)$, $(2(\sum_{k=1}^{K}N_{k}-1)+1)\times 1$, and $(\sum_{k=1}^{K}M_{k}+2(\sum_{k=1}^{K}N_{k}-1)+1)\times 1$, respectively.

The unknown parameters ($V_E$, $V_v$) were calculated by solving the system (1) via singular value decomposition (SVD) with minimum-norm constraints56. All used InSAR pairs were connected to each other (full-rank matrix A) due to the high coherence in our study area, meaning that we have more observations than unknowns ($V_E$, $V_v$, Δh). Atmospheric artifacts were not removed separately, because the multi-temporal InSAR analysis uses only less-contaminated interferograms, which limits the effects of those noises, and because small areas ~200–300 m in dimension are less influenced by large atmospheric variations. Due to the extreme summer heat in West Texas, the distribution of water vapor in the atmosphere can still be problematic, particularly for July and August scenes. However, the spatio-temporal filtering applied to the time-series measurements for reducing the effects of water vapor and residual noise worked well, allowing us to successfully mitigate the influence of those noise and error sources.

However, when the gradient of deformation was large, exceeding 5~10 cm/yr, the interferograms spanning 24 days or more could not maintain coherence. In that case, only the interferograms with 6- or 12-day temporal baselines were used to estimate such a high deformation rate (i.e. the rapid subsidence near the Wink sinkholes and Imperial, Texas). In West Texas, before mid-2016, most interferograms had 24-day intervals, and most of them were not suitable over the rapidly deforming regions. Therefore, for large-gradient deformation, the limited number of interferograms with short temporal baselines were used by applying a stacking method60 that is particularly useful for reducing temporally-uncorrelated signals (atmospheric artifacts, noise) and computing the deformation rate (cm/yr).
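A minimal Python sketch of this stacking step follows (not the code used in this study); the synthetic unwrapped phases are hypothetical, and the sign convention (positive phase for increasing range) is one common choice.

```python
# Minimal sketch: stack short-baseline unwrapped interferograms into a rate map.
# Summing phases and dividing by the summed time spans averages down
# temporally-uncorrelated noise (atmosphere, decorrelation residue).
import numpy as np

WAVELENGTH_M = 0.056  # Sentinel-1 C-band wavelength quoted in the text

def stacked_rate_cm_per_yr(unw_phases, spans_days):
    """Mean LOS deformation rate (cm/yr) from unwrapped phase maps (radians)."""
    phase_sum = np.sum(unw_phases, axis=0)              # radians
    years = np.sum(spans_days) / 365.25
    los_m = -phase_sum * WAVELENGTH_M / (4.0 * np.pi)   # phase -> meters along LOS
    return los_m / years * 100.0                        # cm/yr

# Hypothetical stack of five 6- or 12-day pairs over a 100 x 100 pixel crop
rng = np.random.default_rng(1)
pairs = [rng.normal(0.0, 0.5, (100, 100)) for _ in range(5)]
rate_map = stacked_rate_cm_per_yr(pairs, [6.0, 12.0, 12.0, 6.0, 12.0])
print(rate_map.shape, float(rate_map.mean()))
```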
The peak subsidence near the Wink sinkholes could not be observed in any 24-day Sentinel-1A/B interferogram owing to loss of coherence, but the stacking method applied to 6- and 12-day interferograms allowed us to locate and calculate the maximum subsidence rate, at the intersection of County Roads 201 and 204.

Hydrocarbon production and injection volumes

To characterize the ground deformation in West Texas, we performed a comparative analysis with information on hydrocarbon production and injection volumes. Records relating to oil/gas production and wastewater injection were collected from the Texas RRC, the regulatory authority responsible for the petroleum industry and pipeline safety in Texas. DrillinginfoTM provided additional information on the geological formations at the locations of rapid subsidence. Through injection wells, wastewater (generally saltwater) or carbon dioxide (CO2) can be injected into an underground oil-producing unit to boost oil and gas production. Most wells used for hydrocarbon production yield oil and natural gas together, but the natural gas, often called casinghead gas, is regarded as a byproduct of these oil wells. Therefore, only the oil production of the wells was considered in our analysis.

Modeling surface uplift due to wastewater injection

We modeled the cumulative surface uplift to estimate the volume change in the subsurface and to assess the relationship between the ground deformation and human activities (here, wastewater injection). We used the Okada formulation [61] for motions in a homogeneous elastic half-space, because the ground deformation presented in our study shows a highly elastic response to the stress change. The source consists of a planar array of opening cracks at a fixed depth corresponding to the wastewater injection. First, we subsampled the cumulative vertical deformation using a quadtree downsampling algorithm [62] to reduce the computational burden in modeling while preserving the statistically significant part of the deformation signal [63]. Candidate models were searched over a grid, and the best-fitting parameters were obtained by minimizing the root-mean-square (RMS) misfit of the residuals (the observations minus the model) [64,65].
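As an illustration of the downsampling step, the sketch below implements a basic variance-threshold quadtree on a synthetic deformation map. It is a generic stand-in for the algorithm of [62], with made-up grid size, tolerance, and signal; the actual study operated on the real cumulative vertical deformation.

```python
# Minimal sketch of quadtree downsampling: recursively split the deformation
# map until the variance within each block falls below a threshold, then
# represent each block by its mean value.
import numpy as np

def quadtree(z, i0, j0, n, var_tol, out):
    block = z[i0:i0 + n, j0:j0 + n]
    if n <= 2 or np.nanvar(block) <= var_tol:
        out.append((i0 + n / 2, j0 + n / 2, float(np.nanmean(block))))
        return
    h = n // 2
    for di, dj in [(0, 0), (0, h), (h, 0), (h, h)]:
        quadtree(z, i0 + di, j0 + dj, h, var_tol, out)

# synthetic "cumulative vertical deformation" with a localized subsidence bowl
n = 128
y, x = np.mgrid[:n, :n]
z = -5.0 * np.exp(-((x - 40) ** 2 + (y - 70) ** 2) / 150.0)   # cm

samples = []
quadtree(z, 0, 0, n, var_tol=0.01, out=samples)
print(f"{n * n} pixels reduced to {len(samples)} samples")
```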
References

Hyndman, D. & Hyndman, D. Natural Hazards and Disasters (Brooks Cole, 2016).
Yuan, H. & Romanowicz, B. Lithospheric layering in the North American craton. Nature 466, 1063–1069 (2010).
McGarr, A. et al. Coping with earthquakes induced by fluid injection. Science 347, 830–831 (2015).
Qu, F. et al. Mapping ground deformation over Houston-Galveston, Texas using multi-temporal InSAR. Remote Sens. Environ. 169, 290–306 (2015).
Buckley, S. M., Rosen, P. A., Hensley, S. & Tapley, B. D. Land subsidence in Houston, Texas, measured by radar interferometry and constrained by extensometers. J. Geophys. Res. Solid Earth 108, B11 (2003).
Jankowski, K. L., Törnqvist, T. E. & Fernandes, A. M. Vulnerability of Louisiana's coastal wetlands to present-day rates of relative sea-level rise. Nat. Commun. 8, 14792, https://doi.org/10.1038/ncomms14792 (2017).
Bourne, J. Louisiana's vanishing wetlands: Going, going... Science 289, 1860–1863 (2000).
Blum, M. D. & Roberts, H. H. Drowning of the Mississippi Delta due to insufficient sediment supply and global sea-level rise. Nat. Geosci. 2, 488–491 (2009).
McGarr, A., Simpson, D. & Seeber, L. Case histories of induced and triggered seismicity. In International Handbook of Earthquake and Engineering Seismology 81A, 647–661 (2002).
Galloway, D. L. et al. Detection of aquifer system compaction and land subsidence using interferometric synthetic aperture radar, Antelope Valley, Mojave Desert, California. Water Resour. Res. 34, 2573–2585 (1998).
Amelung, F., Galloway, D. L., Bell, J. W., Zebker, H. A. & Laczniak, R. K. Sensing the ups and downs of Las Vegas: InSAR reveals structural control of land subsidence and aquifer-system deformation. Geology 27, 483–486 (1999).
Yang, Q. et al. InSAR monitoring of ground deformation due to CO2 injection at an enhanced oil recovery site, West Texas. Int. J. Greenh. Gas Control 41, 20–28 (2015).
Sellards, E. H., Adkins, W. S. & Plummer, F. B. The geology of Texas: Volume I, Stratigraphy. The University of Texas Bulletin 3232 (1932).
Johnson, K. S. Dissolution of salt on the east flank of the Permian Basin in the southwestern USA. J. Hydrol. 54, 75–93 (1981).
Hornbach, M. J. et al. Causal factors for seismicity near Azle, Texas. Nat. Commun. 6, 6728, https://doi.org/10.1038/ncomms7728 (2015).
Frohlich, C. et al. A historical review of induced earthquakes in Texas. Seismol. Res. Lett. 87, 1–17 (2016).
Johnson, K. S. Subsidence hazards due to evaporite dissolution in the United States. Environ. Geol. 48, 395–409 (2005).
Paine, J. G., Buckley, S. M., Collins, E. W. & Wilson, C. R. Assessing collapse risk in evaporite sinkhole-prone areas using microgravimetry and radar interferometry. J. Environ. Eng. Geophys. 17, 75–87 (2012).
Kim, J.-W., Lu, Z. & Degrandpre, K. Ongoing deformation of sinkholes in Wink, Texas, observed by time-series Sentinel-1A SAR interferometry (preliminary results). Remote Sens. 8, 313 (2016).
Fielding, E. J., Blom, R. G. & Goldstein, R. M. Rapid subsidence over oil fields measured by SAR interferometry. Geophys. Res. Lett. 25, 3215–3218 (1998).
Ketelaar, V. B. H. Subsidence due to hydrocarbon production in the Netherlands. In Satellite Radar Interferometry: Subsidence Monitoring Techniques (Springer, 2009).
Ellsworth, W. L. Injection-induced earthquakes. Science 341, https://doi.org/10.1126/science.1225942 (2013).
Lu, Z. & Dzurisin, D. InSAR Imaging of Aleutian Volcanoes: Monitoring a Volcanic Arc from Space (Springer, 2014).
Hornbach, M. J. et al. Ellenburger wastewater injection and seismicity in North Texas. Phys. Earth Planet. Inter. 261, 54–68 (2016).
Dutton, S. P., Flanders, W. A. & Barton, M. D. Reservoir characterization of a Permian deep-water sandstone, East Ford field, Delaware basin, Texas. AAPG Bull. 87, 609–627 (2003).
Terzaghi, K. Theoretical Soil Mechanics (John Wiley and Sons, 1943).
Teatini, P., Gambolati, G., Ferronato, M., Settari, A. T. & Walters, D. Land uplift due to subsurface fluid injection. J. Geodyn. 51, 1–16 (2011).
Zoback, M. D. Reservoir Geomechanics (Cambridge University Press, 2010).
Chang, C., Mallman, E. & Zoback, M. Time-dependent subsidence associated with drainage-induced compaction in Gulf of Mexico shales bounding a severely depleted gas reservoir. AAPG Bull. 98, 1145–1159 (2014).
Perera, M. S. A. et al. A review of CO2-enhanced oil recovery with a simulated sensitivity analysis. Energies 9, 481 (2016).
Blunt, M., Fayers, F. J. & Orr, F. M. Jr. Carbon dioxide in enhanced oil recovery. Energy Convers. Manage. 34, 1197–1204 (1993).
Alvarado, V. & Manrique, E. Enhanced oil recovery: An update review. Energies 3, 1529–1575 (2010).
Shelton, J. L. et al. Determining CO2 storage potential during miscible CO2 enhanced oil recovery: Noble gas and stable isotope tracers. Int. J. Greenh. Gas Control 51, 239–253 (2016).
Winzinger, R. et al. Design of a major CO2 flood, North Ward Estes Field, Ward County, Texas. SPE Reservoir Eng. 6, 11–16 (1991).
Vasco, D. W. et al. Satellite-based measurements of surface deformation reveal fluid flow associated with the geological storage of carbon dioxide. Geophys. Res. Lett. 37, L03303 (2010).
Samsonov, S., Czarnogorska, M. & White, D. Satellite interferometry for high-precision detection of ground deformation at a carbon dioxide storage site. Int. J. Greenh. Gas Control 42, 188–199 (2015).
Brune, G. Major and historical springs of Texas. Texas Water Development Board Report 189 (1975).
Waltham, T., Bell, F. & Culshaw, M. Sinkholes and Subsidence (Springer, 2005).
Warren, J. K. Evaporites (Springer, 2016).
Small, T. A. & Ozuna, G. Ground-water conditions in Pecos County, Texas, 1987. U.S. Geological Survey Water-Resources Investigations Report 92–4190 (1993).
Malewitz, J. Abandoned Texas oil wells seen as "ticking time bombs" of contamination. The Texas Tribune, https://www.texastribune.org/2016/12/21/texas-abandoned-oil-wells-seen-ticking-time-bombs-/ (2016).
Malewitz, J. In West Texas, abandoned well sinks land, sucks tax dollars. The Texas Tribune, https://www.texastribune.org/2017/01/22/west-texas-abandoned-well-sinks-land-sucks-tax-dol/ (2017).
Jensen, R., Hatler, W., Mecke, M. & Hart, C. The influences of human activities on the waters of the Pecos Basin of Texas: A brief overview. Texas Water Resources Institute Report SR-2006-03 (2006).
Miyamoto, S., Anand, S. & Hatler, W. Hydrology, salinity, and salinity control possibilities of the Middle Pecos River: A reconnaissance report. Texas Water Resources Institute Report TR-2008-315 (2008).
Davies, R. J. et al. Oil and gas wells and their integrity: Implications for shale and unconventional resource exploitation. Mar. Pet. Geol. 56, 239–254 (2014).
Gaswirth, S. B. Assessment of continuous oil resources in the Wolfcamp shale of the Midland Basin, Permian Basin Province, Texas, 2016. U.S. Geological Survey Open-File Report 2017–1013 (2017).
Davies, R., Foulger, G., Bindley, A. & Styles, P. Induced seismicity and hydraulic fracturing for the recovery of hydrocarbons. Mar. Petrol. Geol. 45, 171–185 (2013).
Sun, T. & Wang, K. Viscoelastic relaxation following subduction earthquakes and its effects on afterslip determination. J. Geophys. Res. Solid Earth 120, 1329–1344 (2015).
National Research Council. Induced Seismicity Potential in Energy Technologies (The National Academies Press, 2013).
Frohlich, C., Hayward, C., Stump, B. & Potter, E. The Dallas-Fort Worth earthquake sequence: October 2008 through May 2009. Bull. Seismol. Soc. Am. 101, 327–340 (2011).
Gutiérrez, F., Parise, M., De Waele, J. & Jourde, H. A review on natural and human-induced geohazards and impacts in karst. Earth Sci. Rev. 138, 61–88 (2014).
De Zan, F. & Guarnieri, A. M. TOPSAR: Terrain observation by progressive scans. IEEE Trans. Geosci. Remote Sens. 44, 2352–2360 (2006).
European Space Agency. Sentinel-1 User Handbook (European Space Agency, 2013).
Prats-Iraola, P., Scheiber, R., Marotti, L., Wollstadt, S. & Reigber, A. TOPS interferometry with TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 50, 3179–3188 (2012).
Berardino, P., Fornaro, G., Lanari, R. & Sansosti, E. A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms. IEEE Trans. Geosci. Remote Sens. 40, 2375–2383 (2002).
Samsonov, S. & d'Oreye, N. Multidimensional time-series analysis of ground deformation from multiple InSAR data sets applied to Virunga volcanic province. Geophys. J. Int. 191, 1095–1108 (2012).
Hooper, A. A multi-temporal InSAR method incorporating both persistent scatterer and small baseline approaches. Geophys. Res. Lett. 35 (2008).
Wright, T. J., Parsons, B. E. & Lu, Z. Toward mapping surface deformation in three dimensions using InSAR. Geophys. Res. Lett. 31 (2004).
Sandwell, D. T. & Price, E. J. Phase gradient approach to stacking interferograms. J. Geophys. Res. Solid Earth 103, 30183–30204 (1998).
Okada, Y. Surface deformation due to shear and tensile faults in a half-space. Bull. Seismol. Soc. Am. 75, 1135–1154 (1985).
Jónsson, S., Zebker, H., Segall, P. & Amelung, F. Fault slip distribution of the 1999 Mw 7.1 Hector Mine, California, earthquake, estimated from satellite radar and GPS measurements. Bull. Seismol. Soc. Am. 92, 1377–1389 (2002).
Samsonov, S. V., González, P. J., Tiampo, K. F. & d'Oreye, N. Modeling of fast ground subsidence observed in southern Saskatchewan (Canada) during 2008–2011. Nat. Hazards Earth Syst. Sci. 14, 247–257 (2014).
Lu, Z. & Wicks, C. Characterizing the 6 August 2007 Crandall Canyon mine collapse from ALOS PALSAR InSAR. Geomat. Nat. Haz. Risk 1, 85–93 (2010).
Fialko, Y. Probing the mechanical properties of seismically active crust with space geodesy: Study of the coseismic deformation due to the 1992 Mw 7.3 Landers (southern California) earthquake. J. Geophys. Res. Solid Earth 109 (2004).

Acknowledgements

All Sentinel data were provided by the European Space Agency (ESA)'s Copernicus Programme via the Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) and processed through high-performance computing utilizing SMU's supercomputer (ManeFrame). This research was financially supported by the NASA Earth Surface and Interior Program (NNX16AL10G) and the Shuler-Foscue Endowment at Southern Methodist University. Comments from Robert Gregory and Cathy Chickering Pace, three anonymous reviewers, and editorial board members improved the manuscript. Most geocoded images were drawn with Generic Mapping Tools (GMT) 5.2.2_r15292, available at http://gmt.soest.hawaii.edu/projects/gmt/wiki/Download. The National Agriculture Imagery Program (NAIP) imagery used as a background in the figures depicting ground deformation was accessed via the Geospatial Data Gateway (https://datagateway.nrcs.usda.gov) provided by the United States Department of Agriculture (USDA).

Author information

Jin-Woo Kim & Zhong Lu, Roy M. Huffington Dept. of Earth Sciences, Southern Methodist University, Dallas, Texas, USA. Z.L. and J.-W.K. designed the project and experiments, and J.-W.K. collected and processed all the SAR datasets. J.-W.K. and Z.L. validated and analyzed the results, and wrote the manuscript. J.-W.K. prepared all figures and Z.L. provided guidance for improving the figures. Both authors discussed the results and reviewed the manuscript. Correspondence to Zhong Lu.

Kim, J.-W. & Lu, Z. Association between localized geohazards in West Texas and human activities, recognized by Sentinel-1A/B satellite radar imagery. Sci. Rep. 8, 4727 (2018). https://doi.org/10.1038/s41598-018-23143-6
Damjan Hann
Submitted: 22 Jun 2021
DOI: https://doi.org/10.2478/rmzmag-2021-0002
© 2021 Li et al., published by Sciendo. This work is licensed under the Creative Commons Attribution 4.0 International License.

In the past, prices for various non-ferrous metals were somewhat lower than they are today, and this is the main reason why the mining industry left significant amounts of those metals in tailings dams around the world. The prices of non-ferrous metals simply did not justify much investment in mineral processing and ore beneficiation. The second, equally important reason for the low efficiency of ore processing was the equipment, which was not as advanced as it is today. Copper is considered one of the first metals ever mined and used by mankind, and it has contributed significantly to the improvement of society since the beginning of civilization [1]. More specifically, copper is a metal that has been in use for about 10,000 years [2], and there is no indication that it will be replaced by a similar material in the near future. The price of copper has recently peaked as concerns about supply disruptions and strong demand have raised expectations of a tight market. This key energy-transition metal, which is also widely used in construction, is trading at around $10,000 a ton, its highest level since July 2011, when copper traded at about the same price. Analysts at Goldman Sachs expect copper prices to reach $15,000 per ton in 2025, driven by high demand, which is forecast to grow by almost 600% by 2030. The projected demand is likely to lead to a supply deficit and higher copper prices, which in turn are likely to trigger new investment cycles [3]. Notwithstanding the fact that copper has been used by humans for 10,000 years, as shown in Figure 2, a negligible amount of copper was mined before 1900. About half of the total amount of copper was produced by 1998, while the other half was mined in the last 22 years. Total copper production from 1900 until the end of 2020 was about 740 million tons [5, 6].

Figure 1: Copper price on the London Metal Exchange since April 2020 (USD/t) [4].
Figure 2: Total copper mine production worldwide from 1900 to 2020 (adapted from [5, 6]).

In a situation in which global copper demand is growing and copper prices are steadily increasing, reprocessing copper tailings is becoming an increasingly logical decision [7, 8, 9]. Most of the copper tailings deposited in the last century contain a sufficiently high share of valuable components that they can be economically exploited and considered a potential future resource [10]. Nowadays, the development of new technologies allows the exploitation of copper that in the past was disposed of together with the tailings due to inefficient extraction processes. Another factor that enables the exploitation of copper tailings is the high price of copper on the world market; in the past, tailings now considered a valuable source of metals were treated as waste because of the low copper price. In the last century, the copper content of some tailings was as high as 0.75%, which means that these historic tailings may have a higher grade than current deposits, which mostly contain 0.2%–0.8% copper. However, as each individual deposit is at least a little different from all the others, and consequently so is its tailings dam, it is impossible to predict in advance the optimal procedure for reprocessing [7].
A significant and, above all, positive side effect of tailings reprocessing is the protection of the environment by eliminating, or at least reducing, exposure to hazardous substances. The economic balance of tailings reprocessing can also be improved through environmental protection if a government or community is interested in rehabilitating an area degraded by tailings and is willing to allocate funds to such a project. If heavy metals and other potentially harmful substances are removed from the material, the sale of the remaining material for everyday bulk applications can also be considered. Since the material in the tailings dams is already accumulated in large quantities and has already undergone mining and certain mineral processing steps, such as crushing and grinding of the original ore, the costs are expected to be lower than those associated with extracting metals from primary deposits. In the exploitation of deposits, mining represents the highest cost [10]; together with crushing and grinding, which dominate the cost structure of a 100,000 t/day copper concentrator (Table 1), these operations represent the largest part of all costs.

Table 1. Approximate relative costs of a 100,000 t/day copper concentrator [10].

Item                  Cost (%)
Crushing              2.8
Grinding              47.0
Flotation             16.2
Thickening            3.5
Filtration            2.8
Tailings              5.1
Reagents              0.5
Pipeline              1.4
Laboratory            1.5
Maintenance support   0.8
Management support    1.6
Administration        0.6
Other expenses        8.1

Owing to the many advantages of tailings retreatment, several tailings retreatment plants are already in operation around the world [10], and many more are expected to start producing copper in the near future. If metal minerals are to be extracted from ore or tailings, there are several chemical options. Minerals can be subjected to pyrometallurgy, which involves exposure to high heat; hydrometallurgy, which requires solvents; or electrometallurgy, which uses high electrical power, although combinations of different processes are also possible. By far the most common of these processes is pyrometallurgy, or more specifically smelting. All of these processes are quite energy intensive, which is the main reason why smelting is not carried out until the copper ore has been concentrated as far as possible by mineral processing procedures, e.g., up to 25%. This also reduces transportation costs. Physical processes for separating valuable and gangue minerals are the exact opposite of chemical processes in terms of energy input [10]. This article presents various technological options for the exploitation of tailings that, at current copper prices, are expected to be rich enough in copper. It also reports the results of feasibility tests on the economic extraction of copper from tailings and gives an estimate of the amount of copper that could be extracted from tailings worldwide. The idea for the research was triggered by the rising prices of non-ferrous metals, which have made tailings reprocessing interesting. Large copper mines operating in the last century with the technology available at the time could not be as effective as today's operations, so the copper content in their tailings is likely to be high enough to be seen as a promising opportunity. Any feasibility testing, or in other words demonstration of the viability of tailings, should start with samples collected in the field. This should be followed by granulometric analysis and compositional analysis, for example XRF (X-ray fluorescence) analysis.
Finally, enrichment of the copper content using appropriate equipment, such as a multi-gravity separator or flotation cell, and redetermination of the composition of the tailings should follow. In copper mining, the concentration of ore is usually accomplished by a process called froth flotation [11], in which the valuable substance floats to the surface with the aid of air bubbles. Unwanted material, called gangue, sinks to the bottom and is removed. The concentration process can also be carried out using a variety of different techniques and technologies, including gravity-based methods, leaching, dense liquid separation, etc. In the past, many underground and open-pit copper mines were in operation in Europe, so several sites with tailings dams are available for research purposes [12, 13]. Sampling was carried out at a tailings dam that had been created and filled in the second half of the previous century. Owing to the period in which the tailings dam was formed, the copper content of the original ore, and the technology available at the time of copper extraction, the samples collected represent typical material suitable for copper tailings reprocessing. An excavator was used to obtain representative samples [14]. As is characteristic of most bulk materials exposed to the elements over an extended period of time, oxidation occurred. Excavation revealed that the material layers at depths of about 1.5 meters and below are grey in colour and are not oxidised. Within the layers near the surface, the proportion of material that has been exposed to weathering increases, and there is a gradual transition in the colour of the tailings from grey to brown. In some cases, intact material occurs at depths as shallow as 0.5 meters. Samples were taken from both layers, and a total of almost one tonne of material was transported to the laboratory for examination. The copper content of the samples was determined by X-ray fluorescence (XRF), a technique that determines the presence of individual elements in a sample without damaging it. Samples of the upper, oxidised material and of the bottom, unoxidised material were analysed separately for element content. It was found that the copper content in the samples taken from the upper part of the tailings was much lower than in the unoxidised samples, for which an average copper content of 0.16% was measured. As reported in [7], there is a reprocessing plant in Chile that extracts copper from tailings grading 0.12% to 0.27% Cu. The oxidised samples were consequently identified as not promising for reprocessing. The lower copper content in the upper layers is due not only to the weathering of the particles, but also to the continued downward movement of the heavier grains with water currents. Grains with a higher density, such as grains with some copper content (the density of copper is about 9 g/cm3), definitely belong to this group.

Granulometric analysis

Since the applicability of each method in mineral processing is highly dependent on particle size, granulometric analysis was performed. Sieve analysis was carried out using seven different standard laboratory sieves [15] with aperture sizes of 32, 71, 125, 250, 500, 1,000 and 2,000 µm [14]. Before sieving, homogenisation, sampling [16] and drying of an appropriate amount of the tested material were performed. Table 2 shows the results of the sieving analysis.
The results, and their graphical representation in Figure 3, show that the parameter d50 (median particle size) is about 200 µm, while d90 is in the range of 450 to 500 µm.

Table 2. Results of the sieve test.

Aperture size (µm)   Size interval (µm)   Sieve mass (g)   Mass fraction (%)   Cumulative pass (%)   Cumulative oversize (%)
2,000                +2,000               13               2.81                97.19                 2.81
1,000                1,000–2,000          4                0.86                96.33                 3.67
500                  500–1,000            11               2.38                93.95                 6.05
250                  250–500              143              30.89               63.06                 36.94
125                  125–250              193              41.68               21.38                 78.62
71                   71–125               54               11.66               9.72                  90.28
32                   32–71                24               5.18                4.54                  95.46
0                    0–32                 21               4.54                0.00                  100.00

Figure 3: Graphical interpretation of the sieve test.

Multi-gravity separator

Multi-gravity separators are separation systems that can be useful in applications such as upgrading industrial minerals, recovering precious metals, and recovering metal minerals such as copper minerals. They have proved their usefulness by enabling the production of high-grade concentrates at high recovery from low-grade tailings [10, 14, 17, 18]. As long as there is a sufficient difference in the specific gravity of the grains, this machine can separate a particular mineral or group of heavy minerals from a low-density gangue within a liquid suspension. For effective separation, there should be a difference of at least one SG (specific gravity) unit between the heavy and light particles. Nowadays, machines ranging from laboratory versions to full-scale industrial units with an ore processing capacity of 5 t/h are available on the market, and they can treat even ultrafine particles down to a size of 1 µm. The invention and further development of the multi-gravity separator, also known as an enhanced gravity separator, was inspired by the conventional shaking table. The physical basis and operating principle of the two concentrators are quite similar, but in the multi-gravity separator centrifugal forces are used to increase the efficiency of separation of fine and ultra-fine particles. The centrifugal force acting on the grains is, in absolute value, much higher than the gravitational force exploited by a conventional shaking table; therefore, separation is easier to achieve. The forces acting on the particles can reach 15 g and more. Separately prepared and constantly stirred homogeneous feed slurry enters at the centre of the rapidly rotating conical drum. As shown in Figure 4, the drum rotates clockwise whilst being shaken by cyclic oscillation in the horizontal direction at 4–6 cps. After a few turns, the entering slurry is spread evenly over the entire inner surface of the drum. The light fractions collect in the thin flowing film of water, which continuously carries these particles toward the far end of the drum. Particles with higher density are subjected to the high centrifugal forces (the so-called enhanced gravity) and to shear forces due to the shaking; consequently, they are squeezed through the pulp film and end up pressed against the drum. The further movement of the heavy particles is controlled by a scraper system: rotating scrapers direct the heavy grains within the semi-solid layer to the concentrate outlet, which lies at the opposite end from where the light fraction is discharged. The scrapers rotate clockwise like the drum but at a slightly higher rpm (revolutions per minute).

Figure 4: Multi-gravity separator – MGS (adapted from [10, 14, 18]).

Small quantities of wash water are added through the wash water inlet, which is located close to the concentrate outlet.
The purpose of the wash water is to create a water film capable of carrying the entrained light fraction out to the tailings outlet and of rinsing the dense grains just before they are discharged from the concentrator. The extent to which useful substance is lost to tailings during ore processing determines whether or not a deposit can be economically exploited. The proportion of losses depends partly on the natural characteristics of the ore deposit and partly on the technology used for extraction; more specifically, the losses depend, on the one hand, on the spatial distribution of the minerals in the ore and on the mineralogy itself, and, on the other hand, on the efficiency of the concentration [10]. As with other methods, technological progress in flotation brings new opportunities for the reprocessing of low-grade copper tailings. Flotation is a process in which a useful substance is separated using chemical reagents in a three-phase system (solid, liquid and gas) by attaching the grains to air bubbles to obtain a froth of concentrate, while the tailings sink to the bottom. The main problem with the flotation of tailings is the fact that the material has already been processed (crushed and milled), so that the majority of particles are smaller than 100 or even 50 microns. Many different problems can arise in the flotation of such fine fractions, and the smaller the particles, the greater the problems. A particle of 1 mm3 milled down to particles of 1 µm3 yields a billion (10^9) particles, which means a huge increase in the number of bubbles required; in this case, the probability of each particle becoming attached to an air bubble is small. Another problem is the huge increase in the amount of reagents required due to the increase in specific surface area [7]. Therefore, if the particles in the tailings are very small, a method other than froth flotation, based on a different physical principle, should be used. Since the multi-gravity separator (MGS) has many adjustable parameters, it is quite a challenge to find an optimal combination for a quality separation. The operating variables of the MGS are:

shake amplitude
shake frequency
drum rotational speed
drum angle of inclination
wash water flow rate
feed percent solids

During the tests, the drum rotational speed, solids concentration, and wash water flow rate were adjusted while the other parameters remained constant. After each individual test, which lasted approximately five minutes, a small amount of concentrate was taken, and the resulting series of samples was heated to dryness. This was followed by elemental composition measurements, which showed that the combination of adjustable parameters plays a significant role when the MGS is used for separation. The upgrade factor, which is the ratio of the copper grade in the concentrate to the copper grade in the feed material, can be as low as nearly 1; however, when optimal conditions were reached, the concentration of copper was several times higher than in the feed material [14]. Tests with the MGS gave solid results, but to obtain even better ones, processing with the multi-gravity separator would have to be combined with another physical process. Since today's knowledge of flotation far exceeds what was known about this process decades ago, flotation could be used especially for the coarser fractions.
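As a quick cross-check of the granulometric results, the d50 and d90 values quoted above can be recovered from the cumulative passing data of Table 2; the sketch below interpolates the percent-passing curve against log particle size (the interpolation scheme is our assumption, not the procedure used in [14]):

```python
# Cross-check of d50/d90 from the cumulative passing curve (Table 2).
# Log-linear interpolation between sieve sizes is an assumption of this sketch.
import numpy as np

sizes = np.array([32, 71, 125, 250, 500, 1000, 2000])                  # aperture (µm)
cum_pass = np.array([4.54, 9.72, 21.38, 63.06, 93.95, 96.33, 97.19])   # % passing

def d_percentile(p):
    """Particle size at which p percent of the mass passes."""
    return np.exp(np.interp(p, cum_pass, np.log(sizes)))

print(f"d50 ≈ {d_percentile(50):.0f} µm")   # ~200 µm, as reported
print(f"d90 ≈ {d_percentile(90):.0f} µm")   # ~450-500 µm, as reported
```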
Figure 5 shows the cumulative size distribution of the material from the copper tailings reprocessing plant in Chile, which has a d90 of 270 μm and was successfully processed in the flotation circuit [7]. Since the material considered in this work is quite similar in size to that processed in Chile, it would be useful to perform an additional test with flotation cells; moreover, a batch process can be used.

Figure 5: Cumulative size distribution of feed material from a copper tailings reprocessing plant (adapted from [7]).

Combining all the information obtained from the experiments with the information provided by other researchers on copper reprocessing, it can be concluded that the process line shown in Figure 6 would be successful for the tested material.

Figure 6: Process line for copper tailings reprocessing.

Taking into account the quantities of copper produced with low efficiency during the last century, the amount of material accessible in tailings dams, the oxidation of tailings, the efficiency of copper reprocessing and the average copper content of tailings, it is possible to give an estimate [Equation (1)] of the amount of copper that can be recovered from tailings worldwide:

P = TP × a × b × c × d × e × f (1)

P = 737 million tons × 0.5 × 0.33 × 0.8 × 0.8 × 0.75 × 0.97 ≈ 57 million tonnes

where the following abbreviations are used:

P: global potential of mining tailings for copper production
TP: total global historical production of copper
a: share of low-efficiency copper production, characteristic of the last century
b: ratio of the copper content in tailings to the copper content in the original ore
c: share of tailings dams accessible for reprocessing
d: share of intact (non-oxidised) tailings
e: efficiency of copper reprocessing (share of copper recovered from tailings)
f: share of tailings not yet reprocessed

Given the current world copper demand (about 20 million tons per year), this potential corresponds to roughly three years of copper production from primary deposits.

Despite the fact that the extraction and use of copper has a very long history, most copper has been mined in the 20th and 21st centuries. Copper consumption is increasing over the years, and the price of this commodity is growing accordingly. The extraction of copper accumulated in tailings dams around the world will be an increasing part of the mosaic of copper supply in the future. At today's copper prices and with today's processing technologies, reprocessing old copper tailings is generally economically viable and cheaper than recovering copper from primary deposits; tailings reprocessing can tolerate a slightly lower copper content, as there are no costs for excavation, crushing, or grinding. Tests on samples of tailings have proved the usefulness of the MGS in the reprocessing of copper tailings. However, the MGS alone cannot provide sufficient recovery for a concentrate to be prepared directly for smelting; instead, it must be combined with another process such as flotation. The reprocessing of tailings not only brings additional amounts of copper into the supply chain, but also benefits the environment, since the accumulated heavy metals would otherwise continue to burden it. Only copper reprocessing is discussed in this paper; other metals are also accumulated within tailings dams in large quantities and can be recovered profitably, which would also benefit the environment.
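The arithmetic behind Equation (1) is simple enough to reproduce directly; a minimal sketch with the coefficients given above:

```python
# Reproduces the worldwide copper-recovery potential estimate of Equation (1).
TP = 737e6   # total historical copper production (tonnes)
a  = 0.50    # share of low-efficiency production (last century)
b  = 0.33    # ratio of tailings copper grade to original ore grade
c  = 0.80    # share of tailings dams accessible for reprocessing
d  = 0.80    # share of intact (non-oxidised) tailings
e  = 0.75    # reprocessing efficiency (copper recovered from tailings)
f  = 0.97    # share of tailings not yet reprocessed

P = TP * a * b * c * d * e * f
print(f"Global potential: {P / 1e6:.0f} million tonnes of copper")   # ~57 Mt

demand = 20e6   # current world copper demand (tonnes/year)
print(f"Equivalent to about {P / demand:.1f} years of world demand")  # ~2.8 years
```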
[1] 'Copper – A Metal for the Ages', U.S. Geological Survey, Mineral Resources Program, https://pubs.usgs.gov/fs/2009/3031/FS2009-3031.pdf, pp. 1–4 (accessed 24 May 2021).
[2] McHenry, C. (ed.), The New Encyclopedia Britannica 3, 15th edn., Chicago, Encyclopedia Britannica, Inc., 1992, pp. 308–310.
[3] 'Copper Prices Hit Highest Level Since 2011', Eurometal, https://eurometal.net/copper-prices-hit-highest-level-since-2011/ (accessed 22 May 2021).
[4] 'LME Copper Historical Price Graph', London Metal Exchange, https://www.lme.com/en-GB/Metals/Non-ferrous/Copper#tabIndex=2 (accessed 24 May 2021).
[5] 'Total Copper Mine Production Worldwide from 2006 to 2020', Statista, https://www.statista.com/statistics/254839/copper-production-by-country/ (accessed 26 May 2021).
[6] 'World Production Trend of Copper', U.S. Geological Survey, https://commons.wikimedia.org/wiki/File:Copper_-_world_production_trend.svg (accessed 26 May 2021).
[7] Mackay, I. et al., 'Dynamic Froth Stability of Copper Flotation Tailings', Minerals Engineering, no. 124, 2018, pp. 103–107, DOI:10.1016/j.mineng.2018.05.005.
[8] Drobe, M. et al., 'Processing Tests, Adjusted Cost Models and the Economies of Reprocessing Copper Mine Tailings in Chile', Metals, vol. 11, no. 1, 2021, p. 103, DOI:10.3390/met11010103 (accessed 26 May 2021).
[9] Figueiredo, J. et al., 'Tailings Reprocessing from Cabeço do Pião Dam in Central Portugal: A Kinetic Approach of Experimental Data', Journal of Sustainable Mining, no. 17, 2018, pp. 139–144, DOI:10.1016/j.jsm.2018.07.001 (accessed 28 May 2021).
[10] Wills, B.A., Finch, J.A., Wills' Mineral Processing Technology: An Introduction to the Practical Aspects of Ore Treatment and Mineral Recovery, 8th edn., Oxford, Butterworth-Heinemann, 2016, pp. 1–23, DOI:10.1016/B978-0-08-097053-0.00012-1 (accessed 27 May 2021).
[11] 'Processes: Copper Mining and Production', European Copper Institute, https://copperalliance.eu/about-copper/copper-and-its-alloys/processes/, pp. 1–3 (accessed 25 May 2021).
[12] Wirth, P., Černič Mali, B., Fischer, W., Post-Mining Regions in Central Europe – Problems, Potentials, Possibilities, Munich, Oekom Verlag, 2012, pp. 104–118.
[13] Uekoetter, F., Mining in Central Europe – Perspectives from Environmental History, Munich, RCC Perspectives, 2012, pp. 21–38, DOI:10.5282/rcc/5600 (accessed 28 May 2021).
[14] Kastivnik, J., Venta, A., Report on Processing Test, Ljubljana, Geološki Zavod, 2018, pp. 4–17.
[15] Stražišar, J., Mehanska Procesna Tehnika I, Ljubljana, Univerza v Ljubljani, 1996, pp. 27–30.
[16] Stražišar, J., Knez, S., Vaje in Računski Primeri Iz Mehanske Procesne Tehnike, Ljubljana, Univerza v Ljubljani, 2001, pp. 23–29.
[17] 'Enhanced Gravity Separator 911mpe-c-902', 911 Metallurgy Corporation, https://www.911metallurgist.com/equipment/enhanced-gravity-separator/, pp. 1–20 (accessed 11 June 2021).
[18] Rao, G.V., Markandeya, R., Kumar, R., 'Modeling and Optimization of Multi Gravity Separator for Recovery of Iron Values from Sub Grade Iron Ore Using Three Level Three Factor Box Behnken Design', International Journal of Mineral Processing and Extractive Metallurgy, no. 2, 2017, pp. 46–56, DOI:10.11648/j.ijmpem.20170204.12 (accessed 28 May 2021).
Subcritical bifurcation in a self-excited single-degree-of-freedom system with velocity weakening–strengthening friction law: analytical results and comparison with experiments
A. Papangelo (ORCID: orcid.org/0000-0002-0214-904X)1, M. Ciavarella2 & N. Hoffmann1,3
Nonlinear Dynamics volume 90, pages 2037–2046 (2017)

The dynamical behavior of a single-degree-of-freedom system that experiences friction-induced vibrations is studied, with particular interest in the possibility of the so-called hard effect of a subcritical Hopf bifurcation, using a velocity weakening–strengthening friction law. The bifurcation diagram of the system is numerically evaluated using the velocity of the belt as bifurcation parameter. Analytical results are provided using standard linear stability analysis and nonlinear stability analysis to large perturbations. The former permits identifying the lowest belt velocity \(({v_\mathrm{lw}})\) at which the full sliding solution is stable; the latter allows estimating a priori the highest belt velocity at which large amplitude stick–slip vibrations exist. Together the two boundaries \([v_\mathrm{lw}, v_\mathrm{up}]\) define the range where two equilibrium solutions coexist, i.e., a stable full sliding solution and a stable stick–slip limit cycle. The model is used to fit recent experimental observations.

1 Introduction

Subcritical as well as supercritical Hopf bifurcations are often encountered in different engineering applications, e.g., the aeroelastic response of airfoils with structural nonlinearities [1, 2], the dynamics of ball joints [3], and brake squeal [4]. Engineers are generally more concerned about subcritical (hard) bifurcations, as a small perturbation around the equilibrium position can lead the system to large amplitude vibration states, which the structure may not tolerate [5]. A number of authors have studied the "Mass-on-moving-Belt" model ("MB model" in the following): Tondl [6], Hetzler et al. [7], Hetzler [8], Won and Chung [9], Nayfeh and Mook [10], Mitropolskii and Van Dao [11], Popp [12], Popp et al. [13], Hinrichs et al. [14], Andreaus and Casini [15], Awrejcewicz and Holicke [16], and Awrejcewicz et al. [17] present various types of analysis of a mass-on-belt system with various kinds of friction laws and provide, in some cases, analytical expressions for the change between stick–slip and pure-slip oscillations. Many authors have attempted to use fast vibrations, which in some respects seem to transform classical Coulomb friction into viscous-like damping ([5, 18]). Most often, supercritical bifurcations are found, namely where the system undergoes a smooth transition to a limit cycle (generally involving stick–slip) when the control parameter is varied. In [19] Hoffmann studied the effect of a LuGre-type friction law [20] on the stability of the classical MB model; it was shown that rate-dependent effects act against the destabilizing effect of a velocity-decaying friction characteristic. The reader is referred to the review by Awrejcewicz and Olejnik [21], where the dynamical behavior of different lumped mechanical systems (see also [22]) with various friction laws is investigated. Hetzler et al. [7] (see also [23]) studied the dynamic behavior of the MB model using different friction characteristics (exponential and polynomial decay). They assumed weakly nonlinear behavior and used a first-order averaging method to find approximate solutions.
It was shown that exponential decay leads to a subcritical Hopf bifurcation while, for a cubic polynomial friction law, the dynamical behavior (subcritical/supercritical) depends on the friction law parameters [7]. Also, in [8] Hetzler showed that adding Coulomb frictional damping to the self-excited MB model leads to an "imperfect" Hopf bifurcation scenario, where it does not make sense to ask for stability of the steady state; rather, one should ask for stability against a certain level of perturbation. Recently, Papangelo et al. [24] found localized vibration states in a self-excited chain of weakly elastically coupled mechanical oscillators, which lead to so-called snaking bifurcations in the bifurcation diagram. A key feature of the system was that, if isolated from the structure, each nonlinear oscillator experiences a subcritical Hopf bifurcation in a certain range of the control parameter (yielding bistability). However, Papangelo et al. [24] adopted a polynomial nonlinearity quite remote from a real friction law. Here, perhaps with an eye to the classical Stribeck curve, for the MB model we propose an exponentially weakening and linearly strengthening friction law. We show that this friction model yields bistability; thus vibration localization phenomena, as in Papangelo et al. [24], would be expected if such oscillators were coupled together. Hoffmann [25] showed that bistability can be obtained even with a 2-DOF model using the Coulomb friction model with a static \(\left( \mu _\mathrm{st}\right) \) and a dynamic \(\left( \mu _\mathrm{d}\right) \) friction coefficient \(\left( \mu _\mathrm{st} >\mu _\mathrm{d}\right) \). For the given set of parameters, Hoffmann [25] showed that at \(\mu _\mathrm{d}=0.4\) the so-called "mode-coupling instability" takes place and the system becomes (linearly) unstable under small oscillations. What is particularly interesting for us is that if \(\mu _\mathrm{st}/\mu _\mathrm{d}>1\), a stick–slip limit cycle exists even in the range where the steady sliding state is linearly stable. Saha et al. [26] studied the MB model (see Fig. 1) with the aim of controlling friction-induced oscillations using a time-delay feedback force. They also introduced two different friction models for the dependence of the frictional force on the sliding speed: one exponentially decaying, the other with polynomial decay. They carried out the analysis using the method of multiple scales in a quite elaborate manner, limited to the full sliding case. Interestingly, they showed that the bifurcation is supercritical for the polynomially decaying friction law and subcritical for the exponentially decaying one. This confirms that the choice of the shape of the friction law is, in a sense, a delicate point. Recently Saha et al. [27] published experimental results from a mass-on-moving-belt test rig (see Fig. 1). These results are very instructive in general, since they clearly show a bifurcation diagram with a subcritical Hopf bifurcation in a single-degree-of-freedom model. Saha et al. [27] plot the friction law obtained from measurements, which surprisingly shows very large hysteretic effects, both during the slip and the "stick" state (even if talking about a proper "stick state" becomes difficult, cf. their Fig. 6). On the other hand, we notice that they used a sample of mild steel on a belt of silicon rubber, so viscoelastic (and maybe thermal) effects are at play.
Fig. 1: Mass-on-moving-belt model (MB model)

Table 1: Values of the static coefficient of friction, the kinetic coefficient of friction and the ratio \(\mu _\mathrm{st}/\mu _\mathrm{d}\). From Rabinowicz [38]

Table 2: Data for the static coefficient of friction, the kinetic coefficient of friction, and their ratio \(\mu _\mathrm{st}/\mu _\mathrm{d}\), reordered and taken from the list compiled by the late Roy Beardmore using a variety of handbooks listed on his web site

Velocity weakening–strengthening behavior of the friction force with the relative velocity has been observed for different materials in dry (see [28]) and lubricated conditions (see [29,30,31,32,33,34]). It has been shown that the dynamical behavior can be highly influenced by the strengthening branch of the friction curve ([35,36,37]); thus we will consider a friction law with an exponential decay plus a linear strengthening, which will also give a good fit of the experimental data from Saha et al. [27]. In Tables 1 and 2, typical values of \(\mu _\mathrm{st}\), \(\mu _\mathrm{d}\) and \(\mu _\mathrm{st}/\mu _\mathrm{d}\) are reported for given pairs of materials. The data are taken from reliable sources and show that \(\mu _\mathrm{st}/\mu _\mathrm{d}\) can easily be greater than 2. In the next paragraphs we will show how the dynamical behavior of our model (particularly the bistability region) can be strongly affected by \(\mu _\mathrm{st}/\mu _\mathrm{d}\). Finally, in the last paragraph, we will use our model to qualitatively fit the experimental results of Saha et al. [27].

2 The mass-on-moving-belt model

2.1 The model

The model consists of a linear oscillator of mass m, stiffness k and linear damping coefficient c (see Fig. 1), which is placed on a frictional belt driven at a constant velocity \(v_\mathrm{d}\). The dynamical equilibrium equation of the mass is

$$\begin{aligned} m\overset{..}{x}+c\overset{.}{x}+kx=F \end{aligned} \qquad (1)$$

$$\begin{aligned} \left\{ \begin{array}{ll} F=-N\mu \left( v_\mathrm{rel}\right) \text {sign}\left( v_\mathrm{rel}\right) & v_\mathrm{rel}\ne 0\\ \left| F\right| <\mu _\mathrm{st}N & v_\mathrm{rel}=0 \end{array} \right. \end{aligned} \qquad (2)$$

where \(x\left( t\right) \), \(\overset{.}{x}\left( t\right) \), \(\overset{..}{x}\left( t\right) \) are, respectively, the displacement, velocity and acceleration of the mass, F is the friction force, N is the normal contact force, \(\mu \left( v_\mathrm{rel}\right) \) is the friction coefficient, which is a function of the relative velocity \(v_\mathrm{rel}=\overset{.}{x}-v_\mathrm{d}\), and sign\(\left( \bullet \right) \) is the sign function. Friction between the mass and the belt is described using a velocity weakening–strengthening law of the relative velocity \(v_\mathrm{rel}\):

$$\begin{aligned} \mu \left( v_\mathrm{rel}\right) =\mu _\mathrm{d}+\left( \mu _\mathrm{st}-\mu _\mathrm{d}\right) \exp \left( -\frac{\left| v_\mathrm{rel}\right| }{v_{0}}\right) +\mu _\mathrm{v}\frac{\left| v_\mathrm{rel}\right| }{v_{0}} \end{aligned} \qquad (3)$$

where \(v_{0}\) is a reference velocity, \(\mu _\mathrm{v}\) is a constant, \(\mu \left( 0\right) =\mu _\mathrm{st}\) and \(\mu \left( v_\mathrm{rel}\rightarrow +\infty \right) =\mu _\mathrm{d}\) (for \(\mu _\mathrm{v}=0\)). Notice that a steeper weakening (strengthening) of the friction law is obtained for small \(v_0\) (large \(\mu _\mathrm{v}\)).
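For concreteness, the friction law (3) and its slope, which drives the linear stability analysis below, can be evaluated with a few lines of Python; the parameter values in this sketch are illustrative assumptions:

```python
# Minimal sketch: the velocity weakening-strengthening friction law of Eq. (3).
# Parameter values below are illustrative.
import numpy as np

def mu(v_rel, mu_d=0.5, mu_st=1.0, v0=0.5, mu_v=0.02):
    """Friction coefficient as a function of relative velocity, Eq. (3)."""
    s = np.abs(v_rel)
    return mu_d + (mu_st - mu_d) * np.exp(-s / v0) + mu_v * s / v0

print(f"mu(0) = {mu(0.0):.3f}   (static coefficient mu_st)")
print(f"mu(5) = {mu(5.0):.3f}   (weakening decayed; linear strengthening term left)")

# The slope mu'(v_d) at the drive velocity controls linear stability (Sect. 2.2):
dv, v_d = 1e-6, 1.0
slope = (mu(v_d + dv) - mu(v_d - dv)) / (2 * dv)
print(f"mu'(v_d={v_d}) = {slope:.4f}")   # negative: friction weakens with speed
```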
We define the following quantities

$$\begin{aligned} \begin{array}{llll} \xi =\frac{c}{2\sqrt{km}}&x_{0}=\frac{N}{k}&\omega _\mathrm{n}=\sqrt{\frac{k}{m}}&\tau =\omega _\mathrm{n}t \end{array} \end{aligned} \qquad (4)$$

and make all displacements dimensionless using \(x_{0}\). Substituting \(\frac{\mathrm{d}}{\mathrm{d}t}=\omega _\mathrm{n}\frac{\mathrm{d}}{\mathrm{d}\tau }\), the dynamical equilibrium equation (1) is rewritten as

$$\begin{aligned} \overset{..}{\widetilde{x}}+2\xi \overset{.}{\widetilde{x}}+\widetilde{x}=\widetilde{F} \end{aligned} \qquad (5)$$

where a superposed tilde indicates a dimensionless quantity, and derivatives are taken with respect to the dimensionless time \(\tau \).

2.2 Linear stability analysis

Linearize the system about the static equilibrium position \(\widetilde{x}_\mathrm{e}=\mu \left( \widetilde{v}_\mathrm{rel}\right) =\mu \left( -\widetilde{v}_\mathrm{d}\right) \) and write \(\widetilde{x}\left( \tau \right) =\widetilde{x}_\mathrm{e}+\widetilde{y}\left( \tau \right) \), where \(\widetilde{y}\left( \tau \right) \) is a small perturbation \(\left( \overset{.}{\widetilde{y}}\left( \tau \right) <\widetilde{v}_\mathrm{d}\right) \). Substituting \(\widetilde{x}\left( \tau \right) \) into (5), one obtains

$$\begin{aligned} \overset{..}{\widetilde{y}}+2\left( \xi +\frac{1}{2}\mu ^{\prime }\left( \widetilde{v}_\mathrm{d}\right) \right) \overset{.}{\widetilde{y}}+\widetilde{y} =0 \end{aligned} \qquad (6)$$

$$\begin{aligned} \overset{..}{\widetilde{y}}+2T\overset{.}{\widetilde{y}}+\widetilde{y} =0 \end{aligned} \qquad (7)$$

where \(\mu ^{\prime }\left( \widetilde{v}_\mathrm{d}\right) =\left. \frac{\mathrm{d}\mu \left( \widetilde{v}_\mathrm{rel}\right) }{\mathrm{d}\widetilde{v}_\mathrm{rel}}\right| _{\widetilde{v}_\mathrm{rel}=\widetilde{v}_\mathrm{d}}\) and \(T=\left( \xi +\frac{\mu ^{\prime }\left( \widetilde{v}_\mathrm{d}\right) }{2}\right) \). Equation (7) is a linear second-order ODE; thus its solution can be written in exponential form \(y\left( t\right) =Ye^{\lambda t}\), with in general \(\lambda \in \mathbb {C}\). Solving the eigenvalue problem, we obtain

$$\begin{aligned} \lambda _{1,2}=-T\pm \sqrt{T^{2}-1} \end{aligned} \qquad (8)$$

The equilibrium is:

\(T\le -1\): unstable node
\(-1<T<0\): unstable focus
\(T=0\): center
\(0<T<1\): stable focus
\(T\ge 1\): stable node

The condition for linear stability is \(T>0\), which translates into the condition

$$\begin{aligned} \beta =-\frac{\mu ^{\prime }\left( \widetilde{v}_\mathrm{d}\right) }{2\xi }<1 \end{aligned} \qquad (9)$$

Putting \(\beta =1\) and using (3), one obtains an equation for \(\widetilde{v}_\mathrm{lw}\), the threshold above which steady sliding is stable (\(v_\mathrm{rel}>0\)):

$$\begin{aligned} \widetilde{v}_\mathrm{lw}=\widetilde{v}_{0}\ln \left( \frac{\mu _\mathrm{st}-\mu _\mathrm{d}}{2\xi \widetilde{v}_{0}+\mu _\mathrm{v}}\right) \end{aligned} \qquad (10)$$

The linear strengthening coefficient has to satisfy \(\mu _\mathrm{v}>-2\xi \widetilde{v}_{0}\), otherwise the overall damping would be negative. On the other hand, if \(\mu _\mathrm{v}\) exceeds \(\left( \mu _\mathrm{st}-\mu _\mathrm{d}\right) -2\xi \widetilde{v}_{0}\), then \(\widetilde{v}_\mathrm{lw}=0\) and steady sliding is stable for any driving velocity.
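The linear threshold (10) is straightforward to evaluate; a minimal sketch (the clipping at zero reflects the remark above, and the parameters are those of the numerical example in Sect. 3.1):

```python
# Minimal sketch: linear stability threshold of Eq. (10).
import numpy as np

def v_lw(xi, mu_st, mu_d, v0, mu_v):
    """Lowest dimensionless belt velocity at which steady sliding is linearly
    stable; zero if mu_v already exceeds (mu_st - mu_d) - 2*xi*v0."""
    arg = (mu_st - mu_d) / (2 * xi * v0 + mu_v)
    return v0 * np.log(arg) if arg > 1 else 0.0

print(v_lw(xi=0.05, mu_st=1.0, mu_d=0.5, v0=0.5, mu_v=0.0))   # ~1.151
```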
3 Stability to large amplitude perturbations

In this section we investigate the stability of the SS solution against non-infinitesimal perturbations. Let us approximate the system response as harmonic sliding, \(x=A\cos \left( \omega t+\phi \right) +x_\mathrm{e}\), \(\overset{\cdot }{x}=-A\omega \sin \left( \omega t+\phi \right) \), about an equilibrium full sliding position, without reaching stick. The energy dissipated by the viscous damper, \(E_\mathrm{v}\), is

$$\begin{aligned} E_\mathrm{v}=\int _{0}^{2\pi }c\left( \overset{\cdot }{x}\right) ^{2}\frac{\mathrm{d}\tau }{\omega }=\pi \omega cA^{2} \end{aligned} \qquad (11)$$

and depends on the amplitude squared. The total amount of energy dissipated by dry friction is

$$\begin{aligned} E_\mathrm{f}^{T}=\int _{0}^{2\pi }N\mu \left( v_\mathrm{rel}\right) \text {sign}\left( v_\mathrm{rel}\right) \left( \overset{\cdot }{x}-v_\mathrm{d}\right) \frac{\mathrm{d}\tau }{\omega } \end{aligned} \qquad (12)$$

which is clearly constituted by two contributions: a "mean" contribution due to sliding at \(v=v_\mathrm{d}\) and a contribution from the oscillation \(x\left( t\right) \). Notice that the mean sliding term is purely dissipative. In defining a stability criterion for large amplitude perturbations, only the contribution \(E_\mathrm{f}\) due to the oscillation around the equilibrium position is considered, which is

$$\begin{aligned} E_\mathrm{f}=-\frac{N}{\omega }\int _{0}^{2\pi }\mu \left( v_\mathrm{rel}\right) \overset{\cdot }{x}\,\mathrm{d}\tau \end{aligned}$$

where we used the condition \(\overset{\cdot }{x}<v_\mathrm{d}\). The frictional dissipated energy is

$$\begin{aligned} E_\mathrm{f}=2\pi NA\left[ \mu _\mathrm{v}\frac{A\omega }{2v_{0}}-\left( \mu _\mathrm{s}-\mu _\mathrm{d}\right) \exp \left( -\frac{v_\mathrm{d}}{v_{0}}\right) I_\mathrm{B}\left( 1,\frac{A\omega }{v_{0}}\right) \right] \end{aligned} \qquad (13)$$

where \(I_\mathrm{B}\left( 1,\frac{A\omega }{v_{0}}\right) \) is the modified Bessel function of the first kind (in Mathematica, BesselI[n,z]). Notice that the weakening part of the friction law feeds energy into the system, while the strengthening part acts like an additional viscous damping and dissipates energy. The stability condition to large perturbations is obtained by imposing that the overall frictional energy provided by the velocity weakening friction law be less than the energy dissipated by the damper in a cycle, thus

$$\begin{aligned} \frac{-E_\mathrm{f}}{E_\mathrm{v}}=\frac{2N\left( \mu _\mathrm{s}-\mu _\mathrm{d}\right) \exp \left( -\frac{v_\mathrm{d}}{v_{0}}\right) I_\mathrm{B}\left( 1,\frac{A\omega }{v_{0}}\right) }{\omega cA}-\frac{\mu _\mathrm{v}N}{v_{0}c}<1 \end{aligned} \qquad (14)$$

which permits a simple determination of the amplitude threshold. The stability condition (14) in dimensionless form reads

$$\begin{aligned} \frac{-E_\mathrm{f}}{E_\mathrm{v}}=\frac{\mu _\mathrm{d}}{\xi \sqrt{1-\xi ^{2}}\widetilde{A}}\left( \frac{\mu _\mathrm{s}}{\mu _\mathrm{d}}-1\right) \exp \left( -\frac{\widetilde{v}_\mathrm{d}}{\widetilde{v}_{0}}\right) I_\mathrm{B}\left( 1,\frac{\widetilde{A}\sqrt{1-\xi ^{2}}}{\widetilde{v}_{0}}\right) -\frac{\mu _\mathrm{v}}{2\xi \widetilde{v}_{0}}<1 \end{aligned} \qquad (15)$$

where we estimate \(\frac{\omega }{\omega _{n}}\simeq \sqrt{1-\xi ^{2}}\). Notice that in deriving the energy "provided" by the friction law we made the hypothesis \(\overset{\cdot }{x}-v_\mathrm{d}<0\); thus the criterion (14) holds up to the critical point where \(A\omega =v_\mathrm{d}\), beyond which the stick phase comes into play.
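Equation (15) is easy to evaluate numerically; the sketch below computes \(-E_\mathrm{f}/E_\mathrm{v}\) with SciPy's modified Bessel function and locates the amplitude at which the energy balance is exactly satisfied. The bracketing root search is our own choice, and the parameters anticipate the numerical example of Sect. 3.1:

```python
# Minimal sketch: evaluate the energy-balance criterion of Eq. (15).
import numpy as np
from scipy.special import iv        # modified Bessel function of the first kind
from scipy.optimize import brentq

xi, mu_d, ratio, v0, mu_v = 0.05, 0.5, 2.0, 0.5, 0.0   # ratio = mu_st / mu_d

def energy_ratio(A, v_d):
    """-E_f/E_v from Eq. (15); values above 1 mean the friction law feeds in
    more energy over a cycle than the viscous damper dissipates."""
    w = np.sqrt(1 - xi**2)          # omega/omega_n estimate
    gain = mu_d * (ratio - 1) * np.exp(-v_d / v0) * iv(1, A * w / v0) \
           / (xi * w * A)
    return gain - mu_v / (2 * xi * v0)

# Amplitude of the unstable limit cycle at a given drive velocity: -E_f/E_v = 1,
# searched below the validity limit A*omega = v_d.
v_d = 1.5
A_ulc = brentq(lambda A: energy_ratio(A, v_d) - 1.0,
               1e-3, v_d / np.sqrt(1 - xi**2))
print(f"ULC amplitude at v_d = {v_d}: A ≈ {A_ulc:.3f}")
```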
3.1 A numerical example In this section a numerical example is presented in which the equation of motion of the mass (5) is solved using the built-in MATLAB time integration solver ode23t, which integrates the system equations using the trapezoidal rule with a "free" interpolant, introduces no numerical damping, and is recommended for moderately stiff problems. The friction force is implemented using the switch model, which defines a narrow band of vanishing relative velocity where the stick equations are solved and makes the problem non-stiff (the reader is referred to [39] for more details). We assumed that the mass sticks to the belt if \(\left| \widetilde{v}_\mathrm{rel}\right| <10^{-4}\). For the numerical example, assume \(\xi =0.05\) and the following parameters for the exponentially decaying friction law (3): $$\begin{aligned} \mu _\mathrm{d}=0.5;\quad \frac{\mu _\mathrm{st}}{\mu _\mathrm{d}}=2;\quad \widetilde{v}_{0}=0.5;\quad \mu _\mathrm{v}=\left[ -0.03,0,0.02,0.05\right] \end{aligned}$$ Fig. 2 Weakening–strengthening friction law with \(\mu _\mathrm{d}=0.5\), \(\frac{\mu _\mathrm{st}}{\mu _\mathrm{d}}=2\), \(\widetilde{v}_{0}=0.5\), \(\xi =0.05\) and \(\mu _\mathrm{v}=\left[ -0.03, 0, 0.02, 0.05\right] \). The red square (circle) indicates \(\widetilde{v}_\mathrm{lw}\) \(\left( \widetilde{v}_\mathrm{up}\right) \). (Color figure online) Imposing \(\beta =1\) in (9) and using (14), with \(\frac{-E_\mathrm{f}}{E_\mathrm{v}}=1\) and \(\widetilde{A}=\widetilde{v}_\mathrm{d}\), the lower (upper) boundary \(\widetilde{v}_\mathrm{lw}\) \(\left( \widetilde{v}_\mathrm{up}\right) \) is computed. In Fig. 2 the friction law is reported: the bistable region is expected for values of the driving velocity \(\widetilde{v}_\mathrm{d}\) in between the two boundaries \(\widetilde{v}_\mathrm{lw}\) and \(\widetilde{v}_\mathrm{up}\), which are labeled, respectively, with a square and a circle. Notice that exponentially decaying friction laws with \(\mu _\mathrm{v}=0\) are commonly used in the literature, for example for brake squeal analysis [40]. On the other hand, Bar-Sinai et al. [28] showed experimental observations of velocity weakening–strengthening friction in various materials; thus we will focus on the case \(\mu _\mathrm{v}\ge 0\). Fig. 3 Bifurcation diagram for the MB model, limit cycle amplitude versus the driving velocity (dimensionless form). The equilibrium solutions are obtained increasing (blue circles) and decreasing (red triangles) the driving velocity \(\widetilde{v}_\mathrm{d}\). The gray squares represent the unstable limit cycle obtained solving the ODE backwards in time. The results coincide with the full criterion line (solid blue line) as it exactly represents the situation \(-E_\mathrm{f}/E_\mathrm{v}=1\) (see 14). (Color figure online) Fig. 4 (a–c) Displacement \(\widetilde{x}(\tau )\) as a function of time \(\tau \) in dimensionless form for driving velocity \(\widetilde{v}_\mathrm{d}=1.5\), and friction law parameters as in Fig. 3 but with \(\mu _\mathrm{v}=0\). Respectively, a stick–slip LC, b ULC (backward time integration), c SS state. On the right panel the phase plot is shown. (Color figure online)
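Before turning to the bifurcation diagrams, here is a minimal Python sketch of the time integration just described. Instead of the MATLAB ode23t solver and the switch model used in the paper, it regularizes sign(\(v_\mathrm{rel}\)) with a steep tanh and uses an implicit solver; the friction-law form and the sign conventions are our assumptions.

import numpy as np
from scipy.integrate import solve_ivp

xi, mu_d, mu_st, v0, mu_v = 0.05, 0.5, 1.0, 0.5, 0.0
v_d = 1.5                                   # dimensionless belt velocity

def mu(v):
    # Assumed weakening-strengthening law (3) for v = |v_rel|
    return mu_d + (mu_st - mu_d) * np.exp(-v / v0) + mu_v * v / v0

def rhs(tau, z, eps=1e-3):
    # Dimensionless Eq. (5); sign(v_rel) regularized by tanh(v_rel/eps)
    # as a stand-in for the switch model of the paper
    x, xdot = z
    v_rel = xdot - v_d
    F = -np.tanh(v_rel / eps) * mu(abs(v_rel))   # friction force on the mass
    return [xdot, -2 * xi * xdot - x + F]

# Perturb the steady-sliding equilibrium x_e = mu(v_d) and integrate
x_e = mu(v_d)
sol = solve_ivp(rhs, (0, 400), [x_e + 1.2, 0.0], method="Radau",
                max_step=0.05, rtol=1e-8, atol=1e-10)
print("final amplitude ~", 0.5 * (sol.y[0, -2000:].max() - sol.y[0, -2000:].min()))

Depending on the size of the initial perturbation relative to the ULC, the trajectory either settles on the stick–slip limit cycle or decays back to steady sliding, as shown in Fig. 4.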
In Fig. 3 the bifurcation diagram for the MB model is shown, where the dimensionless amplitude of the vibration is plotted against the driving velocity in dimensionless form with \(\mu _\mathrm{v}=0\). Notice that Steady Sliding solutions ("SS") have \(\widetilde{A}=0\). The equilibrium solutions are obtained increasing (blue circles) and decreasing (red triangles) the driving velocity \(\widetilde{v}_\mathrm{d}\). A bistable zone is found for \(\widetilde{v}_\mathrm{lw}<\widetilde{v}_\mathrm{d}<\widetilde{v}_\mathrm{up}\) as expected, where Limit Cycles ("LC") and SS solutions coexist. In between the two stable solutions, the gray squares represent Unstable Limit Cycles ("ULC") that have been obtained solving the ODE backwards in time. Notice that those solutions match almost perfectly the equation \(\frac{-E_\mathrm{f}}{E_\mathrm{v}}=1\), that is, the stability criterion (14, blue solid line) when one imposes a perfect balance between the energy supplied and dissipated in the system. The solution is unstable, as a small perturbation leads either to the stick–slip LC or to the SS solution. Figure 4 reports on the left side (a–b–c) time integration results for \(\widetilde{v}_\mathrm{d}=1.5\), while on the right the solutions are reported together in the phase plane. Respectively, Fig. 4a represents the case of a stick–slip LC, Fig. 4b refers to the ULC (full sliding solution), and Fig. 4c shows a case where vibrations are damped out down to the steady sliding state. The unstable limit cycle divides the phase plane into two basins of attraction: every solution initialized outside the ULC ends up in the stick–slip LC, otherwise SS is obtained. Below we summarize the possible dynamical behavior of the mass as a function of the driving velocity: $$\begin{aligned} \left\{ \begin{array}{lll} \widetilde{v}_\mathrm{d}<\widetilde{v}_\mathrm{lw}, & & \text {LC}\\ \widetilde{v}_\mathrm{lw}<\widetilde{v}_\mathrm{d}<\widetilde{v}_\mathrm{up}, & & \text {SS-LC}\\ \widetilde{v}_\mathrm{d}>\widetilde{v}_\mathrm{up}, & & \text {SS} \end{array} \right. \end{aligned}$$
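The stable branches of diagrams such as Fig. 3 are traced by sequential continuation: integrate to steady state at one velocity, then reuse the final state as the initial condition for the next. A rough sketch of that procedure (our own, reusing mu and rhs from the previous block; the velocity grid and settling times are arbitrary choices) could look as follows.

import numpy as np
from scipy.integrate import solve_ivp

def sweep(v_values, z0):
    # Sequential continuation over the belt velocity: record the limit
    # amplitude and hand the final state to the next velocity
    amps, z = [], z0
    for vd in v_values:
        global v_d
        v_d = vd                                  # rhs reads the module-level v_d
        sol = solve_ivp(rhs, (0, 600), z, method="Radau", max_step=0.05)
        tail = sol.y[0, sol.t > 500]              # discard the transient
        amps.append(0.5 * (tail.max() - tail.min()))
        z = sol.y[:, -1]
    return np.array(amps)

v_grid = np.linspace(0.2, 2.5, 24)
up = sweep(v_grid, [mu(v_grid[0]) + 1.0, 0.0])    # increasing-velocity branch
down = sweep(v_grid[::-1], [mu(v_grid[-1]), 0.0]) # decreasing-velocity branch
# Hysteresis between `up` and `down[::-1]` marks the bistable window.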
Figure 5 shows the curves of \(\widetilde{v}_\mathrm{lw}\) (Fig. 5a) and \(\left( \widetilde{v}_\mathrm{up}-\widetilde{v}_\mathrm{lw}\right) \) (Fig. 5b) plotted against \(\widetilde{v}_{0}\) for different \(\mu _\mathrm{s}/\mu _\mathrm{d}=\left[ 1.4-5\right] \) and for \(\xi =0.05\), \(\mu _\mathrm{d}=0.5\) and \(\mu _\mathrm{v}=0\). The lower boundary \(\widetilde{v}_\mathrm{lw}\) vanishes for both very small and very high \(\widetilde{v}_{0}\) [panel (a)]. In the limit, the first case corresponds to the classical Coulomb friction model with two friction coefficients \(\mu _\mathrm{s}>\mu _\mathrm{d}\), while the second case, owing to the very slow decrease of \(\mu _\mathrm{s}(v_\mathrm{rel})\), corresponds in the limit of infinite \(\widetilde{v}_{0}\) to the Coulomb friction model with just one friction coefficient. In both cases \(\widetilde{v}_\mathrm{lw}=0\), and even a small viscous damping will make steady sliding always stable, at any driving velocity, provided \(\xi >0\). In between those two limit cases, \(\widetilde{v}_\mathrm{lw}\) as well as \(\left( \widetilde{v}_\mathrm{up}-\widetilde{v}_\mathrm{lw}\right) \) reaches a maximum value. Figure 5b shows that the width of the bistability region vanishes only for high \(\widetilde{v}_{0}\) (thus in the limit of Coulomb friction with one friction coefficient), while even at very small \(\widetilde{v}_{0}\) \(\left( \text {e.g., }\widetilde{v}_{0}\simeq 10^{-3}\right) \), a well-defined bistable zone exists, even if of small size (Fig. 6). This agrees with [41], which found a subcritical bifurcation in a MB model even with the classical Coulomb friction model with a sharp jump from \(\mu _\mathrm{s}\) to \(\mu _\mathrm{d}\). It is shown that the higher the ratio \(\mu _\mathrm{s}/\mu _\mathrm{d}\), the stronger the dependence of \(\left( \widetilde{v}_\mathrm{up}-\widetilde{v}_\mathrm{lw}\right) \) on \(\widetilde{v}_{0}\). Fig. 5 a \(\widetilde{v}_\mathrm{lw}\) and b \(\left( \widetilde{v}_\mathrm{up}-\widetilde{v}_\mathrm{lw}\right) \) plotted against \(\widetilde{v}_{0}\) for \(\xi =0.05\), \(\mu _\mathrm{d}=0.5\), \(\mu _\mathrm{v}=0\), and \(\mu _\mathrm{s}/\mu _\mathrm{d}=\left[ 1.4,2.3,3.2,4.1,5\right] \) Fig. 6 a \(\widetilde{v}_\mathrm{lw}\) and b \(\left( \widetilde{v}_\mathrm{up}-\widetilde{v}_\mathrm{lw}\right) \) plotted against \(\widetilde{v}_{0}\) for \(\mu _\mathrm{d}=0.5\), \(\mu _\mathrm{s}/\mu _\mathrm{d}=2\), \(\xi =0.05\) and \(\mu _\mathrm{v}=\left[ 0,0.01,0.025,0.07,0.12\right] \) Fig. 7 Friction force \(F\left( v_\mathrm{rel}\right) /N\) versus the relative velocity as reported in Saha et al. [27] (red circles). Black stars indicate the points considered in the fitting with the exponentially decaying friction law with (blue dashed line) and without (black solid line) the strengthening term. (Color figure online) Fig. 8 Bifurcation diagram, amplitude versus driving velocity. Experimental results from Saha et al. [27] (red squares); numerical results obtained by sequential continuation from the right to the left and vice versa (blue circles). The panels (a, b) report, respectively, the results obtained with the VW and VWS friction law. (Color figure online) 4 Comparison with experimental results Recently Saha et al. [27] performed an experimental investigation on a MB model and found that in their set-up the bifurcation is a subcritical Hopf bifurcation where a bistable region exists. The test rig consists of a spring–mass system where a rectangular block made of mild steel slides on a silicone rubber belt. In [27] all the necessary parameters that characterize the experimental test rig are provided: $$\begin{aligned} m=0.39\text { kg},\quad k=6.62\times 10^{3}\text { N/m},\quad c=1.15\text { Ns/m}, \end{aligned}$$ which leads to \(\xi =0.0113\). The experimentally measured friction force obtained by Saha et al. [27] is reported in Fig. 7 with red circles. Notice that there is no "stick phase", while a hysteresis loop appears to dominate the zone of small relative velocity. The hysteresis phenomena also exist in the slip phase but are less important. Clearly, a weakening–strengthening friction law like the one considered in this work cannot reproduce such a behavior. Moreover, Saha et al. [27] reported the overall friction force, without any information about the normal load, which makes it impossible to retrieve the actual friction coefficient at the interface. In fact, Saha et al. [27] admit that the measured oscillations were modulated by external unwanted factors, such as the joint in the belt, nonuniform surface properties of the belt, flexibility of the belt, and vibration of the supporting structure.
Being aware of those limitations, we try to use our model and compare with those experiments, which are quite rare in the literature, with the aim of reproducing at least the same dynamical behavior observed experimentally. For estimating the parameters of the friction model (3), we neglect the hysteretic loop of the friction law close to \(v_\mathrm{rel}\sim 0\) and instead consider only the points indicated in Fig. 7 with black stars. Here the results of two friction curves are considered, respectively, with (Fig. 7, blue dashed curve, "VWS" friction law in the following) and without (Fig. 7, black solid curve, "VW" friction law in the following) linear strengthening. As there was no information about the normal load magnitude, we arbitrarily assume \(N=30\) N, which led to a reasonable set of parameters \(\left( \text {e.g., }\mu _\mathrm{st},\mu _\mathrm{d}\right) \) for both the VW and VWS friction laws: $$\begin{aligned} \begin{array}{lllll} \text {VW:} & \mu _\mathrm{d}=0.48 & \mu _\mathrm{st}=1 & \mu _\mathrm{v}=0 & \widetilde{v}_{0}=0.057\\ \text {VWS:} & \mu _\mathrm{d}=0.38 & \mu _\mathrm{st}=1 & \mu _\mathrm{v}=0.009 & \widetilde{v}_{0}=0.08 \end{array} \end{aligned}$$ In Fig. 8a–b the bifurcation diagram reported in Saha et al. [27] is shown in dimensionless notation (red squares), where the vibration amplitude is reported against the driving velocity. Notice that there is no SS state measured in the experiment, but rather a limit cycle of small amplitude. The authors explain in [27] that this is due to modulation of the normal load (which in the MB model is assumed constant). Figure 8a reports the numerical results obtained by sequential continuation from the right to the left and vice versa using the exponentially decaying friction law without strengthening. Although the upper and lower limits do not match exactly, the vibration amplitude of the stick–slip LC is quantitatively predicted by the MB model. Notice that any other choice of the normal load N would just rescale the bifurcation diagram without affecting its shape. Figure 8b reports the numerical results obtained when the effect of linear strengthening is taken into account in the friction law. Even though this friction law seems to fit the experimentally measured friction curve better (Fig. 7, dashed line), the results in terms of the bifurcation diagram are poorer. Those discrepancies could arise from the modulation of the normal load, but unfortunately we do not have quantitative information about it, thus we cannot make further improvements in this direction. The results surely show that in such a system the dynamical behavior is very sensitive to the exact shape of the friction law. 5 Conclusions The dynamical behavior of a single-degree-of-freedom system (the classical mass-on-moving-belt model) has been studied, focusing on the possibility of the so-called hard effect of a subcritical Hopf bifurcation, using a velocity weakening–strengthening friction law \(\mu \left( v_\mathrm{rel}\right) \). It has been shown that in the range of driving velocity \(\widetilde{v}_\mathrm{lw}<\widetilde{v}_\mathrm{d}<\widetilde{v}_\mathrm{up}\) two stable solutions coexist, one in steady sliding, the other a stick–slip limit cycle. Linear stability analysis provides \(\widetilde{v}_\mathrm{lw}\), while a stability analysis to large perturbations provides the upper boundary \(\widetilde{v}_\mathrm{up}\).
For a given \(\mu _\mathrm{s}/\mu _\mathrm{d}\), a very sharp decay of the friction coefficient to the dynamic value does not eliminate the bistable region, while if the decay is slow enough the bistability region shrinks and only the steady sliding state survives. Introducing the strengthening branch has little effect on \(\widetilde{v}_\mathrm{lw}\), but strongly decreases \(\widetilde{v}_\mathrm{up}\), reducing the bistability region. In the last section we used our model to fit the experimental results provided by Saha et al. [27]. It was shown that the vibration amplitude at a given velocity of the belt seems to correlate well with experiments; nevertheless, the width of the bistability region is very sensitive to the shape of the friction law.
Footnote: "In a dynamical system, bistability means the system has two stable equilibrium states." From: Wikipedia (https://en.wikipedia.org/wiki/Bistability).
References
[1] Pereira, D.A., Vasconcellos, R.M., Hajj, M.R., Marques, F.D.: Insights on aeroelastic bifurcation phenomena in airfoils with structural nonlinearities. Math. Eng. Sci. Aerosp. (MESA) 6(3), 399–424 (2015)
[2] Liu, J.K., Zhao, L.C.: Bifurcation analysis of airfoils in incompressible flow. J. Sound Vib. 154(1), 117–124 (1992)
[3] Weiss, C., Morlock, M.M., Hoffmann, N.: Friction induced dynamics of ball joints: instability and post bifurcation behavior. Eur. J. Mech. A/Solids 45, 161–173 (2014)
[4] Gräbner, N., Tiedemann, M., Von Wagner, U., Hoffmann, N.: Nonlinearities in friction brake NVH-experimental and numerical studies (No. 2014-01-2511). SAE Technical Paper
[5] Thomsen, J.J.: Vibrations and Stability: Advanced Theory, Analysis, and Tools, 2nd edn. (revised). Springer, Berlin (2003). ISBN 3-540-40140-7
[6] Tondl, A.: Quenching of Self-excited Vibrations. Elsevier Science Pub Co., New York (1991)
[7] Hetzler, H., Schwarzer, D., Seemann, W.: Steady-state stability and bifurcations of friction oscillators due to velocity-dependent friction characteristics. Proc. Inst. Mech. Eng. Part K J. Multi-body Dyn. 221(3), 401–412 (2007)
[8] Hetzler, H.: On the effect of nonsmooth Coulomb friction on Hopf bifurcations in a 1-DoF oscillator with self-excitation due to negative damping. Nonlinear Dyn. 69(1), 601–614 (2012)
[9] Won, H.I., Chung, J.: Stick–slip vibration of an oscillator with damping. Nonlinear Dyn. 86, 257 (2016). doi:10.1007/s11071-016-2887-x
[10] Nayfeh, A.H., Mook, D.T.: Nonlinear Oscillations. Wiley, New York (1979)
[11] Mitropolskii, Y.A., Van Dao, N.: Applied Asymptotic Methods in Nonlinear Oscillations. Kluwer, Dordrecht (1997)
[12] Popp, K.: Some model problems showing stick–slip motion and chaos. Frict. Induc. Vib. Chatter Squeal Chaos ASME DE 49, 1–12 (1992)
[13] Popp, K., Hinrichs, N., Oestreich, M.: Analysis of a self-excited friction oscillator with external excitation. In: Guran, A., Pfeiffer, F., Popp, K. (eds.) Dynamics with Friction: Modeling, Analysis and Experiment, Part I. World Scientific, Singapore (1996)
[14] Hinrichs, N., Oestreich, M., Popp, K.: On the modelling of friction oscillators. J. Sound Vib. 216, 435–459 (1998)
[15] Andreaus, U., Casini, P.: Dynamics of friction oscillators excited by a moving base and/or driving force. J. Sound Vib. 245(4), 685–699 (2001)
[16] Awrejcewicz, J., Holicke, M.M.: Smooth and Nonsmooth High Dimensional Chaos and the Melnikov-type Methods, vol. 60.
World Scientific, Singapore (2007)
[17] Awrejcewicz, J., Andrianov, I.V., Manevitch, L.I.: Asymptotic Approaches in Nonlinear Dynamics: New Trends and Applications, vol. 69. Springer, Berlin (2012)
[18] Thomsen, J.J.: Using fast vibrations to quench friction-induced oscillations. J. Sound Vib. 228(5), 1079–1102 (1999)
[19] Hoffmann, N.P.: Linear stability of steady sliding in point contacts with velocity dependent and LuGre type friction. J. Sound Vib. 301, 1023 (2007)
[20] De Wit, C.C., Olsson, H., Astrom, K.J., Lischinsky, P.: A new model for control of systems with friction. IEEE Trans. Autom. Control 40(3), 419–425 (1995)
[21] Awrejcewicz, J., Olejnik, P.: Analysis of dynamic systems with various friction laws. Appl. Mech. Rev. 58(6), 389–411 (2005)
[22] Brommundt, E., Krämer, E.: Instability and self-excitation caused by a gear coupling in a simple rotor system. Forschung im Ingenieurwesen 70(1), 25–37 (2005)
[23] Hetzler, H., Schwarzer, D., Seemann, W.: Analytical investigation of steady-state stability and Hopf-bifurcations occurring in sliding friction oscillators with application to low-frequency disc brake noise. Commun. Nonlinear Sci. Numer. Simul. 12(1), 83–99 (2007)
[24] Papangelo, A., Grolet, A., Salles, L., Hoffmann, N., Ciavarella, M.: Snaking bifurcations in a self-excited oscillator chain with cyclic symmetry. Commun. Nonlinear Sci. Numer. Simul. 44, 108–119 (2017)
[25] Hoffmann, N.: Transient growth and stick–slip in sliding friction. J. Appl. Mech. 73(4), 642–647 (2006)
[26] Saha, A., Bhattacharya, B., Wahi, P.: A comparative study on the control of friction-driven oscillations by time-delayed feedback. Nonlinear Dyn. 60(1–2), 15–37 (2010)
[27] Saha, A., Wahi, P., Bhattacharya, B.: Characterization of friction force and nature of bifurcation from experiments on a single-degree-of-freedom system with friction-induced vibrations. Tribol. Int. 98, 220–228 (2016)
[28] Bar-Sinai, Y., Spatschek, R., Brener, E.A., Bouchbinder, E.: On the velocity-strengthening behavior of dry friction. J. Geophys. Res. Solid Earth 119(3), 1738–1748 (2014)
[29] Jacobson, B.: The Stribeck memorial lecture. Tribol. Int. 36(11), 781–789 (2003)
[30] Stribeck, R.: Kugellager für beliebige Belastungen. Zeitschrift des Vereines deutscher Ingenieure (part I) 45(3), 73–79 (1901)
[31] Stribeck, R.: Kugellager für beliebige Belastungen. Zeitschrift des Vereines deutscher Ingenieure (part II) 45(4), 118–125 (1901)
[32] Stribeck, R.: Die wesentlichen Eigenschaften der Gleit- und Rollenlager. Zeitschrift des Vereines deutscher Ingenieure (part I) 46(37), 1341–1348 (1902)
[33] Stribeck, R.: Die wesentlichen Eigenschaften der Gleit- und Rollenlager. Zeitschrift des Vereines deutscher Ingenieure (part II) 46(38), 1432–1438 (1902)
[34] Stribeck, R.: Die wesentlichen Eigenschaften der Gleit- und Rollenlager. Zeitschrift des Vereines deutscher Ingenieure (part III) 46(39), 1463–1470 (1902)
[35] Papangelo, A., Ciavarella, M.: Some observations on Bar Sinai, Brener and Bouchbinder (BSBB) model for friction. Meccanica 52, 1–8 (2016)
[36] Bouchbinder, E., Brener, E.A., Barel, I., Urbakh, M.: Slow cracklike dynamics at the onset of frictional sliding. Phys. Rev. Lett. 107(23), 235501 (2011)
[37] Bar Sinai, Y., Brener, E.A., Bouchbinder, E.: Slow rupture of frictional interfaces. Geophys. Res. Lett. 39(3), L03308 (2012). doi:10.1029/2011GL050554
[38] Rabinowicz, E.: The nature of the static and kinetic coefficients of friction. J. Appl. Phys.
22(11), 1373–1379 (1951)
[39] Leine, R.I., Van Campen, D.H., De Kraker, A., Van den Steen, L.: Stick–slip vibrations induced by alternate friction models. Nonlinear Dyn. 16(1), 41–54 (1998)
[40] Oberst, S., Zhang, Z., Lai, J.: Model updating of brake components and subassemblies for improved numerical modelling in brake squeal. In: Proceedings of the International Congress on Sound and Vibration (ICSV22), Florence, Italy (2015)
[41] Leine, R.I., Van Campen, D.H.: Discontinuous bifurcations of periodic solutions. Math. Comput. Model. 36(3), 259–273 (2002)
Acknowledgements: A. P. and N. H. are thankful to the DFG (German Research Foundation) for funding the project HO 3852/11-1.
Affiliations: Department of Mechanical Engineering, Hamburg University of Technology, Am Schwarzenberg-Campus 1, 21073 Hamburg, Germany (A. Papangelo, N. Hoffmann); Department of Mechanical Engineering, Center of Excellence in Computational Mechanics, Politecnico di Bari, Viale Gentile 182, 70126 Bari, Italy (M. Ciavarella); Imperial College London, Exhibition Road, London SW7 2AZ, UK (N. Hoffmann).
Correspondence to A. Papangelo.
Cite this article: Papangelo, A., Ciavarella, M., Hoffmann, N.: Subcritical bifurcation in a self-excited single-degree-of-freedom system with velocity weakening–strengthening friction law: analytical results and comparison with experiments. Nonlinear Dyn. 90, 2037–2046 (2017). https://doi.org/10.1007/s11071-017-3779-4
Keywords: Mass-on-moving-belt model; Exponentially decaying weakening–strengthening friction law; Bistable equilibrium; Subcritical bifurcation
A soft set theoretic approach to an AG-groupoid via ideal theory with applications
Faisal Yousafzai1 & Mohammed M. Khalaf2
In this paper, we study the structural properties of a non-associative algebraic structure called an AG-groupoid by using soft set theory. We characterize a right regular class of an AG-groupoid in terms of soft intersection ideals and provide counterexamples to discuss the converse part of various problems. We also characterize a weakly regular class of an AG***-groupoid by using generated ideals and soft intersection ideals. We investigate the relationship between the SI-left-ideal, SI-right-ideal, SI-two-sided-ideal, and SI-interior-ideal of an AG-groupoid over a universe set by providing some practical examples. The concept of soft set theory was introduced by Molodtsov in [16]. This theory can be used as a generic mathematical tool for dealing with uncertainties. In soft set theory, the problem of setting the membership function does not arise, which makes the theory easily applied to many different fields [1, 2, 5–9]. At present, the research work on soft set theory in algebraic fields is progressing rapidly [19, 21–23]. A soft set is a parameterized family of subsets of the universe set. In the real world, the parameters of this family arise from the viewpoint of fuzzy set theory. Most of the researchers of algebraic structures have worked on the fuzzy aspect of soft sets. Soft set theory was applied to the field of optimization by Kovkov in [12]. Several similarity measures have been discussed in [15], decision-making problems have been studied in [21], and the reduction of fuzzy soft sets and its applications in decision-making problems have been analyzed in [13]. The notions of soft numbers, soft derivatives, soft integrals, and many more have been formulated in [14]. This concept has been used for forecasting the export and import volumes in international trade [28]. A. Sezgin introduced the concept of soft sets in non-associative semigroups in [24] and studied soft intersection left (right, two-sided) ideals, (generalized) bi-ideals, interior ideals, and quasi-ideals in an AG-groupoid. A lot of work has been done on the applications of soft sets to non-associative rings by T. Shah et al. in [25, 26]. They have characterized the non-associative rings through soft M-systems and different soft ideals to get generalized results. This paper is the continuation of the work carried out by F. Yousafzai et al. in [29], in which they define the smallest one-sided ideals in an AG-groupoid and use them to characterize a strongly regular class of an AG-groupoid along with its semilattices and soft intersection left (right, two-sided) ideals and bi-ideals. The main motivation behind this paper is to study some structural properties of a non-associative structure, as it has not attracted much attention compared to associative structures. We investigate the notions of SI-left-ideal, SI-right-ideal, SI-two-sided-ideal, and SI-interior-ideal in an AG-groupoid. We provide examples/counterexamples for these SI-ideals and study the relationship between them in detail. As an application of our results, we get characterizations of a right regular AG-groupoid and a weakly regular AG***-groupoid in terms of the SI-left-ideal, SI-right-ideal, SI-two-sided-ideal, and SI-interior-ideal. AG-groupoids An AG-groupoid is a non-associative and non-commutative algebraic structure lying in a gray area between a groupoid and a commutative semigroup.
The commutative law is given by abc=cba for a ternary operation. By putting brackets on the left of this equation, i.e., (ab)c=(cb)a, M. A. Kazim and M. Naseeruddin introduced in 1972 a new algebraic structure called a left almost semigroup, abbreviated as an LA-semigroup [10]. This identity is called the left invertive law. P. V. Protic and N. Stevanovic called the same structure an Abel-Grassmann's groupoid, abbreviated as an AG-groupoid [20]. This structure is closely related to a commutative semigroup, because a commutative AG-groupoid is a semigroup [17]. It was proved in [10] that an AG-groupoid S is medial, that is, ab·cd=ac·bd holds for all a,b,c,d∈S. An AG-groupoid may or may not contain a left identity. The left identity of an AG-groupoid permits the inverses of elements in the structure. If an AG-groupoid contains a left identity, then this left identity is unique [17]. In an AG-groupoid S with left identity, the paramedial law ab·cd=dc·ba holds for all a,b,c,d∈S. By using the medial law with left identity, we get a·bc=b·ac for all a,b,c∈S. We should genuinely acknowledge that much of the ground work has been done by M. A. Kazim, M. Naseeruddin, Q. Mushtaq, M. S. Kamran, P. V. Protic, N. Stevanovic, M. Khan, W. A. Dudek, and R. S. Gigon. The reader is referred to [3, 4, 11, 17, 18, 20, 27] in this regard. A nonempty subset A of an AG-groupoid S is called a left (right, interior) ideal of S if SA⊆A (AS⊆A, SA·S⊆A). By a two-sided ideal, or simply an ideal, we mean a nonempty subset of an AG-groupoid S which is both a left and a right ideal of S. In [23], Sezgin and Atagun introduced some new operations on soft set theory and defined soft sets in the following way: Let U be an initial universe set, E a set of parameters, P(U) the power set of U, and A⊆E. Then, a soft set fA over U is a function defined by: $$f_{A}:E\rightarrow P(U) \text{ such that } f_{A}(x)=\emptyset,\ \text{if}\ x\notin A. $$ Here, fA is called an approximate function. A soft set over U can be represented by the set of ordered pairs as follows: $$f_{A}=\left \{ (x,\ f_{A}(x)):\ x\in E,\ f_{A}(x)\in P(U)\right \}. $$ It is clear that a soft set is a parameterized family of subsets of U. The set of all soft sets is denoted by S(U). Let fA,fB∈S(U). Then, fA is a soft subset of fB, denoted by \(f_{A}\overset{\sim }{\subseteq }f_{B}\), if fA(x)⊆fB(x) for all x∈S. Two soft sets fA,fB are said to be equal soft sets, denoted by \(f_{A}\overset{\sim }{=}f_{B}\), if \(f_{A}\overset{\sim }{\subseteq }f_{B}\) and \(f_{B}\overset{\sim }{\subseteq }f_{A}\). The union of fA and fB, denoted by \(f_{A}\overset{\sim }{\cup }f_{B}\), is defined by \(f_{A}\overset{\sim }{\cup }f_{B}=f_{A\cup B}\), where fA∪B(x)=fA(x)∪fB(x), ∀x∈E. In a similar way, we can define the intersection of fA and fB. Let fA,fB∈S(U). Then, the soft product [23] of fA and fB, denoted by fA∘fB, is defined as follows: $$(f_{A}\circ f_{B})(x)=\left \{ \begin{array}{ll} \bigcup \limits_{x=yz}\{f_{A}(y)\cap f_{B}(z)\} & \text{if }\exists\ y,z\in S\ \ni\ x=yz \\ \emptyset & \text{otherwise} \end{array} \right.. $$
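To make these definitions concrete, the following Python sketch builds a standard finite AG-groupoid (\(\mathbb{Z}_{n}\) under \(a\ast b=(b-a)\bmod n\), our example rather than one taken from this paper), verifies the left invertive and medial laws and the left identity 0, and computes a soft product.

from itertools import product

# A standard AG-groupoid: Z_n under a*b = (b - a) mod n, with left identity 0.
n = 5
S = range(n)
op = lambda a, b: (b - a) % n

# Left invertive law (ab)c = (cb)a, medial law ab.cd = ac.bd, left identity
assert all(op(op(a, b), c) == op(op(c, b), a) for a, b, c in product(S, repeat=3))
assert all(op(op(a, b), op(c, d)) == op(op(a, c), op(b, d))
           for a, b, c, d in product(S, repeat=4))
assert all(op(0, a) == a for a in S)

# Soft sets as maps S -> subsets of a universe U; soft product as defined above:
# (fA o fB)(x) = union over all factorizations x = y*z of fA(y) & fB(z)
U = frozenset(range(8))
fA = {0: U, 1: frozenset({1, 2}), 2: frozenset({2, 3}), 3: frozenset(), 4: frozenset({1})}
fB = {s: frozenset({s, s + 1}) for s in S}

def soft_product(f, g):
    h = {x: frozenset() for x in S}
    for y, z in product(S, repeat=2):
        h[op(y, z)] |= f[y] & g[z]
    return h

print(soft_product(fA, fB))

In this groupoid every x has many factorizations x=yz, so the union in the soft product typically runs over several terms.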
Let fA be a soft set of an AG-groupoid S over a universe U. Then, fA is called a soft intersection left ideal (right ideal, interior ideal), briefly an SI-left-ideal (SI-right-ideal, SI-interior-ideal), of S over U if it satisfies fA(xy)⊇fA(y) (fA(xy)⊇fA(x), fA(xy·z)⊇fA(y)) for all x,y,z∈S. A soft set fA is called a soft intersection two-sided ideal (briefly, SI-two-sided-ideal) of S over U if fA is both an SI-left-ideal and an SI-right-ideal of S over U. Let A be a nonempty subset of S. We denote by XA the soft characteristic function of A and define it as follows: $$X_{A}=\left \{ \begin{array}{ll} U & \text{if }x\in A \\ \emptyset & \text{if }x\notin A \end{array} \right.. $$ Note that the soft characteristic mapping of the whole set S, denoted by XS, is called the identity soft mapping. Basic results [29] For a nonempty subset A of an AG-groupoid S, the following conditions are equivalent: (i) A is a left ideal (right ideal, interior ideal) of S; (ii) the soft set XA of S over U is an SI-left-ideal (SI-right-ideal, SI-interior-ideal) of S over U. [29] Let S be an AG-groupoid. For ∅≠A,B⊆S, the following assertions hold: (i) \(X_{A}\overset{\sim }{\cap }X_{B}=X_{A\cap B}\); (ii) XA∘XB=XAB. [29] The set (S(U),∘) forms an AG-groupoid and satisfies all the basic laws. [29] If S is an AG-groupoid, then XS∘XS=XS. Let fA be any soft set of a right regular AG-groupoid S with left identity over U. Then, fA is an SI-right-ideal (SI-left-ideal, SI-interior-ideal) of S over U if and only if fA=fA∘XS (fA=XS∘fA, fA=(XS∘fA)∘XS) and fA is soft semiprime. It is simple. □ For every SI-interior-ideal fA of a right regular AG-groupoid S with left identity over U, fA=XS∘fA=fA∘XS. Assume that fA is any SI-interior-ideal of S with left identity over U. Then, by using Remark 2 and Lemma 3, we have XS∘fA=(XS∘XS)∘fA=(fA∘XS)∘XS=(fA∘XS)∘(XS∘XS)=(XS∘XS)∘(XS∘fA)=((XS∘fA)∘XS)∘XS=fA∘XS and XS∘fA=(XS∘XS)∘fA=(fA∘XS)∘XS=(XS∘fA)∘XS=fA. □ [29] Let fA be any soft set of an AG-groupoid S over U. Then, fA is an SI-right-ideal (SI-left-ideal) of S over U if and only if \(f_{A}\circ X_{S}\overset{\sim }{\subseteq }f_{A}\) (\(X_{S}\circ f_{A}\overset{\sim }{\subseteq }f_{A}\)). A right (left, two-sided) ideal R of an AG-groupoid S is semiprime if and only if XR is soft semiprime over U. Let R be a right ideal of S. By Lemma 1, XR is an SI-right-ideal of S over U. If a∈S, then by the given assumption (XR)(a)⊇(XR)(a2). Now a2∈R implies that a∈R. Thus every right ideal of S is semiprime. The converse is simple. Similarly, every left or two-sided ideal of S is semiprime if and only if its soft characteristic function is soft semiprime over U. □ Corollary 1 If any SI-right-ideal (SI-left-ideal, SI-two-sided-ideal) of an AG-groupoid S is soft semiprime, then any right (left, two-sided) ideal of S is semiprime. The converse of Lemma 6 is not true in general, which can be seen from the following example. Let us consider an initial universe set U given by \(U=\mathbb {Z}\), and let S={1,2,3,4,5} be a set of parameters with the following binary operation. It is easy to check that (S,∗) is an AG-groupoid with left identity 4. Notice that the only left ideals of S are {1,2,5}, {1,3,5}, {1,2,3,5} and {1,5}, respectively, which are semiprime. Clearly, the right and two-sided ideals of S are {1,2,3,5} and {1,5}, which are also semiprime.
On the other hand, let A=S and define a soft set fA of S over U as follows: \(f_{A}(x)=\left \{ \begin{array}{ll} \mathbb {Z} & \text{if}\ x=1 \\ 4\mathbb {Z} & \text{if}\ x=2 \\ 4\mathbb {Z} & \text{if}\ x=3 \\ 8\mathbb {Z} & \text{if}\ x=4 \\ 2\mathbb {Z} & \text{if}\ x=5 \end{array} \right \}.\) Then, fA is an SI-right-ideal (SI-left-ideal, SI-two-sided-ideal) of S over U, but fA is not soft semiprime. Indeed, \(f_{A}(2)\nsupseteq f_{A}(2^{2})\). If any SI-interior-ideal of an AG-groupoid S with left identity over U is soft semiprime over U, then any interior ideal of S is semiprime. The converse is not true in general. The following lemma will be used frequently in the sequel without further mention. Let S be an AG-groupoid with left identity. Then, Sa and Sa2 are a left ideal and an interior ideal of S, respectively. Right regular AG-groupoids An element a of an AG-groupoid S is called a left (right) regular element of S if there exists some x∈S such that a=a2x (a=xa2), and S is called left (right) regular if every element of S is left (right) regular. Let S be an AG-groupoid with left identity. Then, the concepts of left and right regularity coincide in S. Indeed, for every a∈S there exist some x,y∈S such that a=xa2=a2y, since a=xa2=ex·aa=aa·xe=a2y, and a=a2y=xa2 also holds in a similar way. Let us give an example of an AG-groupoid which will be used for the converse parts of various problems in this section. Let us consider an AG-groupoid S={1,2,3,4,5} with left identity 4 defined in the following multiplication table. It is easy to check that S is non-commutative and non-associative. An AG-groupoid S is called left (right) duo if every left (right) ideal of S is a two-sided ideal of S, and is called duo if it is both left and right duo. Similarly, an AG-groupoid S is called SI-left (SI-right) duo if every SI-left-ideal (SI-right-ideal) of S over U is an SI-two-sided-ideal of S over U, and S is called SI-duo if it is both SI-left and SI-right duo. If every SI-left-ideal of an AG-groupoid S with left identity over U is an SI-interior-ideal of S over U, then S is left duo. Let I be any left ideal of S with left identity. Now by Lemma 1, the soft characteristic function XI is an SI-left-ideal of S over U. Thus, by hypothesis, XI is an SI-interior-ideal of S over U, and by using Lemma 1 again, I is an interior ideal of S. Thus IS=I·SS=S·IS=SS·IS=SI·SS=SI·S⊆I. This shows that S is left duo. □ The converse part of Lemma 8 is not true in general. Let us consider an AG-groupoid S (from Example 2). It is easy to see that S is left duo because the only left ideal of S is {1,5}, which is also a right ideal of S. Let A=S and define a soft set fA of S over U={p1,p2,p3,p4,p5,p6} as follows: \(f_{A}(x)=\left \{ \begin{array}{ll} U & \text{if}\ x=1 \\ \{p_{1},p_{2},p_{3},p_{4}\} & \text{if}\ x=2 \\ \{p_{2},p_{3},p_{4},p_{5}\} & \text{if}\ x=3 \\ \{p_{3},p_{4},p_{5}\} & \text{if}\ x=4 \\ \{p_{1},p_{2},p_{3},p_{4},p_{5}\} & \text{if}\ x=5 \end{array} \right \}.\) Then, it is easy to see that fA is an SI-left-ideal of S over U but it is not an SI-interior-ideal of S over U, because \(f_{A}((4\ast 2)\ast 4)\nsupseteq f_{A}(2)\). Every interior ideal of an AG-groupoid S with left identity is a right ideal of S. Every SI-right-ideal of an AG-groupoid S with left identity is an SI-interior-ideal of S over U if and only if S is right duo. Let S be a right regular AG-groupoid with left identity. Then, S is left duo if and only if every SI-left-ideal of S over U is an SI-interior-ideal of S over U.
Necessity. Let a right regular S with left identity be left duo, and assume that fA is any SI-left-ideal of S over U. Let a,b,c∈S; then b=xb2 for some x∈S. Since Sa is a left ideal of S, by hypothesis Sa is a two-sided ideal of S. Thus, ab·c=a(x·bb)·c=a(b·xb)·c=b(a·xb)·c=c(a·xb)·b. It follows that ab·c∈S(a·SS)·b⊆(S·aS)b=(SS·aS)b=(Sa·SS)b⊆(Sa·S)b⊆Sa·b. Thus, ab·c=ta·b for some t∈S, and therefore fA(ab·c)=fA(ta·b)⊇fA(b), which implies that fA is an SI-interior-ideal of S over U. Sufficiency. It follows from Lemma 8. □ By the left-right dual of the above theorem, we have the following theorem: Let S be a right regular AG-groupoid with left identity. Then, S is right duo if and only if every SI-right-ideal of S over U is an SI-interior-ideal of S over U. A nonempty subset A of a right regular AG-groupoid S with left identity is a two-sided ideal of S if and only if it is an interior ideal of S. Lemma 10 Every left ideal of an AG-groupoid S with left identity is an interior ideal of S if S is an SI-left duo. It follows from Lemmas 1 and 9. □ The converse of Lemma 10 is not true in general. The only left ideal of S (from Example 2) is {1,2}, which is also an interior ideal of S. Let A={2,3,4,5} and define a soft set fA of S over \(U=\mathbb {Z}\) as follows: \(f_{A}(x)=\left \{ \begin{array}{ll} 4\mathbb {Z} & \text{if}\ x=2 \\ 8\mathbb {Z} & \text{if}\ x=3 \\ 16\mathbb {Z} & \text{if}\ x=4 \\ 2\mathbb {Z} & \text{if}\ x=5 \end{array} \right \}\). Then, it is easy to see that fA is an SI-left-ideal of S over U but it is not an SI-right-ideal of S over U, because \(f_{A}(2\ast 4)\nsupseteq f_{A}(2)\). It is easy to see that every SI-right-ideal of S with left identity over U is an SI-left-ideal of S over U. Every SI-right-ideal of an AG-groupoid S with left identity is an SI-left-ideal of S over U, but the converse is not true in general. Every right ideal of an AG-groupoid S with left identity is an interior ideal of S if and only if S is an SI-right duo. It is straightforward. □ Let S be a right regular AG-groupoid with left identity. Then, S is an SI-left duo if and only if every left ideal of S is an interior ideal of S. The direct part follows from Lemma 10. The converse is simple. □ By the left-right dual of the above theorem, we have the following theorem. Let S be a right regular AG-groupoid with left identity. Then, S is an SI-right duo if and only if every right ideal of S is an interior ideal of S. Let S be an AG-groupoid with left identity and E={x∈S:x=x2}⊆S. Then the following assertions hold: (i) E forms a semilattice; (ii) E is a singleton set if a=ax·a, ∀a,x∈S.
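Before the characterization theorems below, the right regularity condition itself can be checked by brute force on the finite example introduced earlier (our \(\mathbb{Z}_{5}\) groupoid, with S and op as in that sketch):

# Verify that every element is right regular, i.e., a = x * a^2 for some x
def right_regular_witness(a):
    a2 = op(a, a)
    return next((x for x in S if op(x, a2) == a), None)

assert all(right_regular_witness(a) is not None for a in S)
# In this groupoid a*a = 0 for all a and op(x, 0) = (-x) mod n, so the
# witness is x = (-a) mod n, consistent with the definition above.
print({a: right_regular_witness(a) for a in S})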
For an AG-groupoid S with left identity, the following conditions are equivalent: (i) S is right regular; (ii) for any interior ideal I of S: (a) I⊆I2, (b) I is semiprime; (iii) for any SI-interior-ideal fA of S over U: (a) \(f_{A}\overset{\sim }{\subseteq }f_{A}\circ f_{A}\), (b) fA is soft semiprime over U; (iv) S is right regular and |E|=1, (a=ax·a, ∀a,x∈E); (v) S is right regular and ∅≠E⊆S is a semilattice. (i)⇒(v)⇒(iv) follows from Theorem 7. (iv)⇒(iii): (a). Let fA be any SI-interior-ideal of a right regular S with left identity. Thus, for each a∈S, there exists some x∈S such that a=x·aa=a·xa=a·x(x·aa)=a·(ex)(a·xa)=a·(xa·a)(xe). Therefore, $$\begin{aligned} (f_{A}\circ f_{A})(a)&=\bigcup \limits_{a=a\cdot (xa\cdot a)(xe)}\left \{ f_{A}(a)\cap f_{A}((xa\cdot a)(xe))\right \} \\ &\supseteq f_{A}(a)\cap f_{A}((xa\cdot a)(xe))\supseteq f_{A}(a)\cap f_{A}(a)=f_{A}(a)\text{.} \end{aligned} $$ This shows that \(f_{A}\overset{\sim }{\subseteq }f_{A}\circ f_{A}\). (b). Also, $$\begin{aligned} a&=x\cdot aa=ex\cdot aa=aa\cdot xe=(a\cdot xa^{2})(xe)=(x\cdot aa^{2})(xe)=x(ea\cdot aa)\cdot (xe) \\ &=x(aa\cdot ae)\cdot (xe)=(aa)(x\cdot ae)\cdot (xe)=(ae\cdot x)(aa)\cdot (xe)=(ae\cdot x)a^{2}\cdot (xe). \end{aligned} $$ This implies that fA(a)=fA((ae·x)a2·(xe))⊇fA(a2). Hence, fA is soft semiprime. (iii)⇒(ii): (a). Assume that I is any interior ideal of S; then by using Lemma 1, XI is an SI-interior-ideal of S over U. Let i∈I; then by using Lemma 2, we have \(U=X_{I}(i)\subseteq (X_{I}\circ X_{I})(i)=X_{I^{2}}(i)\). Hence, i∈I2, and therefore I⊆I2. (b). Let i2∈I. Then, by the given assumption, we have XI(i)⊇XI(i2)=U. This implies that i∈I, and therefore I is semiprime. (ii)⇒(i): Let a∈S with left identity. Since Sa2 is an interior ideal of S and clearly a2∈Sa2, by the given assumption a∈Sa2. Hence, S is right regular. □ Every SI-interior-ideal of a right regular AG-groupoid S with left identity is soft semiprime over U. Let fI be any SI-interior-ideal of a right regular S with left identity. Then, for each a∈S, there exists some x∈S such that fI(a)=fI(x·aa)=fI(a·xa)=fI(xa2·xa)⊇fI(a2). □ Let I be an interior ideal of an AG-groupoid S. Then, I is semiprime if and only if XI is soft semiprime over U. Let S be an AG-groupoid with left identity. Then, S is right regular if and only if every SI-interior-ideal fA of S over U is soft idempotent and soft semiprime. Necessity: Let fA be any SI-interior-ideal of a right regular S with left identity over U. Then, clearly \(f_{A}\circ f_{A}\overset{\sim }{\subseteq }f_{A}\). Now for each a∈S, there exists some x∈S such that a=x·aa=a·xa=ea·xa=ax·ae=(ae·x)a. Thus, $$\begin{aligned} (f_{A}\circ f_{A})(a)&=\bigcup \limits_{a=(ae\cdot x)a}\left \{ f_{A}(ae\cdot x)\cap f_{A}(a)\right \} \supseteq f_{A}(ae\cdot x)\cap f_{A}(a) \\ &\supseteq f_{A}(a)\cap f_{A}(a)=(f_{A}\cap f_{A})(a)=f_{A}(a)\text{.} \end{aligned} $$ This shows that fA is soft idempotent over U. Again, a=ex·aa=aa·xe=a2·xe. Therefore, fA(a)=fA(a2·xe)⊇fA(a2). Hence, fA is soft semiprime over U. Sufficiency: Since Sa2 is an interior ideal of S, by Lemma 1 its soft characteristic function \(X_{Sa^{2}}\) is an SI-interior-ideal of S over U, which is soft idempotent by the given assumption. Since, by the given assumption, \(X_{Sa^{2}}\) is also soft semiprime over U, by Corollary 4, Sa2 is semiprime, so a2∈Sa2 implies a∈Sa2. Thus, by using Lemma 2, we have \(X_{Sa^{2}}\circ X_{Sa^{2}}=X_{Sa^{2}}\) and \(X_{Sa^{2}}\circ X_{Sa^{2}}=X_{(Sa^{2}\cdot Sa^{2})}\). Thus, we get \(X_{(Sa^{2}\cdot Sa^{2})}=X_{Sa^{2}}\). This implies that \(X_{(Sa^{2}\cdot Sa^{2})}(a)=X_{Sa^{2}}(a)=U\). Therefore, a∈Sa2·Sa2=a2S·Sa2=(Sa2·S)a2⊆Sa2. Hence, S is right regular. □ Every SI-interior-ideal of a right regular AG-groupoid S with left identity over U is soft idempotent. Let fA be any SI-interior-ideal of a right regular S with left identity over U. Then, by using Lemma 4, \(f_{A}\circ f_{A}\overset{\sim }{\subseteq }f_{A}\).
Since S is right regular, for every a∈S there exists some x∈S such that a=x·aa=a·xa=xa2·xa=ax·a2x=(a2x·x)a=(xx·aa)a=(aa·x2)a. Therefore, $$\begin{aligned} (f_{A}\circ f_{A})(a)&=\bigcup \limits_{a=(aa\cdot x^{2})a}\left \{ f_{A}(aa\cdot x^{2})\cap f_{A}(a)\right \} \supseteq f_{A}(aa\cdot x^{2})\cap f_{A}(a) \\ &\supseteq f_{A}(a)\cap f_{A}(a)=(f_{A}\cap f_{A})(a)\text{.} \end{aligned} $$ Thus, fA∘fA=fA. □ Theorem 10 Let S be an AG-groupoid with left identity and fA be any SI-interior-ideal of S over U. Then, S is right regular if and only if fA=(XS∘fA)2 and fA is soft semiprime. Necessity: Let fA be any SI-interior-ideal of a right regular S with left identity over U. Then, by using Lemmas 4 and 2, we have $$(X_{S}\circ (X_{S}\circ f_{A}))\circ X_{S}=(X_{S}\circ f_{A})\circ X_{S}=(f_{A}\circ X_{S})\circ X_{S}=(X_{S}\circ X_{S})\circ f_{A}=X_{S}\circ f_{A}. $$ This shows that XS∘fA is an SI-interior-ideal of S over U. Now by using Lemmas 11 and 4, we have (XS∘fA)2=XS∘fA=fA. It is easy to see that fA is soft semiprime. Sufficiency: Let fA=(XS∘fA)2 hold for any SI-interior-ideal fA of S over U. Then, by the given assumption and Lemma 14, we get \(f_{A}=(X_{S}\circ f_{A})^{2}=f_{A}^{2}\). Thus, by using Theorem 9, S is right regular. □ Let S be an AG-groupoid with left identity and fA be any SI-interior-ideal of S over U. Then, S is right regular if and only if \(f_{A}=X_{S}\circ f_{A}^{2}\) and fA is soft semiprime. From the above theorem, \(f_{A}=(X_{S}\circ f_{A})^{2}=(X_{S}\circ f_{A})(X_{S}\circ f_{A})=(X_{S}\circ f_{A})\circ f_{A}=(f_{A}\circ f_{A})\circ X_{S}=(f_{A}\circ f_{A})\circ (X_{S}\circ X_{S})=(X_{S}\circ X_{S})\circ (f_{A}\circ f_{A})=X_{S}\circ f_{A}^{2}\). □ Let S be an AG-groupoid with left identity and fA be any SI-left-ideal (SI-right-ideal, SI-two-sided-ideal) of S over U. Then, S is right regular if and only if fA is soft idempotent. Necessity: Let fA be an SI-left-ideal of a right regular S with left identity over U. Then, it is easy to see that \(f_{A}\circ f_{A}\overset{\sim }{\subseteq }f_{A}\). Let a∈S; then there exists x∈S such that a=aa·x=xa·a. Thus $$(f_{A}\circ f_{A})(a)=\bigcup \limits_{a=xa\cdot a}\{f_{A}(xa)\cap f_{A}(a)\} \supseteq f_{A}(a)\cap f_{A}(a)=f_{A}(a), $$ which implies that fA is soft idempotent. Sufficiency: Assume that fA∘fA=fA holds for every SI-left-ideal of S with left identity over U. Since Sa is a left ideal of S, by Lemma 1 it follows that XSa is an SI-left-ideal of S over U. Since a∈Sa, it follows that (XSa)(a)=U. By hypothesis and Lemma 2, we obtain (XSa)∘(XSa)=XSa and (XSa)∘(XSa)=XSa·Sa. Thus, we have (XSa·Sa)(a)=XSa(a)=U, which implies that a∈Sa·Sa. Therefore, a∈Sa·Sa=S2a2=Sa2. This shows that S is right regular. □ Let S be an AG-groupoid with left identity and fA be any SI-left-ideal (SI-right-ideal, SI-two-sided-ideal) of S over U. Then, S is right regular if and only if fA=(XS∘fA)∘(XS∘fA). Necessity: Let S be a right regular AG-groupoid with left identity and let fA be any SI-left-ideal of S over U. It is easy to see that XS∘fA is also an SI-left-ideal of S over U. By Lemma 12, we obtain \((X_{S}\circ f_{A})\circ (X_{S}\circ f_{A})=(X_{S}\circ f_{A})\overset{\sim }{\subseteq }f_{A}\). Let a∈S; then there exists x∈S such that a=aa·x=xa·a=(xa)(aa·x)=(xa)(xa·a).
Therefore, $$\begin{aligned} \left((X_{S}\circ f_{A})\circ (X_{S}\circ f_{A})\right) (a)&\supseteq (X_{S}\circ f_{A})(xa)\cap (X_{S}\circ f_{A})(xa\cdot a) \\ &\supseteq X_{S}(x)\cap f_{A}(a)\cap X_{S}(xa)\cap f_{A}(a)=f_{A}(a), \end{aligned} $$ which is what we set out to prove. Sufficiency: Suppose that fA=(XS∘fA)∘(XS∘fA) holds for every SI-left-ideal fA of S over U. Then \(f_{A}=(X_{S}\circ f_{A})\circ (X_{S}\circ f_{A})\overset{\sim }{\subseteq }f_{A}\circ f_{A}\overset{\sim }{\subseteq }X_{S}\circ f_{A}\overset{\sim }{\subseteq }f_{A}\). Thus, by Lemma 12, it follows that S is right regular. □ Let fA be any SI-interior-ideal of a right regular AG-groupoid S with left identity over U. Then, fA(a)=fA(a2) for all a∈S. Let fA be any SI-interior-ideal of a right regular S with left identity over U. For a∈S, there exists some x in S such that a=ex·aa=aa·xe=(xe·a)a=(xe·a)(ex·aa)=(xe·a)(aa·xe)=aa·(xe·a)(xe)=ea2·(xe·a)(xe). Therefore fA(a)=fA(ea2·(xe·a)(xe))⊇fA(a2)=fA(aa)=fA(a(ex·aa))=fA(a(aa·xe))=fA((aa)(a·xe))=fA((xe·a)(aa))⊇fA(a). That is, fA(a)=fA(a2), ∀a∈S. □ The converse of Lemma 13 is not true in general. Let us consider an AG-groupoid S (from Example 2). Let A={1,2,4,5} and define a soft set fA of S over \(U=\left \{ \left [ \begin{array}{cc} 0 & 0 \\ x & x \end{array} \right ] :x\in \mathbb {Z}_{3}\right \}\) (a set of 2×2 matrices with entries from \(\mathbb {Z}_{3}\)) as follows: \(f_{A}(x)=\left \{ \begin{array}{ll} \left \{ \left [ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right ],\left [ \begin{array}{cc} 0 & 0 \\ 1 & 1 \end{array} \right ],\left [ \begin{array}{cc} 0 & 0 \\ 2 & 2 \end{array} \right ] \right \} & \text{if}\ x=1 \\ \left \{ \left [ \begin{array}{cc} 0 & 0 \\ 1 & 1 \end{array} \right ],\left [ \begin{array}{cc} 0 & 0 \\ 2 & 2 \end{array} \right ] \right \} & \text{if}\ x=2 \\ \left \{ \left [ \begin{array}{cc} 0 & 0 \\ 2 & 2 \end{array} \right ] \right \} & \text{if}\ x=4 \\ \left \{ \left [ \begin{array}{cc} 0 & 0 \\ 1 & 1 \end{array} \right ],\left [ \begin{array}{cc} 0 & 0 \\ 2 & 2 \end{array} \right ] \right \} & \text{if}\ x=5 \end{array} \right \}.\) It is easy to see that fA is an SI-interior-ideal of S such that fA(x)⊇fA(x2), ∀x∈S, but S is not right regular. On the other hand, it is easy to see that every SI-two-sided-ideal of S over U is an SI-interior-ideal of S over U. Every SI-two-sided-ideal of a right regular AG-groupoid S with left identity over U is an SI-interior-ideal of S over U, but the converse is not true in general. For an AG-groupoid S with left identity, the following conditions are equivalent: (i) S is right regular; (ii) every interior ideal of S is semiprime; (iii) every SI-interior-ideal of S over U is soft semiprime; (iv) for every SI-interior-ideal fA of S over U, fA(a)=fA(a2), ∀a∈S. (i)⇒(iv) follows from Lemma 13. (iv)⇒(iii) and (iii)⇒(ii) are obvious. (ii)⇒(i): Since Sa2 is an interior ideal of S with left identity such that a2∈Sa2, by the given assumption we have a∈Sa2. Thus, S is right regular. □ Weakly regular AG***-groupoids An AG-groupoid S is called an AG***-groupoid [29] if the following conditions are satisfied: (i) for all a,b,c∈S, a·bc=b·ac; (ii) for all a∈S, there exist some b,c∈S such that a=bc. An AG-groupoid satisfying (i) is called an AG**-groupoid. The condition (ii) for an AG**-groupoid to become an AG***-groupoid is equivalent to S=S2. Let S={1,2,3,4} be an AG-groupoid defined in the following multiplication table. It is easy to verify that (S,·) is an AG***-groupoid.
Note that every AG-groupoid with left identity is an AG***-groupoid, but the converse is not true in general. The AG-groupoid in the above example is an AG***-groupoid, but it does not contain a left identity. Hence, we can say that an AG***-groupoid is a generalization of an AG-groupoid with left identity. An element a of an AG-groupoid S is called a weakly regular element of S if there exist some x,y∈S such that a=ax·ay, and S is called weakly regular if every element of S is weakly regular. Let S be an AG***-groupoid. Then, the concepts of weak and right regularity coincide in S. Let S be an AG***-groupoid. From now onward, R (resp. L) will denote any right (resp. left) ideal of S; \(\left \langle R\right \rangle _{a^{2}}\) will denote the right ideal Sa2∪a2 of S containing a2, and 〈L〉a will denote the left ideal Sa∪a of S containing a; fA (resp. gB) will denote any SI-right-ideal (resp. SI-left-ideal) of S over U unless otherwise specified. Let S be an AG***-groupoid. Then, S is weakly regular if and only if \(\left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}=\left \langle R\right \rangle _{a^{2}}^{2}\left \langle L\right \rangle _{a}^{2}\) and \(\left \langle R\right \rangle _{a^{2}}\) is semiprime. Necessity: Let S be weakly regular. It is easy to see that \(\left \langle R\right \rangle _{a^{2}}^{2}\left \langle L\right \rangle _{a}^{2}\subseteq \left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}\). Let \(a\in \left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}\). Then, there exist some x,y∈S such that $$\begin{aligned} a &=ax\cdot ay=(ax\cdot ay)x\cdot (ax\cdot ay)y=(x\cdot ay)(ax)\cdot (y\cdot ay)(ax) \\ &=(a\cdot xy)(ax)\cdot (ay^{2})(ax)=(a\cdot xy)(ax)\cdot (xa)(y^{2}a) \\ &\in (\left \langle R\right \rangle_{a^{2}}S\cdot \left \langle R\right \rangle_{a^{2}}S)(S\left \langle L\right \rangle_{a}\cdot S\left \langle L\right \rangle_{a})\subseteq \left \langle R\right \rangle_{a^{2}}^{2}\left \langle L\right \rangle_{a}^{2}, \end{aligned} $$ which shows that \(\left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}=\left \langle R\right \rangle _{a^{2}}^{2}\left \langle L\right \rangle _{a}^{2}\). It is easy to see that \(\left \langle R\right \rangle _{a^{2}}\) is semiprime. Sufficiency: Since Sa2∪a2 and Sa∪a are the right and left ideals of S containing a2 and a, respectively, by using the given assumption we get $$\begin{aligned} a &\in \left(Sa^{2}\cup a^{2}\right)\cap (Sa\cup a)=\left(Sa^{2}\cup a^{2}\right)^{2}(Sa\cup a)^{2} \\ &=\left(Sa^{2}\cup a^{2}\right)\left(Sa^{2}\cup a\right)\cdot (Sa\cup a)(Sa\cup a) \\ &\subseteq S\left(Sa^{2}\cup a\right)\cdot S(Sa\cup a)=\left(S\cdot Sa^{2}\cup Sa\right)(S\cdot Sa\cup Sa) \\ &=\left(a^{2}S\cdot S\cup Sa\right)(aS\cdot S\cup Sa)=\left(Sa^{2}\cup Sa\right)(Sa\cup Sa) \\ &=\left(a^{2}S\cup Sa\right)(Sa\cup Sa)=(Sa\cdot a\cup Sa)(Sa\cup Sa) \\ &\subseteq (Sa\cup Sa)(Sa\cup Sa)=Sa\cdot Sa=aS\cdot aS. \end{aligned} $$ This implies that S is weakly regular. □ Let S be an AG***-groupoid. Then, S is weakly regular if and only if \(\left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}=\left \langle L\right \rangle _{a}^{2}\left \langle R\right \rangle _{a^{2}}^{2}\) and \(\left \langle R\right \rangle _{a^{2}}\) is semiprime. Let S be an AG***-groupoid.
Then, the following conditions are equivalent: (i) S is weakly regular; (ii) \(\left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}=\left \langle L\right \rangle _{a}^{2}\left \langle R\right \rangle _{a^{2}}^{2}\) and \(\left \langle R\right \rangle _{a^{2}}\) is semiprime; (iii) R∩L=L2R2 and R is semiprime; (iv) \(f_{A}\overset{\sim }{\cap }g_{B}=(f_{A}\circ g_{B})\circ (f_{A}\circ g_{B})\) and fA is soft semiprime; (v) S is weakly regular and |E|=1, (a=ax·a, ∀a,x∈E); (vi) S is weakly regular and ∅≠E⊆S is a semilattice. (i)⇒(vi)⇒(v): It follows from Theorem 7. (v)⇒(iv): Let fA and gB be any SI-right-ideal and SI-left-ideal of a weakly regular S over U, respectively. From Lemma 5, it is easy to show that \((f_{A}\circ g_{B})\circ (f_{A}\circ g_{B})\overset{\sim }{\subseteq }f_{A}\overset{\sim }{\cap }g_{B}\). Now for a∈S, there exist some x,y∈S such that $$\begin{aligned} a &=ax\cdot ay=(ax\cdot ay)x\cdot (ax\cdot ay)y=(ax\cdot ay)\cdot ((ax\cdot ay)x)y \\ &=(ax\cdot ay)\cdot (yx)(ax\cdot ay)=(ax\cdot ay)\cdot (ax)(yx\cdot ay) \\ &=(ax\cdot ay)\cdot (ay\cdot yx)(xa)=(ax\cdot ay)\cdot ((yx\cdot y)a)(xa) \\ &=(ax)((yx\cdot y)a)\cdot (ay)(xa)=(ax)(ba)\cdot (ay)(xa),\ \text{where}\ yx\cdot y=b. \end{aligned} $$ $$\begin{aligned} ((f_{A}\circ g_{B})\circ (f_{A}\circ g_{B}))(a) &=\bigcup \limits_{a=(ax)(ba)\cdot (ay)(xa)}\{(f_{A}\circ g_{B})(ax\cdot ba) \cap (f_{A}\circ g_{B})(ay\cdot xa)\} \\ &\supseteq \bigcup \limits_{ax\cdot ba=ax\cdot ba}\{f_{A}(ax)\cap g_{B}(ba)\} \cap \bigcup \limits_{ay\cdot xa=ay\cdot xa}\{f_{A}(ay)\cap g_{B}(xa)\} \\ &\supseteq f_{A}(ax)\cap g_{B}(ba)\cap f_{A}(ay)\cap g_{B}(xa) \supseteq f_{A}(a)\cap g_{B}(a), \end{aligned} $$ which shows that \((f_{A}\circ g_{B})\circ (f_{A}\circ g_{B})\overset{\sim }{\supseteq }f_{A}\overset{\sim }{\cap }g_{B}\). Hence, \(f_{A}\overset{\sim }{\cap }g_{B}=(f_{A}\circ g_{B})\circ (f_{A}\circ g_{B})\). Also, by using Lemma 3, fA is soft semiprime. (iv)⇒(iii): Let R and L be any right and left ideals of S. Then, by using Lemma 1, XR and XL are an SI-right-ideal and an SI-left-ideal of S over U, respectively. Now by using Lemma 2, we get \(X_{R\cap L}=X_{R}\overset{\sim }{\cap }X_{L}=(X_{R}\circ X_{L})\circ (X_{R}\circ X_{L})=(X_{R}\circ X_{R})\circ (X_{L}\circ X_{L})=X_{R^{2}}\circ X_{L^{2}}=X_{R^{2}L^{2}}=X_{L^{2}R^{2}}\), which implies that R∩L=L2R2. (iii)⇒(ii): It is simple. (ii)⇒(i): It follows from Corollary 6. □ Let R be a right ideal and L be a left ideal of a unitary AG-groupoid S with left identity. Then, RL is a left ideal of S. For a unitary AG-groupoid S with left identity, the following conditions are equivalent: (i) S is weakly regular; (ii) \(\left \langle R\right \rangle _{a^{2}}\cap \left \langle L\right \rangle _{a}=\left \langle R\right \rangle _{a^{2}}\left \langle L\right \rangle _{a}\cdot \left \langle R\right \rangle _{a^{2}}\) and \(\left \langle R\right \rangle _{a^{2}}\) is semiprime; (iii) R∩L=RL·R and R is semiprime; (iv) \(f_{A}\overset{\sim }{\cap }g_{B}=(f_{A}\circ g_{B})\circ f_{A}\) and fA is soft semiprime; (v) S is weakly regular and |E|=1, (a=ax·a, ∀a,x∈E); (vi) S is weakly regular and ∅≠E⊆S is a semilattice. (i)⇒(vi)⇒(v): It follows from Theorem 7. (v)⇒(iv): Let fA and gB be any SI-right-ideal and SI-left-ideal of a weakly regular S over U, respectively. Now, for a∈S, there exist some x,y∈S such that a=ax·ay=ax·(ax·ay)y=((ax·ay)y·x)a=(xy·(ax·ay))a=(ax·(xy·ay))a=(ax·(a·(xy)y))a.
$$\begin{array}{@{}rcl@{}} ((f_{A}\circ g_{B})\circ f_{A})(a) &=&\bigcup \limits_{a=(ax\cdot (a\cdot (xy)y))a}\{(f_{A}\circ g_{B})(ax\cdot (a\cdot (xy)y))\cap g_{B}(a)\} \\ &\supseteq &\bigcup \limits_{ax\cdot (a\cdot (xy)y=ax\cdot (a\cdot (xy)y}\{f_{A}(ax)\cap g_{B}(a\cdot (xy)y)\} \cap g_{B}(a) \\ &\supseteq &f_{A}(ax)\cap g_{B}(a\cdot (xy)y)\cap g_{B}(a)\supseteq f_{A}(a)\cap g_{B}(a)\text{,} \end{array} $$ which shows that \((f_{A}\circ g_{B})\circ f_{A}\overset {\sim }{\supseteq } f_{A}\overset {\sim }{\cap }g_{B}\). By using Lemmas 5 and 3, it is easy to show that \((f_{A}\circ g_{B})\circ f_{A}\overset {\sim }{ \subseteq }f_{A}\overset {\sim }{\cap }g_{B}.\) Thus, \(f_{A}\overset {\sim }{ \cap }g_{B}=(f_{A}\circ g_{B})\circ f_{A}\). Also, by using Lemma 3, fA is softsemiprime. (iv)⇒(iii): Let R and L be any left and right ideals of S respectively. Then, by Lemma 1, XR and XL are the SI-right-ideal and SI -left-ideal of S over U respectively. Now, by using Lemmas 2, 14, we get \(X_{R\cap L}=X_{R}\overset {\sim }{\cap } X_{L}=(X_{R}\circ X_{L})\circ X_{L}=X_{RL\cdot R},\) which shows that R∩L=RL·R. Also, by using Lemma 6, R is semiprime. (iii)⇒(ii): It is obvious. (ii)⇒(i): Since Sa2∪a2 and Sa∪a are the right and left ideals of S containing a2 and a respectively. Thus, by using given assumption and Lemma, we get $$\begin{array}{@{}rcl@{}} a &\in &(Sa^{2}\cup a^{2})\cap (Sa\cup a)=(Sa^{2}\cup a^{2})(Sa\cup a)\cdot (Sa^{2}\cup a^{2}) \\ &\subseteq &S(Sa\cup a)\cdot (Sa^{2}\cup a^{2})=(S^{2}a\cup Sa)(Sa^{2}\cup a^{2}) \\ &=&(S^{2}a\cdot Sa^{2})\cup (S^{2}a\cdot a^{2})\cup (Sa\cdot Sa^{2})\cup (S^{2}a\cdot a^{2}) \\ &\subseteq &(Sa\cdot a^{2}S)\cup (Sa\cdot Sa)\cup (Sa\cdot a^{2}S)\cup (Sa\cdot Sa) \\ &\subseteq &(Sa\cdot Sa)\cup (Sa\cdot Sa)\cup (Sa\cdot Sa)\cup (Sa\cdot Sa) \\ &=&Sa\cdot Sa=aS\cdot aS. \end{array} $$ Hence, S is weakly regular. □ Comparison of SI-left (right, two-sided, interior) ideals A very major and an abstract conclusion from this section is that SI-left-ideal, SI-right-ideal and SI-interior-ideal need not to be coincide in an AG-groupoid S even if S has a left identity, but they will coincide in a right regular class of an AG-groupoid S with left identity. E-1. Take a collection of 8 chemicals as an initial universe set U given by U={s1,s2,s3,s4,s5,s6,s7,s8}. Let a set of parameters S={1,2,3,4,5} be a set of particular properties of each chemical in U with the following type of natures : 1 stands for the parameter "density", 2 stands for the parameter "melting point", 3 stands for the parameter "combustion", 4 stands for the parameter "enthalpy", 5 stands for the parameter "toxicity". Let us define the following binary operation on a set of parameters S as follows. It is easy to check that (S,∗) is non-commutative and non-associative. Also, by routine calculation, one can easily verify that (S,∗) forms an AG-groupoid with left identity 4. Note that S is left (right) regular. Indeed, for a∈S there does exists some x∈S such that a=xa2(a=a2x). Let A=S and define a soft set fA of S over U as follows : \(f_{A}(x)=\left \{ \begin {array}{c} \{s_{1},s_{2},s_{3},s_{4,}s_{5},s_{6}\}\ \text {if}\ x=1 \\ \{s_{2},s_{3},s_{4,}\}\ \text {if}\ x=2 \\ \{s_{2},s_{3}\ \text {if}\ x=3=4=5 \end {array} \right \}.\) Then, it is easy to verify that fA is an SI -interior-ideal of S over U. E-2. There are seven civil engineers in an initial universe set U given by U={s1,s2,s3,s4,s5,s6,s7}. 
E-2. There are seven civil engineers in an initial universe set U, given by U = {s1, s2, s3, s4, s5, s6, s7}. Let a set of parameters S = {1, 2, 3} be a set of statuses of each civil engineer in U, with the following attributes: 1 stands for the parameter "critical thinking", 2 stands for the parameter "decision making", and 3 stands for the parameter "project management". It is easy to check that (S,∗) is non-commutative and non-associative. One can easily verify that (S,∗) forms an AG-groupoid. Note that S is not left (right) regular: indeed, for 3 ∈ S there does not exist any x ∈ S such that 3 = x∗3² (3 = 3²∗x). Let A = S and define a soft set f_A of S over U as follows:
\(f_{A}(x)=\left\{\begin{array}{l} \{s_{1},s_{2},s_{3},s_{4}\}\ \text{if}\ x=1 \\ \{s_{1},s_{2},s_{3}\}\ \text{if}\ x=2 \\ \{s_{2},s_{3}\}\ \text{if}\ x=3 \end{array}\right.\)
Then, it is easy to verify that f_A is an SI-interior-ideal of S over U, but it is not an SI-left-ideal, an SI-right-ideal or an SI-two-sided-ideal of S, as can be seen from the following:
$$f_{A}(2\ast 2)\nsupseteq f_{A}(2)\ \text{and}\ f_{A}(3\ast 2)\nsupseteq f_{A}(2). $$

Every SI-right-ideal of an AG-groupoid S with left identity over U is an SI-left-ideal of S over U. The converse of the above lemma is not true in general, as can be seen from the following example.

E-3. Let us consider an AG-groupoid S with left identity 4, given in Example 1, with an initial universe set U = {s1, s2, ..., s12}. Let a set of parameters S = {1, 2, 3, 4, 5} be a set of statuses of houses, in which 1 stands for the parameter "beautiful", 2 stands for the parameter "cheap", 3 stands for the parameter "in good location", 4 stands for the parameter "in green surroundings", and 5 stands for the parameter "secure". It is important to note that S is not right regular, because for 3 ∈ S there does not exist any x ∈ S such that 3 = x∗3².
\(f_{A}(x)=\left\{\begin{array}{l} U\ \text{if}\ x=1 \\ \{s_{2},s_{3},s_{4},s_{5},s_{6},s_{7},s_{8}\}\ \text{if}\ x=2 \\ \{s_{2},s_{3},s_{4},s_{5},s_{6}\}\ \text{if}\ x=3 \\ \{s_{2},s_{3},s_{4},s_{5}\}\ \text{if}\ x=4 \\ \{s_{1},s_{2},s_{3},s_{4},s_{5},s_{6},s_{7},s_{8},s_{9},s_{10}\}\ \text{if}\ x=5 \end{array}\right.\)
It is easy to verify that f_A is an SI-left-ideal of S over U, but it is not an SI-right-ideal of S over U, because \(f_{A}(2\ast 4)\nsupseteq f_{A}(2)\). Also, one can easily see that f_A is an SI-interior-ideal of S over U but not an SI-two-sided-ideal of S over U. Note that every SI-two-sided-ideal of an AG-groupoid S with left identity over U is an SI-interior-ideal of S over U.

Let f_A be any soft set of a right regular AG-groupoid S with left identity over U. Then, f_A is an SI-left-ideal of S over U if and only if f_A is an SI-right-ideal of S over U, if and only if f_A is an SI-two-sided-ideal of S over U, if and only if f_A is an SI-interior-ideal of S over U.

Assume that f_A is any SI-left-ideal of a right regular S with left identity over U. Let a, b ∈ S. For a ∈ S, there exists some x ∈ S such that a = xa². Thus, ab = xa²·b = (a·xa)b = (b·xa)a; therefore, f_A(ab) = f_A((b·xa)a) ⊇ f_A(a). Now, by using Lemma 15, f_A is an SI-left-ideal of S over U if and only if f_A is an SI-right-ideal of S over U. Next, let f_A be any SI-right-ideal of a right regular S with left identity over U. For a, b, c ∈ S, we then have f_A(ab·c) = f_A((xa²·b)c) = f_A(cb·xa²) = f_A(a²x·bc) = f_A(b(a²x·c)) ⊇ f_A(b). Again, assume that f_A is any SI-interior-ideal of a right regular S with left identity over U. Thus, f_A(ab) ⊇ f_A(xa²·b) ⊇ f_A(a²) = f_A(xa²·xa²) = f_A(a²x·a²x) = f_A((aa)(a²x·x)) ⊇ f_A(a), which is what we set out to prove. □

Every AG-groupoid with left identity can be considered as an AG***-groupoid, but the converse is not true in general.
This leads us to the fact that an AG***-groupoid can be seen as a generalization of an AG-groupoid with left identity. Thus, the results of the "Right regular AG-groupoids" section carry over trivially to AG***-groupoids. The idea of soft sets in an AG-groupoid will help in verifying existing characterizations and in obtaining new and more general results in future work. Some of these directions are as follows:
1. To generalize the results of semigroups using soft sets.
2. To characterize the newly developed substructure called an AG***-groupoid through soft sets.
3. To study the structural properties of an AG-hypergroupoid by using soft sets.
4. To introduce and examine the concept of a Γ-AG-groupoid in terms of soft sets.
No data were used to support this study.
The authors are grateful for the reviewers' valuable comments, which improved the manuscript. This work received no external funding.
Military College of Engineering, National University of Sciences and Technology (NUST), Islamabad, Pakistan (Faisal Yousafzai)
Higher Institute of Engineering and Technology, King Mariout, Alexandria, Egypt (Mohammed M. Khalaf)
Both authors contributed equally. Both authors read and approved the final manuscript. Correspondence to Mohammed M. Khalaf.
Yousafzai, F., Khalaf, M.M. A soft set theoretic approach to an AG-groupoid via ideal theory with applications. J Egypt Math Soc 27, 58 (2019). https://doi.org/10.1186/s42787-019-0060-7
Keywords: Left invertive law; Soft sets; AG-groupoid; Right regularity; Weak regularity; SI-ideals
A CZT-based blood counter for quantitative molecular imaging
Romain Espagnet1 (ORCID: orcid.org/0000-0002-2186-7565), Andrea Frezza1, Jean-Pierre Martin2, Louis-André Hamel2, Laëtitia Lechippey1, Jean-Mathieu Beauregard3,4 & Philippe Després1,5
EJNMMI Physics volume 4, Article number: 18 (2017)

Robust quantitative analysis in positron emission tomography (PET) and in single-photon emission computed tomography (SPECT) typically requires the time-activity curve as an input function for the pharmacokinetic modeling of tracer uptake. For this purpose, a new automated tool for the determination of blood activity as a function of time is presented. The device, compact enough to be used on the patient bed, relies on a peristaltic pump for continuous blood withdrawal at user-defined rates. Gamma detection is based on a 20 × 20 × 15 mm3 cadmium zinc telluride (CZT) detector, read by custom-made electronics and a field-programmable gate array (FPGA)-based signal processing unit. A graphical user interface (GUI) allows users to select parameters and easily perform acquisitions. This paper presents the overall design of the device as well as results related to the detector performance in terms of stability, sensitivity and energy resolution. Results from a patient study are also reported. The device achieved a sensitivity of 7.1 cps/(kBq/mL) and a minimum detectable activity of 2.5 kBq/mL for 18F. The gamma counter also demonstrated excellent stability, with a deviation in count rates below 0.05% over 6 h. An energy resolution of 8% was achieved at 662 keV. The patient study was conclusive and demonstrated that the compact gamma blood counter developed has the sensitivity and the stability required to conduct quantitative molecular imaging studies in PET and SPECT.

Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are well-established molecular imaging modalities used in many fields of the biomedical sciences. They allow in vivo investigation of biological processes at the molecular level and provide valuable information on the onset and progression of diseases. The images obtained are based on a measurable number of nuclear disintegrations and, as such, are inherently quantitative. However, the quantitative nature of these modalities often goes unexploited, largely because tools and methods dedicated to quantitative imaging are lacking. Accurate quantification in PET and SPECT typically requires frequent assessments of blood activity, for example through manual sampling and subsequent measurements in a well counter. These methods, however, can be inaccurate and error-prone, and they expose the personnel to radiation and blood-pathogen health hazards. Manual sampling methods may also be difficult to apply to tracers with fast uptake. Image-derived TACs are also possible and have been investigated by several groups (see e.g. [1, 2] and references therein). These methods are less invasive, but they are not suitable for all anatomical sites, acquisition protocols and radiopharmaceuticals. They typically require the presence of a large blood vessel in the field-of-view (FOV), which is not always possible for dynamic acquisitions where the patient bed stays stationary over a particular FOV. Some image-based TACs also require corrections and sometimes suffer from methodological problems that hinder their widespread use [1].
Zanotti-Fregonara et al., for instance, found that none of eight image-based methods for TAC estimation was reliable for estimating the cerebral metabolic rate of glucose without resorting to blood sampling [2, 3]. Often, blood sampling is inevitable (to measure plasma vs whole-blood TACs, for example) and remains the gold standard in all cases. Automated blood counters were developed for these reasons. The majority of blood counters use a photomultiplier tube (PMT) coupled to a scintillation crystal such as BGO [4–8] or GSO [9, 10] to detect gammas. One device was built with an avalanche photodiode coupled to an LSO crystal [11]. This combination has the advantage of being compact compared to PMT-based solutions but suffers from a rather large temperature dependence. Other devices are based on PIN photodiodes and detect charged particles rather than gammas [12]. This design allows for compact devices but can suffer from low sensitivity, as the tubing used can stop a significant fraction of the electrons and positrons emitted from the blood. A microfluidic approach was recently used to overcome this problem [13].

In order to build a robust, sensitive and compact device, cadmium zinc telluride (CZT) detector technology was used in this work [14, 15]. CZT detectors are known for their high energy resolution and stable operation (no temperature dependence). Although CZT converts fewer gammas than BGO or GSO for a given volume, it has the potential advantage of requiring less shielding than PMT-based solutions and offers better energy resolution. These advantages can result in more compact devices that can potentially improve quantification efforts in multi-isotope studies. The objective of this work is to demonstrate the advantages of a CZT-based blood counter for PET and SPECT quantification. A compact prototype was designed and built to be used in a clinical setting. This paper describes the main elements of the system and presents results regarding the detection sensitivity, the stability in time, the background noise and the minimum detectable activity (MDA). First results from a patient study are also reported.

General device features
The device relies on gamma-ray detection to determine the amount of activity in the blood. The system was designed to accommodate two detector modules facing each other, as shown in Fig. 1. These modules can be operated independently, with the objective of increasing the sensitivity, or in coincidence mode to reduce background counts. For the prototype evaluation, a single detector module was used, and therefore coincidence counting was not performed. The prototype dimensions are 36 × 29 × 15 cm3, including all components: power supplies, electronic boards, motors and pump. Blood is withdrawn from the patient via a catheter (arterial or venous) connected to a peristaltic pump. One pump was dedicated to laboratory testing (model 313VDL, Watson Marlow, Falmouth, UK), while a second one, achieving lower withdrawal rates, was used in the patient study (model P625, Instech Laboratories, Plymouth Meeting PA, USA). The catheter delivers blood to the gamma counting system and then to a waste container. Figure 1 shows the overall organisation of the components of the device. The blood withdrawal rate depends on the catheter size and can be set between 3 and 10 mL/min for the 313VDL pump and between 1 and 7 mL/min for the P625 pump.
Fig. 1 In this 3D rendering, the catheter (1) passes close to the gamma detection modules (2a, 2b, shown without shielding), then through the peristaltic pump (3). A Y-connector allows the flow to be directed towards a waste container or a carousel holding evacuated tubes (4). Acquisition boards are also shown (5). For this work, a configuration with a single detector module (2a) was evaluated, shown here with its transparent-rendered shielding. The actual tungsten shielding is shown in Fig. 3.

The total amount of blood withdrawn from a patient should be kept as low as possible. McGuill et al. suggested withdrawing no more than 7.5% of the total blood volume [16]. It is therefore important to adjust the withdrawal parameters to the pharmacokinetic behaviour of a given radiopharmaceutical. The device allows pre-programmed, variable pump rates in order to minimise the total amount of blood withdrawn for a given study while capturing the dynamics of uptake. Everett et al. reported that in a 924-patient PET study, taking 117 to 137 mL of blood by arterial cannulation led to a single case of adverse effect (thrombotic occlusion), and concluded that the practice was safe, even though the catheter was in place for 5 h on average [17]. Zanotti-Fregonara et al. came to the same conclusions from their experience with more than 3000 patients [1].

The device must be located as close to the patient as possible to reduce activity dispersion along the catheter. Therefore, a design as compact as possible was sought. The weight of the prototype is approximately 10 kg, including shielding. This allows the use of the device directly on the patient bed, thereby minimising the length of catheter required and reducing the effect of diffusion that degrades the time resolution. The activity dispersion along the catheter can be estimated and corrected, typically by monoexponential deconvolution [18, 19] or step-function calibration [20], but this was not yet implemented for the prototype presented here. Access to radial blood with short catheters is more complicated in the case of brain studies, where the arms usually rest alongside the patient. In these cases, the system can be positioned at the feet of the patient with a longer catheter, at the expense of larger dispersion. For head studies with one or two bed positions, another possibility is to place the device on a cart beside the patient, with shorter access to the arm.
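To make the withdrawal-volume constraint concrete, the sketch below totals the blood drawn by a pre-programmed variable-rate pump sequence and compares it with the 7.5% guideline of McGuill et al. [16]. It is only an illustration: the rate schedule, segment durations and the 5 L nominal blood volume are assumed values, not protocol parameters from this study.

```python
# Hedged sketch: total blood withdrawn by a pre-programmed pump sequence.
# The schedule is hypothetical; rates must stay within the pump's range
# (1-7 mL/min for the P625 used in the patient study).
segments = [            # (rate in mL/min, duration in s)
    (5.0, 120),         # fast sampling to capture the uptake peak
    (3.0, 180),         # intermediate phase
    (1.0, 600),         # slow sampling for the tail of the curve
]

total_ml = sum(rate * duration / 60.0 for rate, duration in segments)

blood_volume_ml = 5000.0            # nominal adult blood volume (assumed)
limit_ml = 0.075 * blood_volume_ml  # 7.5% guideline [16]

print(f"total withdrawn: {total_ml:.0f} mL (limit: {limit_ml:.0f} mL)")
# total withdrawn: 29 mL (limit: 375 mL)
```

A schedule like this one stays far below the guideline, which suggests that variable-rate programming is mainly about time resolution at the uptake peak rather than total volume.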
Gamma detector
In order to fulfill the compactness requirements, a CdZnTe (CZT) semiconductor detector was selected [14, 15]. A 20 × 20 × 15 mm3 commercially available CZT crystal from Redlen Technologies (Saanichton BC, Canada) was chosen for the prototype, primarily for its large volume. Although this detector was designed primarily for imaging applications [21, 22] and has an 11 × 11 anode readout scheme, it provides the large detection volume required for the counting application developed here. Pixels on the detector are 1.22 mm in size, deposited at a pitch of 1.72 mm. A grid of 0.1 mm is deposited 0.2 mm around the pixels, except at the edges of the pattern, where it is 0.5 mm wide. For counting-only purposes, the readout pins of the 121 pixels were connected to obtain a pattern similar to a coplanar grid [23].

A custom-made front-end electronic board on a six-layer PCB was used to create a virtual coplanar detector in which pixels are connected column-wise, leading to a two-channel readout scheme where five columns are interleaved between six others and can be maintained at two different biases [15]. Figure 2 shows one layer of the PCB and the conductive tracks used to make a virtual coplanar detector from a pixelated detector. This anode geometry, with alternating columns maintained at different biases, has the advantage of preserving energy resolution compared to a planar geometry, and requires only two polarised anodes for readout [24, 25]. A custom-made charge-sensitive preamplifier for each anode allowed the creation of a very compact board that also includes the appropriate routing of pixels to create a coplanar readout. The output of the preamplifier was fed to a dual-channel ultralow-noise amplifier (AD8432, Analog Devices, MA, USA), which was used to obtain a differential signal that was then routed to the device's main board by a Mini DisplayPort cable. The CZT crystal/preamplifier assembly was packaged in a compact custom-made 27 × 67 × 37 mm3 aluminium casing, as shown in Fig. 2, along with 3D-printed pieces for accurate and reproducible positioning of all components. The weight of the detector module is 146 g.

Fig. 2 a An exploded view of the detector and preamplifier assembly. The CZT crystal is shown in green while the preamp board with the Mini DisplayPort connector is on top. The 3D-printed white plastic case is also shown; it is used to isolate the high-voltage cathode and physically protect the CZT crystal. b A 3D rendering of the detection module showing one layer of the PCB and the conductive tracks used to make a virtual coplanar detector from a pixelated detector.

Fig. 3 a Technical drawing (in millimetres) of the shielding container and b the actual shielding made of ABS plastic. The container was filled with 97% pure tungsten cubes. The detector assembly fits in the blue zone while the red zone shows the catheter space. The bottom of the blue zone (detector module enclosure) was shielded by 1 cm of tungsten (not shown), while the opening was not shielded to allow cables to exit (see Fig. 7).

Fig. 4 Signal workflow and connection diagram. The FPGA listens to the USB port and executes commands sent by the GUI.

The detector assembly was shielded with 25 to 35 mm of 97% pure tungsten. A custom-made plastic container was 3D-printed and filled with tungsten cubes, as shown in Fig. 3. The container, which has a slit for the catheter, ensures reproducible positioning. A separate 3D-printed piece (not shown) slides in the slit and maintains the catheter in place. The length of catheter exposed to the CZT detector is 29 mm. The detector was not shielded where the cables pass; this area was pointing towards the ceiling in the clinical experiments, as shown in Fig. 8. In all cases, there was no direct unshielded line of sight between the CZT crystal and the main source of background radiation (the patient).

A field-programmable gate array (FPGA)-based (Cyclone V, Altera, CA, USA) circuit was designed to control the acquisition and the different subsystems of the blood counter. The FPGA chip allows convenient handling of signals and components through the Nios II processor. The signal workflow is illustrated in Fig. 4.
The analog signal from the detector-preamplifier assembly is digitised by a "free-running" quad 14-bit analog-to-digital converter (ADC, AD9253, Analog Devices, MA, USA) and then routed to the FPGA, where programmable filters and thresholds (shaping) are applied. More specifically, the two signals from the virtual coplanar detector are combined (weighted subtraction) on the FPGA to compensate for charge trapping, as described by Luke et al. [23, 26]. A thorough characterisation of the detector, not reported here, allowed optimal operation parameters of this anode configuration to be set. A trapezoidal filter was applied to the resulting signal, and the maximum was extracted. Counts exceeding 70 keV were stored in memory while the FPGA waits for calls from the host program via a USB connection.
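The processing chain just described can be prototyped offline. The following sketch reproduces its two key steps on simulated preamplifier traces: the weighted subtraction of the two coplanar-grid channels (the relative gain w is a detector-specific calibration constant and is assumed here) and a trapezoidal shaping filter whose maximum estimates the deposited energy. This is an illustration of the principle, not the actual firmware.

```python
import numpy as np

# Offline sketch of the FPGA pulse processing: coplanar weighted subtraction
# followed by trapezoidal shaping. All parameters are illustrative.
w = 0.8                    # relative gain on the second grid (calibration)
rise, gap = 40, 20         # trapezoid rise and flat-top lengths, in samples

def trapezoid(signal, rise, gap):
    # Moving-average difference: average of the most recent `rise` samples
    # minus the average of `rise` samples lagged by `rise + gap`.
    h = np.concatenate([np.ones(rise), np.zeros(gap), -np.ones(rise)]) / rise
    return np.convolve(signal, h, mode="full")[: len(signal)]

def energy(anode_a, anode_b, threshold=70.0):
    combined = anode_a - w * anode_b        # charge-trapping compensation
    shaped = trapezoid(combined, rise, gap)
    e = shaped.max()                        # trapezoid peak ~ deposited energy
    return e if e > threshold else None     # mimic the 70 keV threshold

# Toy, noiseless step-like preamplifier traces (units arbitrary, ~keV)
n = np.arange(512)
anode_a = 600.0 * (n > 256)
anode_b = 100.0 * (n > 256)
print(energy(anode_a, anode_b))             # ~520 for these toy traces
```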
The host program runs on a PC and has a graphical user interface (GUI) implemented with Qt (version 5.3, Helsinki, Finland). The GUI, shown in Fig. 5, can display the detected counts per second for two energy windows and two detector modules. The centre panel gathers the primary controls that allow an acquisition to be started, stopped and reinitialised. The information on the acquisition sequence is also shown in the central panel; it defines the pump rates and acquisition duration. Sequences can be programmed, saved and reused. The GUI also shows a visual representation of a carousel where discrete blood samples can be packaged in evacuated tubes for further analysis. For this work, the packaging feature of the device was not used.

Fig. 5 Screenshot of the GUI, showing the various controls and visualisation elements and a typical 22Na acquisition.

The bottom part of the GUI shows either a graph of the activity as a function of time or an energy histogram of detected events, depending on which tab is selected. Four markers can be positioned to define two energy windows, allowing, for example, dual-isotope studies. The energy windows selected are applied to the activity displayed in the associated tab. The GUI has a user mode for regular use and a superuser mode for development. The superuser mode allows manual control of the device and real-time programming of pump rates, motors and data-processing parameters. The user mode is meant to be used in a clinical setting and therefore allows the definition and use of pre-programmed acquisition sequences. For safety reasons, it is possible to bypass a sequence with manual control of the peristaltic pump or the detector.

Device characterisation
For all acquisitions, pulse-height histograms with 256 bins were obtained. The histograms were energy-calibrated with a 137Cs source of 32.7 kBq, and the energy threshold was set to 110 keV for all acquisitions. An estimate of the energy resolution was obtained by fitting a Gaussian to the photopeak (662 keV) of the energy spectrum.

Stability over time and catheter positioning reproducibility
Two series of tests were performed to evaluate the stability of the detector over time, both conducted with a 137Cs source (half-life of 30.17 years). The first one was used to verify that there is no drift in count rates over a period of 6 h, with an integration time per sample of 5 s. The second test consisted of 19 acquisitions of 3 min, performed at random times over a period of 3 weeks with a counting time per sample of 1 s. A 3D-printed template, shown in Fig. 6, was used to ensure reproducible positioning of the source relative to the detector. The template allows positioning of the source every 15 mm from the detector, with the closest position at 3 mm.

Fig. 6 Positioning template for the 137Cs source.
Fig. 7 Setup used for calibration with circulating FDG.
Fig. 8 Setup used to extract time-activity curves during clinical PET studies. (Left) 3D rendering and (right) photograph.

The first test, over the 6-h acquisition, was analysed by fitting the data to a linear function. A χ2 test was used to verify that the data was Poisson-distributed, as expected. In the second experiment, an analysis of variance (ANOVA) was performed to determine whether the 19 acquisitions over 3 weeks belonged to a distribution with identical parameters (mean and variance). Another experiment was conducted to verify that the catheter can be positioned in a reproducible manner each time. For a catheter filled with FDG, the catheter was removed and repositioned 20 times, and counting was performed for 30 s with a counting time per sample of 1 s. Counting rates were decay-corrected, and an ANOVA was performed to assess the reproducibility of catheter positioning. All statistical tests were performed with R (version 3.2.1).

Minimum detectable activity
The minimum detectable activity (MDA) is a crucial characteristic of the device; the counter must detect a small number of counts per second in the relatively high background of a PET scan room. This can be achieved through adequate shielding, coincidence counting or a combination of both. In this work, shielding only was used, but the acquisition system of the device was designed with coincidence counting as an optional feature. Typically, the blood activity is lower than 500 kBq/mL at the maximum of the input function for fluorodeoxyglucose (FDG) [7]. The MDA, in units of kilobecquerels per millilitre, is defined at a 95% confidence interval by [27]:
$$ \text{MDA}= \frac{4.65\sqrt{N_{B}}+2.71}{fTs} $$
where T is the counting time per sample, N_B is the number of background counts recorded during T, f is a factor of radiation yield per disintegration (f = 0.967 here) and s is the sensitivity of the detector as obtained by calibration. The sensitivity (the ratio of recorded count rate to activity concentration) was obtained with a 1.58-mm-inner-diameter catheter filled with 52.5 kBq/mL of FDG. The counting time per sample T used for MDA determination was 3 s, but it can be adjusted between 1 and 30 s to optimise the MDA as a function of background and activity level in the catheter.
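As a numerical illustration of the equation above, the sketch below evaluates the MDA using the sensitivity and background rates reported in the Results section (7.1 cps/(kBq/mL); 40 cps shielded, 1500 cps unshielded). The function is a direct transcription of the formula; small differences from the quoted 2.5 kBq/mL come from rounding.

```python
import math

# MDA = (4.65*sqrt(N_B) + 2.71) / (f*T*s), as defined above.
def mda(background_cps, T=3.0, s=7.1, f=0.967):
    n_b = background_cps * T           # background counts during one sample
    return (4.65 * math.sqrt(n_b) + 2.71) / (f * T * s)

print(f"shielded:         {mda(40):.1f} kBq/mL")       # ~2.6 kBq/mL
print(f"unshielded:       {mda(1500):.1f} kBq/mL")     # ~15 kBq/mL
print(f"shielded, T=30 s: {mda(40, T=30):.2f} kBq/mL") # ~3.3x lower
```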
Figure 7 shows the prototype with FDG circulating in a catheter. Background counts were measured in a realistic environment, i.e., in a PET scanner room at 1 m from a patient injected with 275 MBq of FDG 40 min prior to the experiment. The background count rate was averaged over a 3-min acquisition. The experiment was repeated with and without the tungsten shielding to estimate its efficacy.

To evaluate the activity dispersion in the tubing used, a step-function study was conducted. A three-way valve was added at the end of the tubing (inner diameter of 1.58 mm) in a setup similar to the one used by Munk et al. [20]. Two vials were connected to the valve, one filled with water and the other with a mixture of water and FDG. Measurements were performed at two pump rates (2 and 4.5 mL/min) and two tubing lengths (80 and 45 cm). The rising part of the step was modelled by an exponential function, f(t) = A(1 − e^(−t/τ)), for each case [19].
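Extracting the dispersion time constant τ from such a step measurement is a short curve fit. The sketch below does this with SciPy on synthetic data standing in for a recorded step response; the amplitude, time constant and noise level are assumed values, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit f(t) = A*(1 - exp(-t/tau)) to the rising part of a step response.
def step_model(t, A, tau):
    return A * (1.0 - np.exp(-t / tau))

t = np.arange(0.0, 60.0, 1.0)            # s, after arrival of the activity
true_A, true_tau = 50.0, 8.0             # kBq/mL and s (assumed)
rng = np.random.default_rng(0)
activity = step_model(t, true_A, true_tau) + rng.normal(0.0, 1.0, t.size)

(A_fit, tau_fit), _ = curve_fit(step_model, t, activity, p0=(40.0, 5.0))
print(f"A = {A_fit:.1f} kBq/mL, tau = {tau_fit:.1f} s")
# tau is the dispersion constant used to deconvolve measured TACs [20]
```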
PET study
The use of the device in real conditions is essential to verify that it meets clinical requirements and workflows. For this purpose, the device was tested in a clinical setting with prostate cancer patients undergoing dynamic 18F-fluoromethylcholine (FCH) PET studies. As shown in Fig. 8, the patient was in the supine position, with his arms positioned above his head. Two venous accesses were installed, one in each arm. FCH was injected in the left arm while blood was withdrawn from the right arm (18-gauge needle). Tubing of 76 cm with an inner diameter (ID) of 2.54 mm was connected between the patient and a stopcock. The stopcock interfaced a saline syringe and the tubing (ID of 1.58 mm) going to the P625 pump, the detector module and then the waste container. The blood withdrawal rate was set to 2 mL/min for the acquisition. The patient was injected with a standard activity of 4 MBq/kg, for a total of 355 MBq. The injection was performed within 3 s. The pump was started approximately 1 min before the beginning of the acquisition, defined as the time at which the FCH is injected; the PET acquisition and the gamma counting were started simultaneously. The dynamic PET scan, with a field-of-view centred on the prostate, lasted 600 s. To compensate for the transport delay of the blood in the tubing and to extend the gamma counting time, 125 s were added to the gamma counting acquisition.

Figure 9 shows the count rate as a function of time over a 6-h acquisition. A linear fit through the data yields a slope of 0.01%. This observed slope is not statistically significant, and the error on the slope of ±0.05% was retained.

Fig. 9 Acquisition with a 137Cs source over 6 h and a linear fit through the data.

The data are Poisson-distributed, as expected, with a reduced χ2 value of 1.02. Figure 10 shows a boxplot representation of the 19 acquisitions of 3 min taken over a period of 3 weeks. The ANOVA yielded a p value of 0.195, suggesting that the count rates observed over 3 weeks have the same mean, as expected.

Fig. 10 Box-and-whisker plot representation of nineteen 3-min measurements over 3 weeks, showing median values, upper and lower quartile values (boxes), minimum and maximum values excluding outliers (whiskers) and outliers (1.5 times the upper/lower quartiles, shown as points). The green line represents the mean value of all data, while the red line represents one standard deviation from the mean.

Figure 11 shows the results of the catheter repositioning experiment. A p value of 0.16 was obtained for the ANOVA, suggesting that the positioning is reproducible.

Fig. 11 Box-and-whisker plot representation of twenty successive repositionings of the catheter, which is kept in place by a 3D-printed piece. The green line represents the mean value of all data and the red line represents one standard deviation from the mean.

Minimum detectable activity and energy resolution
The sensitivity s obtained through calibration was 7.1 cps/(kBq/mL) for 18F. In the PET scanner room, at 1 m from an FDG-injected patient, the measured background count rates were 40 cps and 1500 cps with and without tungsten shielding, respectively; the shielding thus provides approximately a 30-fold reduction in background counts. For a 3-s sampling time, this yields an MDA of 2.5 kBq/mL with the tungsten shielding. The measured energy resolution of the detector was 8% at 662 keV. Figure 12 shows the step functions and the related fits for the flow rates and tubing lengths used. The time constant τ extracted from the fits can be used to correct the TACs obtained, provided the experiment is conducted under the same conditions (liquid and radiopharmaceutical involved, tubing and pump rate used) [20].

Fig. 12 Step functions for different flow rates and tubing lengths.

Figure 13 shows the PET image of the patient as well as the TAC in kilobecquerels per millilitre, corrected by the calibrated system sensitivity but not for activity dispersion in the tubing. The measured blood activity was well above the background generated by the patient's body, measured at 5 kBq/mL with the catheter filled with saline before and after the passage of blood.

Fig. 13 a Time-activity curve not corrected for dispersion, where the injection time corresponds to the beginning of the acquisition time corrected by 125 s. The dispersion is relatively large due to the long catheter used in this proof-of-concept study. b Dynamic PET image of the patient with attenuation correction.

The results show that the device fulfills the compactness, stability and sensitivity requirements for typical usage in molecular imaging. The measured sensitivity of 7.1 cps/(kBq/mL) translates into an MDA of 2.5 kBq/mL in a PET scanner room, an environment with a high background. This is well below the 500 kBq/mL concentration typically encountered in blood for FDG studies [7]. The detector count-rate behaviour was shown to be linear below 500 kBq/mL, while obeying a non-paralysable model up to 2.5 MBq/mL, where dead time accounts for a 10% count loss [15]. This MDA value corresponds to a sampling time of 3 s, and it can be lowered by a factor of 3.2 by increasing the sampling time to 30 s as the activity level in the blood decreases. The prototype allows such sequence programming through its GUI. The device demonstrated high stability, with a drift in count rates lower than 0.05% over 6 h. Random checks over 3 weeks also suggested high reliability. This robustness is a typical advantage of solid-state detectors over scintillator/PMT assemblies, which may require periodic recalibration. Table 1 reports a non-exhaustive list of characteristics of blood counters reported in the literature. Direct comparisons are not always straightforward, as some devices were built for animal use only and some are based on the detection of positrons. However, the detector presented here exhibits performance similar to that of other detectors, while the overall prototype is relatively more compact than other designs.

Table 1 Comparison of real-time blood counters

Although energy resolution is not a critical feature for blood counting in PET, the CZT detector used here achieved a relatively good value of 8% at 662 keV, better than any crystal-based solution. This energy resolution allows efficient counting in dual-isotope studies. A single detector module was used in the experiments reported here, which makes the device potentially very sensitive to background radiation. In this regard, the acquisition card and associated firmware were designed to accommodate a second detector module and to perform coincidence counting or photopeak summation [6]. This could further reduce the size and weight of the device, as less shielding would be required to achieve the same MDA. Further modelling efforts will be conducted to devise the optimal combination of acquisition mode and shielding requirements to make the detector immune to background variations. It is important to note that positrons have a non-null probability of reaching the CZT crystal, especially for emitters with an Emax larger than 1.2 MeV, such as 15O (Emax = 1.735 MeV), in the geometry proposed here.
Positrons with a lower energy will most likely be absorbed by the material located between the emission position and the CZT crystal (1 mm Al, 1.2 mm plastic and 0.8 mm air in our case). Positron events in CZT can lead to different outcomes, one of which is an energy deposition larger than 511 keV (if one or two annihilation photons interact afterwards in the CZT). The other possibility, where both photons escape, produces an event that cannot be differentiated from a gamma interaction and contributes to the system dead time. In all cases, the effect is systematic for a given geometry and isotope and is easily handled with calibration. Therefore, no problems are anticipated for the use of positron emitters other than 18F, assuming proper calibration.

Results obtained in this study demonstrate the feasibility of using a CZT-based detector to obtain the input function for PET or SPECT pharmacokinetic modeling. The large CZT detector used here (20 × 20 × 15 mm3) has the advantage of converting a relatively large fraction of incident gammas at 511 keV. Furthermore, the detector module has a compact design requiring less shielding than PMT-based solutions, provides better energy resolution than crystal-based solutions and can be operated at room temperature. The MDA of the device was 2.5 kBq/mL for a 3-s sampling duration, and it can be improved by a factor of 3.2 by increasing the sampling time to 30 s for low-activity measurements. The compact device was shown to be stable and robust over time. The measurement of a TAC in a PET study confirms that the device is adequate for use in a clinical setting. Future work will include the development of an automatic blood packaging system so that additional biochemistry and calibration tests can be performed easily. This is especially important for radiopharmaceuticals requiring metabolite corrections.

References
1. Zanotti-Fregonara P, Chen K, Liow J-S, Fujita M, Innis RB. Image-derived input function for brain PET studies: many challenges and few opportunities. J Cereb Blood Flow Metab. 2011;31(10):1986–98.
2. Su Y, Arbelaez AM, Benzinger TL, Snyder AZ, Vlassenko AG, Mintun MA, Raichle ME. Noninvasive estimation of the arterial input function in positron emission tomography imaging of cerebral blood flow. J Cereb Blood Flow Metab. 2013;33(1):115–21.
3. Zanotti-Fregonara P, Fadaili EM, Maroy R, Comtat C, Souloumiac A, Jan S, Ribeiro M-J, Gaura V, Bar-Hen A, Trebossen R. Comparison of eight methods for the estimation of the image-derived input function in dynamic [18F]-FDG PET human brain studies. J Cereb Blood Flow Metab. 2009;29(11):1825–35.
4. Eriksson L, Holte S, Bohm C, Kesselberg M, Hovander B. Automated blood sampling systems for positron emission tomography. IEEE Trans Nucl Sci. 1988;35(1):703–7.
5. Eriksson L, Ingvar M, Rosenqvist G, Stone-Elander S, Ekdahl T, Kappel P. Characteristics of a new automated blood sampling system for positron emission tomography. IEEE Trans Nucl Sci. 1995;42(4):1007–11.
6. Votaw JR, Shulman SD. Performance evaluation of the pico-count flow-through detector for use in cerebral blood flow PET studies. J Nucl Med. 1998;39(3):509–15.
7. Boellaard R, van Lingen A, van Balen SC, Hoving BG, Lammertsma A. Characteristics of a new fully programmable blood sampling device for monitoring blood radioactivity during PET. Eur J Nucl Med Mol Imaging. 2001;28(1):81–9.
8. Laymon CM, Shin D, Carney JP, Ruszkiewicz J, Altenburger D, Becker C, Lopresti BJ, Mason N, Mountz J, Price J, Schavey R, Mathis CA. Evaluation of a commercial radiochromatography module as an arterial blood activity monitor. Phys Med Biol. 2008;53(2):339–51.
9. Kudomi N, Choi E, Yamamoto S, Watabe H, Kim KM, Shidahara M, Ogawa M, Teramoto N, Sakamoto E, Iida H. Development of a GSO detector assembly for a continuous blood sampling system. IEEE Trans Nucl Sci. 2003;50(1):70–3.
10. Yamamoto S, Imaizumi M, Shimosegawa E, Kanai Y, Sakamoto Y, Minato K, Shimizu K, Senda M, Hatazawa J. A compact and high sensitivity positron detector using dual-layer thin GSO scintillators for a small animal PET blood sampling system. Phys Med Biol. 2010;55(13):3813.
11. Breuer J, Grazioso R, Zhang N, Schmand M, Wienhard K. Evaluation of an MR-compatible blood sampler for PET. Phys Med Biol. 2010;55(19):5883.
12. Convert L, Morin-Brassard G, Cadorette J, Rouleau D, Croteau E, Archambault M, Fontaine R, Lecomte R. A microvolumetric β blood counter for pharmacokinetic PET studies in small animals. IEEE Trans Nucl Sci. 2007;54:173–80.
13. Convert L, Lebel R, Gascon S, Fontaine R, Pratte J-F, Charette P, Aimez V, Lecomte R. Real-time microfluidic blood-counting system for PET and SPECT preclinical pharmacokinetic studies. J Nucl Med. 2016;57(9):1460–6.
14. Espagnet R, Martin J-P, Hamel L-A, Després P. A CdZnTe-based automated blood counter for quantitative molecular imaging. In: Jaffray D, editor. World Congress on Medical Physics and Biomedical Engineering, Vol. 51 of IFMBE Proceedings. Toronto: Springer International Publishing; 2015. p. 1338–42.
15. Espagnet R, Frezza A, Martin J-P, Hamel L-A, Després P. Conception and characterization of a virtual coplanar grid for an 11×11 pixelated CZT detector. Nucl Instrum Methods Phys Res A. 2017;860:62–9.
16. McGuill MW, Rowan AN. Biological effects of blood loss: implications for sampling volumes and techniques. ILAR J. 1989;31(4):5–20.
17. Everett BA, Oquendo MA, Abi-Dargham A, Nobler MS, Devanand DP, Lisanby SH, Mann JJ, Parsey RV. Safety of radial arterial catheterization in PET research subjects. J Nucl Med. 2009;50(10):1742.
18. Meyer E. Simultaneous correction for tracer arrival delay and dispersion in CBF measurements by the H215O autoradiographic method and dynamic PET. J Nucl Med. 1989;30(6):1069–78.
19. Iida H, Kanno I, Miura S, Murakami M, Takahashi K, Uemura K. Error analysis of a quantitative cerebral blood flow measurement using H215O autoradiography and positron emission tomography, with respect to the dispersion of the input function. J Cereb Blood Flow Metab. 1986;6(5):536–45.
20. Munk OL, Keiding S, Bass L. A method to estimate dispersion in sampling catheters and to calculate dispersion-free blood time-activity curves. Med Phys. 2008;35(8):3471–81.
21. Kim JC, Anderson SE, Kaye W, Zhang F, Zhu Y, Kaye SJ, He Z. Charge sharing in common-grid pixelated CdZnTe detectors. Nucl Instrum Methods Phys Res A. 2011;654(1):233–43.
22. Boucher YA, Jaworski JM, Kaye WR, Zhang F, He Z. Results from testing of 145 3D position-sensitive, pixelated CdZnTe detectors. IEEE Trans Nucl Sci. 2012;59(6):3332–8.
23. Luke P. Unipolar charge sensing with coplanar electrodes: application to semiconductor detectors. IEEE Trans Nucl Sci. 1995;42(4):207–13.
24. He Z. Review of the Shockley-Ramo theorem and its application in semiconductor gamma-ray detectors. Nucl Instrum Methods Phys Res A. 2001;463(1–2):250–67.
25. Hamel L-A, Julien M. Generalized demonstration of Ramo's theorem with space charge and polarization effects. Nucl Instrum Methods Phys Res A. 2008;597(2–3):207–11.
26. Luke PN, Eissler EE. Performance of CdZnTe coplanar-grid gamma-ray detectors. IEEE Trans Nucl Sci. 1996;43(3):1481–6.
27. Knoll GF. Radiation detection and measurement. 4th ed. New York: John Wiley and Sons; 2010.
28. Blood Sampler twilite two. 2016. http://www.swisstrace.com/specifications.html.
29. Roehrbacher F, Bankstahl JP, Bankstahl M, Wanek T, Stanek J, Sauberer M, Muellauer J, Schroettner T, Langer O, Kuntner C. Development and performance test of an online blood sampling system for determination of the arterial input function in rats. EJNMMI Phys. 2015;2(1):1.

The authors would like to thank Jean-Michel Guay and Yanik Landry-Ducharme for their insightful comments and suggestions on the design of the device. This work was financed by the Collaborative Health Research Projects program (grant CHRPJ 385866-10), a joint program of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes of Health Research (CIHR). RE, JPM, LAH and PD designed the device. RE, JPM, AF, LAH, LC, JMB and PD participated in the critical revision of the manuscript. RE and LC did the measurements. RE drafted the manuscript and carried out the data analysis and interpretation. JMB supervised the patient study, and RE and AF operated the device. PD approved the final content. All authors read and approved the final manuscript. Informed consent to report individual patient data was obtained from all participants included in the study. All procedures performed in studies involving human participants were in accordance with the ethical standards of the Ethical Review Committee of CHU de Québec - Université Laval.

Department of Physics, Engineering Physics and Optics and Cancer Research Center, Université Laval, Quebec City, G1V 0A6, QC, Canada (Romain Espagnet, Andrea Frezza, Laëtitia Lechippey, Philippe Després)
Department of Physics, Université de Montréal, C.P. 6128, Montréal, H3C 3J7, QC, Canada (Jean-Pierre Martin, Louis-André Hamel)
Department of Medical Imaging and Research Center of CHU de Québec - Université Laval, Quebec City, G1R 2J6, QC, Canada (Jean-Mathieu Beauregard)
Department of Radiology and Nuclear Medicine and Cancer Research Center, Université Laval, Quebec City, QC, G1V 0A6, Canada (Jean-Mathieu Beauregard)
Department of Radiation Oncology and Research Center of CHU de Québec - Université Laval, Quebec City, G1R 2J6, QC, Canada (Philippe Després)
Correspondence to Philippe Després.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Espagnet, R., Frezza, A., Martin, J. et al. A CZT-based blood counter for quantitative molecular imaging. EJNMMI Phys 4, 18 (2017). doi:10.1186/s40658-017-0184-5
Keywords: Gamma counter; Blood activity
What is the minimal focal length of the human eye?
What is the minimum length to which the focal length of our eye can go, even when considering blurred images too?

The focal length of the average, healthy, adult human eye at near-point is about 18.5 mm. Young individuals can accommodate their lenses further, to a focal length of around 15.4 mm. The focal length of the human eye is the distance between the lens and the retina when an object is in focus (Fig. 1). Therefore, the "even when considering blurred images too" part of the question doesn't make much sense, so I will focus my answer on sharp images only (pun intended).

The lenses of the eye are thicker in the center than at the edges, and hence are positive, converging lenses. They form an inverted image on the photosensitive layer in the back of the eye, the retina (Fig. 1). The retinal image is shaped by two lenses: 1) the cornea, with a fixed focal length, and 2) the eye-lens, a lens with variable focal length achieved through shape changes (Fig. 1). This change of shape is called accommodation and is mediated by the ciliary muscles (Kolb, 2012). When the ciliary muscles are relaxed, the focal length of the eye-lens is maximal and distant objects are in focus (infinity). When the ciliary muscles contract, they shorten the focal length of the eye-lens to bring nearer objects into focus. The two limits of this range are called the far-point (ciliary muscles relaxed) and the near-point (maximal accommodation) (source: University of Colorado, Boulder).

The distance between the eye-lens and the retina is about 20 mm. When an object is far away from the eye (at infinity), the image is located essentially at the focal point. Therefore, the combined focal length of the cornea and the eye-lens should be about 20 mm when the muscles of the eye are relaxed. The lens strength is the reciprocal of its focal length in meters. Hence, the strength of the cornea and the eye-lens at the far-point is about 1/0.020 = 50 diopters (source: University of Colorado, Boulder).

When an object is located at the near-point (the closest point at which an object can be brought into clear focus on the retina), the focal length of the cornea and the eye-lens must change so that the image is still formed on the retina, 20 mm away. The typical near-point in an adult is 25 cm, corresponding to a focal length of the cornea and the eye-lens of 18.52 mm, using the standard ray-tracing rules of lenses. Hence the strength of the cornea and the eye-lens must now be about 1/0.01852 = 54 diopters. In other words, the muscles of the eye can provide an accommodation range of 4 diopters (source: University of Colorado, Boulder). Kids can, however, focus on points as close as 6.5 cm away, i.e., about 15 diopters' worth of extra optical power, i.e., a minimal focal length of about 15.4 mm.

Fig. 1. Human eye. Upper panel: non-accommodated (relaxed) eye. Lower panel: accommodated eye. Source: Khan Academy.

Reference: Kolb, Gross Anatomy of the Eye. In: Webvision. The Organization of the Retina and Visual System, Moran Eye Center (2012).
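The 50-D and 54-D figures above follow directly from the thin-lens relation 1/f = 1/d_o + 1/d_i, with the image distance fixed at the 20 mm lens-retina distance; here is a quick numerical check (the 6.5 cm child near-point is the only added assumption, and its rounding explains the small difference from the 15.4 mm quoted above):

```python
# Thin-lens check of the focal lengths quoted above; image distance is
# fixed at the 20 mm lens-to-retina distance.
d_image = 0.020                                   # m

def focal_length(d_object):
    # 1/f = 1/d_o + 1/d_i
    return 1.0 / (1.0 / d_object + 1.0 / d_image)

cases = [(float("inf"), "far point"),
         (0.25, "adult near point"),
         (0.065, "child near point")]
for d_o, label in cases:
    f = focal_length(d_o)
    print(f"{label}: f = {f*1000:.2f} mm, power = {1/f:.1f} D")
# far point:        f = 20.00 mm, power = 50.0 D
# adult near point: f = 18.52 mm, power = 54.0 D
# child near point: f = 15.29 mm, power = 65.4 D
```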
Closest focusing distance of a human eye?
Was wondering, so I just measured mine. Focuses up to 14 cm close. Yours?

Twelve centimeters for me, but then I'm extremely nearsighted--my maximum focusing distance is about 15 centimeters. My vision sans augmentation is 20/400+. "Tell me the smallest letter on the chart that you can see clearly." "There are letters on the chart?"
-- RDKirk, "TANSTAAFL: The only unbreakable rule in photography."

Just poked myself in the eye with a ruler.

Closest focusing distance for me:
Friday 6:00 PM: 7"
Friday 10:00 PM: 5"
Saturday 1:00 AM: 3"

Well, you are all younger than me (40) or my eyesight's really bad. At my last eye test (3 months ago) I could read the smallest line easily, but today I can only focus at around 20 cm, although it all depends how much your lens has hardened.
-- 24 years professional letterpress/litho printer.

saaketham1 wrote: Friday 10:00 PM: 5' (feet). Saturday 1:00 AM: (wish on o' them 3 shulld i focusss on??) and other things.

Young adults with 20/20 vision can often focus at 4 to 6 inches. At 6 inches, they can see detail at 600 pixels per inch, or 300 line pairs per inch. As you age, your lenses become less resilient and the focus distance increases. At about 50 years, the average minimum focus distance is about a foot, corresponding to 150 line pairs per inch or 300 pixels per inch. These numbers reflect good light and high-contrast scenes. Generally, under more typical conditions, people don't see this well.
http://homepage.mac.com/leonwittwer/landscapes.htm

Dash29 wrote: Well, you are all younger than me (40) or my eyesight's really bad.
I'm older than you by more than a decade. I'm both nearsighted (myopia) AND farsighted (presbyopia). My uncorrected range of vision is 12-15 centimeters. I normally wear bifocals that give me normal reading-distance vision from the lower half of the lens and normal distance vision from the upper half. When I have to see close up, I peer over the top of the glasses to get my unaugmented super-close vision. My interpupillary distance is also wider than normal--80 mm--which according to my optometrist should give me slightly more accurate stereo vision than normal. My daughter calls this a "useless super power."
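The 600 ppi / 300 line-pairs-per-inch figures in the post above can be sanity-checked from a textbook grating acuity of about 30 cycles per degree for 20/20 vision; that acuity value is an assumption of this sketch, not a number from the thread.

```python
import math

# Resolvable detail at a 6-inch viewing distance, assuming ~30 cycles
# per degree of grating acuity for 20/20 vision.
cycles_per_degree = 30.0
d = 6 * 25.4                                         # viewing distance, mm

mm_per_degree = 2 * d * math.tan(math.radians(0.5))  # arc spanned by 1 degree
lp_per_mm = cycles_per_degree / mm_per_degree
print(f"{lp_per_mm * 25.4:.0f} line pairs per inch") # ~287, i.e. ~300 lp/in
print(f"{2 * lp_per_mm * 25.4:.0f} pixels per inch") # ~573, i.e. ~600 ppi
```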
Why did some ichthyosaurs have such large eyes?
Many species of extinct marine ichthyosaurs had much larger eyes for their body size than would be expected of extant marine mammals and reptiles. Sensitivity to low light at great depth for the deep-diving genus Ophthalmosaurus has recently been suggested as the reason for the large eyes of these animals. Here, we discuss the implications for vision at such depths and consider other optical factors determining eye size. We suggest that the large eyes of ichthyosaurs are more likely to be the result of simultaneous selection for both sensitivity to low light and visual acuity. The importance of the evolutionary history of extant marine mammals and extinct ichthyosaurs is discussed, as are ecological factors driving both acuity and sensitivity.

Ichthyosaurs were large marine reptiles that lived between 90 and 250 million years ago. Fossil evidence suggests that several species had very large eyes in comparison with those of the extant dolphins, with which ichthyosaurs are often compared. For example, some 9 m long ichthyosaurs had eyes 25 cm in diameter, more than five times those of similar-sized extant marine mammals. Recently, on the basis of estimating the f-number (see below) of the eye, it has been suggested that the particularly large eyes of the genus Ophthalmosaurus allowed it to see in the low-light conditions experienced in the sea at depths of at least 500 m (Motani et al., 1999). This depth estimate is similar to estimates of the dive depth for this genus, based on scaling relationships between size and swimming speed and between size and dive duration in extant diving animals (Motani et al., 1999). Here, we re-evaluate the methods used to obtain these depth estimates and consider the implications of this revision. We suggest that previous estimates may be even more interesting than they first appear.

First, experiments with seals at low light levels suggest that harp seals (Phoca groenlandica) are sensitive to different visual images at light levels equivalent to those experienced at a depth of approximately 615 m (Lavigne and Ronald, 1972). In a similar experiment, Wartzok (1979) reported a value of 670 m for spotted seals (P. largha). Since seals do not have unusually large eyes compared with those of other mammals, this suggests that ichthyosaurs may well have been able to see at depths substantially greater than 500 m without recourse to enlarged eyes.

The argument that large eyes suggest deep diving is based on the estimation of the f-number of the eye, which is the ratio of the focal length (l_f) of the optical system to the diameter of the aperture (d_a) through which light enters (Denny, 1993). Thus:

f-number = l_f / d_a.

The sensitivity of the eye (S) changes with the f-number to the power −2:

S ∝ L × (f-number)^−2,

where L is the radiance (which is approximately equal to the brightness) of the source. Hence, low f-numbers lead to high sensitivity. We have been able to estimate an f-number for an elephant seal (Mirounga spp.) eye and, depending upon assumptions about lens size, we estimate that the minimum f-number for this species is between 1.18 and 1.48 (see Appendix). Motani et al. (1999) estimated the f-number of Ophthalmosaurus to be 0.76. Hence, all other things being equal, Ophthalmosaurus would have had a sensitivity 2.5-4 times that of an elephant seal. The largest of these values suggests that Ophthalmosaurus could probably see at light levels approximately 25% of the minimum requirements of the elephant seal. Surprisingly, this greater sensitivity buys only 42 m of extra depth, since light intensity in the oceans decreases by approximately 90% for every 70 m of depth (Wartzok and Ketten, 1999).
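The depth arithmetic above is compact enough to verify directly: one decade (10x) of light loss per 70 m converts a sensitivity ratio into extra visible depth. A minimal sketch, assuming only the 90%-per-70-m attenuation just quoted:

```python
import math

# Extra visible depth bought by a given sensitivity gain, given ~one
# decade of light attenuation per 70 m of seawater.
def extra_depth(sensitivity_gain, metres_per_decade=70.0):
    return metres_per_decade * math.log10(sensitivity_gain)

for gain in (2.5, 4.0):
    print(f"{gain}x sensitivity -> {extra_depth(gain):.0f} m deeper")
# 2.5x sensitivity -> 28 m deeper
# 4.0x sensitivity -> 42 m deeper  (the '42 m' figure quoted above)
```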
In comparison, a fish (or Ophthalmosaurus) with an f-number of 1.25 and an eye equivalent in size and retinal structure to our own would be able to see to a depth of perhaps 750 m. Hence, this line of reasoning also suggests that sensitivity to low light levels alone seems unlikely to provide a full explanation for the large eyes of ichthyosaurs. Sensitivity to low light levels is only one measure of visual ability; another is the ability to resolve fine detail in an image (visual acuity). The resolving power (R) of an eye increases with the focal length of its lens (Bradbury and Vehrencamp, 1998) as R = lf/(2dr), where dr is the centre-to-centre spacing between the receptors of the retina. This introduces a trade-off, since increasing the focal length of the eye on its own increases the f-number and so decreases sensitivity. One way to achieve both good sensitivity and acuity is to allow the focal length to increase, but simultaneously to increase the aperture size to avoid increasing the f-number. Hence, it may be that the large eyes of ichthyosaurs were a result of simultaneous selection for both high sensitivity and acuity. However, it is interesting to note that the visual acuities of extant cetaceans and pinnipeds are generally good and comparable with those of terrestrial hunters such as the domestic cat Felis catus (Muir and Mitchell, 1973). Visual performance also depends on retinal pooling: the summation of signals from individual sensory cells to produce a retina with fewer individual receptor units but greater sensitivity per receptor. With the longer focal length of its larger eye, an ichthyosaur could pool signals over a much larger region of retina, without loss of acuity, than humans. Alternatively, it could trade off some acuity in return for even greater sensitivity. Land (1981) has suggested that eye size is proportional to the product of resolution and the square root of sensitivity. Hence, increasing resolution by a given factor requires a greater increase in eye size than the same relative increase in sensitivity. This, combined with the impressive visual performance of extant aquatic mammals without huge eyes, suggests that the large eye size of ichthyosaurs was driven by a need for greater visual acuity allied to sensitivity to low light levels. This seems especially likely because the logarithmic decrease in light intensity with depth means that, at depths below 500 m, a considerable improvement in sensitivity is required to produce an ecologically relevant increase in the range of visible depths. However, in terms of visual acuity, the type of receptor cell predominating in the retina strongly influences the value of dr, as these cells determine the level of receptor pooling. In general, rods tend to pool signals across several neighbouring receptors, thus effectively increasing the value of dr, while cones generally do not pool. Thus, a predominance of cones in the retina suggests that the value of dr is relatively small and, hence, that resolution is relatively high (Walls, 1963). This difference can be explained by the function of the two receptor types. Rods are generally found in animals adapted to low light levels, while cones predominate in diurnal species. The phylogenetic history of ichthyosaurs and extant marine mammals indicates that the former were derived from primarily diurnal reptilian ancestors, while mammals are characterised by nocturnal predecessors (Walls, 1963; Muntz, 1978).
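Land's scaling relation quoted above (eye size proportional to resolution times the square root of sensitivity) can be illustrated numerically. A rough sketch of ours, with made-up example factors, not figures from the paper:

```python
def relative_eye_size(resolution_gain: float, sensitivity_gain: float) -> float:
    """Land (1981): eye size scales as resolution * sqrt(sensitivity).
    Returns the factor by which the eye must grow for the given gains."""
    return resolution_gain * sensitivity_gain ** 0.5

# Doubling resolution doubles the required eye size...
print(relative_eye_size(2.0, 1.0))  # 2.0
# ...but doubling sensitivity grows it only ~1.41x.
print(relative_eye_size(1.0, 2.0))  # ~1.41
# A 5x larger eye could buy, e.g., 2.5x the resolution at 4x the
# sensitivity, since 2.5 * sqrt(4) = 5 (hypothetical split).
print(relative_eye_size(2.5, 4.0))  # 5.0
```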
This suggests that ichthyosaurs had visual systems already geared towards visual acuity more than sensitivity. Pooling of receptor signals in ichthyosaurs would allow increased sensitivity, but at the cost of reduced acuity. Thus, relatively large eyes would appear to be an adaptation for both acuity and sensitivity in these animals. The above arguments lead us to the conclusion that the ecological demand giving rise to the large eyes of the ichthyosaurs was not simply a need to see in the low light environment of the ocean depths. Rather, large eyes probably developed in response to the constraint of sensitivity, in conjunction with a need for high visual acuity. However, the mechanism driving this need for high visual acuity is not obvious, especially given the good acuity of modern marine mammals. One possible hypothesis is that the main predators and prey of the ichthyosaurs were superficially similar in appearance at a distance, and fine resolution was required to tell one from another at sufficient range to allow flight from predators. However, this is an odd situation, apparently not encountered by extant animals, especially considering the body-size scaling relationships involved in predator/prey systems. A more plausible explanation for this need for both sensitivity and acuity is that these animals were fast, active hunters of small prey at some depth. A similar argument involving the amount of receptor-cell pooling might explain the occurrence of relatively large eyes in many extant cephalopods, such as the giant squid Architeuthis, that are fast, deep-swimming hunters. A further possible consequence of selection for high visual acuity is the use of visual signalling or individual recognition between ichthyosaurs, perhaps related to mating or coordinated foraging. It is noticeable that marine animals that do have primarily visual communication (e.g. many cephalopod molluscs, mantis shrimps) also have large eyes relative to their body size. In summary, we suggest that the large eyes of Ophthalmosaurus are the result of simultaneous pressure for sensitivity, allowing prey detection at considerable depths, combined with pressure for high acuity, allowing these animals to hunt small, fast-moving prey.

Focal length of the eye. In subsequent sections, a 58-D reduced eye model is used, that is, an eye with a single refracting surface that separates air from aqueous humor with an index of refraction of 1.333. The radius of curvature of this eye equals 333/58 = 5.74 mm, its first focal length = 1000/58 = 17.2 mm, and its length, or second focal length, = 1333/58 = 23.0 mm. The more commonly accepted value, however, is 22mm to 24mm. When you look through a viewfinder, a lens at around 50mm focal length will show objects at the same size as when you look at something with your eyes. You could test this by looking through the viewfinder with one eye and looking next to it with the other eye. When you close one of your eyes, you will notice that your sight does not change as regards the size of objects. This applies to APS-C cameras as well as to full frame cameras. For the popular 35 mm film format, typical focal lengths of fisheye lenses are between 8 mm and 10 mm for circular images, and 15-16 mm for full-frame images. For digital cameras using smaller electronic imagers such as 1⁄4 and 1⁄3 format CCD or CMOS sensors, the focal length of miniature fisheye lenses can be as short as 1 to 2 mm. The focal length of the average, healthy, adult human eye at near-point is about 18.5 mm.
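The reduced-eye numbers quoted above follow directly from the stated power and refractive index. A quick check of ours, using the standard single-refracting-surface relations rather than anything stated in the excerpt:

```python
# Reduced eye: one refracting surface, air (n = 1.000) to aqueous humor
# (n' = 1.333), with total power P = 58 diopters.
P = 58.0
n_image = 1.333

r = (n_image - 1.0) / P * 1000  # radius of curvature in mm: (n' - n) / P
f_front = 1.0 / P * 1000        # first focal length in mm: n / P
f_back = n_image / P * 1000     # second focal length (eye length) in mm: n' / P

print(f"radius of curvature: {r:.2f} mm")        # 5.74 mm
print(f"first focal length:  {f_front:.1f} mm")  # 17.2 mm
print(f"second focal length: {f_back:.1f} mm")   # 23.0 mm
```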
Young individuals can accommodate their lenses further, to a focal length of around 15.4 mm. We created Lumion so the focal length corresponds with 35mm full frame; this makes it easier to look up the desired focal lengths. A focal length which comes close to the human eye is about 50mm. This is just a common value, and I'm not sure it is an exact scientific value; well, actually, I'm pretty sure it's not. 24mm? Well, that focal length is neither fish nor fowl, even though Canon's and Nikon's F2.8 standard zooms start at 24mm. Panasonic has the exact equivalent with its 12-35mm F2.8, whereas Olympus lenses seem to generally prefer a base focal length of 12mm, a.k.a. 24mm equivalent. The focal length of a lens used on a digital camera is the distance between the focal point of a lens and the sensor when the subject is in focus. It doesn't refer to the size of the lens, because it's not the actual length of the lens; the focal point is inside the lens, at the point where light rays converge. Based on this, before you latch onto focal lengths such as 22, 24 or 50mm as the closest focal length to the human eye, I strongly suggest that you think first about what you actually want to capture in your image. Seeing and perceiving: how we see and how we perceive the world are two very different things. A 24mm focal length might be great for photography when you want to show approximately the amount of the scene that we can see with our peripheral vision. Although the human eye has a focal length of approximately 22 mm, this is misleading because (i) the backs of our eyes are curved, (ii) the periphery of our visual field contains progressively less detail than the center, and (iii) the scene we perceive is the combined result of both eyes. Each eye individually has anywhere from a 120-200° angle of view, depending on how strictly one defines the edge of the visual field. From the 50mm approximation discussed below, we can say that the magnification of a lens is equal to its focal length divided by 50; the angular magnification is M = α/β, where α is the angular size of the image and β that of the object. However, oculars with small focal lengths tend to have a smaller eye relief, e.g. only 2 or 3 mm, although there are design methods with which more can be achieved, possibly at the expense of other parameters. A particularly large eye relief is required for riflescopes, because the recoil would otherwise push the ocular into the eye. There is a wide range of ocular lens designs. For a similar perspective to the human eye, you need something between 40mm and 55mm focal lengths in 35mm frame terms. A fast (i.e., low focal ratio) telescope can have a fairly long focal length if the aperture is large enough, a 24-inch f/3.3 for example. In that case, a low-power (i.e., long focal length) eyepiece will produce an overly large exit pupil. You can zoom in on your phone, but that's not changing your focal length; that's just cropping your photo before you actually take it, photographer Derek Boyd points out. Larger focal length lenses have less optical power. In SI, optical power is measured in reciprocal meters (m⁻¹). This unit is usually called the dioptre or diopter. For example, a 2-diopter lens can focus parallel rays of light at ½ meter.
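The diopter relation in that last sentence is just a reciprocal, as this small sketch of ours shows:

```python
def focal_length_from_power(power_diopters: float) -> float:
    """Focal length in meters of a lens with the given power in diopters."""
    return 1.0 / power_diopters

def power_from_focal_length(focal_m: float) -> float:
    """Optical power in diopters (m^-1) for the given focal length in meters."""
    return 1.0 / focal_m

print(focal_length_from_power(2.0))    # 0.5 m: a 2-diopter lens focuses at half a meter
print(power_from_focal_length(0.050))  # 20.0 D: a 50 mm lens
```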
We can view the above formula in action if we dive without a mask or goggles: we cannot see clearly because the refractive index of water at 20°C is about 1.33, close to that of the cornea, so the cornea loses most of its focusing power underwater. The focal length of your eyepiece is often printed on the eyepiece itself. If your telescope has a focal length of 800mm and you are using a 20mm eyepiece, you divide the focal length of the scope by the focal length of the eyepiece: 800mm/20mm = 40. As a result, you will get 40X as your magnification. The maximum focal length of the eye lens is 2.5 cm, the distance between the lens and the retina; the minimum focal length occurs when you focus on images at your near point, and is then 2.27 cm. Perhaps the starting point for equivalence with the human eye is the focal length; there are various answers to this question, and ClarkVision provides a good summary. Focal length: a typical schematic shows a convex lens on the top and a concave lens on the bottom; the focal point (F) is the point at which parallel light rays cross. Your choice of focal length will have a dramatic effect on how your story is told; this video covers some general uses for various focal lengths. There is a minimum limit to how close the eye can focus: try to read a printed page by holding it very close to your eyes, and you may see the image blurred, or feel strain in the eye. What is focal length? Ultra-wide-angle lens [10mm to 24mm]: as the name suggests, these lenses have a very wide angle of view. Wide-angle lens [24mm to 35mm]: not quite as wide as the ultra-wides, but still quite wide. Standard lens [35mm to 70mm]. This principle is used in the eye's lens: the focal length is somewhat reduced for focusing on nearby objects. When an optical system contains multiple optical elements (e.g. lenses), the focal length may be tuned by adjusting the relative distances between the optical elements; this principle is used, e.g., in photographic zoom objectives. The focal length (f) is the distance between the lens and the focal point. Because the focal length measures a distance, it uses units of length, such as centimeters (cm), meters (m), or inches. It is accepted that the human eye has a magnification of 1. It is also generally accepted that a lens with a focal length of 50 mm provides a field of view very close to the vision of our eye, hence also having a magnification of 1. In fact, and for the purists, this focal length is 43 mm; we'll just use the very good approximation of 50 mm for the rest of our words. The distance from the magnifying lens to the piece of paper is the focal length. For the eye, light from distant objects is focused onto the retina at the back of the eye. The eye is about the size of a table tennis ball, so the focal length needs to be about 2.5 cm. The cornea does most of the focusing: about 70% of the bending of light takes place as it enters the cornea and the aqueous humour. Without getting too technical, focal length can be defined as the distance between the optical center of the lens and the image plane (the sensor or film) when the lens is focused at infinity. Focal length is typically measured in millimeters and is the primary defining trait of a lens. When the eye is relaxed and the interior lens is the least rounded, the lens has its maximum focal length, for distant viewing.
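The 2.5 cm / 2.27 cm accommodation figures quoted above come straight from the thin-lens equation. A sketch of ours using standard physics, not a claim from any one of the quoted sources:

```python
def lens_focal_length(object_dist_cm: float, image_dist_cm: float) -> float:
    """Thin-lens equation: 1/f = 1/do + 1/di."""
    return 1.0 / (1.0 / object_dist_cm + 1.0 / image_dist_cm)

retina_cm = 2.5  # image distance is fixed by the length of the eyeball

# Distant object (do -> infinity): f approaches the lens-retina distance.
print(lens_focal_length(1e9, retina_cm))   # ~2.5 cm, the maximum focal length
# Object at the 25 cm near point: the lens must shorten its focal length.
print(lens_focal_length(25.0, retina_cm))  # ~2.27 cm, the minimum focal length
```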
As the muscle tension around the ring of muscle is increased and the supporting fibers are thereby loosened, the interior lens rounds out to its minimum focal length. Things like this happen because of the curvature of the lens of your eye. If it's too curved, the focal point of your eye will land somewhere in front of your retina, causing myopia, or short-sightedness; if your lens is not curved enough, the focal point lands behind the retina and you have hyperopia, or long-sightedness. PS EYE: focal length? (PlayStation 3 hardware and accessories forum) Hi everyone! Maybe a silly question, but can anyone tell me what focal length the PS Eye camera has? Thanks in advance! (hoodoo101, 04.03.2011) Approximating the eye as a single thin lens 2.60 cm from the retina, find the eye's near-point distance if the smallest focal length the eye can produce is 2.20 cm (Chapter 27, Optical Instruments, Q.5C). Thus, focal length is the distance behind the lens at which collimated light striking the lens will converge. For compound lenses (lenses with more than one lens element with real thickness, which is pretty much every modern photographic lens), it is the distance at which a theoretical thin lens having the same refractive properties would need to sit in front of the focal plane for collimated rays striking it to converge there. Focal length is the distance (measured in millimeters) between the point of convergence of your lens and the sensor or film recording the image. The focal length of your film or digital camera lens dictates how much of the scene your camera will be able to capture. Do this for every shot to make sure their eye is roughly in the same place for each photo. As your focal length changes you'll need to adapt: for each new focal length, starting at 28mm, you'll have to move further back and realign your subject in the frame as closely as possible to your first photo. The standard focal length lens (50 mm for the 24 × 36 mm format) is the standard because it very well approximates the human eye's field of view. Generalized, a normal focal length is about 115% of the image's diagonal. Focal lengths with larger numbers make subjects appear larger compared to how the human eyes perceive them. Moreover, the longer the focal length of a lens, the more elements stack within the frame, causing a photo to have a compressed perspective. These focal lengths can create a shallow depth of field, allowing you to focus on small objects at particular distances or make distant subjects appear closer. The principal focal point, nearest point: for an object at 250mm from the eye, the focal length follows from 1/f = 1/do + 1/di = 1/250 + 1/20, giving f ≈ 18.5 mm. If the object distance is changed (i.e., the eye is trying to focus objects that are at different distances), then the focal length of the eye is adjusted to create a sharp image. This is done by changing the shape of the lens; a muscle known as the ciliary muscle does this job. Nearsightedness: a person who is nearsighted can only create sharp images of close objects; objects that are farther away appear blurred. Focal length is the distance between the optical center of the lens and the camera sensor or film plane when focused at infinity. The optical center is where light rays converge inside the body of your lens.
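The thin-lens problem quoted above (lens fixed 2.60 cm from the retina, minimum focal length 2.20 cm) can be checked numerically. This sketch is our own solution, not the textbook's:

```python
# Rearranging 1/f = 1/do + 1/di gives do = 1 / (1/f - 1/di).
f_min = 2.20    # cm, the shortest focal length the eye can produce
d_image = 2.60  # cm, fixed lens-to-retina distance

d_near = 1.0 / (1.0 / f_min - 1.0 / d_image)
print(f"near point: {d_near:.1f} cm")  # ~14.3 cm
```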
The focal length defines the magnification and field of view for a given lens. This value is most commonly measured in millimeters. Prime lenses have set focal lengths, whereas zoom lenses cover a range of them. For a normal eye, when the eye muscles are relaxed, the focal length f of the lens L is slightly less than its diameter, which is D = 2.4 cm. An idealized eye will focus an object at infinity on the retina, which is located at distance D behind the lens of the eye (see Figures 5 and 6a below). When the eye views a closer object, the eye muscles produce a shortening of f (so-called accommodation). What is lens focal length? Focal length, usually represented in millimeters (mm), is the basic description of a photographic lens. It is not a measurement of the actual length of a lens, but a calculation of an optical distance from the point where light rays converge to form a sharp image of an object to the digital sensor or 35mm film at the focal plane in the camera. The focal length of a lens is determined when the lens is focused at infinity. Focal length is measured in millimeters (mm) and represents the distance from the optical center of a lens to the digital camera sensor when the subject of the photo is in focus. This is the standard textbook definition, but it's still not entirely obvious WHY you need to know about it before you purchase a new lens. The distance from the lens to this principal focus point is called the focal length of the lens and will be designated by the symbol f. A converging lens may be used to project an image of a lighted object. What is the focal length of the human eye related to 35mm? I heard that to get the picture with the closest perspective to that of the human eye, the focal length of the lens should be set to the focal length of the human eye; is that true? (ibiza123) 1. Eyepieces have moderate focal lengths. If the focal length is decreased past a certain limit, the focus of the eyepiece would be very close to its optical centre, and the eyepiece would suffer spherical aberration; the image formed by the eyepiece would no longer be clear. A normal human eye can clearly see objects at different distances. Reason: the human eye has the capacity to suitably adjust the focal length of its lens, to a certain extent. Focal length is the system used in photography to describe how wide or tight a lens is. Listed as a number and measured in millimetres (e.g., 35mm, 85mm), it tells you how much of a scene a lens can capture, and how big subjects will appear. The number gives an indication of the angle of view that a lens can see. Focal length is not stated directly in a prescription for eyeglasses. Instead, the refractive power is used to describe the extent to which a lens refracts light. The formula used to find the refractive power of the lens (in diopters) is the inverse of the focal length (f, given in meters). This relationship shows that the greater the power of a lens, the shorter the focal length: for example, a lens with a focal length of 0.5 m has a power of 2 diopters. Eyepieces: focal length vs FOV vs eye relief (posted in a beginners' astronomy forum, no astrophotography): Eyepiece descriptions seem somewhat confusing. 1. I've read that some expensive eyepieces give a wider field of view at the same magnification than cheaper eyepieces. Is this true? 2.
(continued) I've also read about eye relief being measured in mm; how, if at all, does this relate to focal length? Focal length determines, in essence, how much of the subject your camera sees. You may already be familiar with the basics, and understand the difference between, say, wide-angle and telephoto lenses, but let's dive into the topic a little deeper to see what's really going on. There are four fundamental things to know. The resulting image has a focal-length equivalent of 22-24mm. The focal lengths of the lenses of an astronomical telescope are 50 cm and 5 cm; find the length of the telescope when the image is formed at the least distance of distinct vision. 9. The radius of curvature of each surface of a convex lens of refractive index 1.5 is 40 cm. 25mm (50mm equivalent): this focal length is often said to match the human eye; the image is what you see with your eyes, in terms of distance. When shooting a close-up of a face, this is the minimum focal length you want to use. Q: If the focal length of a magnifier is 5 cm, calculate (a) the power of the lens and (b) the magnifying power of the lens for the relaxed and strained eye. Sol: (a) As the power of a lens is the reciprocal of its focal length, P = 1/(5 × 10⁻² m) = 1/0.05 = 20 D. (b) For the relaxed eye, the magnifying power is minimum: MP = D/f = 25/5 = 5; for the strained eye (image at the near point) it is maximum: MP = 1 + D/f = 6. The answer to this question is: when the muscles are relaxed, the lens becomes thin; thus, its focal length increases. Focal length and depth of focus: in addition to f-stop affecting depth of focus, focal length plays a significant role. The greater the focal length, the less the depth of focus. Stated another way, the more you zoom in, the less objects in the background will be in focus. A person with a normal near point of 25 cm using a compound microscope with an objective of focal length 8.0 mm and an eyepiece of focal length 2.5 cm can bring an object placed at 9.0 mm from the objective into sharp focus. Find the separation between the two lenses and the magnification, respectively. This can't be done with the human eye: the image distance, the distance between the lens and the retina, is fixed. If the object distance is changed (i.e., the eye is trying to focus objects that are at different distances), then the focal length of the eye is adjusted to create a sharp image, by changing the shape of the lens as described above. Eyepiece focal length calculators: such calculators perform a rough determination of the effective focal length (EFL) of simple 2- and 3-element eyepieces. You need only specify the focal length of the individual elements and the spacing between the elements; the formulas employed are approximations. Medium focal lengths fall anywhere within the 35mm to 70mm range. This range is the most similar to what we see with our own eyes; in general, human eyesight is equivalent to about 50-70mm on a full frame camera. This focal length is great for walking around: you can frame photos quickly, as the image will largely resemble what you see. Calculation of focal length and magnification of a magnifying glass: the conversion factor is the near-point distance of the eye, which is estimated as 25 cm.
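The magnifier arithmetic above generalizes easily. A small sketch of ours using the standard formulas, with the conventional 25 cm near point as the conversion factor:

```python
NEAR_POINT_CM = 25.0  # conventional least distance of distinct vision

def magnifier(focal_length_cm: float):
    """Power (diopters) and angular magnification of a simple magnifier."""
    power = 100.0 / focal_length_cm                      # D = 100 / f(cm)
    mp_relaxed = NEAR_POINT_CM / focal_length_cm         # image at infinity
    mp_strained = 1.0 + NEAR_POINT_CM / focal_length_cm  # image at near point
    return power, mp_relaxed, mp_strained

print(magnifier(5.0))  # (20.0, 5.0, 6.0), matching the worked example above
```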
This is not the distance of the glass from the object. A telescope has magnification 5 and a tube length of 60 cm; find the focal length of the eyepiece (physics Q&A, JEE Main 2020). The magnifying power of a small telescope is 20, and the separation between its objective and eyepiece is 42 cm in normal setting. focal length (dictionary definition): the distance between a point where waves of light meet and the centre of a lens. The probe is equipped with a 60 mm focal length lens, with optional 80 mm and 120 mm focal length lenses available (tsi.com). A telescope with an objective of focal length 60 cm and an eyepiece of focal length 5 cm is focused on a far distant object such that parallel rays emerge from the eyepiece. If the object subtends an angle of 1° at the objective, then the angular width of the image is: (A) 62° (B) 48° (C) 24° (D) 12°. This PR uses the eye camera intrinsics (specifically focal_length) to get a more accurate 3D eyeball position estimate. Depends on pupil-detectors v1.1.1; upgrade with pip install -U pupil-detectors. Summary: Added known intrinsics for the eye cameras to camera_models.py. On recording stop, current eye camera intrinsics are saved to the recording (like with world.intrinsics). We've been doing a bunch of these videos with convex lenses, where we drew parallel rays and rays that go through the focal point to figure out what the image of an object might be, but what I want to do in this video is actually come up with an algebraic relationship between the distance of the object from the convex lens and the distance of the image from the convex lens. The focal length of a lens, f, is the distance from the lens to the focal point F. Light rays (of a single frequency) traveling parallel to the optical axis of a convex or a concavo-convex lens will meet at the focal point. The eye has a nominal focal length of approximately 17mm [1], but it varies with accommodation. The nature of human binocular vision, which uses two lenses instead of a single one, and post-processing by the cortex, is very different to the process of making and rendering a photograph, video or film. As the traditional wide-angle lens used to have a focal length of around 28mm, most kit lenses start from 18mm to meet this length (i.e., 18 × 1.5 = 27mm). This is equally the case for the Four Thirds format, whose 2× multiplication factor means that Olympus's kit lenses start at 14mm. As only the central part of the image is used by the sensor, digital lenses can be both lighter and smaller, as less glass is needed in their construction. The next time you're out shooting for fun, limit yourself to one focal length. Ideally, the focal length you choose will be one extreme or the other (telephoto or wide), so that you're forced to see the world through different eyes than normal.
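The crop-factor multiplication used above (18 × 1.5 = 27mm) is worth making explicit. A sketch of ours with the usual factors; treat the exact values as assumptions and check your camera's spec:

```python
CROP_FACTORS = {
    "full frame": 1.0,
    "APS-C": 1.5,  # 1.6 for Canon APS-C bodies
    "Four Thirds": 2.0,
}

def equivalent_focal_length(real_mm: float, sensor: str) -> float:
    """35mm-equivalent focal length for a lens on a smaller sensor."""
    return real_mm * CROP_FACTORS[sensor]

print(equivalent_focal_length(18, "APS-C"))        # 27.0 mm
print(equivalent_focal_length(14, "Four Thirds"))  # 28.0 mm
print(equivalent_focal_length(12, "Four Thirds"))  # 24.0 mm
```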
If you're using a zoom, keep it set to one focal length the entire time. So, what is the average focal length of the average human eye? The size of a human adult eye is approximately 24.2 mm (transverse) × 23.7 mm (sagittal) × 22.0-24.8 mm (axial), with no significant difference between sexes and age groups. Focal length = ½ × radius of curvature. The attempt at a solution: (a) C = ½D = ½(2.5 cm) = 1.25 cm; F = ½C = ½(1.25 cm) = 0.625 cm. It doesn't make sense, though; that focal length seems too small/odd. It says in the question that it changes to between 2.1 and 2.3 cm, so my answer seems way off. http://img23.imageshack.us/img23/9751/eyecopyw.jpg [broken link] A far-sighted person cannot see objects closer to the eye than 73 cm. Determine the focal length of contact lenses that will enable this person to read a magazine at a distance of 25 cm. Solution: remember that a converging lens is required to correct far-sightedness; that is, the focal length of the lens is positive. Far-sighted: objects close up are blurred. A diopter is defined as a unit of lens power equal to 1/(focal length of the lens in meters), or 100/(focal length in centimeters), or 40/(focal length in inches). In eye care, by convention, we work in quarter-diopter units of power. Corrective lenses are either positive or negative in power: negative or minus lenses cause light to diverge; positive or plus lenses cause light to converge. Longer focal lengths really limit your field of view, in my opinion. Imagine a completely black room where your only contact with the world is through a small window and a large one; of course, the large one is the 28mm. I don't know about you, but I want as much of the world as I can get. When you use a 28mm, it's as much about the subject that you are shooting as it is their background. I have seen optical models of the eye, in the OSA Handbook, for example. From memory, the eye is about 25mm in diameter, making its lens power 40D. Add your prescription algebraically to that to get, in my case, 28.75D. Take the reciprocal to obtain a focal length of about 35mm. That is an indication of how long your eyeball is. It is possible to calculate more precisely. As this look is considered to resemble the focal length of the human eye, Ozu used it to create a naturalistic approach to his story. Another, more recent, example of single-lens use in a film is Call Me By Your Name, directed by Luca Guadagnino, who with his director of photography Sayombhu Mukdeeprom took on the challenge of filming the entire film with a 35mm lens. The standard lens has a fixed focal length (50mm, 85mm, 100mm) and reproduces fairly accurately what the human eye sees, in terms of perspective and angle of view. For a 35mm film camera or a full-frame DSLR, the 50mm lens is considered standard. The crystalline lens of the human eye is a double convex lens made of material having an index of refraction of 1.44. Its focal length in air is about 8 mm, which also varies. We shall assume that the radii of curvature of its two surfaces are equal. Physics problem: an object is placed 30mm in front of a lens, and an image of the object is located 90mm behind the lens. (a) Is the lens converging or diverging? Explain. (b) What is the focal length of the lens? (c) Draw a diagram with the lens at x = 0. The focal length is calculated using the following formula: 1/U + 1/V = 1/f, where U and V are measured from the principal planes, and Ru and Rv are measured from the lens vertexes.
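The 1/U + 1/V = 1/f relation just stated settles both of the problems quoted in this stretch. A sketch of ours (the sign convention for virtual images is the usual one, an assumption on our part):

```python
def focal_length(u_mm: float, v_mm: float) -> float:
    """Thin lens: 1/f = 1/U + 1/V. A positive result means a converging lens."""
    return 1.0 / (1.0 / u_mm + 1.0 / v_mm)

# Object 30 mm in front, real image 90 mm behind: converging, f = 22.5 mm.
print(focal_length(30.0, 90.0))

# Far-sighted reader: object at 250 mm must form a virtual image at the
# 730 mm near point (same side as the object, so V is negative).
f = focal_length(250.0, -730.0)
print(f, 1000.0 / f)  # ~380 mm, i.e. about +2.6 diopters
```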
If the object O is close to the front focal point, the beam coming out of the lens is almost collimated and Rv is very large. Thus, d << Rv, and one can approximate V ≈ Rv. Rv is measured for two positions, U1 and U2. Focal length, or focal length range in the case of zooms, will usually be the foremost consideration when choosing a lens for a specific photograph or type of photography. The focal length of a lens determines two characteristics that are very important to photographers: magnification and angle of view. When it comes to photography, "normal" most often refers to the standard focal length lens on a camera. A normal lens sees about the same angle of view as the human eye. Let's delve into what normal means and why it's important. A normal lens is one whose focal length equals the diagonal of the sensor of the camera; the sensor size is commonly known as the format. A full frame DSLR sensor measures 36 × 24 mm, with a diagonal of about 43mm. Focal length choice is a huge part of the composition process of an image. You can use a wide lens to lead into a background or create distance, or choose a longer focal length to compress your subject against the background. A focal length of any choice can be a good one, depending on the way you envision the scene. Those buying a point-and-shoot camera for landscape photography should also keep an eye on the focal length equivalent, to make sure their camera can go wide enough. Ideal focal lengths for landscapes: focal length is perhaps the most important factor in choosing a landscape lens. As we mentioned above, the heart of the landscape focal-length range is 14mm to 35mm. Describes how focal length is defined for a converging or diverging lens. Focal length is something that we talk about constantly as we discuss different lenses and styles of photography in our weekly free podcasts. It can be a bit confusing as a beginner to understand focal length because there are a few twists and complexities, but I'll do my best to explain it in 5 minutes or less. Changes in the focal length of a camera lens do not directly influence perspective. An alternative, but equivalent, statement would be: "The appearance to the eye of objects in respect to their relative distance and positions" (from Wikipedia). Perspective, in the context of vision and visual perception, is the way in which objects appear to the eye based on their spatial attributes. Standard focal lengths range from 35mm to 50mm, depending on the type of camera sensor. The field of view provided by standard focal lengths approximates the field of view of the human eye. Images taken with a standard focal length show a natural perspective without distortions, so it makes sense that he would use 50mm lenses for entire films, since the 50mm (as well as the 35mm) is often considered to resemble the focal length of the human eye.
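The "normal lens equals the sensor diagonal" rule quoted above is easy to verify. A sketch of ours; the format dimensions are the usual published ones and should be treated as assumptions:

```python
import math

def normal_focal_length(width_mm: float, height_mm: float) -> float:
    """A 'normal' lens has a focal length equal to the sensor diagonal."""
    return math.hypot(width_mm, height_mm)

print(normal_focal_length(36, 24))      # full frame: ~43.3 mm (the purists' 43mm)
print(normal_focal_length(23.6, 15.7))  # typical APS-C: ~28.3 mm
print(normal_focal_length(17.3, 13.0))  # Four Thirds: ~21.6 mm
```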
This well-known 50mm lens comment is in reference to the 35mm full frame still photo format, which would translate to a 35mm lens for the Super35 film format. The primary measurement of a lens is its focal length. The focal length of a lens, expressed in millimeters, is the distance from the lens's optical center (or nodal point) to the image plane in the camera (often illustrated by a Φ on the top plate of a camera body) when the lens is focused at infinity. The image plane in the camera is where the film or sensor sits. The human eye is often said to correspond to a focal length of somewhere between 40mm and 58mm, with 50mm being the usual compromise; this is referred to as the normal focal length. It's hard to pin down because a camera lens is not a perfect analog of our eyes. Other articles where focal length is discussed (photoreception, diversity of eyes): the lens surface, which shortens its focal length (the distance from the retina to the centre of the lens). One of the most interesting examples of amphibious optics occurs in the four-eyed fish of the genus Anableps, which cruises the surface meniscus with the upper part of the eye looking into the air.

Electromagnetic Spectrum and Color. Visible light is just one form of electromagnetic radiation (EMR), a type of energy that is all around us. Other forms of EMR include microwaves, X-rays, and radio waves, among others. The different types of EMR fall on the electromagnetic spectrum, which is defined in terms of wavelength and frequency. The spectrum of visible light occupies a relatively small range of frequencies between infrared and ultraviolet light (Figure 6). Figure 6: The electromagnetic spectrum ranges from high-frequency gamma rays to low-frequency radio waves. Visible light is the relatively small range of electromagnetic frequencies that can be sensed by the human eye. On the electromagnetic spectrum, visible light falls between ultraviolet and infrared light. (Credit: modification of work by Johannes Ahlmann.) Whereas wavelength represents the distance between adjacent peaks of a light wave, frequency, in a simplified definition, represents the rate of oscillation. Waves with higher frequencies have shorter wavelengths and, therefore, more oscillations per unit time than lower-frequency waves. Higher-frequency waves also contain more energy than lower-frequency waves. This energy is delivered as elementary particles called photons: higher-frequency waves deliver more energetic photons than lower-frequency waves. Photons with different energies interact differently with the retina. In the spectrum of visible light, each color corresponds to a particular frequency and wavelength (Figure 6). The lowest frequency of visible light appears as the color red, whereas the highest appears as the color violet. When the retina receives visible light of many different frequencies, we perceive this as white light. However, white light can be separated into its component colors using refraction. If we pass white light through a prism, different colors will be refracted in different directions, creating a rainbow-like spectrum on a screen behind the prism. This separation of colors is called dispersion, and it occurs because, for a given material, the refractive index is different for different frequencies of light. Certain materials can refract nonvisible forms of EMR and, in effect, transform them into visible light.
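The wavelength-frequency-energy relations in this passage can be made concrete. A sketch using the standard constants (ours, not the textbook's):

```python
C = 2.998e8    # speed of light, m/s
H = 6.626e-34  # Planck's constant, J*s

def photon(wavelength_nm: float):
    """Frequency (Hz) and photon energy (J) for a given wavelength."""
    freq = C / (wavelength_nm * 1e-9)  # c = lambda * nu
    energy = H * freq                  # E = h * nu
    return freq, energy

for color, nm in [("red", 700), ("green", 530), ("violet", 400)]:
    f, e = photon(nm)
    print(f"{color:6s} {nm} nm: {f:.2e} Hz, {e:.2e} J")
# Green light has a higher frequency (and photon energy) than red,
# which answers the first review question below.
```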
Certain fluorescent dyes, for instance, absorb ultraviolet or blue light and then use the energy to emit photons of a different color, giving off light rather than simply vibrating. This occurs because the energy absorption causes electrons to jump to higher energy states, after which they almost immediately fall back down to their ground states, emitting specific amounts of energy as photons. Not all of the energy is emitted in a given photon, so the emitted photons will be of lower energy and, thus, of lower frequency than the absorbed ones. Thus, a dye such as Texas red may be excited by blue light but emit red light; a dye such as fluorescein isothiocyanate (FITC) may absorb (invisible) high-energy ultraviolet light and emit green light (Figure 7). In some materials, the photons may be emitted following a delay after absorption; in this case, the process is called phosphorescence. Glow-in-the-dark plastic works by using phosphorescent material. Figure 7: The fluorescent dyes absorbed by these bovine pulmonary artery endothelial cells emit brilliant colors when excited by ultraviolet light under a fluorescence microscope. Various cell structures absorb different dyes: the nuclei are stained blue with 4′,6-diamidino-2-phenylindole (DAPI); microtubules are marked green by an antibody bound to FITC; and actin filaments are labeled red with phalloidin bound to tetramethylrhodamine (TRITC). Which has a higher frequency: red light or green light? Explain why dispersion occurs when white light passes through a prism. Why do fluorescent dyes emit a different color of light than they absorb?

CBSE Class 10 Science Chapter 11 Notes: Human Eye and Colourful World. Understanding the Lesson. 1. The human eye: The human eye is an extremely valuable and sensitive sense organ, which enables us to see objects and colours around us. Cornea: A thin membrane through which light enters the eye; maximum refraction occurs at the outer surface of the cornea. Iris: A dark muscular membrane which controls the size of the pupil. Pupil: Regulates and controls the amount of light entering the eye. Eye lens: Composed of fibrous, jelly-like material with adjustable curvature; forms an inverted and real image of the object on the retina. Retina: A light-sensitive screen on which the image is formed. The ability of the eye lens to adjust its focal length is called accommodation. Least distance of distinct vision: The minimum distance at which an object can be seen distinctly, without any strain, by a normal eye, i.e., 25 cm for normal vision. Far point of the eye: The farthest point up to which the eye can see objects clearly; it is infinity for the normal eye. 4. Defects of Vision: (i) Cataract: The crystalline lens of people at old age becomes milky and cloudy. This condition is called cataract. It is possible to restore vision through cataract surgery. (ii) Myopia (near-sightedness): A person with myopia can see nearby objects clearly but cannot see distant objects clearly. (iii) Hypermetropia (far-sightedness): A person with hypermetropia can see distant objects clearly but cannot see nearby objects distinctly. Correction: a convex lens of suitable power. (iv) Presbyopia: The power of accommodation of the eye usually decreases with ageing. With this eye defect it is difficult to see nearby objects comfortably and distinctly without corrective eye glasses. Cause: Weakening of ciliary muscles and diminishing flexibility of the eye lens.
Correction: By using a bifocal lens, whose upper portion consists of a concave lens and lower part of a convex lens. 5. Refraction of Light through a Prism: (i) The refraction of light takes place at two surfaces: firstly when light enters the prism from air, and secondly when light emerges from the prism. (ii) Angle of prism: The angle between the two lateral faces of the prism. (iii) Angle of deviation: The angle between the incident ray (produced forward) and the emergent ray (produced backward). 6. Dispersion of White Light by a Glass Prism: Dispersion is the splitting of light into its component colours. Red light bends the least, while violet bends the most. Spectrum: The band of the coloured components of a light beam, i.e., VIBGYOR. When an inverted prism is kept a little distance away from the prism causing dispersion, i.e., in the path of the split beam, the spectrum recombines to form white light. 7. Rainbow Formation: A rainbow is a natural spectrum appearing in the sky after a rain shower. It is caused by dispersion of sunlight by tiny water droplets present in the atmosphere. The water droplets act like small prisms: they refract and disperse the incident sunlight, then reflect it internally, and finally refract it again. Due to dispersion of light and internal reflection, different colours appear. 8. Atmospheric Refraction: If the physical conditions of the refracting medium (air) are not stationary, the apparent position of the object fluctuates. The twinkling of stars is due to atmospheric refraction of starlight. When starlight enters the earth's atmosphere, it suffers refraction continuously; since the physical conditions of the earth's atmosphere are not stationary, the stars appear to twinkle. Advanced sunrise and delayed sunset: Both are due to atmospheric refraction. When the Sun is slightly below the horizon, the sunlight coming from the less dense (vacuum) to the more dense (air) medium is refracted downwards; therefore the Sun appears to be above the horizon. Similarly, even after sunset, the Sun can be seen for some time due to refraction of sunlight. The phenomenon of scattering of light by colloidal particles gives rise to the Tyndall effect. The Tyndall effect can be observed when sunlight passes through the canopy of a dense forest; here, tiny droplets in the mist scatter the light. The colour of the scattered light depends on the size of the scattering particles: very fine particles scatter mainly blue light, while particles of larger size scatter light of longer wavelengths. Colour of the clear sky is blue: The molecules of air and other fine particles in the atmosphere are smaller than the wavelength of visible light. When sunlight passes through the atmosphere, the fine particles in air scatter the blue colour more strongly than red. Danger signal lights are red in colour: Red is scattered the least by fog or smoke. Sun appears reddish early in the morning: In the morning and evening, the Sun lies near the horizon, and sunlight travels through a larger distance in the atmosphere; most of the blue light and shorter wavelengths are scattered away by the particles, so the light that reaches our eyes is of longer wavelength. This gives rise to the reddish appearance of the Sun. Class 10 Science Chapter 11 Notes, Important Terms. Eye: The human eye is an extremely valuable and sensitive sense organ, which enables us to see objects and colours around us.
Power of accommodation: The ability of the eye lens to adjust its focal length is called accommodation. Myopia: A person with myopia can see nearby objects clearly but cannot see distant objects clearly. Cataract: The crystalline lens of people at old age becomes milky and cloudy; this condition is called cataract. Hypermetropia: A person with hypermetropia can see distant objects clearly but cannot see nearby objects distinctly. Presbyopia: The power of accommodation of the eye usually decreases with ageing; with this defect, it is difficult to see nearby objects comfortably and distinctly without corrective eye glasses. Dispersion: The splitting of light into its component colours is called dispersion. Atmospheric refraction: Refraction of light by the constituent particles of the atmosphere. Tyndall effect: The phenomenon of scattering of light by colloidal particles gives rise to the Tyndall effect.

Electronic Contact Lenses Put Data on Your Eye. We've seen it thousands of times in movies: massive amounts of information unspooling right before someone's eyes, without the need for any type of monitor. Now, fiction is closer to becoming fact, as a working model of electronic contact lenses proves successful with rabbits. The current incarnation of the lens isn't something that would excite anyone with those Hollywood movies in mind, because the device only displays one pixel, but it's the concept behind that one pixel that's the real attention-getter, because where one pixel can go, others can follow. According to PopSci, Professor Babak Parviz says the next step is to "incorporate some predetermined text in the contact lens." However, in addition to going beyond one pixel, there are a few other hurdles to overcome. The first problem is power: the current version of the contact lens draws energy from an external source using an antenna that has a range of one meter in free space and only two centimeters when the lens is placed on the eye. The other issue concerns the eye itself. The minimal focal distance of the human eye is a few centimeters, so information displayed on a contact lens would be blurry. To take care of this particular problem, researchers used thin Fresnel lenses to magnify the display. There's no information at this time on whether the process will be refined at some point, or on how exactly a magnified display might affect vision when one is not reading text on a contact lens. When it comes to limitless amounts of data being streamed directly to the eye, the future is closer, but it still has some travelling to do before it gets here.

Science Questions. Why can we not see immediately on entering a dark room from bright sunlight? Ans. The pupil of an eye acts like a variable aperture whose size can be varied with the help of the iris, and the adjustment of the pupil takes time. So, when we enter a dark room from bright sunlight, we cannot see initially. 3. A person uses spectacles of power +2D. What is the defect of vision he is suffering from? Ans. A person who uses spectacles of power +2D is suffering from hypermetropia (long-sightedness). 4. Why do chickens wake up early and sleep early? Ans. Chickens have a large number of rod cells that help them to detect the intensity of light. Thus, chickens wake up early and go to sleep early. 5. What is the nature of the image formed at the retina? Ans. The image formed at the retina is diminished, inverted and real. 6. What is the cause of colour blindness? Ans.
Cone cells of the retina are sensitive to colours; when these cells do not respond properly, the retina is unable to distinguish between colours, which causes colour blindness. 7. State the structure of the iris and its functions in the human eye. Ans. The iris, a structure behind the cornea, is a dark muscular diaphragm that controls the size of the pupil, and the pupil regulates and controls the amount of light. 8. Define the least distance of distinct vision and give its range. Ans. The minimum distance at which objects can be seen most distinctly without strain is called the least distance of distinct vision, and it is about 25 cm. 9. What is meant by the least distance of distinct vision? Ans. The least distance of distinct vision means the minimum distance at which objects can be seen most distinctly without strain. 10. Define the power of accommodation of the eye. Ans. The ability of the eye lens to adjust its focal length is called the power of accommodation. 11. Why does the clear sky appear blue? Ans. When sunlight passes through the atmosphere, the fine particles in the air scatter the blue colour most strongly, so the clear sky appears blue. 12. Why does it take some time to see objects in a cinema hall when we have just entered the hall from bright sunlight? Explain in brief. Ans. The pupil of an eye acts like a variable aperture whose size can be varied with the help of the iris, and the adjustment of the pupil takes time. So, it takes some time to see objects in a cinema hall when we have just entered from bright sunlight. 13. How does the thickness of the eye lens change when we shift from looking at a distant tree to reading a book? Ans. The thickness of the eye lens increases when we shift from looking at a distant tree to reading a book. 14. A student sitting at the back of the classroom cannot read clearly the letters written on the blackboard. What advice will a doctor give to her? Ans. The student has short-sightedness, or myopia, and a doctor will advise her to wear spectacles of negative power, i.e., a concave lens of suitable power. 15. A hypermetropic person prefers to remove his spectacles while driving. Give a reason. Ans. A person with hypermetropia can see distant objects clearly, and while driving a person has to see farther than the near point (25 cm). This is why a hypermetropic person prefers to remove his spectacles while driving. 16. How are we able to see nearby and also distant objects clearly? Ans. We are able to see both nearby and distant objects clearly through the ability of the eye lens to adjust its focal length, which is called the power of accommodation. 17. Why do parallel rays of different colours deviate differently while passing through a glass prism? Ans. Different colours of light bend through different angles with respect to the incident ray while passing through a prism, as they have different wavelengths. 18. Name any two phenomena associated with the formation of the rainbow. Ans. Two phenomena associated with the formation of the rainbow are internal reflection and dispersion. 19. Draw a ray diagram showing the dispersion through a prism when a narrow beam of white light is incident on one of its refracting surfaces. Also, indicate the order of the colours of the spectrum obtained. Ans. [Ray diagram not reproduced here.] 20. Define the angle of deviation. Ans. The angle between the incident ray and the emergent ray is called the angle of deviation. 21. List the colours into which light splits, in decreasing order of their bending on emergence from the prism. Ans. Violet, indigo, blue, green, yellow, orange and red.
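The sky-colour answers here (Q11 above, and Q23 below) rest on Rayleigh scattering, whose strength scales as 1/λ⁴ for particles much smaller than the wavelength. A sketch of the standard relation, not part of the notes themselves:

```python
def rayleigh_relative(wavelength_nm: float, reference_nm: float = 450.0) -> float:
    """Scattering intensity relative to a reference wavelength (~blue),
    using the Rayleigh 1/lambda^4 law for very fine particles."""
    return (reference_nm / wavelength_nm) ** 4

print(rayleigh_relative(450))  # blue: 1.0 by definition
print(rayleigh_relative(700))  # red: ~0.17, i.e. blue scatters ~6x more strongly
```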
22. A beam of white light splits when it passes through a prism. Name this phenomenon and give its reason. Ans. The phenomenon is dispersion, and the reason is that different colours have different wavelengths and so deviate through different angles. 23. Why does the Sun look reddish at the time of sunrise and sunset? Explain. Ans. During sunrise and sunset, light from the Sun near the horizon passes through thicker layers of air and a larger distance in the earth's atmosphere. Shorter wavelengths are scattered away by the particles, and mostly red light of longer wavelength, which is least scattered, reaches our eyes. This gives rise to the reddish appearance of the Sun. 24. Why do different components of white light split up into a spectrum when it passes through a triangular glass prism? Ans. Different colours of light bend through different angles with respect to the incident ray while passing through a prism, as different colours have different wavelengths. 25. What is dispersion? Ans. The splitting of light into its seven component colours is called dispersion. 26. What happens when light is passed through a glass prism? Ans. Different colours of light bend through different angles with respect to the incident ray as they pass through a prism. 27. What is astigmatism? Ans. Astigmatism is a common vision problem caused by an irregularly shaped cornea, which causes blurred vision. 28. Name the defect of vision in which the eye loses its power of accommodation due to old age. Ans. Presbyopia. II. Short answer type questions: (b) State two reasons due to which the myopia eye defect may be caused. Ans. (b) This defect may arise due to (i) excessive curvature of the eye lens, or (ii) elongation of the eyeball. Is the position of a star as seen by us its true position? Ans. No, the position of a star as seen by us is not its true position. Atmospheric refraction occurs in a medium of gradually changing refractive index. Since the atmosphere bends starlight towards the normal, the apparent position of the star is slightly different from its actual position; the star appears slightly higher than its actual position when viewed near the horizon. 16. What will the colour of the sky be for an astronaut staying in the International Space Station orbiting the earth? Justify your answer by giving reasons. Ans. The colour of the sky will be black for an astronaut staying in the International Space Station orbiting the earth, because there is no atmosphere in space and the light reaching it does not scatter. Scattering of blue light of short wavelength causes the blue colour of the sky. Different components of white light split up into a spectrum when passing through a triangular glass prism because different colours have different wavelengths and deviate through different angles. 24. Why does the power of accommodation of an eye decrease with age? Explain. Ans. The power of accommodation of the eye usually decreases with ageing; it arises due to the gradual weakening of the ciliary muscles and the diminishing flexibility of the eye lens. 25. Draw ray diagrams showing: (i) a myopic eye, (ii) a hypermetropic eye. Ans. [Ray diagrams not reproduced here.] III. Long answer type questions: 1. A student suffering from myopia is not able to see distinctly objects placed beyond 5 m. List two possible reasons due to which this defect of vision may have arisen. With the help of ray diagrams, explain. Ans. Myopia is known as short-sightedness. A myopic person can see nearby objects clearly but cannot see distant objects distinctly.
In a myopic eye, the image of a distant object is formed in front of the retina and not at the retina itself. This defect may arise due to (i) excessive curvature of the eye lens, or (ii) elongation of the eyeball. This defect can be corrected by using a concave lens of suitable power: a concave lens of suitable power will bring the image back onto the retina, and thus the defect is corrected. 2. Explain (i) why the student is unable to see distinctly objects placed beyond 5 m from his eyes, and (ii) the type of corrective lens used to restore proper vision and how this defect is corrected by the use of this lens. Ans. See the answer to Q.1. 3. List the parts of the human eye that control the amount of light entering it. Explain how they perform this function. Ans. The iris and the pupil are the two parts of the eye that control the amount of light entering it. The iris, behind the cornea, is a dark muscular diaphragm that controls the size of the pupil, and the pupil regulates and controls the amount of light entering the eye. The pupil of an eye acts like a variable aperture whose size can be varied with the help of the iris. When the light is very bright, the iris contracts the pupil to allow less light to enter the eye; however, in dim light, the iris expands the pupil to allow more light to enter the eye. Thus, the pupil opens completely through the relaxation of the iris. 4. Write the function of the retina in the human eye. Do you know that corneal impairment can be cured by replacing the defective cornea with the cornea of a donated eye? How and why should we organise groups to motivate community members to donate their eyes after death? Ans. The retina of the human eye acts as a screen. The eye lens forms an inverted, real image of the object on the retina. The retina is a delicate membrane having an enormous number of light-sensitive cells. The light-sensitive cells get activated upon illumination and generate electrical signals; these signals are sent to the brain via the optic nerves. The brain interprets these signals and finally processes the information, so that we perceive objects as they are. By donating our eyes after we die, we can light up the life of a blind person. About 35 million people in the developing world are blind, and most of them can be cured; about 4.5 million people with corneal blindness can be cured through corneal transplantation of donated eyes. One pair of eyes gives vision to two corneal-blind people. 5. List three common refractive defects of vision. Suggest ways of correcting these defects. Ans. Three common refractive defects of vision are myopia (short-sightedness), hypermetropia (long-sightedness) and presbyopia. Myopia: A person with myopia can see nearby objects clearly but cannot see distant objects distinctly. In a myopic eye, the image of a distant object is formed in front of the retina. This defect can be corrected by using a concave lens of suitable power, which will bring the image back onto the retina. Hypermetropia: Hypermetropia is also known as far-sightedness. A person with hypermetropia can see distant objects clearly but cannot see nearby objects distinctly, because the light rays from a close-by object are focussed at a point behind the retina. This defect can be corrected by using a convex lens of appropriate power: eye-glasses with converging lenses provide the additional focusing power required for forming the image on the retina.
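The "suitable power" in these corrections can be computed with the thin-lens relation. A sketch of ours using standard optics, with the example distances taken from the questions in this section:

```python
def myopia_power(far_point_cm: float) -> float:
    """Concave lens that images infinity at the eye's far point:
    f = -far_point, so P = -100/far_point in diopters."""
    return -100.0 / far_point_cm

def hypermetropia_power(near_point_cm: float, reading_cm: float = 25.0) -> float:
    """Convex lens that images a 25 cm object at the eye's near point:
    1/f = 1/do + 1/di with di = -near_point (virtual image)."""
    return 100.0 * (1.0 / reading_cm - 1.0 / near_point_cm)

print(myopia_power(150.0))        # about -0.67 D (cf. Q8(b) below)
print(hypermetropia_power(50.0))  # +2.0 D (cf. the +2D spectacles in Q3 above)
```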
Presbyopia - The power of accommodation of the eye usually decreases with ageing. Such people find it difficult to see nearby objects comfortably and distinctly without corrective eye-glasses. This defect is called presbyopia. It arises due to the gradual weakening of the ciliary muscles and the diminishing flexibility of the eye lens. Such people require bi-focal lenses; a common type of bi-focal lens consists of both concave and convex lenses. These days, it is possible to correct refractive defects with contact lenses or through surgical interventions. 6. About 45 lakh people in the developing countries are suffering from corneal blindness; about 3 lakh children below the age of 12 suffering from this defect can be cured by replacing the defective cornea with the cornea of a donated eye. How and why can a student of your age involve themselves to create awareness about this fact among people? Ans. Try it yourself. 7. A person cannot read a newspaper placed nearer than 50 cm from his eyes. Name the defect of vision he is suffering from. Draw a ray diagram to illustrate the defect. List two possible causes. Draw a ray diagram to show how this defect may be corrected using a lens of appropriate focal length. We see advertisements for eye donation on television or in newspapers. Write the importance of such advertisements. Ans. The person is hypermetropic. He can see distant objects clearly but cannot see nearby objects distinctly. This is because the light rays from a close-by object are focussed at a point behind the retina. This defect arises either because (i) the focal length of the eye lens is too long, or (ii) the eyeball has become too small. Advertisements for eye donation on television or in newspapers help blind people around us, because more people become aware of this noble cause. 8. (a) What type of spectacles should be worn by a person having the defect of myopia as well as hypermetropia? (b) The far point of a myopic person is 150 cm. What is the nature and the power of the lens required to correct the defect? (c) With the help of a ray diagram, show the formation of the image by: (i) a myopic eye (ii) correction of myopia by using an appropriate lens. Ans. (a) Spectacles with a concave (diverging) lens should be worn for the defect of myopia and a converging lens for hypermetropia; a person having both defects needs bi-focal lenses. (b) The far point of the myopic person is 150 cm, so a concave (diverging) lens is needed whose focal length equals the far-point distance: f = -150 cm = -1.5 m, giving a power P = 1/f = 1/(-1.5 m) ≈ -0.67 D. Such a lens forms the image of a distant object at the person's far point, bringing the image back onto the retina and thus correcting the defect. 9. A person's image, when seen through a stream of hot air rising above a fire, appears to waver. Explain. Ans. The apparent random wavering of objects seen through a stream of hot air rising above a fire or a radiator occurs because the air just above the fire becomes hotter than the air further up. The hotter air is lighter (less dense) than the cooler air above it and has a refractive index slightly less than that of the cooler air. Since the physical conditions of the refracting medium are not stationary, the apparent position of the object, as seen through the hot air, fluctuates. This wavering is thus an effect of atmospheric refraction on a small scale in our local environment. 10. (a) Describe an activity, along with a labelled diagram, of the phenomenon of dispersion through a prism.
(b) Explain in brief the formation of the rainbow with the help of the figure. Ans. (a) Activity - Take a thick sheet of cardboard and make a small hole or narrow slit in its middle. Allow sunlight to fall on the narrow slit. This gives a narrow beam of white light. Now, take a glass prism and allow the light from the slit to fall on one of its faces. Turn the prism slowly until the light that comes out of it appears on a nearby screen. We will find a beautiful band of colours due to the dispersion of light. Activity: (i) Place a strong source (S) of white light at the focus of a converging lens (L1), which provides a parallel beam of light. (ii) Allow the light beam to pass through a transparent glass tank (T) containing clear water. (iii) Allow the beam of light to pass through a circular hole (c) made in cardboard, and obtain a sharp image of the circular hole on a screen (MN) using a second converging lens (L2). (iv) Dissolve about 200 g of sodium thiosulphate in about 2 L of clean water taken in the tank, and add about 1 to 2 mL of concentrated sulphuric acid to the water. We can observe blue light from the three sides of the glass tank, due to the scattering of short wavelengths by the fine sulphur particles formed in the water. The transmitted light seen from the fourth side of the glass tank, facing the circular hole, appears on the screen first orange-red and then bright crimson-red. The two chemicals used in this activity are sodium thiosulphate and sulphuric acid. 12. (I) Define dispersion. How does a prism disperse white light? Which colour of light bends the most and which the least? (II) A narrow beam of white light is passing through a glass prism. Trace it on your answer sheet and show the path of the emergent beam as observed on the screen. (a) Write the name and the cause of the phenomenon observed. (b) Where else in nature is this phenomenon observed? (c) Based on the observation, state the conclusions which can be drawn about the constitution of white light. Ans. (I) The splitting of light into its component colours is called dispersion. White light is dispersed into its seven colour components by a prism. Different colours of light bend through different angles with respect to the incident ray as they pass through a prism, because different colours have different wavelengths. Red light bends the least, while violet bends the most. (a) The phenomenon is dispersion of light, and it is caused because different colours of light bend through different angles with respect to the incident ray, as they have different wavelengths. (b) In the rainbow after rain. (c) The prism has split the incident white light into a band of seven colours. The sequence of colours is Violet, Indigo, Blue, Green, Yellow, Orange, and Red (VIBGYOR). 13. State the natural phenomenon behind the formation of the rainbow. Explain the phenomenon. Name a device that can be used to observe such a phenomenon in the laboratory. If you are facing a rainbow in the sky, what is the position of the sun with respect to your position? Ans. The natural phenomenon behind the formation of the rainbow is dispersion. A prism is used to observe dispersion in the laboratory. A rainbow is always formed in a direction opposite to that of the Sun; therefore, the Sun is behind me. 14. An old person is unable to see clearly nearby objects as well as distant objects. (a) What defect of vision is he suffering from? (b) What kind of lens will be required to see clearly the nearby as well as distant objects? Give reasons. Ans.
(a) The defect of vision is presbyopia, in which the person finds it difficult to see both nearby and distant objects comfortably and distinctly without corrective eye-glasses. (b) Bi-focal lenses will be required. Reason: presbyopia arises due to the gradual weakening of the ciliary muscles and the diminishing flexibility of the eye lens, so a person may suffer from both myopia and hypermetropia. A common type of bi-focal lens consists of both concave and convex lenses: the upper portion is a concave lens, which facilitates distant vision, and the lower part is a convex lens, which facilitates near vision. A Change in Worldview: Vision Correction Explained Nearly 75% of the American population relies on some form of visual aid. According to The Vision Council, approximately 64% of people wear eyeglasses, while 11% rely on contact lenses. For most of us, these statistics are no surprise: visual aids have been a prevalent part of our society for centuries. Many of us acquired our first pair of glasses or contacts at a young age, developing optical reliance early on. But what really are these instruments, and why do so many of us need them? How do our lenses transform the way we see the world? Before delving into such questions, it is helpful to understand the physiology of sight. Anatomy of the human eye When light enters the cornea, the clear outermost layer of the eye, it is bent towards the pupil, the opening of the eye. The pupil constricts and dilates in accordance with the environment through a process called the pupillary light reflex: in dim settings, the pupil expands for better visual detection, while in brighter settings, it constricts to moderate light exposure. The transmitted light then passes through the lens and is bent once more before extending to the retina. This double-bending mechanism flips all visual input; however, the brain reverts it right-side up before cognitive perception occurs. To reach the brain, visual images are coded into electrical impulses that travel along the optic nerve, eventually making their way to the occipital lobe of the cerebral cortex. Normally, light that enters the lens is fixated on a precise location in the retina known as the focal point. The quality of this fixation depends on the distance from the lens to the retina. When this distance deviates from the ideal length, the focal point forms either in front of or behind the retina rather than on the retina itself. The result is an imprecise scattering of light termed refractive error. Refractive errors disturb the clarity of sight by degrading how visual input is focused and eventually interpreted. Because the shape and size of the human eye continue to change throughout development and adulthood, visual complications can occur at nearly any point in life. The most prevalent refractive condition is myopia, commonly referred to as nearsightedness. In myopia, the axial length (the distance between the lens and retina) is abnormally elongated such that the focal point occurs before the retina. This condition, resulting in a decreased ability to see distant objects clearly, affects an estimated 25% of Americans. Myopia (top) and hyperopia (bottom), illustrating their respective focal point errors forming before and beyond the retina. Alarmingly, the frequency of nearsightedness has doubled in the United States since 1971. In East Asian countries such as China, Taiwan and Japan, the prevalence of myopia in young adults approximates 70-90%.
This finding indicates that environmental influence on vision is more prominent than previously believed, as children in these countries are reported to spend more time indoors and away from sunlight. In fact, one Taiwanese study found that light intervention significantly decreased myopic shift and axial elongation in schoolchildren who spent at least 11 hours a week outdoors for a one-year duration. Another study found that students who played outdoor sports showed the least potential for myopic development. The theorized mechanism for this effect is that dopamine secretion in the retina, which is induced by light, inversely correlates with axial elongation. Though the precise role of dopamine in this process is still uncertain, one explanation states that retinal dopamine agonists, which are substances that initiate a physiological response once bound to receptors, "interact with the early signaling molecule ZENK." The result is an initiation of postnatal eye growth. The severity of myopia in patients can be graded as mild, moderate, or high, depending on the extent of optical power needed for correction. Whereas mild myopia is most common and easily managed, high myopia is associated with more serious conditions, including retinal damage, glaucoma, and cataracts. Glaucoma, which results in damage to the optic nerve, and cataracts, which disturb visual clarity due to protein accumulation in the lens, are progressive pathologies. In severe cases, they can result in complete vision loss. In contrast to myopia, hyperopia (also known as farsightedness) is marked by a shortened axial length, resulting in the focal point forming beyond the retina. Consequently, farsighted individuals are able to see distant objects clearly but report blurred vision at closer distances. Hyperopia is present in 10% of individuals in the United States and, like myopia, is diagnosed through a refractive assessment. Hyperopia also shows an association with the environment: a 2008 study conducted in Poland discovered that hyperopia presents at a lower frequency among schoolchildren raised in the city compared to those living in the countryside. This ailment has also been shown to worsen with age due to a gradual increase in the rigidity of the lens over time. When farsightedness emerges in later adulthood, it is defined as presbyopia. Hyperopia can be clinically diagnosed in several ways, including as simple or pathological. Simple hyperopia corresponds to the refractive error caused by axial shortening, while pathological hyperopia is attributed to "[prenatal, neurological, or inflammatory] maldevelopment, ocular disease, or trauma." Though most refractive errors are congenital, meaning present at birth, they can worsen throughout development. A primary reason for this progression is that ocular tissue continues to grow before and during adulthood. Consequently, the progression of myopia is often inevitable. In contrast, the shortened axial length seen in hyperopia may be naturally offset over time, both by this growth and by the eye's own focusing ability, a process termed accommodation. The optical treatment for myopia (top) as illustrated by the placement of a minus lens, altering light refraction at the cornea. Because the causes of myopia and hyperopia are related to the refraction of light, their treatment directly involves the modification of such refraction. The treatment for all forms of optical error involves refractive modification through corrective lenses such as eyeglasses and contact lenses.
More specifically, myopia is corrected by the diversion of light through a minus lens, which consists of a thick base and thin center. This structure promotes the focus of light at the retina. In contrast, hyperopia is corrected through a plus-powered lens, which is composed of a thicker center that shifts the focal point forward. Though the use of glasses and contacts is increasingly prevalent, the development of such optics dates back as far as the 13th century. The first pair of spectacles is believed to have emerged in Pisa, Italy, though the conceptualization of optical aids existed much earlier. During the Middle Ages, for instance, scholars looked through glasses filled with water as a means of magnifying scripture. Eventually, single-lens frames composed of glass were hand-held to enhance near-sighted reading. By the late 1200s, magnifying lenses were doubled and connected along the nose bridge by various materials (think leather, wood, metal, or even animal bones), paving the way to the modern eyeglass. A silver-lined frame crafted by Carl Fredrik Jonssén in 1850. Centuries later, contact lenses made their appearance through a series of flawed, yet increasingly efficient, introductions. In 1801, a young English scientist by the name of Thomas Young was inspired by René Descartes' innovative notion that optical aids could be worn in direct contact with the lens of the eye. Young designed a thin glass tube containing water (for magnifying purposes) and applied the tube to his eyes using a wax adhesive. In retrospect, this was both dangerous and inefficient; however, Young paved the way for centuries of contact lens development. Today, many advancements in contact lenses have allowed for increased comfort and safety. Most lenses provide moisture for longer wear-time following the emergence of hydrogel plastics. In addition, they can be worn either during the day or overnight, with the latter providing temporary day-time correction (known as "corneal reshaping contact lenses"). In addition to the advancements in both eyeglasses and contact lenses, newer and more permanent technologies have emerged within the last several decades. Refractive eye surgery, also known as LASIK (laser-assisted in situ keratomileusis), involves the use of a pulsating laser beam for precise reshaping of the cornea. The process is relatively brief: after numbing eye drops are applied, a thin flap of the cornea is incised to expose the underlying tissue curvature, and ocular tissue is then removed with the laser. In cases of myopia, the cornea is shaped in a concave manner to promote a long-lasting reduction in refractive power. With hyperopia, tissue is flattened along a spherical circumference to produce a sharper convex shape. With high rates of success and a low probability for complications, LASIK is best suited for individuals with mild to moderate myopia or hyperopia. It can also treat astigmatism, which is an impairment in vision characterized by imperfect curvature of the cornea. Surgical treatment is not suitable for severe myopia, as it would require too large a fraction of tissue removal. Despite the increasing global prevalence of refractive errors, many misconceptions continue to surround what does or does not worsen eyesight. While natural light exposure does play a role in the progression of myopia and hyperopia, most lifestyle patterns do not. Squinting, for example, may be indicative of myopia but does not affect its progression.
Similarly, while extended screen time may cause temporary eye discomfort (termed "digital eye strain"), studies show a limited impact of blue light on long-term visual impairment. Another common misconception is that individuals who wear corrective lenses develop a physiological reliance on them or weaken their eyes through their usage. While refractive errors can worsen over time, the changes in our prescriptions are not a result of the optical aids we wear. Finally, a prevailing household misconception is that carrots are beneficial to vision; many of us can likely recall being given cups of carrot juice to "strengthen our eyes." While high in Vitamin A, carrot juice provides minimal, if any, impact on refractive errors. Nonetheless, the idea that vegetables can improve eye health has, in fact, been supported empirically. Leafy greens containing lutein and zeaxanthin carotenoid pigments have been shown to help prevent eye diseases such as age-related macular degeneration, which results in damage to the retina. Such pigments reduce the amount of light-induced oxidation in the retina, a process associated with harmful, high-energy blue light. With the frequency of vision ailments on the rise, it is critical that we understand the causes and management of refractive errors. Though our ability to alter our visual predispositions is limited, there are measures we can take to protect our eyes. The best form of self-care includes regularly visiting an optometrist, spending time outdoors, and maintaining a balanced diet. Focal Length Examples To find the focal length of a lens, measure the object and image distances and plug the numbers into the focal length formula, 1/f = 1/d_o + 1/d_i, where d_o is the distance from the lens to the object and d_i is the distance from the lens to the image. Be sure all measurements use the same measurement system. Example 1: The measured distance from a lens to the object is 20 centimeters and from the lens to the image is 5 centimeters. Completing the focal length formula yields: 1/f = 1/20 + 1/5 = 5/20 = 1/4. The focal length is therefore 4 centimeters. Example 2: The measured distance from a lens to the object is 10 centimeters and the distance from the lens to the image is 5 centimeters. The focal length equation shows: 1/f = 1/10 + 1/5 = 3/10, so f = 10/3 ≈ 3.3 centimeters.
Targeting Lymphotoxin Beta and Paired Box 5: a potential therapeutic strategy for soft tissue sarcoma metastasis Runzhi Huang1,2 na1, Zhiwei Zeng1 na1, Penghui Yan1 na1, Huabin Yin3, Xiaolong Zhu1, Peng Hu1, Juanwei Zhuang1, Jiaju Li1, Siqi Li4, Dianwen Song3, Tong Meng2,3 & Zongqiang Huang ORCID: orcid.org/0000-0002-2787-16291 Soft tissue sarcomas (STSs) have a high rate of early metastasis. In this study, we aimed to uncover the potential metastasis mechanisms and related signaling pathways in STS with differentially expressed genes and tumor-infiltrating cells. RNA-sequencing (RNA-seq) data of 261 STS samples downloaded from the Cancer Genome Atlas (TCGA) database were used to identify metastasis-related differentially expressed immune genes and transcription factors (TFs), whose relationships were examined by Pearson correlation analysis. A metastasis-related prediction model was established based on the most significant immune genes. The CIBERSORT algorithm was performed to identify significant immune cells co-expressed with key immune genes. GSVA and GSEA were performed to identify prognosis-related KEGG pathways. Ultimately, we used Pearson correlation analysis to explore the relationships among immune genes, immune cells, and KEGG pathways. Additionally, key genes and regulatory mechanisms were validated by single-cell RNA sequencing and ChIP sequencing data. A total of 204 immune genes and 12 TFs were identified. The prediction model achieved satisfactory effectiveness for distant metastasis, with an area under the curve (AUC) of 0.808. LTB was significantly correlated with PAX5 (P < 0.001, R = 0.829) and the hematopoietic cell lineage pathway (P < 0.001, R = 0.375). The transcriptional regulatory pattern between PAX5 and LTB was validated by ChIP sequencing data. We hypothesized that down-regulated LTB (immune gene) modulated by PAX5 (TF) may induce cancer cell metastasis in patients with STS. Soft tissue sarcomas (STSs) are a group of rare and heterogeneous malignancies arising from resident cells of connective tissues that comprise more than 50 different histological subtypes and account for approximately 1% of all malignancies [1]. Despite advances in understanding STS tumorigenesis, management options have remained unchanged over the past few decades because of the disease's rarity, complexity, late diagnosis and early metastasis [2]. In addition, due to the limited responsiveness to chemotherapy, surgery remains the standard treatment for patients with localized STS, but over 50% of patients may experience recurrence and metastasis after surgery [3]. Thus, novel treatments, such as targeted therapies, and the identification of biomarkers for identifying early metastatic disease are desperately needed. Both molecular and cellular features have been shown to exert important influences on tumorigenesis and metastasis [4]. Transcription factors (TFs) are a group of proteins that regulate the transcription rate of genetic information from DNA to mRNA by binding to specific DNA sequences. A large number of studies have indicated that TFs are actively involved in many human diseases, including cancers, in which they constitute approximately 20% of currently identified oncogenes [5]. Some abnormal biological behaviours, such as apoptosis, epithelial-mesenchymal transition (EMT), invasion, and metastasis, have also been attributed to the aberrant expression of TFs in various cancers [6, 7].
On the other hand, interactions and complicated communication among diverse tumour-infiltrating immune cells also play a role in tumour metastasis and mortality prediction [8]. However, metastasis-related TFs and tumour-infiltrating immune cells in STS have not been explored and need to be further analysed. In this study, we conducted a comprehensive analysis of TF and immune gene profiles to identify overall survival (OS)- and metastasis-related TFs and immune genes in patients with STS and constructed a prognostic model. Then, we used the "Cell Type Identification by Estimating Relative Subsets of RNA Transcripts (CIBERSORT)" algorithm to detect tumour-infiltrating immune cells and their proportions in STSs. We also performed gene set enrichment analysis (GSEA), gene set variation analysis (GSVA) and Pearson correlation analysis to examine potential metastasis-related signalling pathways. Finally, we proposed an innovative and systematic hypothesis that aberrantly expressed TFs regulate the expression of corresponding immune genes and promote STS metastasis, which may unveil significant and novel biomarkers and help to improve clinical management. Additionally, key genes and regulatory mechanisms were validated by single-cell RNA sequencing (scRNA-seq) and chromatin immunoprecipitation sequencing (ChIP-seq) data. Data collection, differentially expressed genes (DEGs) and functional enrichment analysis The Ethics Committee of the First Affiliated Hospital of Zhengzhou University approved this study. RNA sequencing profiles and clinical information of localized and metastatic STS samples were collected from the Cancer Genome Atlas (TCGA) database (https://tcgadata.nci.nih.gov/tcga/). Cancer-related transcription factors (TFs) were collected from the Cistrome Cancer database (http://cistrome.org/). Immune-related genes were retrieved from the ImmPort database (https://www.import.org/) and the Molecular Signatures Database (MSigDB) v7.0 (https://www.gsea-msigdb.org/gsea/msigdb/index.jsp). HTseq-count and Fragments Per Kilobase of transcript per Million mapped reads (FPKM) profiles of 261 samples, including 121 localized STS and 55 metastatic STS samples, were assembled. The "edgeR" package was used to identify DEGs after removing non-STS-specific genes. The counts per million (CPM) and trimmed mean of M-values (TMM) algorithms were used for data normalization. Genes with a false discovery rate (FDR) P < 0.05 and log2(fold change) > 1 or < −1 were regarded as DEGs. Heatmaps and volcano plots were created to illustrate the DEGs. Then, the DEGs were analysed against the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) datasets to examine potential mechanisms of STS metastasis. Identification of OS-related immune genes The expression of all immune-related genes and immune-related DEGs was extracted from the previously downloaded RNA-seq profiles and the DEG list, respectively, and was used to generate a heatmap and volcano plot. Then, the immune-related DEGs and clinical data were used in univariate Cox regression analysis to identify OS-related immune genes. Construction of a prognostic model based on OS-related immune genes Based on the results of univariate Cox regression analysis, we extracted the most significant OS-related immune genes (P < 0.05 in univariate Cox regression analysis), all of which were included in multivariate Cox regression analysis to evaluate the significance of each OS-related immune gene with a β value (the regression coefficient of each integrated gene in the model).
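The scripts behind these steps are not published with the paper; the following is a minimal R sketch of the screening pipeline just described, assuming counts is an HTseq-count matrix (genes × samples), group labels each sample as localized or metastatic, expr is the matching FPKM matrix, immune_degs is the vector of immune-related DEGs, and clin holds each patient's survival time and status. The CPM filter cutoff is illustrative rather than a value stated in the text.

```r
library(edgeR)
library(survival)

## --- DEG screening with edgeR (CPM filtering, TMM normalization) ---
y <- DGEList(counts = counts, group = group)
y <- y[rowSums(cpm(y) > 1) >= 3, , keep.lib.sizes = FALSE]  # illustrative CPM filter
y <- calcNormFactors(y, method = "TMM")
y <- estimateDisp(y)
et  <- exactTest(y)                                 # localized vs. metastatic
tab <- topTags(et, n = Inf)$table
degs <- rownames(tab[tab$FDR < 0.05 & abs(tab$logFC) > 1, ])

## --- univariate Cox screen over immune-related DEGs ---
uni_p <- sapply(immune_degs, function(g)
  summary(coxph(Surv(clin$time, clin$status) ~ expr[g, ]))$coefficients[, "Pr(>|z|)"])
os_genes <- names(uni_p[uni_p < 0.05])

## --- multivariate Cox model: the beta of each gene feeds the risk-score formula below ---
df   <- data.frame(time = clin$time, status = clin$status, t(expr[os_genes, ]))
fit  <- coxph(Surv(time, status) ~ ., data = df)
beta <- coef(fit)
```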
The risk score of patient i was calculated with the following formula: $$ \text{Risk score}_i = \sum_{a=1}^{n} \beta_a \times (\text{expression level of gene } a) $$ Then, individuals were divided into two risk groups based on the median risk score. The area under the ROC curve was analysed to assess the accuracy of the model. Kaplan–Meier survival analysis was used to compare the survival probability between the high- and low-risk groups. Individuals were reordered based on the risk score, and a risk curve, a survival state-related scatterplot, and a heatmap of OS-related immune genes were plotted. Univariate and multivariate Cox regression analyses, modified by baseline information, were used to identify the independent prognostic value of the risk score, age, sex, race, and metastatic diagnosis (in multivariate Cox regression analysis, the variables were all corrected for demographics and clinical information, which also reduced the bias among individual patients). Identification of differentially expressed transcription factors The expression of all the cancer-related TFs and cancer-related DEGs was extracted from the previously downloaded RNA-seq profiles and DEG list, respectively, and was used to create a heatmap and volcano plot. Pearson correlation analysis was performed to examine the interaction and correlation between differentially expressed transcription factors and overall survival-related immune genes. Interaction pairs with correlation coefficients > 0.300 and P < 0.001 were included in the subsequent analysis. Identification of potential immune cell and KEGG pathway mechanisms The quantity of 21 immune cell types in localized primary STS and metastatic samples was evaluated by CIBERSORT to further examine immune cells that drove metastasis. Then, correlation analysis was used to identify the correlation between immune cells and the biomarker, which was illustrated by a co-expression heatmap. Linear plots of biomarkers and immune cells with P < 0.001 were generated. Prognosis-related signalling pathways, identified by univariate Cox regression analysis based on gene set variation analysis (GSVA), were then subjected to correlation analysis with crucial metastasis-related biomarkers and illustrated by a co-expression heatmap. Metastasis-related signalling pathways were also identified by gene set enrichment analysis (GSEA). KEGG pathways significant in both GSEA and GSVA analysis are displayed by Venn plots. Then, linear plots were generated to show the correlation between the crucial biomarker and metastasis- and prognosis-related KEGG signalling pathways. Construction of a network with TFs, key biomarkers, immune cells, and KEGG pathways To further discover the metastatic mechanisms in patients with STS, we constructed a network based on the interaction among prognosis-related and/or metastasis-related transcription factors, biomarkers, immune cells, and KEGG pathways with Cytoscape. Finally, the STS metastasis-related hypothesis based on bioinformatics was illustrated by a signalling diagram. Online database external validation To obtain the complete annotation of selected TFs, key biomarkers, immune cells, and signalling pathways, multiple online databases were used to detect gene and protein expression levels, including cBioPortal [9, 10], GEPIA [11], K-M Plotter [12], PathCards [13], LinkedOmics [14], STRING [15], TISIDB [16], UALCAN [17] and CellMarkers [18].
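As a companion sketch for the risk model defined above (again in R, with beta, os_genes, expr and clin carried over from the previous block), the median stratification and evaluation might look as follows; the 5-year horizon for the time-dependent ROC is an assumption, since the paper does not state the time point used.

```r
library(survival)
library(survivalROC)

## risk score per patient: weighted sum of expression by the Cox coefficients
risk  <- as.numeric(t(expr[os_genes, ]) %*% beta)
rgrp  <- ifelse(risk > median(risk), "high", "low")   # median split into two risk groups

## Kaplan-Meier curves and log-rank test between the risk groups
km <- survfit(Surv(clin$time, clin$status) ~ rgrp)
survdiff(Surv(clin$time, clin$status) ~ rgrp)

## time-dependent ROC; predict.time = 5 years is an illustrative horizon
roc <- survivalROC(Stime = clin$time, status = clin$status,
                   marker = risk, predict.time = 5 * 365, method = "KM")
roc$AUC
```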
Immunohistochemistry (IHC) validation Twenty-nine formalin-fixed paraffin-embedded (FFPE) tissue blocks from 29 sarcoma patients were deparaffinized and dehydrated. The slides were incubated overnight (4 °C) with an anti-PAX5 antibody (1:50 dilution, Proteintech), anti-LTB antibody (1:200 dilution, Bioworld), anti-CHSY1 antibody (1:100 dilution, Abcam), anti-CD19 antibody (1:50 dilution, Proteintech), anti-CD38 antibody (1:50 dilution, Proteintech), anti-CD138 antibody (1:100 dilution, Cell Signalling Technology) and anti-SPM310 antibody (1:100 dilution, Novus NBP2-34359) after routine rehydration, antigen retrieval, and blocking procedures. Next, all slides were labelled with polymer HRP for 30 min and counterstained with haematoxylin for 5 min at room temperature. Two pathologists examined the pathological sections and identified positive results when the cytoplasm of cancer cells was stained. The percentage score of tumour cells was as follows: negative (0), yellowish (1–4), light brown (5–8), and dark brown (9–12). The markers of B cells (CD19 and CD38) and plasma cells (CD138 (Syndecan-1) [19] and SPM310 (Novus NBP2-34359)) were scored in the tumour and in the surrounding lymph nodes, respectively. In negative controls, the primary antibody was replaced by buffer. Additionally, correlation analysis and nonparametric tests (Mann–Whitney U test) were performed to evaluate the relationship between the IHC score and clinical features (grade of differentiation and metastasis during follow-up). Validation of the regulatory mechanism of transcription factors Two algorithms (ENCODE Transcription Factor Targets and JASPAR) [20, 21] were used to re-predict the transcriptional regulatory pattern between PAX5 and LTB to further support our hypothesis. In addition, we conducted a comprehensive retrieval of public databases and found five ChIP-seq datasets for PAX5 (four from Homo sapiens and one from Mus musculus) [22,23,24,25]. Integrative Genomics Viewer (IGV) was used to normalize and visualize binding regions and peaks from the different datasets [26]. Validation of scRNA-seq data The scRNA-seq data of the human alveolar rhabdomyosarcoma cell line Rh41 were downloaded from Gene Expression Omnibus (GEO) (GSE113660) to validate the distribution and expression of key genes [27, 28]. For integrated data analysis, the Seurat package was used [29]. During quality control, only genes expressed in more than 200 single cells and cells with transcript counts ranging from 1500 to 100,000 were included in further analysis. The "vst" method was utilized to identify variable genes. Then, principal component analysis (PCA) was performed based on the variable genes, and jackstraw analysis was used to select the principal components (PCs) [29]. For dimension reduction, the UMAP (Uniform Manifold Approximation and Projection) method with a resolution of 0.50 was applied to identify cellular clusters based on the top 20 significant PCs [30]. Genes were considered DEGs in each cluster when the absolute value of log2(FC) was > 0.5 and the FDR was < 0.05. The distribution and expression of DEGs are illustrated in feature plots and violin plots, respectively. In addition, every cluster was annotated by the singleR method [31] and the CellMarker database [18]. Moreover, the GSVA method was used to quantify signalling pathway (50 hallmark pathways) activity in each single cell.
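A minimal Seurat sketch of the scRNA-seq workflow just described, using the quality-control and clustering parameters stated in the text (genes in more than 200 cells, 1,500–100,000 transcripts per cell, "vst" variable-gene selection, top 20 PCs, UMAP at resolution 0.50); rh41_counts and the nfeatures value are assumptions of this sketch.

```r
library(Seurat)

## quality control follows the text: genes expressed in > 200 cells,
## cells with 1,500-100,000 transcripts
rh41 <- CreateSeuratObject(counts = rh41_counts, project = "Rh41", min.cells = 200)
rh41 <- subset(rh41, subset = nCount_RNA > 1500 & nCount_RNA < 100000)

rh41 <- NormalizeData(rh41)
rh41 <- FindVariableFeatures(rh41, selection.method = "vst", nfeatures = 2000)
rh41 <- ScaleData(rh41)

## PCA + jackstraw to select significant PCs (the paper keeps the top 20)
rh41 <- RunPCA(rh41)
rh41 <- JackStraw(rh41, num.replicate = 100)
rh41 <- ScoreJackStraw(rh41, dims = 1:20)

## clustering and UMAP at resolution 0.50
rh41 <- FindNeighbors(rh41, dims = 1:20)
rh41 <- FindClusters(rh41, resolution = 0.50)
rh41 <- RunUMAP(rh41, dims = 1:20)

## per-cluster markers with the stated cutoffs, and key-gene visualization
markers <- FindAllMarkers(rh41, logfc.threshold = 0.5)
markers <- markers[markers$p_val_adj < 0.05, ]      # FDR < 0.05, per the text
FeaturePlot(rh41, features = c("PAX5", "LTB", "CD44"))
VlnPlot(rh41, features = c("PAX5", "LTB"))
```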
All statistical analyses were performed with R version 3.5.1 (Institute for Statistics and Mathematics, Vienna, Austria; https://www.r-project.org). For descriptive statistics, the mean ± standard deviation was used for continuous variables with a normal distribution, while the median (range) was used for continuous variables with an abnormal distribution. Categorical variables are described by counts and percentages. Two-tailed P < 0.05 was regarded as statistically significant. Identification of DEGs and functional enrichment analysis The analysis in this study is illustrated in Fig. 1. The baseline features of samples collected from the TCGA database are described in Additional file 1: Table S1. Genes with a log2(fold change) > 1 or < −1 and FDR < 0.05 between localized STS samples with and without metastasis were defined as DEGs. We identified 1947 differentially expressed genes (1375 down- and 572 upregulated), which are illustrated by a heatmap and volcano plot (Additional file 1: Figure S1A, B). To examine the potential mechanisms of the identified DEGs, GO and KEGG enrichment analyses were performed. Several immune response processes, such as "humoral immune response", "complement activation", and "immunoglobulin mediated immune response" in biological process (BP), "immunoglobulin complex" in cellular component (CC), and immune functions, including "antigen binding" and "immunoglobulin receptor binding", in molecular function (MF), were significantly enriched in GO analysis (Fig. 2a). KEGG enrichment analysis indicated that some key pathways, such as "cytokine–cytokine receptor interaction", were significantly different between localized STS with and without metastasis (Fig. 2b). The analysis flowchart Functional enrichment analysis of significantly differentially expressed genes: GO (a) and KEGG (b) enrichment analysis of significantly differentially expressed genes. c The univariate Cox regression analysis for evaluating the prognostic value of identified immune genes. GO: Gene Ontology; KEGG: Kyoto Encyclopedia of Genes and Genomes; STS: soft tissue sarcoma Identification of differentially expressed and prognosis-related immune genes Differentially expressed immune genes (log2(fold change) > 1 or < −1 and FDR < 0.05) are illustrated in the heatmap and volcano plot (Additional file 1: Figure S1C, D). To identify prognosis-related immune genes, univariate Cox regression analysis was performed, in which 6 protective factors and 9 risk factors were found. Among these factors, LTB (HR = 0.999, 95% CI (0.998–0.999), P = 0.027) was found to be a protective factor for prognosis in patients with STS (Fig. 2c). Establishment of the prediction model Immune genes identified by univariate Cox regression analysis were included in Lasso regression analysis, and we found that key immune genes were significantly correlated with patient prognosis. Individuals were divided into low- and high-risk groups by the median risk score. The results indicated the good effectiveness of the prediction model, with a high area under the curve (AUC) of the ROC curve (0.808) (Fig. 3a) and a significant difference in Kaplan–Meier analysis (P < 0.001) (Fig. 3b). Prognostic model for STS patients: a The ROC curve for evaluating the accuracy of the prediction model. b The Kaplan–Meier analysis of the prediction model. c The univariate and multivariate Cox regression analysis of risk score, age, gender, race, and metastatic diagnosis for evaluating the independent prognostic value of the risk score. d The risk curve of each patient by risk score. e The scatter plot of the samples. The green and red dots represent survival and death, respectively.
f The heatmap of immune genes screened by Lasso regression Risk curves and scatterplots were created to display the risk score and survival status of each patient with STS. Patients in the high-risk group had a higher mortality than those in the low-risk group (Fig. 3d, e). The heatmap shows the expression of CRH, S100A7L2, UCN3, TRH, IL1RL1, S100A1, CCR7, and CX3CR1, which were all included in the prognostic model (Fig. 3f). To verify the independent prognostic value of the risk score and other clinical features, including age, sex, race, and metastatic diagnosis, both univariate and multivariate Cox regression analyses were performed. The risk score was proven to be an independent predictor in both univariate (HR = 1.064, 95% CI 1.041–1.087, P < 0.001) and multivariate Cox regression analyses (HR = 1.078, 95% CI 1.053–1.104, P < 0.001) (Fig. 3c). PAX5 regulated LTB to promote STS metastasis Differentially expressed TFs (log2(fold change) > 1 or < −1 and FDR < 0.05) are illustrated in the heatmap and volcano plot (Additional file 1: Figure S1C). To explore the relationship between the identified TFs and key immune genes, Pearson correlation analysis was performed. Only regulation pairs with correlation coefficients < −0.300 or > 0.300 and P < 0.001 were selected to construct the regulatory network in subsequent analysis. We found that ASCL1-TRBV29-1 (P < 0.001, R = 0.36), PAX5-CD1C (P < 0.001, R = 0.31), PAX5-CCR7 (P < 0.001, R = 0.67), PAX5-LTB (P < 0.001, R = 0.829), and TFAP2A-S100A7L2 (P < 0.001, R = 0.361) were 5 pairs that met the screening threshold. Moreover, the PAX5-LTB interaction had the greatest correlation coefficient among the pairs; thus, we entered that pair into the subsequent analysis (Figure S2). Molecular and signalling pathway mechanisms of LTB triggering STS metastasis To further explore the potential cellular and signalling pathway mechanisms, CIBERSORT and GSVA were performed, in which the quantity of 21 immune cell types was evaluated, and 39 signalling pathways related to prognosis in patients with STS were screened. Pearson correlation analysis was applied to examine the correlation among LTB, immune cells, and prognosis-related KEGG pathways (Fig. 4a, b). The results showed that LTB was significantly correlated with memory B cells (P < 0.001, R = 0.658), plasma cells (P < 0.001, R = 0.448), follicular helper T cells (P < 0.001, R = 0.409), and M2 macrophages (P < 0.001, R = −0.272) (Fig. 4c–f). Identification of cellular and signalling pathway mechanisms in STS metastasis: a Co-expression heatmap between LTB and 21 immune cells. b Co-expression heatmap between LTB and prognosis-related KEGG pathways screened by GSVA and univariate Cox regression analysis. c–f Correlation relationships between LTB and immune cells Identification of signalling pathways To further identify metastasis- and prognosis-related KEGG pathways, GSEA was also performed. The results showed that 5 key KEGG pathways were significant in both GSEA and GSVA, including the arachidonic acid metabolism pathway, basal transcription factor pathway, cytokine–cytokine receptor interaction pathway, haematopoietic cell lineage pathway, and primary immunodeficiency pathway (Fig. 5a–g). Pearson correlation analysis showed that LTB was significantly correlated with the primary immunodeficiency pathway (P < 0.001, R = 0.432) (Fig. 5h), haematopoietic cell lineage pathway (P < 0.001, R = 0.375) (Fig. 5i), cytokine–cytokine receptor interaction pathway (P < 0.001, R = 0.369)
(Fig. 5j), arachidonic acid metabolism pathway (P < 0.001, R = 0.322) (Fig. 5k), and basal transcription factor pathway (P < 0.001, R = −0.249) (Fig. 5l). Our hypothesis regarding STS metastasis mechanisms is illustrated in Fig. 6. Further discovery of potential signalling pathways underlying STS metastasis: a Venn plot illustrating the number of KEGG pathways related to STS metastasis in GSEA and GSVA. b Integrated plot showing the GSEA analysis. c–g Significant KEGG pathways identified by GSEA analysis. h–l Results of Pearson correlation analysis between LTB and key KEGG pathways The illustration of our scientific hypothesis External validation with multiple online databases To reduce the bias induced by pure bioinformatics analysis, we used multiple online databases to further prove the reliability of our study. First, we used the CellMarker and PathCards databases to explore the biomarkers of plasma cells (IL1A, IL5RA, and IL7) and the haematopoietic cell lineage pathway (IL5RA, LY9, SLAMF7, and ICAM1), respectively. The Oncomine database showed that LTB, LY9, SLAMF7, and ICAM1 were downregulated, while IL5RA was upregulated in different STS-related studies (Additional file 1: Figure S2). UALCAN, K-M Plotter, TISIDB, and LinkedOmics revealed that LTB, IL1A, IL5RA, IL7, LY9, SLAMF7, and ICAM1 were all significantly correlated with STS patient prognosis (Additional file 1: Figure S3–S6). In addition, LTB, IL1A, LY9, and SLAMF7 were differentially expressed between normal and tumour tissues (Additional file 1: Figure S3). GEPIA showed that LTB, IL5RA, LY9, and ICAM1 were significantly correlated with prognosis (Additional file 1: Figure S7). To examine the relationship between LTB and other biomarkers, we conducted Spearman correlation analysis with different databases. In LinkedOmics, LTB was significantly correlated with PAX5 (P < 0.001, R = 0.31), IL1A (P = 0.008, R = 0.17), IL5RA (P < 0.001, R = 0.43), IL7 (P < 0.001, R = 0.51), LY9 (P < 0.001, R = 0.82), SLAMF7 (P < 0.001, R = 0.79), and ICAM1 (P < 0.001, R = 0.61) (Additional file 1: Figure S5). In GEPIA, LTB was significantly correlated with PAX5 (P < 0.001, R = 0.34), IL1A (P = 0.016, R = 0.15), IL5RA (P < 0.001, R = 0.41), IL7 (P < 0.001, R = 0.50), LY9 (P < 0.001, R = 0.81), SLAMF7 (P < 0.001, R = 0.79), and ICAM1 (P < 0.001, R = 0.56) (Additional file 1: Figure S7). In cBioPortal, LTB was significantly correlated with PAX5 (P < 0.001, R = 0.31), IL1A (P = 0.009, R = 0.16), IL5RA (P < 0.001, R = 0.43), IL7 (P < 0.001, R = 0.52), LY9 (P < 0.001, R = 0.83), SLAMF7 (P < 0.001, R = 0.79), and ICAM1 (P < 0.001, R = 0.60) (Additional file 1: Figure S8B-H). In addition, K-M survival analysis that integrated all the biomarkers in cBioPortal showed that the overall expression of the biomarkers was significantly related to patient prognosis (Additional file 1: Figure S8I). Finally, the STRING database suggested that all the biomarkers were strongly connected with each other based on the protein–protein interaction network (Additional file 1: Figure S9). Table 1 summarizes the results of the external validation of biomarkers in SARC with multiple online databases. Additionally, we applied two other algorithms (ENCODE Transcription Factor Targets and JASPAR) [20, 21] to re-predict the transcriptional regulatory pattern between PAX5 and LTB to further support our hypothesis, which suggested that the DNA-binding motif of PAX5 matches a sequence in the promoter region of LTB (Fig. 6).
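The JASPAR-based re-prediction in this step can be reproduced along the following lines with Bioconductor's TFBSTools. This is a sketch under stated assumptions: the PAX5 matrix ID (MA0014.3), the ltb_promoter_seq string standing in for roughly 2 kb upstream of the LTB transcription start site, and the 85% match threshold are all choices of this example, not values reported by the authors.

```r
library(JASPAR2018)
library(TFBSTools)
library(Biostrings)

## fetch the PAX5 position frequency matrix from JASPAR and convert it to a PWM;
## the matrix ID (MA0014.3) is an assumption of this sketch
pfm <- getMatrixByID(JASPAR2018, ID = "MA0014.3")
pwm <- toPWM(pfm)

## ltb_promoter_seq stands in for ~2 kb upstream of the LTB transcription start
## site, e.g. retrieved from a BSgenome object beforehand
ltb_promoter <- DNAString(ltb_promoter_seq)

## scan both strands for matches above an illustrative 85% score threshold
hits <- searchSeq(pwm, ltb_promoter, seqname = "LTB_promoter",
                  strand = "*", min.score = "85%")
writeGFF3(hits)   # inspect the predicted PAX5 binding sites
```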
Table 1 External validation of biomarkers in SARC via multiple online databases Among 29 patients, 15 were diagnosed with liposarcoma (metastasis occurred in eight patients during follow-up), and 14 were diagnosed with leiomyosarcoma (metastasis occurred in nine patients during follow-up). PAX5 and LTB proteins were significantly downregulated in the tumour cells of primary sarcomas with metastasis, while markers of B cells (CD19 and CD38) were not detected in almost all primary tumours (Fig. 7a). Although CD19 and CD38 were found in lymph nodes, as noted by a pathologist at our hospital, these findings were not surprising, as B cells are abundant in lymph nodes. Therefore, the IHC results of CD19 and CD38 did not prove or disprove the hypothesis that plasma cells were downstream of LTB. However, PAX5 and LTB proteins were shown to be significantly downregulated in the tumour cells of primary sarcomas with metastasis. Furthermore, in the absence of a good CD19 and CD38 antibody, we used anti-CD138 (Syndecan-1) and anti-SPM310 (Novus NBP2-34359) antibodies, two other proven plasma cell markers, for plasma cell detection. However, only four of 29 sarcomas were found to have plasma cells in HE staining of the tumour, and none of these four patients had metastases (Fig. 7b). This might be because tumour-infiltrating immune cells tended to be located around the tumour rather than within it, and sarcomas tended to be excised en bloc, so there were no paracancerous tissues that could be used as a control. The results of the Mann–Whitney U test suggested that PAX5 (P < 0.001) and LTB (P < 0.001) were both highly expressed in well-differentiated primary sarcomas and in primary sarcomas without metastasis (Fig. 7c, d). The results of immunohistochemistry (IHC): a PAX5, LTB and the B cell markers (CD19 and CD38) in primary sarcomas and surrounding lymph nodes. b The plasma cell markers CD138 (Syndecan-1) and SPM310. c, d Mann–Whitney U tests of the PAX5 and LTB IHC scores by grade of differentiation and metastasis status
ChIP-seq validation A comprehensive retrieval of public databases (Sequence Read Archive (SRA), European Genome-phenome Archive (EGA) and the European Bioinformatics Institute (EBI)) was conducted, and five ChIP-seq datasets for PAX5 were filtered (four from Homo sapiens and one from Mus musculus) (Table S3) [22,23,24,25]. In the three different kinds of B lymphocytes from Hodgkin's lymphoma and Burkitt's lymphoma (Raji, Namalwa and L428 cells) (PRJNA190710), the binding regions of PAX5 in LTB showed higher binding strength than the input control samples (Fig. 8a). Similarly, higher binding strength of PAX5 at LTB was also illustrated in NALM6, DOHH2, OCI-LY-7, GM1287 and GM12892 cells compared to that in the control samples (PRJNA63447, PRJNA285847 and PRJNA475974) (Fig. 8b). In addition, in Pax5 ChIP-seq data of activated B cells and plasmablasts (Mus musculus), upregulated binding peaks were also found in Ltb sequences (PRJNA625028) (Fig. 8c). Moreover, the binding peaks of Ltb were higher in activated B cells and plasmablasts from IghPax5/+ mice than in those from mice in the control group. The results of ChIP-seq validation: a, b PAX5 binding signal over the LTB locus in human B cell lines compared with input controls (PRJNA190710, PRJNA63447, PRJNA285847 and PRJNA475974). c Pax5 binding peaks over the Ltb locus in activated B cells and plasmablasts from IghPax5/+ and control mice (PRJNA625028) The scRNA-seq data of the human alveolar rhabdomyosarcoma cell line Rh41 were downloaded from Gene Expression Omnibus (GEO) (GSE113660) to validate the distribution and expression of key genes (PAX5, LTB, IL1A, IL5RA, IL7, LY9, SLAMF7, SDC1 and ICAM1). First, 7261 human alveolar rhabdomyosarcoma cells were reduced and clustered into ten cellular clusters by the UMAP method with a resolution of 0.50 (Fig. 9a). PAX5, LTB and CD44 (a marker of stem cells) were significantly colocalized in the No. 7 cluster; SLAMF7, SDC1 (Syndecan-1) and ICAM1 were scattered among different clusters of rhabdomyosarcoma cells; while IL1A, IL5RA, IL7 and LY9 were not detected in rhabdomyosarcoma cells (Fig. 9b). In cell cycle analysis, rhabdomyosarcoma cells with high expression of PAX5 and LTB were significantly located in the G2M and S phases (Fig. 9c, d). Moreover, the GSVA heatmap demonstrated that some metastasis-related signalling pathways, such as epithelial-mesenchymal transition (EMT) and angiogenesis, were active in cells with high expression of PAX5 and LTB (Fig. 9e). Validation of scRNA-seq data: a UMAP clustering of 7261 Rh41 cells into ten clusters. b Distribution and expression of the key genes across clusters. c, d Cell cycle analysis of cells with high PAX5 and LTB expression. e GSVA heatmap of hallmark signalling pathway activity
STSs, accounting for 1% of all malignancies, are difficult to diagnose early and accurately. In addition, effective management methods have not been established due to their rarity, histological heterogeneity, and diverse biological behaviours [32]. Moreover, STS is notorious for its high rate of wide, early metastasis [2]. Recently, many researchers reported that the aberrant expression of transcription factors, immune genes, and tumour-infiltrating immune cells played important roles in promoting multiple abnormal biological behaviours in tumour progression, including metastasis [33,34,35]. However, the related mechanisms in STS have not yet been clearly explored. In this study, we identified 204 differentially expressed immune genes and 12 TFs. Based on 15 OS-related immune genes, we established a prediction model that was highly effective based on the K-M survival curve (P < 0.001) and ROC curve (AUC: 0.808). Based on the results of Pearson correlation analysis between TFs and immune genes, we found that LTB (an immune gene) was significantly correlated with PAX5 (a TF) (P < 0.001, R = 0.83). PAX5 and LTB proteins were shown to be significantly downregulated in the tumour cells of primary sarcomas with metastasis based on IHC. Compared to the control group, the binding regions of PAX5 in LTB showed higher binding strength in five different ChIP-seq datasets. Additionally, PAX5, LTB and CD44 (a marker of stem cells) were significantly colocalized in the scRNA-seq data of the human alveolar rhabdomyosarcoma cell line Rh41. These results all suggested that PAX5 and LTB might be potential predictors and therapeutic targets for STS metastasis. Paired Box 5 (PAX5) encodes a member of the PAX family and functions as a TF through a DNA-binding domain known as the paired box. Paired box transcription factors are vital regulators of early organ development and tissue differentiation, and alterations in their expression are considered catalysts in neoplastic transformation [36]. Previous studies revealed that, as a B-lymphoid transcription factor, PAX5 was downregulated in over 80% of pre-B cell acute lymphoblastic leukaemia (ALL), and its downregulation in lymphoid neoplasms was associated with promoter hypermethylation and poor clinical outcomes [37, 38]. In addition, aberrantly expressed PAX5 contributed to the tumorigenesis and malignant progression of many other cancers. In gastric cancer, PAX5 functioned as a tumour suppressor that was silenced by promoter hypermethylation; its re-expression suppressed cell proliferation and induced apoptosis.
In addition, PAX5 also constrained cell invasion and metastasis by inducing MTSS1 (MTSS I-BAR Domain Containing 1) and TIMP1 (Tissue Inhibitor of Metalloproteinase 1) and inhibiting MMP1 (Matrix Metallopeptidase 1) [39]. Moreover, in non-small cell lung cancer (NSCLC), mesothelioma and oesophageal cancer, the expression of PAX5 was also decreased [40, 41]. In this study, we also found that the downregulation of PAX5 in STS was significantly correlated with distant metastasis and poor prognosis, which is in accordance with previous studies. As a member of the tumour necrosis factor (TNF) ligand superfamily, Lymphotoxin Beta (LTB) forms a heteromeric complex with LT-alpha, acting as the primary ligand for the LT-beta receptor [42]. Previously, its function and mechanism were mainly believed to be involved in inflammatory responses, such as immune cell interactions and cytokine secretion regulation [43, 44]. Recently, its role in tumorigenesis and tumour evolution has received attention. The upregulation of LTB and its downstream targets, CXCL10 and NF-κB, was associated with tumorigenesis in HCV-related hepatocellular carcinoma (HCC) [45]. Additionally, LTB also interacted with methylated epithelial growth factor receptor (EGFR) in head and neck squamous cell carcinoma (HNSCC) to induce cetuximab resistance, leading to unfavourable outcomes. In papillary thyroid carcinoma, upregulated LTB also triggered metastasis [46]. However, in this study, the favourable prognostic role of LTB was evidenced by univariate Cox regression analysis and multiple online databases, which showed that the expression of LTB was negatively correlated with metastasis and associated with favourable prognosis; this may help uncover novel mechanisms of LTB as a tumour suppressor in tumorigenesis and metastasis. However, all of the above studies on PAX5 and LTB were conducted in cancer cells rather than tumour-infiltrating immune cells (tumour-infiltrating B cells and plasma cells). To identify immune cells that are actively involved in metastasis in tumour tissues, we conducted CIBERSORT and Pearson correlation analysis of LTB and key immune cells, the results of which suggested the importance of plasma cells (P < 0.001, R = 0.45). Multiple online databases were also used to test the prognostic values of the biomarkers of plasma cells, including IL1A, IL5RA, and IL7. Plasma cells are a group of terminally differentiated B cells originating from marginal zone or germinal centre B cells. As an indispensable component of the humoral immune system, plasma cells play an important role in immune protection by secreting clone-specific immunoglobulins. The differentiation, development, and function of plasma cells are regulated and influenced by a variety of cytokines and transcription factors [47]. In several cancers, the dense infiltration of plasma cells was associated with prolonged survival [48]. In mice with hepatocellular carcinoma (HCC), the depletion of plasma cells suppressed the growth of tumours by promoting the antitumour T cell immune response, and upregulated plasma cells were associated with poor prognosis in HCC patients [49]. However, unlike the definitive roles of tumour-infiltrating CD8+ T cells in antitumour immunity, the roles of tumour-infiltrating B cells and plasma cells are still unclear and controversial. Thus, our study may provide another potential mechanism [50]. The IHC results of CD19, CD38, CD138 (Syndecan-1) and SPM310 did not prove or disprove the hypothesis that plasma cells were downstream of LTB.
Thus, based on the results of this study, we could determine the transcriptional regulatory pattern between PAX5 and LTB and their cellular colocalization in cancer cells; whether this regulatory mechanism also exists in tumour-infiltrating B cells and plasma cells needs to be further validated. Furthermore, to uncover the deeper mechanism underlying STS metastasis, GSEA and GSVA were performed to find prognosis-related KEGG pathways, including the arachidonic acid metabolism pathway, basal transcription factor pathway, cytokine–cytokine receptor interaction pathway, haematopoietic cell lineage pathway, and primary immunodeficiency pathway. In multiple online databases, we found that IL5RA, LY9, SLAMF7, and ICAM1, biomarkers of the haematopoietic cell lineage pathway, were significantly correlated with metastasis and prognosis in patients with STS. The haematopoietic cell lineage pathway is a complex renewal and differentiation process of blood cells, in which haematopoietic stem cells (HSCs) differentiate into common lymphoid progenitors (CLPs) and common myeloid progenitors (CMPs), which ultimately give rise to the lymphoid and myeloid lineages, respectively [51, 52]. Defects in the haematopoietic cell lineage pathway reportedly contribute to malignant cell transformation [53, 54]. In addition, reductions in haematopoietic stem cells were also associated with leukaemic stem cell persistence and poor prognosis in acute myeloid leukaemia [55]. Although many methods were used to control the bias introduced by pure bioinformatics analysis, there were still some weaknesses in this study. First, the STS patients identified in the TCGA database in this study were mainly from Western countries; thus, whether this prediction model is applicable to Asian populations remains unknown. In addition, most conclusions were drawn from computational predictions and few direct experiments; thus, the evidence that PAX5 directly regulates LTB in STSs is not extensive. However, we have been performing a series of experiments, including flow cytometry, ChIP-seq, and single-cell sequencing, to further support the hypothesis proposed in this study. In conclusion, we established a satisfactory prediction model for patients with STS. Based on comprehensive bioinformatics analysis and preliminary experiments, we hypothesized that excessive downregulation of LTB (an immune gene) modulated by PAX5 (a TF) induced cancer cell metastasis in patients with STS by modifying the haematopoietic cell lineage pathway. The transcriptional regulatory pattern between LTB and PAX5 could be determined by a bioinformatics algorithm and public ChIP-seq data.

Availability of data and materials
The datasets generated and/or analysed during the current study are available in the Cancer Genome Atlas (TCGA) database (https://tcga-data.nci.nih.gov/tcga/), the Cistrome Cancer database (http://cistrome.org/), and the ImmPort database (https://www.immport.org/). Multiple other databases used for external validation also support the findings of this study, including cBioPortal (https://www.cbioportal.org/), GEPIA (http://gepia.cancer-pku.cn/), K-M Plotter (http://kmplot.com/), PathCards (http://pathcards.genecards.org/), LinkedOmics (http://www.linkedomics.org/), STRING (https://string-db.org/), TISIDB (http://cis.hku.hk/TISIDB/), UALCAN (http://ualcan.path.uab.edu/), and CellMarker (http://biocc.hrbmu.edu.cn/CellMarker/). Gene Expression Omnibus (GEO): https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE113660.
Sequence Read Archive (SRA): https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA190710, https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA63447, https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA285847, https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA475974, https://www.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA625028.

Abbreviations
STS: Soft tissue sarcoma
LTB: Lymphotoxin Beta
PAX5: Paired Box 5
TF: Transcription factor
CIBERSORT: Cell type identification by estimating relative subsets of RNA transcripts
GSVA: Gene set variation analysis
GSEA: Gene set enrichment analysis
KEGG: Kyoto Encyclopedia of Genes and Genomes
AUC: Area under the curve
EMT: Epithelial-mesenchymal transition
OS: Overall survival
DEG: Differentially expressed gene
MSigDB: Molecular Signatures Database
FPKM: Fragments per kilobase of exon per million reads mapped
IL1A: Interleukin 1 Alpha
IL5RA: Interleukin 5 Receptor Subunit Alpha
IL7: Interleukin 7
LY9: Lymphocyte Antigen 9
SLAMF7: Signaling Lymphocytic Activation Molecule Family Member 7
ICAM1: Intercellular Adhesion Molecule 1
TIMP1: Tissue Inhibitor of Metalloproteinase 1
MMP1: Matrix Metallopeptidase 1
TNF: Tumour necrosis factor
EGFR: Epithelial growth factor receptor
HNSCC: Head and neck squamous cell carcinoma
HCC: Hepatocellular carcinoma
HSC: Haematopoietic stem cell
CLP: Common lymphoid progenitor
CMP: Common myeloid progenitor
ASCL1: Achaete-Scute Family BHLH Transcription Factor 1
TFAP2A: Transcription Factor AP-2 Alpha
TRBV29-1: T Cell Receptor Beta Variable 29-1
CD1C: Cluster of Differentiation 1C
CCR7: C-C Motif Chemokine Receptor 7
S100A7L2: S100 Calcium Binding Protein A7 Like 2

References
1. Bertucci F, De Nonneville A, Finetti P, Perrot D, Nilbert M, Italiano A, Le Cesne A, Skubitz KM, Blay JY, Birnbaum D. The Genomic Grade Index predicts postoperative clinical outcome in patients with soft-tissue sarcoma. Ann Oncol. 2018;29(2):459–65.
2. Samantarrai D, Mallick B. miR-429 inhibits metastasis by targeting KIAA0101 in soft tissue sarcoma. Exp Cell Res. 2017;357(1):33–9.
3. Casali PG, Abecassis N, Aro HT, Bauer S, Biagini R, Bielack S, Bonvalot S, Boukovinas I, Bovee JVMG, Brodowicz T, et al. Soft tissue and visceral sarcomas: ESMO-EURACAN clinical practice guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2018;29(Suppl 4):iv268–9.
4. Altorki NK, Markowitz GJ, Gao D, Port JL, Saxena A, Stiles B, McGraw T, Mittal V. The lung microenvironment: an important regulator of tumour growth and metastasis. Nat Rev Cancer. 2019;19(1):9–31.
5. Lambert M, Jambon S, Depauw S, David-Cordonnier MH. Targeting transcription factors for cancer treatment. Molecules. 2018;23(6):1479.
6. Mollaoglu G, Jones A, Wait SJ, Mukhopadhyay A, Jeong S, Arya R, Camolotto SA, Mosbruger TL, Stubben CJ, Conley CJ, et al. The lineage-defining transcription factors SOX2 and NKX2-1 determine lung cancer cell fate and shape the tumor immune microenvironment. Immunity. 2018;49(4):764–79.
7. Bishop JL, Thaper D, Vahid S, Davies A, Ketola K, Kuruma H, Jama R, Nip KM, Angeles A, Johnson F, et al. The master neural transcription factor BRN2 is an androgen receptor-suppressed driver of neuroendocrine differentiation in prostate cancer. Cancer Discov. 2017;7(1):54–71.
8. Galon J, Costes A, Sanchez-Cabo F, Kirilovsky A, Mlecnik B, Lagorce-Pages C, Tosolini M, Camus M, Berger A, Wind P, et al. Type, density, and location of immune cells within human colorectal tumors predict clinical outcome. Science. 2006;313(5795):1960–4.
9. Cerami E, Gao J, Dogrusoz U, Gross BE, Sumer SO, Aksoy BA, Jacobsen A, Byrne CJ, Heuer ML, Larsson E, et al.
The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov. 2012;2(5):401–4.
10. Gao J, Aksoy BA, Dogrusoz U, Dresdner G, Gross B, Sumer SO, Sun Y, Jacobsen A, Sinha R, Larsson E, et al. Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Sci Signal. 2013;6(269):pl1.
11. Tang Z, Li C, Kang B, Gao G, Li C, Zhang Z. GEPIA: a web server for cancer and normal gene expression profiling and interactive analyses. Nucleic Acids Res. 2017;45(W1):W98–102.
12. Nagy Á, Lánczky A, Menyhárt O, Győrffy B. Validation of miRNA prognostic power in hepatocellular carcinoma using expression data of independent datasets. Sci Rep. 2018;8(1):9227.
13. Belinky F, Nativ N, Stelzer G, Zimmerman S, InyStein T, Safran M, Lancet D. PathCards: multi-source consolidation of human biological pathways. Database. 2015;2015:bav006.
14. Vasaikar SV, Straub P, Wang J, Zhang B. LinkedOmics: analyzing multi-omics data within and across 32 cancer types. Nucleic Acids Res. 2018;46(D1):D956–63.
15. Szklarczyk D, Gable AL, Lyon D, Junge A, Wyder S, Huerta-Cepas J, Simonovic M, Doncheva NT, Morris JH, Bork P, et al. STRING v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic Acids Res. 2019;47(D1):D607–13.
16. Ru B, Wong CN, Tong Y, Zhong JY, Zhong SSW, Wu WC, Chu KC, Wong CY, Lau CY, Chen I, et al. TISIDB: an integrated repository portal for tumor-immune system interactions. Bioinformatics. 2019;35(20):4200–2.
17. Chandrashekar DS, Bashel B, Balasubramanya SAH, Creighton CJ, Ponce-Rodriguez I, Chakravarthi BVSK, Varambally S. UALCAN: a portal for facilitating tumor subgroup gene expression and survival analyses. Neoplasia. 2017;19(8):649–58.
18. Zhang X, Lan Y, Xu J, Quan F, Zhao E, Deng C, Luo T, Xu L, Liao G, Yan M, et al. CellMarker: a manually curated resource of cell markers in human and mouse. Nucleic Acids Res. 2019;47(D1):D721–8.
19. O'Connell FP, Pinkus JL, Pinkus GS. CD138 (syndecan-1), a plasma cell marker immunohistochemical profile in hematopoietic and nonhematopoietic neoplasms. Am J Clin Pathol. 2004;121(2):254–63.
20. ENCODE Project Consortium. A user's guide to the encyclopedia of DNA elements (ENCODE). PLoS Biol. 2011;9(4):e1001046.
21. Khan A, Fornes O, Stigliani A, Gheorghe M, Castro-Mondragon JA, van der Lee R, Bessy A, Chèneby J, Kulkarni SR, Tan G, et al. JASPAR 2018: update of the open-access database of transcription factor binding profiles and its web framework. Nucleic Acids Res. 2018;46(D1):D260–6.
22. Autry RJ, Paugh SW, Carter R, Shi L, Liu J, Ferguson DC, Lau CE, Bonten EJ, Yang W, McCorkle JR, et al. Integrative genomic analyses reveal mechanisms of glucocorticoid resistance in acute lymphoblastic leukemia. Nat Cancer. 2020;1(3):329–44.
23. Dimitrova L, Seitz V, Hecht J, Lenze D, Hansen P, Szczepanowski M, Ma L, Oker E, Sommerfeld A, Jundt F, et al. PAX5 overexpression is not enough to reestablish the mature B-cell phenotype in classical Hodgkin lymphoma. Leukemia. 2014;28(1):213–6.
24. Gertz J, Savic D, Varley KE, Partridge EC, Safi A, Jain P, Cooper GM, Reddy TE, Crawford GE, Myers RM. Distinct properties of cell-type-specific and shared transcription factor binding sites. Mol Cell. 2013;52(1):25–36.
25. Ryan RJ, Drier Y, Whitton H, Cotton MJ, Kaur J, Issner R, Gillespie S, Epstein CB, Nardi V, Sohani AR, et al. Detection of enhancer-associated rearrangements reveals mechanisms of oncogene dysregulation in B-cell lymphoma.
Cancer Discov. 2015;5(10):1058–71.
26. Thorvaldsdóttir H, Robinson JT, Mesirov JP. Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration. Brief Bioinform. 2013;14(2):178–92.
27. Chen W, Li Y, Easton J, Finkelstein D, Wu G, Chen X. UMI-count modeling and differential expression analysis for single-cell RNA sequencing. Genome Biol. 2018;19(1):70.
28. Cheng C, Easton J, Rosencrance C, Li Y, Ju B, Williams J, Mulder HL, Pang Y, Chen W, Chen X. Latent cellular analysis robustly reveals subtle diversity in large-scale single-cell RNA-seq data. Nucleic Acids Res. 2019;47(22):e143.
29. Butler A, Hoffman P, Smibert P, Papalexi E, Satija R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat Biotechnol. 2018;36(5):411–20.
30. Chung NC, Storey JD. Statistical significance of variables driving systematic variation in high-dimensional data. Bioinformatics. 2015;31(4):545–54.
31. Aran D, Looney AP, Liu L, Wu E, Fong V, Hsu A, Chak S, Naikawadi RP, Wolters PJ, Abate AR, et al. Reference-based analysis of lung single-cell sequencing reveals a transitional profibrotic macrophage. Nat Immunol. 2019;20(2):163–72.
32. Schaefer I-M, Cote GM, Hornick JL. Contemporary sarcoma diagnosis, genetics, and genomics. J Clin Oncol. 2018;36(2):101–10.
33. Cheung WKC, Zhao M, Liu Z, Stevens LE, Cao PD, Fang JE, Westbrook TF, Nguyen DX. Control of alveolar differentiation by the lineage transcription factors GATA6 and HOPX inhibits lung adenocarcinoma metastasis. Cancer Cell. 2013;23(6):725–38.
34. Tauriello DVF, Palomo-Ponce S, Stork D, Berenguer-Llergo A, Badia-Ramentol J, Iglesias M, Sevillano M, Ibiza S, Cañellas A, Hernando-Momblona X, et al. TGFβ drives immune evasion in genetically reconstituted colon cancer metastasis. Nature. 2018;554(7693):538–43.
35. Van den Eynde M, Mlecnik B, Bindea G, Fredriksen T, Church SE, Lafontaine L, Haicheur N, Marliot F, Angelova M, Vasaturo A, et al. The link between the multiverse of immune microenvironments in metastases and the survival of colorectal cancer patients. Cancer Cell. 2018;34(6):1012–26.
36. Okuyama K, Strid T, Kuruvilla J, Somasundaram R, Cristobal S, Smith E, Prasad M, Fioretos T, Lilljebjörn H, Soneji S, et al. PAX5 is part of a functional transcription factor network targeted in lymphoid leukemia. PLoS Genet. 2019;15(8):e1008280.
37. Chan LN, Chen Z, Braas D, Lee J-W, Xiao G, Geng H, Cosgun KN, Hurtz C, Shojaee S, Cazzaniga V, et al. Metabolic gatekeeper function of B-lymphoid transcription factors. Nature. 2017;542(7642):479–83.
38. Lazzi S, Bellan C, Onnis A, De Falco G, Sayed S, Kostopoulos I, Onorati M, D'Amuri A, Santopietro R, Vindigni C, et al. Rare lymphoid neoplasms coexpressing B- and T-cell antigens. The role of PAX-5 gene methylation in their pathogenesis. Hum Pathol. 2009;40(9):1252–61.
39. Li X, Cheung KF, Ma X, Tian L, Zhao J, Go MYY, Shen B, Cheng ASL, Ying J, Tao Q, et al. Epigenetic inactivation of paired box gene 5, a novel tumor suppressor gene, through direct upregulation of p53 is associated with prognosis in gastric cancer patients. Oncogene. 2012;31(29):3419–30.
40. Kanteti R, Nallasura V, Loganathan S, Tretiakova M, Kroll T, Krishnaswamy S, Faoro L, Cagle P, Husain AN, Vokes EE, et al. PAX5 is expressed in small-cell lung cancer and positively regulates c-Met transcription. Lab Investig J Tech Methods Pathol. 2009;89(3):301–14.
41. Palmisano WA, Crume KP, Grimes MJ, Winters SA, Toyota M, Esteller M, Joste N, Baylin SB, Belinsky SA.
Aberrant promoter methylation of the transcription factor genes PAX5 alpha and beta in human cancers. Cancer Res. 2003;63(15):4620–5.
42. Crowe PD, VanArsdale TL, Walter BN, Ware CF, Hession C, Ehrenfels B, Browning JL, Din WS, Goodwin RG, Smith CA. A lymphotoxin-beta-specific receptor. Science. 1994;264(5159):707–10.
43. Browning JL, Ngam-ek A, Lawton P, DeMarinis J, Tizard R, Chow EP, Hession C, O'Brine-Greco B, Foley SF, Ware CF. Lymphotoxin beta, a novel member of the TNF family that forms a heteromeric complex with lymphotoxin on the cell surface. Cell. 1993;72(6):847–56.
44. Gaudreault É, Paquet-Bouchard C, Fiola S, Le Bel M, Lacerte P, Shio MT, Olivier M, Gosselin J. TAK1 contributes to the enhanced responsiveness of LTB(4)-treated neutrophils to Toll-like receptor ligands. Int Immunol. 2012;24(11):693–704.
45. Simonin Y, Vegna S, Akkari L, Grégoire D, Antoine E, Piette J, Floc'h N, Lassus P, Yu G-Y, Rosenberg AR, et al. Lymphotoxin signaling is initiated by the viral polymerase in HCV-linked tumorigenesis. PLoS Pathog. 2013;9(3):e1003234.
46. Zhang H, Teng X, Liu Z, Zhang L, Liu Z. Gene expression profile analyze the molecular mechanism of CXCR7 regulating papillary thyroid carcinoma growth and metastasis. J Exp Clin Cancer Res. 2015;34(1):16.
47. Shapiro-Shelef M, Calame K. Regulation of plasma-cell development. Nat Rev Immunol. 2005;5(3):230–42.
48. Bindea G, Mlecnik B, Tosolini M, Kirilovsky A, Waldner M, Obenauf AC, Angell H, Fredriksen T, Lafontaine L, Berger A, et al. Spatiotemporal dynamics of intratumoral immune cells reveal the immune landscape in human cancer. Immunity. 2013;39(4):782–95.
49. Wei Y, Lao X-M, Xiao X, Wang X-Y, Wu Z-J, Zeng Q-H, Wu C-Y, Wu R-Q, Chen Z-X, Zheng L, et al. Plasma cell polarization to the immunoglobulin G phenotype in hepatocellular carcinomas involves epigenetic alterations and promotes hepatoma progression in mice. Gastroenterology. 2019;156(6):1890–904.
50. Wouters MCA, Nelson BH. Prognostic significance of tumor-infiltrating B cells and plasma cells in human cancer. Clin Cancer Res. 2018;24(24):6125–35.
51. Drissen R, Nerlov C. Hematopoietic lineage diversification, simplified. Cell Stem Cell. 2016;19(2):148–50.
52. Hirschi KK, Nicoli S, Walsh K. Hematopoiesis lineage tree uprooted: every cell is a rainbow. Dev Cell. 2017;41(1):7–9.
53. Johnsen HE, Kjeldsen MK, Urup T, Fogd K, Pilgaard L, Boegsted M, Nyegaard M, Christiansen I, Bukh A, Dybkaer K. Cancer stem cells and the cellular hierarchy in haematological malignancies. Eur J Cancer. 2009;45(1):194–201.
54. Weiskopf K, Schnorr PJ, Pang WW, Chao MP, Chhabra A, Seita J, Feng M, Weissman IL. Myeloid cell origins, differentiation, and clinical implications. Microbiol Spectr. 2016. https://doi.org/10.1128/microbiolspec.mchd-0031-2016.
55. Wang W, Stiehl T, Raffel S, Hoang VT, Hoffmann I, Poisa-Beiro L, Saeed BR, Blume R, Manta L, Eckstein V, et al. Reduced hematopoietic stem cell frequency predicts outcome in acute myeloid leukemia. Haematologica. 2017;102(9):1567–77.

Acknowledgements
We thank the TCGA team of the National Cancer Institute for the use of their data, and Professor Guo Ji for guidance on pathology.

Funding
This research was funded by the National Natural Science Foundation of China (Nos. 81702659, 81772856, 82073207); the Youth Fund of Shanghai Municipal Health Planning Commission (20174Y0117); the Interdisciplinary Program of Shanghai Jiao Tong University (No. YG2017MS26); the Shanghai Talent Development Fund (No. 2018094); the Henan medical science and technology research project (Grant No.
201602031); and the Key project of provincial and ministerial co-construction of Henan Medical Science and Technology (No. 2020040).

Author information
Runzhi Huang, Zhiwei Zeng and Penghui Yan contributed equally to this work.

Affiliations
Department of Orthopedics, The First Affiliated Hospital of Zhengzhou University, 1 East Jianshe Road, Zhengzhou, 450052, China: Runzhi Huang, Zhiwei Zeng, Penghui Yan, Xiaolong Zhu, Peng Hu, Juanwei Zhuang, Jiaju Li & Zongqiang Huang
Division of Spine, Department of Orthopedics, Tongji Hospital Affiliated to Tongji University School of Medicine, 389 Xincun Road, Shanghai, China: Runzhi Huang & Tong Meng
Department of Orthopedics, Shanghai General Hospital, School of Medicine, Shanghai Jiaotong University, 100 Haining Road, Shanghai, China: Huabin Yin, Dianwen Song & Tong Meng
Tongji University School of Medicine, 1239 Siping Road, Shanghai, 200092, China: Siqi Li

Contributions
DWS, TM, and ZQH designed the main idea of this study. RZH analyzed and interpreted the data regarding patients with soft tissue sarcoma. ZWZ and PHY were major contributors in writing the manuscript. HBY, XLZ, and PH finished the online database validation. JWZ and JJL revised the manuscript. All authors read and approved the final manuscript. Correspondence to Dianwen Song, Tong Meng or Zongqiang Huang.

Ethics
The Ethics Committee of the First Affiliated Hospital of Zhengzhou University approved this study.

Supplementary information
Table S1 The baseline information of patients with STS; Table S2 The regulatory relationship between transcription factors and immune genes; Table S3 The series information of ChIP-seq datasets; Figure S1 Identification of differentially expressed genes: heatmaps and volcano plots of differentially expressed genes (A, B), immune genes (C, D), and transcription factors (E, F) between localized STS and STS with metastasis; Figure S2 Oncomine database validation. LTB (A), LY9 (C), SLAMF7 (D), and ICAM1 (E) were downregulated, while IL5RA (B) was upregulated in different STS-related studies; Figure S3 UALCAN database validation. LTB (P = 0.006) (A), IL1A (P = 0.019) (B), LY9 (P = 0.008) (C), SLAMF7 (P = 0.013) (D), IL5RA (P = 0.031) (E), IL7 (P = 0.041) (F), and ICAM1 (P = 0.030) (G) were significantly correlated with STS patients' prognosis. In addition, expression of LTB (P < 0.001) (A), IL1A (P < 0.001) (B), LY9 (P < 0.001) (C), and SLAMF7 (P < 0.001) (D) differed significantly between normal and tumor tissues; Figure S4 K-M Plotter database validation. LTB (P < 0.001) (A), IL1A (P = 0.003) (B), IL5RA (P < 0.001) (C), IL7 (P = 0.041) (D), LY9 (P < 0.001) (E), SLAMF7 (P = 0.006) (F), and ICAM1 (P = 0.015) (G) were significantly correlated with patients' prognosis; Figure S5 TISIDB database validation. LTB (P = 0.006) (A), IL1A (P = 0.019) (B), IL7 (P = 0.021) (C), LY9 (P < 0.001) (D), SLAMF7 (P = 0.006) (E), and ICAM1 (P = 0.043) (F) were significantly correlated with patients' prognosis; Figure S6 LinkedOmics database validation. LTB (P < 0.001) (A), IL1A (P = 0.015) (C), IL5RA (P = 0.030) (D), IL7 (P = 0.022) (E), LY9 (P = 0.001) (F), SLAMF7 (P = 0.008) (G), and ICAM1 (P = 0.035) (H) were significantly correlated with patients' prognosis.
LTB was significantly correlated with PAX5 (P < 0.001, R = 0.31) (B), IL1A (P = 0.008, R = 0.17) (C), IL5RA (P < 0.001, R = 0.43) (D), IL7 (P < 0.001, R = 0.51) (E), LY9 (P < 0.001, R = 0.82) (F), SLAMF7 (P < 0.001, R = 0.79) (G), and ICAM1 (P < 0.001, R = 0.61) (H). (I) Volcano plot and heatmaps displaying the genes most significantly correlated with LTB; Figure S7 GEPIA database validation. LTB (P = 0.004) (A), IL5RA (P < 0.001) (B), LY9 (P = 0.006) (C), and ICAM1 (P = 0.037) (D) were significantly correlated with prognosis. LTB was significantly correlated with PAX5 (P < 0.001, R = 0.34) (E), IL1A (P = 0.016, R = 0.15) (F), IL5RA (P < 0.001, R = 0.41) (G), IL7 (P < 0.001, R = 0.50) (H), LY9 (P < 0.001, R = 0.81) (I), SLAMF7 (P < 0.001, R = 0.79) (J), and ICAM1 (P < 0.001, R = 0.56) (K); Figure S8 cBioPortal database validation. (A) mRNA expression of each biomarker illustrated by heatmap. Spearman correlation analysis showed that LTB was significantly correlated with PAX5 (P < 0.001, R = 0.31) (B), IL1A (P = 0.009, R = 0.31) (C), IL5RA (P < 0.001, R = 0.43) (D), IL7 (P < 0.001, R = 0.52) (E), LY9 (P < 0.001, R = 0.83) (F), SLAMF7 (P < 0.001, R = 0.79) (G), and ICAM1 (P < 0.001, R = 0.60) (H). (I) K-M survival analysis integrating all the biomarkers showed that their overall expression was significantly related to patients' prognosis; Figure S9 Protein-protein interaction network. (A) The PathCards database provided the main biomarkers actively involved in the haematopoietic cell lineage pathway, including IL5RA, LY9, SLAMF7, and ICAM1. (B) The STRING database showed that all the biomarkers were tightly connected with each other.

Huang, R., Zeng, Z., Yan, P. et al. Targeting Lymphotoxin Beta and Paired Box 5: a potential therapeutic strategy for soft tissue sarcoma metastasis. Cancer Cell Int 21, 3 (2021). https://doi.org/10.1186/s12935-020-01632-x
Received: 29 March 2020. Accepted: 29 October 2020.
Keywords: Immune gene; Tumor-infiltrating immune cells
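To make the validation workflow summarized in these supplementary figures concrete, the following R sketch reproduces the two analysis patterns that recur throughout them: a Pearson correlation between a TF and an immune gene, followed by a Kaplan-Meier comparison after a median split on expression. It is a minimal illustration on simulated data, not the study's code; all numbers, and the use of the `survival` package, are assumptions.

```r
# Illustrative sketch only: synthetic data standing in for TCGA-style
# expression and survival records. Gene names match the paper; all
# values are simulated, not the study's actual results.
set.seed(1)
n    <- 200
PAX5 <- rnorm(n)                          # TF expression (z-scored)
LTB  <- 0.8 * PAX5 + rnorm(n, sd = 0.6)   # immune gene correlated with the TF

# TF-immune gene screen: Pearson correlation, as in the paper's
# TF-immune gene regulatory network step
ct <- cor.test(PAX5, LTB, method = "pearson")
ct$estimate; ct$p.value

# Prognostic check: median split on LTB expression, then a log-rank
# test and Kaplan-Meier curves, mirroring the K-M validation figures
library(survival)
time   <- rexp(n, rate = 0.1)             # follow-up time (months)
status <- rbinom(n, 1, 0.5)               # 1 = event (death)
group  <- ifelse(LTB >= median(LTB), "high", "low")
fit <- survfit(Surv(time, status) ~ group)
survdiff(Surv(time, status) ~ group)      # log-rank P value
plot(fit, col = c("red", "blue"), xlab = "Months", ylab = "Survival")
```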
Contributions of default mode network stability and deactivation to adolescent task engagement
Ethan M. McCormick & Eva H. Telzer
Scientific Reports volume 8, Article number: 18049 (2018)

Out of the several intrinsic brain networks discovered through resting-state functional analyses in the past decade, the default mode network (DMN) has been the subject of intense interest and study. In particular, the DMN shows marked suppression during task engagement, and has led to hypothesized roles in internally-directed cognition that need to be down-regulated in order to perform goal-directed behaviors. Previous work has largely focused on univariate deactivation as the mechanism of DMN suppression. However, given the transient nature of DMN down-regulation during task, an important question arises: Does the DMN need to be strongly, or more stably suppressed to promote successful task learning? In order to explore this question, 65 adolescents (Mage = 13.32; 21 females) completed a risky decision-making task during an fMRI scan. We tested our primary question by examining individual differences in absolute level of deactivation against the stability of activation across time in predicting levels of feedback learning on the task. To measure stability, we utilized a model-based functional connectivity approach that estimates the stability of activation across time within a region. In line with our hypothesis, the stability of activation in default mode regions predicted task engagement over and above the absolute level of DMN deactivation, revealing a new mechanism by which the brain can suppress the influence of brain networks on behavior. These results also highlight the importance of adopting model-based network approaches to understand the functional dynamics of the brain.
With the advent of resting-state fMRI, there has been an explosion of interest in characterizing intrinsic neural networks, as well as describing their various contributions to human cognition and behavior. Of particular interest to researchers has been the default mode network (DMN), a functionally-related system of regions which show greater metabolic activity at rest compared to task1. Regions of the DMN include posterior cingulate, medial-prefrontal, hippocampal, and lateral temporal areas, and are often defined as regions showing strong functional connectivity with the posterior cingulate during rest2. While the exact function of the DMN has proven elusive, it is involved in a wide variety of cognitions, such as autobiographical memory, spontaneous thought, and the integration of social information3,4,5,6. Furthermore, disruption of DMN connectivity is linked to psychopathological states such as schizophrenia, ADHD, conduct disorder, and depression7,8,9,10. Importantly, activation in DMN regions shows an inverse relationship with "task-active" regions such as the fronto-parietal and salience networks. These networks show increased activation during task conditions11,12, in contrast to the DMN, which often shows strong deactivation (relative to baseline) during decision making (i.e., "task-negative")2. Furthermore, functional anti-correlations (i.e., connectivity) between task-active regions and the DMN are regularly seen during task11,13 and are thought to reflect the degree of task engagement and attention14,15. However, suppression (i.e., the down-regulation of a network's influence on behavior, often indexed by deactivation of the BOLD signal) of the DMN by task-related activity appears to be transient16, with the default mode network coming back online quickly once task demands ease15. Additionally, individuals who exhibit psychopathologies such as ADHD often fail to show DMN suppression17, reflecting difficulties in maintaining sustained attention to a task. One key form of task engagement is feedback learning, during which individuals must maintain mental representations of the task structure and reinforcement history in order to guide future behavior18,19,20. Processes which impact feedback learning are important during adolescence, as teens are particularly sensitive to performance-relevant cues in their environment, showing both developmental19,21 and inter-individual20,22,23 differences in sensitivity to positive and negative feedback information. Given the role of DMN in disrupting attentional and engagement processes14,15, failure to suppress DMN regions during task should be related to decreased feedback learning, as adolescents disengage (as a result of decreased attention and/or motivation) from task information. This disengagement could have particularly important consequences for adolescents, as deficits in the ability to learn from feedback can have negative impacts on adolescent behavior19,20. Given the transient nature of DMN suppression during task, one key unanswered question remains when considering potential mechanisms involved in down-regulating (or suppressing) the default mode network during task engagement: Does the DMN need to show reduced (i.e., a decrease in the absolute level of activation), or more stable (i.e., reduced fluctuations in activation) activity to achieve suppression? 
This question distinguishes between the mean level of neural activation (or deactivation, as the case may be) and the stability over time of DMN activity as the mechanism by which the brain reduces the influence of functional networks on behavior. Under the first hypothesis, we would expect that individuals who show greater task engagement (indexed by increased feedback learning) would show the greatest DMN deactivation during task. Alternatively, feedback learning may be reflected in more stable DMN activity over time, as suppression of the DMN causes the network to become less responsive to changing task dynamics. Importantly, these two explanations may not be mutually exclusive, as stable and strong deactivation might co-occur. In the current study, we tested the hypothesis that stability in task-negative regions of the DMN contributes to feedback learning (i.e., participants' ability to extract and respond to information cues from the task environment), over and above absolute level of deactivation, against the alternative hypothesis that suppression of the DMN is primarily achieved through reductions in univariate activation (i.e., deactivation). To do so, adolescents completed a risky decision-making task, the Balloon Analogue Risk Task (BART), during functional magnetic resonance imaging (fMRI). Using an ROI-based approach for both traditional univariate and model-based network analyses, we extracted parameter estimates of DMN deactivation and stability for each individual. We then entered these two types of parameters as simultaneous predictors of adolescents' engagement on the task. While previous work has focused on the absolute deactivation of DMN, we hypothesized that DMN stability would be a more powerful predictor of feedback learning than absolute level of deactivation in these regions. However, we further predicted that stability and level of deactivation would be positively related, with stronger deactivation being associated with greater DMN stability, suggesting a possible reconciliation of this hypothesis with previous conceptualizations of the DMN during task.

Participants
Sixty-seven adolescent participants completed an fMRI scan. One participant was scanned using the wrong head coil, and another was excluded for excessive movement (>10% of slices with movement in excess of 2 mm), resulting in a final sample of 65 adolescents (Mage = 13.32, SD = 0.62, range = 12.42–14.83; 56 Caucasian, 2 African American, 7 mixed race/multiple responses). Participants were largely from high income households (1 $0–14,999; 3 $15–29,999; 6 $30–44,999; 4 $45–59,999; 8 $60–74,999; 9 $75–89,999; 30 > $90,000; and 4 not reported), with highly educated parents (3 completed a high school diploma; 8 completed some college; 4 completed an associate degree; 21 completed a bachelor's degree; 5 completed some graduate school; 18 completed a master's degree; and 3 completed a professional degree). Written informed assent was obtained from all participants under the age of 18, and written informed consent was obtained from each participant's parent and/or legal guardian. All methods were carried out in accordance with the relevant guidelines and regulations outlined by the Declaration of Helsinki, and experimental protocols were approved by the University of Illinois, Urbana-Champaign Institutional Review Board.
Risky Decision-Making Task
Participants completed a version of the Balloon Analogue Risk Task (BART), a well-validated experimental paradigm24,25 that has been adapted for fMRI in developmental populations19,26. The BART measures participants' willingness to engage in risky behavior in order to earn rewards, and is associated with real-life risk taking in adolescents20,27 and adults24,25. During the scan session, participants were presented with a sequence of 24 balloons that they could pump up to earn points. Each pump decision was associated with earning one point but also increased the risk that a balloon would explode. If participants pumped a balloon too many times, the balloon would explode and participants would lose all the points they had earned for that balloon. However, if participants chose to cash out before the balloon exploded, the points they earned would be added to the running total of points, which was presented on the screen as a points meter. Participants were instructed that their goal was to earn as many points as possible during the task. Each event (e.g., a larger balloon following a pump, a new balloon following a cash-out or explosion outcome) was separated by a random jitter (500–4000 ms). Balloons exploded after 4 to 10 pumps, and balloons were presented in a fixed order (after being pseudo-randomly ordered prior to data collection), although none of this information was made available to participants. The BART was self-paced and would not advance unless the participant made the choice to either pump or cash out. Participants were told that they could win a $10 gift card at the end of the neuroimaging session if they earned enough points during the task. The point threshold for winning this prize was intentionally left ambiguous so that participants were motivated to continue earning points throughout the task. In reality, all participants were given a $10 gift card after completing the scan session.

Task Engagement
To measure adolescents' task engagement during the risky decision-making task, we calculated two indices of feedback learning. Specifically, we were interested in how adolescents used feedback information from previous trials in order to guide their current behavior, and to adapt that behavior when it resulted in maladaptive outcomes26. Previous research using the BART has shown that adolescence is a time of increased feedback learning (compared with childhood), and that individual differences in feedback learning predict differences in risk behavior19. In the current study, we were interested in two types of feedback learning to measure task engagement. First, we estimated how sensitive adolescents were to the valence of feedback on the task. To do so, we measured the impact of experiencing positive (i.e., a cash-out) versus negative (i.e., an explosion) feedback on the previous trial. For this metric, larger positive values indicate that adolescents increase their pumping behavior following positive feedback and decrease pumping following negative feedback, while values close to zero indicate pump behavior that is random with respect to the valence of feedback that adolescents receive. Secondly, we estimated adolescents' sensitivity to the value (i.e., magnitude) of feedback on the previous trial, by contrasting risk decisions made after receiving low-value feedback (i.e., earning or losing points on a small or medium-sized balloon) versus those made after high-value feedback (i.e., earning or losing points on relatively large balloons).
Larger positive values on this metric indicate that adolescents change their pump behavior more after a high-value feedback event, whereas values close to zero indicate that adolescents' decisions to pump were not impacted by the value of points earned (in a previous cash-out) or lost (in a previous explosion). Each of these feedback learning indices measures how adolescents retain relevant task information to guide their ongoing risk behavior. To obtain these indices, we took a multi-level modeling approach utilizing the SAS software package (SAS version 9.4; SAS Institute Inc., Cary, NC), in which trials (24 balloons) were nested within adolescents, and the level 1 outcome was the final number of pump decisions made on a given balloon. We modeled pump number at the trial level as dependent on (1) previous feedback and (2) the size of that feedback. Consistent with previous research27,28, we also controlled for the overall trial number and the outcome of the current balloon, resulting in the following Level 1 equation:

$$Number\,of\,Pumps_{ij}={\gamma }_{0j}+{\gamma }_{1j}\,Trial\,Number_{ij}+{\gamma }_{2j}\,Current\,Outcome_{ij}+{\gamma }_{3j}\,Previous\,Outcome\,Valence_{ij}+{\gamma }_{4j}\,Previous\,Outcome\,Value_{ij}+{\mu }_{0j}+{\varepsilon }_{ij}$$

Total pumps on a particular balloon trial (i) for a given adolescent (j) was modeled as a function of the average number of pumps across the task (γ0j), the trial number (γ1j; range = 0–23), the outcome of the current trial (γ2j; coded Cash-Out = 0, Explosion = 1), the outcome of the previous trial (γ3j; coded Cash-Out = 0, Explosion = 1), and the value of the previous outcome (γ4j), for which we calculated a 75th percentile threshold for each participant's pump behavior based on their individual data. For previous trial outcomes on balloons where adolescents pumped above this threshold, the predictor was coded as 1, as points earned or lost on these trials were high value for the individual, while all other previous trials were coded as 0, indicating lower value earnings or losses28. As our focus was on adolescents' individual sensitivity to the valence and value of previous feedback, these parameters were allowed to vary randomly in our model in order to obtain individual effect estimates. Nesting of trials within adolescents was achieved by modeling a between-person random intercept (μ0j) assumed to be independent and identically distributed, following a Normal distribution with constant variance (i.e., \({\mu }_{0j} \sim {\rm{N}}[0,{\tau }_{00}]\)). Finally, the individual-level residual errors (εij) were assumed to be independent and identically distributed, following a Normal distribution with constant variance (i.e., \({\varepsilon }_{ij} \sim {\rm{N}}[0,{\sigma }^{2}]\)). In order to use these two metrics of task engagement, we extracted empirical Bayes estimates for each adolescent. Empirical Bayes estimates are optimally-weighted averages which combine individual- and group-level slope estimates, and "shrink" individual slope estimates towards the group mean effect29. A minimal R analogue of this model is sketched below.
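As a complement to the SAS specification above, the following sketch shows how an equivalent multilevel model could be fit in R with lme4. This is not the authors' code: the data frame `bart` and its column names are hypothetical, the random slopes for the two previous-feedback terms simply mirror the paper's description, and `coef()` is used to recover the empirical-Bayes (shrunken) per-adolescent estimates.

```r
# A minimal sketch of the Level-1 model in the equation above, assuming
# `bart` is a hypothetical long-format data frame with one row per
# balloon: id, trial (0-23), pumps, cur_explode (0/1), prev_explode
# (0/1), and prev_high_value (0/1).
library(lme4)

m <- lmer(pumps ~ trial + cur_explode + prev_explode + prev_high_value +
            (1 + prev_explode + prev_high_value | id),   # random slopes for
          data = bart, REML = FALSE)                     # the feedback terms

# Empirical Bayes (shrunken) estimates of each adolescent's sensitivity
# to the valence and value of previous feedback: the two
# feedback-learning indices used in the behavioral analyses
eb <- coef(m)$id[, c("prev_explode", "prev_high_value")]
head(eb)
```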
fMRI Data Acquisition and Processing

fMRI data acquisition
Imaging data were collected utilizing a 3 Tesla Trio MRI scanner. The BART included T2*-weighted echoplanar images (EPI; slice thickness = 3 mm; 38 slices; TR = 2 sec; TE = 25 ms; matrix = 92 × 92; FOV = 230 mm; voxel size = 2.5 × 2.5 × 3 mm3). Additionally, structural scans were acquired, including a T1-weighted magnetization-prepared rapid-acquisition gradient echo (MPRAGE; slice thickness = 0.9 mm; 192 slices; TR = 1.9 sec; TE = 2.32 ms; matrix = 256 × 256; FOV = 230 mm; voxel size = 0.9 × 0.9 × 0.9 mm3; sagittal plane) and a T2-weighted, matched-bandwidth (MBW), high-resolution anatomical scan (slice thickness = 3 mm; 192 slices; TR = 4 sec; TE = 64 ms; matrix = 192 × 192; FOV = 230 mm; voxel size = 1.2 × 1.2 × 3 mm3). EPI and MBW scans were obtained at an oblique axial orientation in order to maximize brain coverage and minimize dropout in orbital regions.

fMRI data preprocessing and analysis
Preprocessing utilized FSL (FMRIB's Software Library, FSL v6.0; https://fsl.fmrib.ox.ac.uk/fsl/). Steps taken during preprocessing included correction for head motion using MCFLIRT; spatial smoothing using a 6 mm Gaussian kernel, full-width-at-half-maximum; high-pass temporal filtering with a 128 s cutoff to remove low-frequency drift across the time-series; and skull stripping of all images with BET. Functional images were re-sampled to a 2 × 2 × 2 mm space and co-registered in a two-step sequence to the MBW and the MPRAGE images using FLIRT in order to warp them into the standard stereotactic space defined by the Montreal Neurological Institute (MNI) and the International Consortium for Brain Mapping. Preprocessing also included individual-level independent component analysis (ICA) with MELODIC combined with an automated component classifier30 (Neyman-Pearson threshold = 0.3), which was applied to filter signal originating from noise sources (e.g., motion, physiological rhythms). Global signal regression was not performed due to its tendency to increase distance-related dependencies in the strength of functional connectivity measures31.

Motion Correction
Prior to modeling the fMRI data further, we took several steps to reduce the influence of motion. First, as mentioned previously, we subjected each participant's data to individual-level ICA in order to remove motion-related signal from each participant's time-series. We also controlled for 8 nuisance regressors in the GLM and time-series analyses: 6 motion parameters generated during realignment and the average signal from both the white matter and cerebrospinal fluid masks. Finally, slices with greater than 2 mm of motion were censored from the time-series (or modeled as a junk regressor in the GLM) to remove the effects of large, sudden movements on the functional data. No participant exceeded 5% of censored slices (range: 0–2.5%). Previous work has shown that these strategies effectively reduce the influence of motion on functional connectivity analyses31.

Regions of Interest
To estimate how autoregressive stability in the default mode network impacts task behavior, we constructed 10 a priori regions of interest (ROIs) based on previous neuroimaging work with this network (Fig. 1). We based our ROIs on resting-state maps of the default mode network32,33. Regions included the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), bilateral dorsal superior frontal gyrus (dSFG), bilateral temporal poles (TP), bilateral hippocampus, and bilateral angular gyrus (AG). Regions were extracted from the templates using FSL.
Most ROIs showed good separation from other DMN regions; however, in order to separate the mPFC and bilateral dSFG (which overlap in standard maps), we took additional steps to create 3 separate ROIs by zeroing voxels between the two regions with a z-stat < 6. This approach achieved appreciable separation for our masks. Individual masks were then evaluated again using the Marsbar toolbox in SPM34 and FSL to ensure that ROIs did not contain any voxels that overlapped with another mask or exceeded the boundaries of the whole-brain mask. A 3D, navigable image containing all masks superimposed onto a single brain map is available on NeuroVault (https://neurovault.org/collections/VSWQSTDA/)35.

Default Mode Network. We defined 10 ROIs composing regions central to the default mode network (DMN). (A) A 3D video of our a priori regions of interest in free space. (B) Regions included the medial prefrontal cortex, posterior cingulate cortex, and bilateral complements of the dorsal superior frontal gyrus (dSFG), temporal pole, hippocampus, and angular gyrus.

Whole-brain Univariate Analyses
For our univariate analyses, we modeled the BART as an event-related design. Whole-brain statistical analyses were performed using the general linear model (GLM) in SPM8. Fixed effects models were constructed for each participant with several conditions of interest, including pump decisions, cash-outs, and explosion events. The jittered inter-trial period between pump decisions and between outcomes and a new balloon was not modeled and served as an implicit baseline. A parametric modulator (PM) was included for conditions of interest and corresponded to the pump number for the current trial at the time of the event. This PM serves to control for differences across pumps within a balloon. For descriptive purposes, we ran whole-brain, group-level, random effects analyses for pump decisions using GLMFlex (http://mrtools.mgh.harvard.edu/index.php/GLM_Flex). This approach corrects for variance-covariance inequality, removes outliers and sudden activation changes in the brain, partitions within- and between-person error terms, and analyzes all voxels containing data. Since this analysis was purely for visualization purposes, we thresholded the resultant statistical image at p < 0.001, with a minimum cluster size of 40 voxels. For the central univariate analyses, we extracted parameter estimates from each individual's unthresholded fixed effects statistical map using the 10 a priori ROIs we constructed for the DMN network.

Time-series Analysis

Granger Causality
Originally developed in the context of economic models36, Granger causality emerges from a vector autoregression (VAR) framework, where the contemporaneous and lagged relationships between a system of variables can be examined. A weak form of causal inference (compared with experimental designs, for example), Granger causality relies on the intuition that x cannot cause y if x temporally follows y (i.e., cause precedes effect)37. Based on this idea, Granger causality can be inferred if x at a previous time point (e.g., t − 1) predicts y at time t above and beyond the self-predictive effect of y at t − 1 on y at t. In other words, the combined information of x(t−1) and y(t−1) is more predictive of y(t) than is y(t−1) alone. Under this causal definition, it is possible for variables to Granger cause one another across time38. A toy example of this logic appears below.
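The following base-R sketch makes the lag-1 logic concrete: simulate two series in which x drives y at a one-scan lag, then compare a restricted model (y's autoregressive effect only) against a full model that adds lagged x. The simulated coefficients are arbitrary and purely illustrative.

```r
# Toy illustration of lag-1 Granger causality between two "ROI" series.
# x Granger-causes y if x(t-1) improves the prediction of y(t) beyond
# y(t-1) alone.
set.seed(2)
t <- 200
x <- arima.sim(model = list(ar = 0.5), n = t)   # driving series
y <- numeric(t)
for (i in 2:t) y[i] <- 0.4 * y[i - 1] + 0.3 * x[i - 1] + rnorm(1)

lag1 <- function(v) c(NA, v[-length(v)])        # shift a series by one step

restricted <- lm(y ~ lag1(y))                   # autoregressive effect only
full       <- lm(y ~ lag1(y) + lag1(x))         # adds the lagged cross-path
anova(restricted, full)                         # significant F => x Granger-causes y
```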
Group Iterative Multiple Model Estimation (GIMME)
GIMME is a model-based network approach which utilizes both individual- and group-level information to derive directed functional connectivity maps39. GIMME estimates connectivity graphs using both unified SEM40 and extended unified SEM41 to assess whether the presence of a path between ROIs significantly improves the overall model fit to the time-series data. GIMME estimates both contemporaneous (e.g., ROI1 at t predicts ROI2 at t) and lagged (e.g., ROI1 at t − 1 predicts ROI2 at t) effects between ROIs, as well as the autoregressive (e.g., ROI1 at t − 1 predicts ROI1 at t) effects for each ROI time-series. Formally, a GIMME model for a set of p time-series with t measurements is:

$${\eta }_{i}(t)=({A}_{i}+{A}_{i}^{g})\,{\eta }_{i}(t)+({{\rm{\Phi }}}_{1,i}+{{\rm{\Phi }}}_{1,i}^{g})\,{\eta }_{i}(t-1)+{\zeta }_{i}(t)$$

where A represents a p × p matrix of contemporaneous paths, partitioned into individual (Ai) and group (Aig) parameters, Φ1 is the matrix of first-order lagged paths (for the individual and group, respectively), and ζ is the p-length vector of errors, assumed to be a white noise process with means of zero, finite variance, and no sequential dependencies (i.e., all temporal information is contained within A and Φ1)42. The diagonal of Φ1 contains path estimates for the autoregressive effects (e.g., ROI1 at t − 1 predicts ROI1 at t), which represent the autocorrelations of each ROI predicting itself forward in time. GIMME assesses directional paths by testing whether a given ROI can predict another, controlling for the predicted ROI's autoregressive effect (i.e., establishing Granger causality). GIMME has been developed for both block40 and event-related41 fMRI data, and is freely available through the open-source R platform43; a hypothetical call is sketched below. In contrast with many other functional connectivity approaches (e.g., graph theoretical approaches), GIMME constructs functional maps through a model-driven, multi-step process of model building and pruning. First, information across all participants is used to derive a common network map that is representative of the majority of the sample. Group paths are only retained if they are significant for 70% of all individuals in the sample. All autoregressive paths are automatically estimated in order to accurately assess directionality in the between-ROI paths. Once a group map has been obtained, unnecessary paths are pruned at the group level, and additional paths at the individual level are evaluated based on improvements to model fit for that individual. Individual-level paths are then pruned if they do not significantly improve the fit of the final model. This approach offers the unique advantage of being able to derive a group-level map that should be applicable to the majority of the sample, while still recognizing that individuals often show significant heterogeneity from the group map. It also shows significant advantages over other methods in recovering "true" paths in simulated data while minimizing false positives44.
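For orientation, a hypothetical call to the open-source gimme package might look as follows. The directory paths are placeholders, and the options shown simply mirror the description above (autoregressive paths always estimated; group paths retained at the stated cutoff); this is a sketch, not the authors' analysis script.

```r
# Hypothetical gimme run: each file in the input directory holds one
# participant's ROI-by-time matrix (here, 10 DMN columns), with NaN
# rows preserving the temporal order of censored or non-decision scans.
library(gimme)

fit <- gimme(data        = "roi_timeseries/",  # one file per participant (placeholder)
             out         = "gimme_output/",    # where maps and estimates are written
             sep         = ",",
             header      = TRUE,
             ar          = TRUE,               # always estimate autoregressive paths
             groupcutoff = 0.70)               # path kept if significant for 70% of sample

# The diagonal of each participant's lagged (Phi) matrix holds the
# autoregressive estimates used here as per-ROI stability parameters.
```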
Our task posed two main challenges for measuring neural connectivity. First, our goal was to analyze connectivity patterns during risky decisions; however, the BART also contains feedback trials (i.e., cash-out outcomes, explosions). Secondly, our task was self-paced, and as such we needed a modeling approach that would allow individuals to possess different amounts of data. Fortunately, GIMME is capable of handling unequal amounts of data between participants, as well as the inclusion of missing data45. Missing values are replaced with placeholder NaN values to maintain the temporal ordering of scans, and neither contemporaneous nor lagged effects are estimated based on missing values. These features make GIMME especially well-suited to estimating connectivity graphs for the BART, allowing for the self-paced nature of the task, as well as specifically examining connectivity during risk decisions, without considering connectivity during outcomes.

Autoregressive Paths
Autoregressive pathways are estimated as the predictive effect of activity in an ROI at one time point on that ROI's activity at the next time point. As such, stronger autoregressive paths indicate that an ROI's activation is more stable over time. In our analyses, we were specifically interested in testing whether the strength of an individual's autoregressive paths was related to task behavior on the BART. As such, the parameter estimates from GIMME for each ROI's autoregressive path were extracted for use in subsequent regression analysis.

Analytic Plan
To test the hypothesis that stability versus deactivation of the default mode network would be important for feedback learning, we took two analytic approaches. First, we utilized standard univariate analyses to extract each individual's parameter estimates of deactivation in the 10 a priori DMN ROIs during risk decisions. Secondly, we took a model-based network approach using the same 10 ROIs to estimate stability over time in DMN activation. After completing both univariate and network analyses, each participant had 10 parameter estimates of mean activation and 10 parameter estimates of stability in activation over time. To avoid the multiple-comparison concerns of regressing each of these 20 estimates on the behavioral metrics of interest, we took a dimension-reduction approach through principal components analysis (PCA). PCA also offers a key advantage by partitioning variance into bins: variance that is common across regions (and representative of the DMN as a network), and variance that is unique to a particular region. Because activation in a given region is likely a combination of network- and ROI-level information, partitioning this region-specific variance out helps remove noise originating from individual ROIs. For both sets of parameters, we extracted the first principal component from a PCA where estimates from all 10 ROIs were used as inputs. We utilized the R function "principal" (https://cran.r-project.org/web/packages/psych/psych.pdf) to extract the first principal component utilizing the covariance matrix and varimax rotation; a sketch of this step appears below. We ran follow-up analyses using the cross-validation function in the R function "pca" (https://cran.r-project.org/web/packages/mdatools/index.html), and results remained unchanged. These principal component scores were then used in subsequent regression analyses to predict task engagement (i.e., sensitivity to the valence and value of feedback in the task).
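A compact sketch of this dimension-reduction and regression step follows, assuming `deact` and `ar` are 65 × 10 matrices of per-participant deactivation and autoregressive estimates for the 10 DMN ROIs, and `valence` and `value` hold the two empirical-Bayes feedback-learning indices; all object names are hypothetical.

```r
# One principal component per parameter type, extracted from the
# covariance matrix with varimax rotation, as described above
library(psych)

pc_deact <- principal(deact, nfactors = 1, rotate = "varimax", covar = TRUE)
pc_ar    <- principal(ar,    nfactors = 1, rotate = "varimax", covar = TRUE)

df <- data.frame(deact_pc = pc_deact$scores[, 1],  # representative deactivation
                 ar_pc    = pc_ar$scores[, 1],     # representative stability
                 valence  = valence,
                 value    = value)

# Simultaneous prediction of each feedback-learning index, evaluated
# against the corrected alpha of .025 used in the paper
summary(lm(valence ~ deact_pc + ar_pc, data = df))
summary(lm(value   ~ deact_pc + ar_pc, data = df))
```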
Group-Level Results

DMN Deactivation During Risk Decisions
We first ran a main effects analysis at the whole-brain level, for descriptive purposes, to check for the expected deactivation of default mode regions during decisions to pump across individuals in the sample. Consistent with prior work2, adolescents showed strong deactivation of default mode regions during risk decisions at the group level, including mPFC, PCC, STS, AG, dSFG, and hippocampus (Fig. 2; Table 1). In contrast, typical task-active regions such as the anterior cingulate, anterior insula, and motor cortex showed positive activation during risk decisions. However, substantial individual differences emerged such that not all adolescents showed strong deactivation of DMN regions, and some adolescents even showed positive activation of DMN regions during risk decisions (Fig. 3).

Main Effect of Risk Decisions. The univariate condition of risk decisions showed robust deactivation across all DMN regions. Salience regions, such as the anterior cingulate and anterior insula, showed strong positive activation.

Table 1 Neural Regions Showing Significant Activation During Risky Decisions (i.e., Pumps).

Distribution of Activation and Deactivation in the DMN. While regions of the default mode network show mean deactivation at the group level, there are individual differences, including individuals who show positive activation of the DMN on average.

Network Map of the DMN During Risk Decisions
Next, we constructed model-based functional networks between our a priori DMN ROIs. While our focus was on the autoregressive paths, we estimated and displayed the full group model for descriptive purposes. Results show a strongly interconnected default mode network (Fig. 4). In addition to connections between bilateral complements (e.g., left and right dSFG), the PCC, left TP, and left AG show many between-region paths. Importantly for testing our hypothesis, the autoregressive pathways for each of the 10 DMN ROIs were estimated for all subjects.

DMN Network during Risk Decisions. DMN seed regions showed strong interconnectivity (grey), with hubs such as the left angular gyrus and posterior cingulate showing several cross-region connections. However, for the purpose of the current study, our main focus was the autoregressive paths (black), which are estimates of within-region stability in activation. Autoregressive paths are dashed to denote a lagged temporal relationship.

Individual Differences in DMN Deactivation and Stability Differentially Predict Feedback Learning
Finally, our key analysis related individual differences in deactivation (from the univariate analyses) and stability (from the functional network analyses) in DMN regions to task engagement. We took parameter estimates of univariate activation and autoregressive strength for each of the 10 a priori DMN regions, ran separate PCA analyses on each type of parameter, and obtained one score per person, per analysis. For the univariate analyses, this resulted in a representative level of deactivation across DMN regions. For the network analyses of the autoregressive paths, the PCA score reflected representative activational stability in the DMN as a whole (see Table 2 for factor loadings on each principal component).

Table 2 Factor Loadings for Regions of DMN on Principal Components for Mean Levels of Deactivation and Autoregressive Stability.

Next, we entered both of these scores into a multiple regression analysis with adolescents' two indices of feedback learning (i.e., sensitivity to the valence and sensitivity to the value of previous feedback) as our outcomes in two separate analyses (thresholded at p = 0.025 to correct for multiple comparisons).
Results showed that DMN stability (B = 0.063, SE = 0.026, p = 0.018), but not mean deactivation (B = 0.025, SE = 0.026, p = 0.349), is associated with adolescents' sensitivity to the valence of the previous outcome. Similarly, stability (B = 0.150, SE = 0.048, p = 0.003), but not deactivation (B = 0.033, SE = 0.048, p = 0.487), is associated with adolescents' sensitivity to the value (i.e., magnitude) of previous feedback on the task. Furthermore, the two factor scores were uncorrelated (r = 0.112, p = 0.376), meaning that deactivation did not indicate more stability in DMN activity, nor were interactions between deactivation and stability predictive of feedback learning (p = 0.186 and p = 0.963, respectively). These results suggest that stability in DMN activity, even if that activity is positive on average (as is characteristic of some adolescents; Fig. 3), is more predictive of adolescents' feedback learning than the absolute level of (de)activation.

Discussion

The exact role of the default mode network in cognition and behavior remains an important open question for cognitive neuroscientists. Traditionally, the DMN has been conceptualized as a "task negative" network1, with suppression of the network being important for normal decision-making processes and task engagement2,14. Indeed, DMN suppression is an important marker of task engagement14, showing linear deactivation as the difficulty of the task increases15. Furthermore, disruption of DMN suppression is thought to contribute to attention-related disorders such as ADHD7. However, unanswered questions remain as to the mechanism by which DMN suppression is implemented in the brain. We utilized both a traditional univariate approach and a novel, model-based network approach to test two competing hypotheses about this mechanism of DMN suppression.

Consistent with previous work1,2, main effects analyses showed characteristic deactivation in DMN regions during the task. Furthermore, the group connectivity map revealed large numbers of connections between central DMN nodes (e.g., PCC and angular gyrus) and the other nodes of the network. We then used the overall level of DMN deactivation and the stability in DMN activation across time to assess whether the absolute level or the stability of DMN activation was related to task engagement, operationalized as adolescents' ability to extract and use feedback information learned from the task environment. In line with our hypothesis, the stability of activation in default mode regions (as estimated by the autoregressive pathways in GIMME) predicted both metrics of feedback learning over and above the absolute level of DMN deactivation (as measured through the univariate contrast). Indeed, adolescents' mean level of deactivation in the DMN was not a significant predictor of either metric of feedback learning within the task. Interestingly, DMN deactivation was uncorrelated with the stability within those regions, suggesting that highly stable DMN activation was possible even when the DMN showed positive activation during the task. These results suggest that the brain may be able to suppress the influence of neural regions during a task without an apparent change in resource consumption (at least as measured by the BOLD signal).

The implications of the current study offer promise for future research for two reasons. First, the current results offer a validation for the adoption of model-based network approaches for functional data.
Traditional approaches to functional connectivity (e.g., seed-based, graph theoretical) only consider concurrent relationships between ROIs. However, methods such as GIMME41 and other vector autoregression40,46 (VAR) models are capable of estimating both concurrent and lagged effects, which improves network model fit for each individual. Importantly for the current study, GIMME automatically estimates autoregressive paths (i.e., lagged effects within an ROI) as part of its model-building approach, allowing us to examine the temporal stability of activation across time. Our finding that activational stability, as measured through these autoregressive paths, is key for promoting feedback learning highlights the importance of considering these lagged effects and provides encouragement for an increased focus on model-based network approaches that can estimate them.

Secondly, the implication of the current results (i.e., that network influence can be suppressed through stability rather than through deactivation) raises questions about the inferences made from negative BOLD estimates (i.e., deactivation) in fMRI. While deactivation is often viewed as synonymous with a reduced role in decision-making processes, the fact that deactivation is not correlated with stability suggests that a highly deactivated region can still show low stability in activation across the task. Furthermore, we found unexpected variability in the mean level of deactivation in the DMN at the main-effect level, such that some adolescents showed the expected pattern of strong DMN deactivation whereas others showed weak deactivation or even positive activation. This suggests that deactivation of the DMN may not be a universal phenomenon during decision making, and that a failure to deactivate the DMN does not a priori impair performance on the task. Whether the behavioral profiles associated with activation fluctuations differ between individuals who show strongly versus weakly deactivated DMN remains an open question, as the current sample is likely underpowered to detect interaction effects. Future research may be able to address this by examining the interaction between stability and level of deactivation in the DMN, and the consequences of different configurations (e.g., low stability and strong deactivation versus high stability and strong deactivation) for task behavior.

For future research, an unanswered question to consider is the potential mechanism by which the brain instantiates stability in activation in the DMN regions. One possibility is that task-relevant regions (e.g., ACC, insula) or some third set of regions actively suppresses DMN involvement during the task by producing signals that down-regulate default mode regions. Alternatively, other networks could simply disengage from DMN regions. By increasing the segregation between networks, the brain may isolate the DMN, decreasing its ability to influence cognition and behavior. Uncovering the mechanism that instantiates DMN suppression is important for understanding normal cognition, and it also has implications for disease states that are associated with disruptions to the DMN7,8,47.

In conclusion, we tested two competing hypotheses related to the suppression of the default mode network during a risky decision-making task. In contrast with a focus on the mean level of deactivation, we proposed that stability in activation, rather than absolute level, would be a more important mechanism for reduced DMN influence on feedback learning.
We adopted both a traditional univariate approach and a novel model-based network approach to test these hypotheses, and found support for our hypothesis that increased DMN stability is related to increased sensitivity to information from the task (i.e., learning). These results shed light on a new mechanism by which the brain reduces the influence of a functional network, and highlight the importance of adopting network methods that consider both contemporaneous and lagged effects.

References

Raichle, M. E. et al. A default mode of brain function. Proc. Natl. Acad. Sci. USA 98(2), 676–682 (2001).
Raichle, M. E. The brain's default mode network. Annu. Rev. Neurosci. 38, 433–447 (2015).
Christoff, K., Gordon, A. M., Smallwood, J., Smith, R. & Schooler, J. W. Experience sampling during fMRI reveals default network and executive system contributions to mind wandering. Proc. Natl. Acad. Sci. USA 106(21), 8719–8724 (2009).
Spreng, R. N. & Grady, C. L. Patterns of brain activity supporting autobiographical memory, prospection, and theory of mind, and their relationship to the default mode network. J. Cogn. Neurosci. 22(6), 1112–1123 (2010).
Andrews-Hanna, J. R., Reidler, J. S., Huang, C. & Buckner, R. L. Evidence for the default network's role in spontaneous cognition. J. Neurophysiol. 104(1), 322–335 (2010).
Meyer, M. L., Davachi, L., Ochsner, K. N. & Lieberman, M. D. Evidence That Default Network Connectivity During Rest Consolidates Social Information. Cereb. Cortex, https://doi.org/10.1093/cercor/bhy071 (2018).
Castellanos, F. X. et al. Cingulate-precuneus interactions: a new locus of dysfunction in adult attention-deficit/hyperactivity disorder. Biol. Psychiatry 63(3), 332–337 (2008).
Whitfield-Gabrieli, S. & Ford, J. M. Default mode network activity and connectivity in psychopathology. Annu. Rev. Clin. Psychol. 8, 49–76 (2012).
Dalwani, M. S. et al. Default mode network activity in male adolescents with conduct and substance use disorder. Drug Alcohol Depend. 134, 242–250 (2014).
Ho, T. C. et al. Emotion-dependent functional connectivity of the default mode network in adolescent depression. Biol. Psychiatry 78(9), 635–646 (2015).
Uddin, L. Q., Clare Kelly, A. M., Biswal, B. B., Xavier Castellanos, F. & Milham, M. P. Functional connectivity of default mode network components: correlation, anticorrelation, and causality. Hum. Brain Mapp. 30(2), 625–637 (2009).
Sridharan, D., Levitin, D. J. & Menon, V. A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proc. Natl. Acad. Sci. USA 105(34), 12569–12574 (2008).
Greicius, M. D., Krasnow, B., Reiss, A. L. & Menon, V. Functional connectivity in the resting brain: a network analysis of the default mode hypothesis. Proc. Natl. Acad. Sci. USA 100(1), 253–258 (2003).
Greicius, M. D. & Menon, V. Default-mode activity during a passive sensory task: uncoupled from deactivation but impacting activation. J. Cogn. Neurosci. 16(9), 1484–1492 (2004).
Singh, K. D. & Fawcett, I. P. Transient and linearly graded deactivation of the human default-mode network by a visual detection task. NeuroImage 41(1), 100–112 (2008).
Ossandón, T. et al. Transient suppression of broadband gamma power in the default-mode network is correlated with task complexity and subject performance. J. Neurosci. 31(41), 14521–14530 (2011).
Liddle, E. B. et al. Task-related default mode network modulation and inhibitory control in ADHD: Effects of motivation and methylphenidate. J. Child Psychol. Psychiatry 52(7), 761–771 (2011).
O'Doherty, J. P. Reward representations and reward-related learning in the human brain: insights from neuroimaging. Curr. Opin. Neurobiol. 14(6), 769–776 (2004).
McCormick, E. M. & Telzer, E. H. Adaptive adolescent flexibility: Neurodevelopment of decision-making and learning in a risky context. J. Cogn. Neurosci. 29, 413–423 (2017a).
McCormick, E. M. & Telzer, E. H. Failure to retreat: blunted sensitivity to negative feedback supports risky behavior in adolescents. NeuroImage 147, 381–389 (2017b).
Peters, S., Van Duijvenvoorde, A. C., Koolschijn, P. C. M. & Crone, E. A. Longitudinal development of frontoparietal activity during feedback learning: contributions of age, performance, working memory and cortical thickness. Dev. Cogn. Neurosci. 19, 211–222 (2016).
van Duijvenvoorde, A. C. et al. A cross-sectional and longitudinal analysis of reward-related brain activation: effects of age, pubertal stage, and reward sensitivity. Brain Cogn. 89, 3–14 (2014).
McCormick, E. M. & Telzer, E. H. Not Doomed to Repeat: Enhanced Medial Prefrontal Cortex Tracking of Errors Promotes Adaptive Behavior during Adolescence. J. Cogn. Neurosci. 30(3), 281–289 (2018).
Lejuez, C. W. et al. Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART). J. Exp. Psychol. Appl. 8(2), 75 (2002).
Wallsten, T. S., Pleskac, T. J. & Lejuez, C. W. Modeling behavior in a clinically diagnostic sequential risk-taking task. Psychol. Rev. 112(4), 862 (2005).
Telzer, E. H., Fuligni, A. J., Lieberman, M. D., Miernicki, M. E. & Galván, A. The quality of adolescents' peer relationships modulates neural sensitivity to risk taking. Soc. Cogn. Affect. Neurosci. 10(3), 389–398 (2014).
Qu, Y., Galvan, A., Fuligni, A. J., Lieberman, M. D. & Telzer, E. H. Longitudinal changes in prefrontal cortex activation underlie declines in adolescent risk taking. J. Neurosci. 35(32), 11308–11314 (2015).
Ashenhurst, J. R., Bujarski, S., Jentsch, J. D. & Ray, L. A. Modeling behavioral reactivity to losses and rewards on the Balloon Analogue Risk Task (BART): Moderation by alcohol problem severity. Exp. Clin. Psychopharmacol. 22(4), 298 (2014).
Diez-Roux, A. V. A glossary for multilevel analysis. J. Epidemiol. Community Health 56, 588–594 (2002).
Tohka, J. et al. Automatic independent component labeling for artifact removal in fMRI. NeuroImage 39(3), 1227–1245 (2008).
Ciric, R. et al. Benchmarking of participant-level confound regression strategies for the control of motion artifact in studies of functional connectivity. NeuroImage 154, 174–187 (2017).
Laird, A. R. et al. Behavioral interpretations of intrinsic connectivity networks. J. Cogn. Neurosci. 23(12), 4022–4037 (2011).
Lee, T. H., Miernicki, M. E. & Telzer, E. H. Families that fire together smile together: resting state connectome similarity and daily emotional synchrony in parent-child dyads. NeuroImage 152, 31–37 (2017).
Brett, M., Anton, J. L., Valabregue, R. & Poline, J. B. Region of interest analysis using the MarsBar toolbox for SPM 99. NeuroImage 16(2), S497 (2002).
Gorgolewski, K. J. et al. NeuroVault.org: A web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Front. Neuroinform. 9, 8 (2015).
Granger, C. W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 424–438 (1969).
Goebel, R., Roebroeck, A., Kim, D. S. & Formisano, E. Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magn. Reson. Imaging 21(10), 1251–1261 (2003).
Henry, T. & Gates, K. Causal search procedures for fMRI: review and suggestions. Behaviormetrika 44(1), 193–225 (2017).
Gates, K. M., Molenaar, P. C., Hillary, F. G., Ram, N. & Rovine, M. J. Automatic search for fMRI connectivity mapping: an alternative to Granger causality testing using formal equivalences among SEM path modeling, VAR, and unified SEM. NeuroImage 50(3), 1118–1125 (2010).
Kim, J., Zhu, W., Chang, L., Bentler, P. M. & Ernst, T. Unified structural equation modeling approach for the analysis of multisubject, multivariate functional MRI data. Hum. Brain Mapp. 28(2), 85–93 (2007).
Gates, K. M., Molenaar, P. C., Hillary, F. G. & Slobounov, S. Extended unified SEM approach for modeling event-related fMRI data. NeuroImage 54(2), 1151–1158 (2011).
Beltz, A. M. & Gates, K. M. Network Mapping with GIMME. Multivariate Behav. Res. 52(6), 789–804 (2017).
Lane, S. T., Gates, K. M., Molenaar, P. C. M., Hallquist, M. & Pike, H. Gimme: Group iterative multiple model estimation. Computer software. Retrieved from https://CRAN.R-project.org/package=gimme (2016).
Gates, K. M. & Molenaar, P. C. Group search algorithm recovers effective connectivity maps for individuals in homogeneous and heterogeneous samples. NeuroImage 63(1), 310–319 (2012).
Gates, K. M., Molenaar, P. C., Iyer, S. P., Nigg, J. T. & Fair, D. A. Organizing heterogeneous samples using community detection of GIMME-derived resting state functional networks. PloS One 9(3), e91322 (2014).
Penny, W. & Harrison, L. Multivariate autoregressive models. In: Friston, K. J., Ashburner, J. T., Kiebel, S. J., Nichols, T. E. & Penny, W. D. (Eds.), Statistical Parametric Mapping: The Analysis of Functional Brain Images. Academic Press, Amsterdam, 534–540 (2007).
Humphreys, K. L. et al. Risky decision making from childhood through adulthood: Contributions of learning and sensitivity to negative feedback. Emotion 16(1), 101 (2016).

Acknowledgements

We greatly appreciate the assistance of the Biomedical Imaging Center at the University of Illinois, as well as Heather Ross, Jordan Krawczyk, and Tae-Ho Lee for assistance collecting data. This research was supported by grants from the National Institutes of Health (R01DA039923), the National Science Foundation (BCS 1539651), and the Jacobs Foundation (2014-1095).

Author information

Ethan M. McCormick & Eva H. Telzer, Department of Psychology and Neuroscience, University of North Carolina, Chapel Hill, North Carolina, 27599, USA

E.M.M. and E.H.T. designed and performed research; analyzed data; and wrote the paper.

Correspondence to Ethan M. McCormick.

McCormick, E.M., Telzer, E.H. Contributions of default mode network stability and deactivation to adolescent task engagement. Sci Rep 8, 18049 (2018). https://doi.org/10.1038/s41598-018-36269-4
Mathemathinking

The Lost Boarding Pass

One hundred passengers are lined up to board a full flight. The first passenger lost his boarding pass and decides to choose a seat randomly. Each subsequent passenger (responsible enough not to lose their boarding pass) will sit in his or her assigned seat if it is free and, otherwise, randomly choose a seat from those remaining. What is the probability that the last passenger will get to sit in his assigned seat?

A systematic solution by induction [1,2]

To gain insight that hopefully leads to solving the full problem, mathematicians usually study a reduced scenario that is easier to solve. Taking this approach, we consider two passengers boarding a two-passenger plane. In this case, the only way for the second (last) passenger to sit in his assigned seat is if the first passenger happens to choose his own seat. Since the first passenger chooses one of the two seats randomly, the probability that the second passenger gets his own seat is 1/2.

Let $p(n)$ be the probability that the $n$th passenger gets his or her assigned seat on an $n$-passenger plane. We determined that $p(2)=\frac{1}{2}$.

Next, consider three people boarding a three-passenger plane. If the first passenger takes his own seat, the second passenger's seat will be free for him or her, and the third (last) passenger will get his own seat. However, if the first passenger takes the second passenger's seat, the second passenger must randomly choose between the first passenger's and the third passenger's seats. The third passenger then gets his assigned seat only if the second passenger happens to choose the first passenger's seat, which has a probability of 1/2 since there are only two options and the choice is random. If the first passenger chooses the third passenger's seat, there is no hope that the third passenger will get his or her seat, of course. We have exhausted all ways for the last (3rd) passenger to get his or her seat:

$p(3) = \frac{1}{3} + \frac{1}{3}\cdot\frac{1}{2} = \frac{1}{2}.$

Notice this fact that screams induction: when the first passenger chose the 2nd passenger's seat, the second passenger began playing the role of the first passenger in the $p(2)$ problem -- the 2nd passenger has two seats to randomly choose from, and one of them is the seat of the third, and last, passenger. So, we could write: $p(3) = \frac{1}{3} + \frac{1}{3}\, p(2)$.

Now, for four passengers, consider all ways that the last passenger can occupy his own seat:

$p(4) = \frac{1}{4} + \frac{1}{4}\,p(3) + \frac{1}{4}\,p(2) = \frac{1}{4} + \frac{1}{4}\cdot\frac{1}{2} + \frac{1}{4}\cdot\frac{1}{2} = \frac{1}{2}.$

The three terms correspond to the three ways this can happen:

- The first passenger takes his own seat.
- The first passenger takes passenger #2's seat, and passenger #2 has three seats to randomly choose from -- one of which is passenger #4's seat. This is the $p(3)$ problem.
- The first passenger takes passenger #3's seat, and passenger #3 has two seats to randomly choose from -- one of which is passenger #4's seat. This is the $p(2)$ problem.

Again, we get 1/2. We reduced the problem into the previous cases for $p(2)$ and $p(3)$, which we already know.

For finding $p(n)$ ($n>1$), the probability that the first passenger takes his own seat is $\frac{1}{n}$. If the first passenger takes the $K$th passenger's seat ($K>1$), passengers $2,3,4,\ldots,K-1$ will get their seats, but passenger $K$ will be faced with the reduced problem of having $n-(K-1)$ seats to randomly choose from, one of which is the last passenger's seat -- this is the $p(n-K+1)$ problem!
So, as above, we consider all possible seats that the first passenger can possibly choose (each with a $\frac{1}{n}$ chance) and bring the $p(n-K+1)$ problems into play:

$p(n) = \frac{1}{n} + \displaystyle \sum_{K=2}^{n-1} \frac{1}{n}\, p(n-K+1) = \frac{1}{n} \left(1 + \displaystyle \sum_{K=2}^{n-1} p(n-K+1) \right).$

By the above recursion, starting with $p(2)=\frac{1}{2}$, we can show that $p(n)=\frac{1}{n} \left(1+(n-2)\frac{1}{2} \right)=\frac{1}{2}$: if every reduced problem has probability 1/2, the sum contributes $(n-2)\cdot\frac{1}{2}$, and induction carries this forward. For every $n$ -- and this includes $n=100$ for this problem -- the probability that the last passenger sits in his assigned seat is 1/2. I expected it to be much less.

A more insightful solution: paraphrased from [3]

The trick is to consider the moment just before the last passenger boards the plane, when one seat is left.

Claim: The last free seat is either the first or the last passenger's assigned seat.

Proof by contradiction. Assume that the last free seat belongs to passenger #$x$, who is neither the first nor the last passenger. That seat is still free at the very end, and a seat never becomes free again once taken, so it was certainly free when passenger $x$ boarded earlier. But passenger $x$ sits in his or her assigned seat whenever it is free, so passenger $x$ would have taken it -- contradicting the assumption that it is the last free seat.

One seat is now left for the last passenger. The event that the last person's seat was taken is equivalent to the event that the last person's seat was taken before the first passenger's seat. This is because of the above claim: if the last passenger's seat is taken, then the last passenger must sit in the assigned seat of the first passenger. If the first passenger's seat is taken, then the last passenger sits in his own assigned seat. Since each time a passenger chooses a seat that is not assigned to them, the choice is random, there is no bias toward choosing the first passenger's seat over the last passenger's seat. Thus, the probability of the last passenger sitting in his assigned seat is 1/2.

[1] http://www.mscs.mu.edu/~paulb/Puzzle/boardingpasssolution.html
[2] Understanding Probability by Henk Tijms. This book is written in a colloquial style and has very interesting examples -- highly recommended.
[3] http://www.nd.edu/~dgalvin1/Probpuz/probpuz3.html
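Not part of the original post, but a natural sanity check: the short Python script below simulates the boarding process directly and also evaluates the recursion derived above. The function names and trial count are arbitrary choices for this sketch; both estimates should land on 1/2.

```python
import random

def last_gets_seat(n=100):
    """One simulated boarding; True if passenger n ends up in seat n."""
    free = list(range(1, n + 1))          # seats still available
    free.remove(random.choice(free))      # passenger 1 picks at random
    for p in range(2, n):                 # passengers 2 .. n-1
        if p in free:
            free.remove(p)                # assigned seat is free: take it
        else:
            free.remove(random.choice(free))  # displaced: pick randomly
    return free == [n]                    # only seat n remains for passenger n

def p_recursive(n):
    """Evaluate p(n) = (1 + sum_{k=2}^{n-1} p(k)) / n with p(2) = 1/2."""
    p = {2: 0.5}
    for m in range(3, n + 1):
        p[m] = (1 + sum(p[k] for k in range(2, m))) / m
    return p[n]

trials = 100_000
est = sum(last_gets_seat() for _ in range(trials)) / trials
print(f"simulation: {est:.3f}")            # ~0.500
print(f"recursion:  {p_recursive(100)}")   # exactly 0.5
```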
Journal of Fluid Mechanics -- boundary-layer stability

Boundary layer instability over a rotating slender cone under non-axial inflow
Sumit Tambe, Ferry Schrijer, Arvind Gangoli Rao, Leo Veldhuis
Journal: Journal of Fluid Mechanics / Volume 910 / 10 March 2021
Published online by Cambridge University Press: 12 January 2021, A25
Print publication: 10 March 2021

Centrifugal instability of the boundary layer is known to induce spiral vortices over a rotating slender cone that is facing an axial inflow. This paper shows how a deviation from the symmetry of such axial inflow affects the boundary layer instability over a rotating slender cone with half-angle $\psi =15^\circ$. The spiral vortices are experimentally detected using their thermal footprint on the cone surface for both axial and non-axial inflow conditions. In axial inflow, the onset and growth of the spiral vortices are governed by the local rotational speed ratio $S$ and Reynolds number $Re_l$, in agreement with the literature. During their growth, the spiral vortices significantly affect the mean velocity field as they entrain and bring high-momentum flow closer to the wall. It is found that the centrifugal instability induces these spiral vortices in non-axial inflow as well; however, the asymmetry of the non-axial inflow inhibits the initial growth of the spiral vortices, and they appear at higher local rotational speed ratio and Reynolds number, where the azimuthal variations in the instability characteristics (azimuthal number $n$ and vortex angle $\phi$) are low.

Tollmien–Schlichting wave cancellation via localised heating elements in boundary layers
G. S. Brennan, J. S. B. Gajjar, R. E. Hewitt
Journal: Journal of Fluid Mechanics / Volume 909 / 25 February 2021
Published online by Cambridge University Press: 23 December 2020, A16
Print publication: 25 February 2021

Instability to Tollmien–Schlichting waves is one of the primary routes to transition to turbulence for two-dimensional boundary layers in quiet disturbance environments. Cancellation of Tollmien–Schlichting waves using surface heating was first demonstrated in the experiments of Liepmann et al. (J. Fluid Mech., vol. 118, 1982, pp. 187–200) and Liepmann & Nosenchuck (J. Fluid Mech., vol. 118, 1982, pp. 201–204). Here we consider a similar theoretical formulation that includes the effects of localised (unsteady) wall heating/cooling. The resulting problem is closely related to that of Terent'ev (Prikl. Mat. Mekh., vol. 45, 1981, pp. 1049–1055; Prikl. Mat. Mekh., vol. 48, 1984, pp. 264–272) on the generation of Tollmien–Schlichting waves by a vibrating ribbon, but with thermal effects. The nonlinear receptivity problem based on triple-deck scales is formulated and the linearised version solved both analytically and numerically. The most significant result is that the wall heating/cooling function can be chosen such that there is no pressure response to the disturbance, meaning there is no generation of Tollmien–Schlichting waves. Numerical calculations substantiate this with an approximation based on the exact analytical result. Previous numerical studies of the unsteady triple-deck equations have shown difficulties in capturing the convective wave packet that develops in the initial-value problem, and we show that these arise from the choice of time steps as well as the range of the Fourier modes taken.
Growth mechanisms of second-mode instability in hypersonic boundary layers
Xudong Tian, Chihyung Wen
Published online by Cambridge University Press: 15 December 2020, R4

Stability analyses based on the rates of change of perturbations were performed to study the growth mechanisms of second-mode instability in hypersonic boundary layers. The results show that the streamwise velocity perturbation is strengthened by the concurrence of the momentum transfer due to the wall-normal velocity fluctuation and the streamwise gradient of the pressure perturbation near the wall, while the wall-normal velocity perturbation is dominated by the wall-normal gradient of the pressure perturbation. Meanwhile, the change of fluctuating internal energy is sustained by the advection of perturbed thermal energy in the vicinity of the critical layer and by the dilatation fluctuation near the wall. The energy transport by the wall-normal velocity fluctuation accounts for the growth of second-mode instability, and the growth rate depends on the relative phase of the energy transport by the wall-normal velocity fluctuation to the total time rate of change of fluctuating internal energy in the vicinity of the critical layer. Moreover, this relative phase is associated with the mutual interaction between the critical-layer fluctuation and the near-wall fluctuation. Porous walls recast this mutual interaction by delaying the phase of the wall-normal energy transport near the wall, resulting in the stabilization of the second mode.

Ionization and dissociation effects on boundary-layer stability
Fernando Miró Miró, Ethan S. Beyak, Fabio Pinna, Helen L. Reed
Journal: Journal of Fluid Mechanics / Volume 907 / 25 January 2021
Published online by Cambridge University Press: 23 November 2020, A13
Print publication: 25 January 2021

The ever-increasing need for optimized atmospheric-entry and hypersonic-cruise vehicles requires an understanding of the coexisting high-enthalpy phenomena. These phenomena strongly condition the development of instabilities leading to the boundary layer's transition to turbulence. The present article explores how shock waves, internal-energy-mode excitation, species interdiffusion, dissociation and ionization condition boundary-layer perturbation growth related to second-mode instabilities. Linear stability theory and the $e^{N}$ method are applied to laminar base flows over a $10^{\circ}$ wedge with an isothermal wall and free-stream conditions similar to three flight-envelope points in an extreme planetary return. The authors explore a wide range of boundary conditions and flow assumptions, on both the laminar base flow and the perturbation quantities, in order to decouple the various phenomena of interest. Under the assumptions of this study, the cooling of the laminar base flow due to internal-energy-mode excitation, dissociation and ionization is seen to be strongly destabilizing. However, species interdiffusion, dissociation and ionization acting on the perturbation terms are seen to have the opposite effect. The net result of these competing effects ultimately amounts to internal-energy-mode excitation and dissociation being destabilizing, and ionization being stabilizing. The appearance of unstable supersonic modes due to high-enthalpy effects is seen to be linked to the diffusion-flux perturbations, rather than the cooling of the laminar base flow (as is commonly believed).
The use of the linearized shock boundary condition was seen to have a minor impact on the $N$-factor envelopes, despite the extremely low relative shock angle.

Decoupling ablation effects on boundary-layer stability and transition
Fernando Miró Miró, Fabio Pinna

A modelling methodology is proposed and applied to effectively decouple many of the multiple physical phenomena simultaneously coexisting in boundary-layer-transition problems in the presence of an ablating thermal protection system. Investigations are based on linear stability theory and the semi-empirical $\textrm{e}^{\textrm{N}}$ method, and study the marginal contribution to second-mode-wave amplitudes of internal-energy-mode excitation, ablation-induced outgassing, ablation- and radiation-induced surface cooling, air- and carbon-species dissociation reactions, the interdiffusion of dissimilar species, surface chemistry and radiation and perturbation–shock interactions. The contributions of these phenomena are isolated by deploying a variety of flow assumptions, mixtures and boundary conditions with marginal increases in modelling complexity and generality. Internal-energy-mode excitation is seen to be the major contributor to the perturbation amplitudes for most conditions considered, whereas ablation-induced outgassing or the ablation- and radiation-induced modification of the surface-temperature distribution display a minor effect. Other phenomena are seen to have a variable contribution depending on the trajectory point, owing to the different ablation rates with which the thermal protection system decomposes. This is the case with the diffusion of carbon species injected through the surface, and the dissociation of air and carbon species. The use of a radiative-equilibrium, rather than a homogeneous, boundary condition on the temperature perturbation amplitude is seen to increase the predicted growth of second-mode waves at all the trajectory points. Perturbation–shock interactions remarkably modify instability development only in scenarios with significant unstable supersonic modes. Substituting a single non-reacting species ($\textrm{CO}_{2}$) for all ablation subproducts was acceptable as long as the flow chemistry can be assumed frozen. The use of inaccurate transport and diffusion models, rather than the state of the art, is seen to have a variable effect on the predictions, yet generally smaller than what was observed in previous work.

Self-excited primary and secondary instability of laminar separation bubbles
Daniel Rodríguez, Elmer M. Gennaro, Leandro F. Souza

The self-excited instabilities acting on laminar separation bubbles in the absence of external forcing are studied by means of linear stability analysis and direct numerical simulation. Previous studies demonstrated the existence of a three-dimensional modal instability, which becomes active for bubbles with peak reversed flow of approximately $7\,\%$ of the free-stream velocity, well below the ${\approx} 16\,\%$ required for the absolute instability of Kelvin–Helmholtz waves. Direct numerical simulations are used to describe the nonlinear evolution of the primary instability, which is found to correspond to a supercritical pitchfork bifurcation and results in fully three-dimensional flows with spanwise inhomogeneity of finite amplitude. An extension of the classic weakly non-parallel analysis is then applied to the bifurcated flows, which have a strong dependence on the cross-stream planes and a mild dependence on the streamwise direction.
The spanwise distortion of the separated flow induced by the primary instability is found to strongly destabilize the Kelvin–Helmholtz waves, leading to their absolute instability and the appearance of a global oscillator-type instability. This sequence of instabilities triggers the laminar–turbulent transition without requiring external disturbances or actuation. The characteristic frequency and streamwise and spanwise wavelengths of the self-excited instability are in good agreement with those reported for low-turbulence wind-tunnel experiments without explicit forcing. This indicates that the inherent dynamics described by the self-excited instability can also be relevant when external disturbances are present.

Görtler vortices and streaks in boundary layer subject to pressure gradient: excitation by free stream vortical disturbances, nonlinear evolution and secondary instability
Dongdong Xu, Jianxin Liu, Xuesong Wu
Journal: Journal of Fluid Mechanics / Volume 900 / 10 October 2020
Published online by Cambridge University Press: 06 August 2020, A15
Print publication: 10 October 2020

This paper investigates streaks and Görtler vortices in a boundary layer over a flat or concave wall in a contracting or expanding stream, which provides a favourable or adverse pressure gradient, respectively. We consider first the excitation of streaks and Görtler vortices by free stream vortical disturbances (FSVD), and their nonlinear evolution. The focus is on FSVD with sufficiently long wavelength, to which the boundary layer is most receptive. The formulation is directed at the general case where the Görtler number $G_{\Lambda}$ (based on the spanwise length scale $\Lambda$ of FSVD) is of order one, and the FSVD is strong enough that the induced vortices acquire an $O(1)$ streamwise velocity in the region where the boundary layer thickness becomes comparable with $\Lambda$, and the vortices are governed by the nonlinear boundary region equations (NBRE). An important effect of a pressure gradient is that the oncoming FSVD are distorted by the non-uniform inviscid flow outside the boundary layer through convection and stretching. This process is accounted for by using the rapid distortion theory. The impact of the distorting FSVD is analysed to provide the appropriate initial and boundary conditions, which form, along with the NBRE, the appropriate initial boundary value problem describing the excitation and nonlinear evolution of the vortices. Numerical results show that an adverse/favourable pressure gradient causes the Görtler vortices to saturate earlier/later, but at a lower/higher amplitude than in the zero-pressure-gradient case. On the other hand, for the same pressure gradient and at low levels of FSVD, the vortices saturate earlier and at a higher amplitude as $G_{\Lambda}$ increases. Raising the FSVD intensity reduces the effects of the pressure gradient and curvature. At a high FSVD level of 14 %, the curvature has no impact on the vortices, while the pressure gradient only influences the saturation intensity. The unsteadiness of FSVD is found to reduce the boundary layer response significantly at low FSVD levels, but that effect weakens as the turbulence level increases. A secondary instability analysis of the vortices is performed for moderate adverse and favourable pressure gradients. Three families of unstable modes have been identified, which may become dominant depending on the frequency and streamwise location.
In the presence of an adverse pressure gradient, the secondary instability occurs earlier, but the unstable modes appear in a smaller band of frequency and have smaller growth rates. The opposite is true for a favourable pressure gradient. The present theoretical framework, which accounts for the influence of the curvature, turbulence level and pressure gradient, allows for a detailed and integrated description of the key transition processes, and represents a useful step towards predicting the pretransitional flow and transition itself of the boundary layer over a blade in turbomachinery.

Effects of streamwise-elongated and spanwise-periodic surface roughness elements on boundary-layer instability
Csaba B. Kátai, Xuesong Wu
Journal: Journal of Fluid Mechanics / Volume 899 / 25 September 2020
Published online by Cambridge University Press: 27 July 2020, A34
Print publication: 25 September 2020

We investigate the impact on the boundary-layer stability of spanwise-periodic, streamwise-elongated surface roughness elements. Our interest is in their effects on the so-called lower-branch Tollmien–Schlichting modes, and so the spanwise spacing of the elements is taken to be comparable with the spanwise wavelength of the latter, which is of $O(R^{-3/8} L)$, where $L$ is the dimensional length from the leading edge of the flat plate to the surface roughness, and $R$ is the Reynolds number based on $L$. The streamwise length is much longer, consistent with experimental set-ups. The roughness height is chosen such that the wall shear is altered by $O(1)$. From the generic triple-deck theory for three-dimensional roughness elements with both the streamwise and spanwise length scales being of $O(R^{-3/8}L)$, we derived the relevant governing equations by appropriate rescaling. The resulting equations are nonlinear but parabolic because the pressure gradient in the streamwise direction is negligible while in the spanwise direction it is completely determined by the roughness shape. Appropriate upstream, boundary and matching conditions are derived for the problem. Owing to the parabolicity, the equations are solved efficiently using a marching method to obtain the streaky flow. The instability of the streaky flow is shown to be controlled by the spanwise-dependent (periodic) wall shear. Two- and weakly three-dimensional lower-frequency modes are found to be stabilised by the streaks, confirming previous experimental findings, while stronger three-dimensional and higher-frequency modes are destabilised. Among the three roughness shapes considered, the roughness elements in the form of hemispherical caps are found to be most effective for a given height. A resonant subharmonic interaction was found to occur for modes with spanwise wavelength twice that of the roughness elements.

An instability mechanism for channel flows in the presence of wall roughness
Published online by Cambridge University Press: 24 July 2020, R2

The flow in a channel having walls with periodic undulations of small amplitude $\epsilon$ in the streamwise direction is considered as a model for wall roughness. It is shown that the undulations act as a catalyst to allow a new instability related to vortex–wave interactions to grow.
The roughness couples a wave disturbance with a roll–streak flow, and it is shown that channel flows, both wall and pressure gradient driven, are unstable when the Reynolds number exceeds a critical value proportional to $\epsilon^{-3/2}\,\vert\log \epsilon\vert^{-3/4}$, the constant of proportionality depending on the wall wavelengths and amplitudes. The roughness is an integral part of the instability mechanism and not simply the seed for an existing flow instability as in receptivity theory. The mechanism involves an interaction of the rolls, streaks and waves very similar to that in vortex–wave interaction theory but now facilitated by the wall roughness. Surprisingly, the subtle interaction between waves, rolls, streaks and the walls can be solved in closed form, and an explicit form for the neutral configuration is found. The theoretical predictions are in good agreement with numerical investigations of similar problems and are applicable to a wide range of shear flows.

Mechanisms of stationary cross-flow instability growth and breakdown induced by forward-facing steps
Jenna L. Eppink
Journal: Journal of Fluid Mechanics / Volume 897 / 25 August 2020
Published online by Cambridge University Press: 11 June 2020, A15
Print publication: 25 August 2020

An experimental study is performed to determine the mechanisms by which a forward-facing step impacts the growth and breakdown to turbulence of the stationary cross-flow instability. Particle image velocimetry measurements are obtained in the boundary layer of a $30^{\circ}$ swept flat plate with a pressure body. Step heights range from 53 % to 71 % of the boundary-layer thickness. The critical step height is approximately 60 % of the boundary-layer thickness for the current study, although it is also shown that the critical step height depends on the initial amplitude of the stationary cross-flow vortices. For the critical cases, the stationary cross-flow amplitude grows sharply downstream of the step, decays for a short region and then grows again. The initial growth region is linear, and can be explained primarily through the impact of the step on the mean flow. Namely, the step causes abrupt changes to the mean flow, resulting in large values of wall-normal shear, as well as highly inflectional profiles, due to either cross-flow reversal, separation or both. These inflectional profiles are highly unstable for the stationary cross-flow. Additionally, the reversed-flow regions are significantly modulated by the stationary cross-flow vortices. The second region of growth occurs due to the stationary-cross-flow-induced modulation of the shear layer, which leads to multiple smaller-wavelength streamwise vortices. High-frequency fluctuations indicate that the unsteady transition mechanism for the critical cases relates to the shedding of vortices downstream of reattachment of the modulated separated regions.

Transition mechanisms in cross-flow-dominated hypersonic flows with free-stream acoustic noise
Adriano Cerminara, Neil Sandham

Transition to turbulence in high-speed flows is determined by multiple parameters, many of which are not fully understood, leading to problems in developing physics-based prediction methods. In this contribution, we compare transition mechanisms in configurations with unswept and swept leading edges that are exposed to free-stream acoustic disturbances.
Direct numerical simulations are run at a Mach number of six with the same free-stream noise, consisting of either fast or slow acoustic disturbances, with two different amplitudes to explore the linear and nonlinear aspects of receptivity and transition. For the unswept configuration, receptivity follows an established mechanism involving synchronisation of fast acoustic disturbances with boundary-layer modes. At high forcing amplitudes, transition proceeds via the formation of streaks and their eventual breakdown. In the swept case, the process of streak-induced transition is modified by the presence of a cross-flow instability in the leading-edge region. Linear stability analysis confirms the presence of a cross-flow mode as well as weaker first and second mode waves. Both fast and slow types of forcing independently stimulate an unusual transition mechanism involving significantly narrower streaks than those arising from the cross-flow instability behind the swept leading edge or those induced nonlinearly in the unswept case. In the observed transition process, the cross-flow mode leads to a thin layer of streamwise vorticity that breaks up under the influence of high spanwise wavenumber disturbances. These disturbances first appear in the leading-edge region.

The bypass transition mechanism of the Stokes boundary layer in the intermittently turbulent regime
Chengwang Xiong, Xiang Qi, Ankang Gao, Hui Xu, Chengjiao Ren, Liang Cheng
Published online by Cambridge University Press: 27 May 2020, A4

This numerical study focuses on the coherent structures and bypass transition mechanism of the Stokes boundary layer in the intermittently turbulent regime. In particular, the initial disturbance is produced by a temporary roughness element that is removed immediately after triggering a two-dimensional vortex tube under an inflection-point instability. The present study reveals a complete scenario of self-induced motion of a vortex tube after rollup from the boundary layer. The trajectory of the vortex tube is reasonably described based on the Helmholtz point-vortex equation. The three-dimensional transition of the vortex tube is attributed to the Crow instability, which leads to a sinusoidal disturbance that eventually evolves into a ring-like structure, especially for the weaker vortex. Further investigation demonstrates that three-dimensional or quasi-three-dimensional vortex perturbations in the free stream play a critical role in the boundary layer transition through a bypass mechanism, which is characterised by the non-modal and explosive transient growth of the subsequent boundary layer instabilities. This transition scenario is found to be analogous to the oblique transition in the steady boundary layer, both of which are characterised by the formation of streaks, rollup of hairpin-like vortices and burst into turbulent spots. In addition, the streamwise propagation of turbulent spots is discussed in detail. To shed more light on the nature of the intermittently turbulent Stokes boundary layer, a conceptual model is proposed for the periodically self-sustaining mechanism of the turbulent spots based on the present numerical results and experimental evidence reported in the literature.
The linear stability of an acceleration-skewed oscillatory Stokes layer Christian Thomas Journal: Journal of Fluid Mechanics / Volume 895 / 25 July 2020 Published online by Cambridge University Press: 22 May 2020, A27 Print publication: 25 July 2020 The linear stability of the family of flows generated by an acceleration-skewed oscillating planar wall is investigated using Floquet theory. Neutral stability curves and critical conditions for linear instability are determined for an extensive range of acceleration-skewed oscillating flows. Results indicate that acceleration skewness is destabilising and reduces the critical Reynolds number for the onset of linearly unstable behaviour. The structure of the eigenfunctions is discussed and solutions suggest that disturbances grow in the direction of highest acceleration. Mechanism for frustum transition over blunt cones at hypersonic speeds Pedro Paredes, Meelan M. Choudhari, Fei Li Numerical and experimental studies have demonstrated laminar–turbulent transition in hypersonic boundary layers over sharp cones via the modal growth of planar Mack-mode instabilities. However, due to the strong reduction in Mack-mode growth at higher nose bluntness values, the mechanisms underlying the observed onset of transition over the cone frustum are currently unknown. Linear non-modal growth analysis has shown that both planar and oblique travelling disturbances that peak within the entropy layer experience appreciable energy amplification for moderate to large nose bluntness. However, due to their weak signature within the boundary-layer region, the route to transition onset via non-modal growth of travelling disturbances remains unclear. Nonlinear parabolized stability equations (NPSE) and direct numerical simulations (DNS) are used to identify a potential mechanism for transition over a 7-degree blunt cone that was tested in the AFRL Mach-6 high-Reynolds-number facility. Specifically, computations are conducted to study the nonlinear development of a pair of oblique, unsteady non-modal disturbances in the regime of moderately blunt nose tips. Excellent agreement was demonstrated between the NPSE and DNS predictions. Results reveal that, even though the linear non-modal disturbances are primarily concentrated outside the boundary layer, their nonlinear interaction can generate stationary streaks that penetrate and amplify within the boundary layer, eventually inducing the onset of transition via the breakdown of these streaks. The results indicate that a pair of oblique, controlled non-modal disturbances can produce transition at the location measured in the experiment when their initial amplitude is chosen to be approximately 0.15 % of the free-stream velocity. Dense-gas effects on compressible boundary-layer stability X. Gloerfelt, J.-C. Robinet, L. Sciacovelli, P. Cinnella, F. Grasso Journal: Journal of Fluid Mechanics / Volume 893 / 25 June 2020 Published online by Cambridge University Press: 22 April 2020, A19 Print publication: 25 June 2020 A study of dense-gas effects on the stability of compressible boundary-layer flows is conducted. From the laminar similarity solution, the temperature variations are small due to the high specific heat of dense gases, leading to velocity profiles close to the incompressible ones. Concurrently, the complex thermodynamic properties of dense gases can lead to unconventional compressibility effects. 
In the subsonic regime, the Tollmien–Schlichting viscous mode is attenuated by compressibility effects and becomes preferentially skewed in line with the results based on the ideal-gas assumption. However, the absence of a generalized inflection point precludes the sustainability of the first mode by inviscid mechanisms. On the contrary, the viscous mode can be completely stable at supersonic speeds. At very high speeds, we have found instances of radiating supersonic instabilities with substantial amplification rates, i.e. waves that travel supersonically relative to the free-stream velocity. This acoustic mode has qualitatively similar features for various thermodynamic conditions and for different working fluids. This shows that the leading parameters governing the boundary-layer behaviour for the dense gas are the constant-pressure specific heat and, to a minor extent, the density-dependent viscosity. A satisfactory scaling of the mode characteristics is found to be proportional to the height of the layer near the wall that acts as a waveguide where acoustic waves may become trapped. This means that the supersonic mode has the same nature as Mack's modes, even if its frequency for maximal amplification is greater. Direct numerical simulation accurately reproduces the development of the supersonic mode and emphasizes the radiation of the instability waves.

Effects of pore scale on the macroscopic properties of natural convection in porous media
Stefan Gasow, Zhe Lin, Hao Chun Zhang, Andrey V. Kuznetsov, Marc Avila, Yan Jin
Journal: Journal of Fluid Mechanics / Volume 891 / 25 May 2020
Published online by Cambridge University Press: 27 March 2020, A25
Print publication: 25 May 2020

Natural convection in porous media is a fundamental process for the long-term storage of CO2 in deep saline aquifers. Typically, details of mass transfer in porous media are inferred from the numerical solution of the volume-averaged Darcy–Oberbeck–Boussinesq (DOB) equations, even though these equations do not account for the microscopic properties of a porous medium. According to the DOB equations, natural convection in a porous medium is uniquely determined by the Rayleigh number. However, in contrast with experiments, DOB simulations yield a linear scaling of the Sherwood number with the Rayleigh number ($Ra$) for high values of $Ra$ ($Ra\gg 1300$). Here, we perform direct numerical simulations (DNS), fully resolving the flow field within the pores. We show that the boundary layer thickness is determined by the pore size instead of the Rayleigh number, as previously assumed. The mega- and proto-plume sizes increase with the pore size. Our DNS results exhibit a nonlinear scaling of the Sherwood number at high porosity, and for the same Rayleigh number, higher Sherwood numbers are predicted by DNS at lower porosities. It can be concluded that the scaling of the Sherwood number depends on the porosity and the pore-scale parameters, which is consistent with experimental studies.

Injection-gas-composition effects on hypersonic boundary-layer transition
Published online by Cambridge University Press: 10 March 2020, R4

The thermal protection systems of atmospheric-entry and hypersonic-cruise vehicles are oftentimes designed to ablate during their operation, thus injecting a mixture of various gases with distinct properties into the boundary layer. Such outgassing affects the propagation of the instabilities within the boundary layer that ultimately originate the transition to turbulence.
This work uses linear stability theory, in combination with the $e^{N}$ method, to establish the underlying reason for the experimentally observed advancement/delay of transition in sharp slender hypersonic cones when injecting lighter/heavier gases. Contrary to the current understanding and experimental correlations, this numerical analysis suggests that such a behaviour is not linked to the isolated effect of the injected gas's molar weight, but to its combination with the blowing discontinuity, porosity and the appearance of a shocklet, a consequence of the injected gas composition. The shocklet constitutes a density gradient that acts on second-mode instabilities like a thermoacoustic impedance.

Global linear analysis of a jet in cross-flow at low velocity ratios
Guillaume Chauvat, Adam Peplinski, Dan S. Henningson, Ardeshir Hanifi
Journal: Journal of Fluid Mechanics / Volume 889 / 25 April 2020
Published online by Cambridge University Press: 21 February 2020, A12
Print publication: 25 April 2020

The stability of the jet in cross-flow is investigated using a complete set-up including the flow inside the pipe. First, direct simulations were performed to find the critical velocity ratio as a function of the Reynolds number, keeping the boundary-layer displacement thickness fixed. At all Reynolds numbers investigated, there exists a steady regime at low velocity ratios. As the velocity ratio is increased, a bifurcation to a limit cycle composed of hairpin vortices is observed. The critical bulk velocity ratio is found at approximately $R=0.37$ for the Reynolds number $Re_{D}=495$, above which a global mode of the system becomes unstable. An impulse response analysis was performed and the characteristics of the generated wave packets were analysed, which confirmed the results of our global mode analysis. In order to study the sensitivity of this flow, we performed transient growth computations and also computed the optimal periodic forcing and its response. Even well below this stability limit, at $R=0.3$, large transient growth ($10^{9}$ in energy amplification) is possible and the resolvent norm of the linearized Navier–Stokes operator peaks above $2\times 10^{6}$. This is accompanied by an extreme sensitivity of the spectrum to numerical details, making the computation of a few tens of eigenvalues close to the limit of what can be achieved with double-precision arithmetic. We demonstrate that including the meshing of the jet pipe in the simulations does not change qualitatively the dynamics of the flow when compared with the simple Dirichlet boundary condition representing the jet velocity profile. This is in agreement with the recent experimental results of Klotz et al. (J. Fluid Mech., vol. 863, 2019, pp. 386–406) and in contrast to previous studies of Cambonie & Aider (Phys. Fluids, vol. 26, 2014, 084101). Our simulations also show that a small amount of noise at subcritical velocity ratios may trigger the shedding of hairpin vortices.

Acoustic streaming in turbulent compressible channel flow for heat transfer enhancement
Iman Rahbari, Guillermo Paniagua
Published online by Cambridge University Press: 18 February 2020, A2

Acoustic streaming in high-speed compressible channel flow and its impact on heat and momentum transfer is analysed numerically at two different Mach numbers, $M_{b}=0.75$ and 1.5, and moderate Reynolds numbers, $Re_{b}=3000$ and 6000. An external time-periodic forcing function is implemented to model the effect of acoustic drivers placed on the sidewalls.
The excitation frequency is chosen according to the linear stability analysis of the background (unexcited) flow. High-fidelity numerical simulations performed at the optimal resonant condition reveal an initially exponential growth of perturbations followed by a nonlinear regime leading to the limit-cycle oscillations. In the last stage, we observe an acoustic (steady) streaming appearing as a result of nonlinear interactions between the periodic external wave and the background flow. This causes a steady enhancement in heat transfer at a rate higher than the skin-friction augmentation. We also show that perturbations of similar amplitude, but at suboptimal frequencies, may not lead to such limit-cycle oscillations and cannot make any noticeable modifications to the time-averaged flow quantities. The present research is the first study to demonstrate the acoustic streaming in compressible turbulent flows, and it introduces a novel technique towards enhancing the heat transfer with minimal skin-friction production.

Transition induced by streamwise arrays of roughness elements on a flat plate in Mach 3.5 flow
Amanda Chou, Michael A. Kegerise, Rudolph A. King

The flow behind streamwise arrays of roughness elements was examined with a hot-wire probe. The roughness elements had heights of approximately 20% and 40% of the boundary layer thickness, and different spacings and orientations of these roughness elements were tested. The circular roughness elements were spaced two diameters apart or four diameters apart from centre to centre. Transition moved upstream only when the roughness elements were spaced four diameters apart. The rectangular roughness elements were oriented so that they were at a $45^{\circ}$ angle relative to the leading edge of the plate. Tandem rectangular elements had either the same orientation or opposing orientations. Mean mass-flux and total-temperature profiles of the flow field downstream of the roughness elements were examined for mean-flow distortion. Mass-flux fluctuation profiles showed that a 45 kHz odd-mode disturbance was present downstream of the shorter circular roughness elements. The dominant instability downstream of the taller circular roughness elements was a 65–85 kHz even-mode disturbance. Mass-flux fluctuation profiles showed that the dominant mode downstream of the tandem rectangular roughness elements with the same orientation was similar to that of a single roughness element and centred at a frequency of approximately 55 kHz. The 55 kHz instability appeared to correspond to increased spanwise shear, and thus was determined to be an odd-like mode. The dominant instability downstream of the tandem roughness elements with opposing orientations was centred at a frequency of 65 kHz and did not transition in the measurement region.
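A recurring tool in the transition studies above is the $e^{N}$ method mentioned in the injection-gas-composition abstract: in its simplest envelope form, it integrates the local spatial growth rate of the most amplified disturbance and flags transition where the accumulated N-factor reaches an empirical threshold. The sketch below is illustrative only; the growth-rate curve is fabricated and the threshold $N\approx 9$ is just a commonly quoted quiet-flow value, not taken from any of the papers listed here.

import numpy as np

# Toy e^N envelope calculation with a made-up growth-rate curve.
# N(x) = integral of -alpha_i dx; transition flagged where N >= N_crit.
x = np.linspace(0.0, 1.0, 500)                        # streamwise coordinate, m
neg_alpha_i = 40.0 * np.exp(-((x - 0.5) / 0.2) ** 2)  # fabricated growth rate, 1/m
N = np.concatenate(([0.0],
                    np.cumsum(0.5 * (neg_alpha_i[1:] + neg_alpha_i[:-1])
                              * np.diff(x))))         # trapezoidal integration

N_crit = 9.0                                          # typical quiet-tunnel threshold
idx = np.argmax(N >= N_crit)
if N[idx] >= N_crit:
    print(f"N-factor reaches {N_crit} at x ~ {x[idx]:.3f} m")
else:
    print(f"max N = {N.max():.2f}; no transition predicted")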
DCFT: Density Cumulant Functional Theory

Code author: Alexander Yu. Sokolov, Andrew C. Simmonett, and Xiao Wang
Section author: Alexander Yu. Sokolov
Module: Keywords, PSI Variables, DCFT

Theory

Density cumulant functional theory (DCFT) is a density-based ab initio theory that can compute electronic energies without the use of a wavefunction. The theory starts by writing the exact energy expression in terms of the one- and two-particle density matrices (\(\boldsymbol{\gamma_1}\) and \(\boldsymbol{\gamma_2}\)):

\[E = h_p^q \gamma_q^p + \frac{1}{2} g_{pq}^{rs} \gamma_{rs}^{pq}\]

Here we used the Einstein convention for summation over repeated indices. \(h_p^q\) and \(g_{pq}^{rs}\) are the standard one- and two-electron integrals, and \(\gamma_p^q\) and \(\gamma_{pq}^{rs}\) are the elements of \(\boldsymbol{\gamma_1}\) and \(\boldsymbol{\gamma_2}\), respectively. Naively, one might expect that it is possible to minimize the energy functional in the equation above and obtain the exact energy. This is, however, not trivial, as the density matrix elements \(\gamma_p^q\) and \(\gamma_{pq}^{rs}\) cannot be varied arbitrarily, but must satisfy some conditions that make sure that the density matrices are N-representable, i.e. correspond to an antisymmetric N-electron wavefunction. Unfortunately, no simple set of necessary and sufficient N-representability conditions is known, and some of the known conditions are not easily imposed. In addition, the lack of separability of the density matrices may result in the loss of size-consistency and size-extensivity. In DCFT, one takes a different route and replaces \(\boldsymbol{\gamma_2}\) with its two-particle density cumulant:

\[\lambda_{pq}^{rs} = \gamma_{pq}^{rs} - \gamma_p^r \gamma_q^s + \gamma_p^s \gamma_q^r\]

The one-particle density matrix is separated into its idempotent part \(\boldsymbol{\kappa}\) and a correction \(\boldsymbol{\tau}\):

\[\gamma_p^q = \kappa_p^q + \tau_p^q\]

The idempotent part of \(\boldsymbol{\gamma_1}\) corresponds to a mean-field Hartree–Fock-like density, while the non-idempotent correction \(\boldsymbol{\tau}\) depends on the density cumulant and describes the electron correlation effects. Inserting the above two equations into the energy expression, we obtain:

\[E_{DCFT} = \frac{1}{2} \left( h_p^q + f_p^q \right) \gamma_q^p + \frac{1}{4} \bar{g}_{pq}^{rs} \lambda_{rs}^{pq}\]

where the antisymmetrized two-electron integrals and the generalized Fock operator matrix elements were defined as follows:

\[\bar{g}_{pq}^{rs} = g_{pq}^{rs} - g_{pq}^{sr}\]

\[f_p^q = h_p^q + \bar{g}_{pr}^{qs} \gamma_{s}^{r}\]

Energy functional \(E_{DCFT}\) has several important properties. First, the energy is now a function of two sets of independent parameters, the idempotent part of \(\boldsymbol{\gamma_1}\) (\(\boldsymbol{\kappa}\)) and the density cumulant (\(\boldsymbol{\lambda_2}\)). As a result, the energy functional is Hermitian, which is important for the evaluation of the molecular properties. The additive separability of the density cumulant guarantees that all of the DCFT methods are size-extensive and size-consistent. Furthermore, the N-representability problem is now greatly simplified, because the idempotent part of \(\boldsymbol{\gamma_1}\) is N-representable by construction. One only needs to worry about the N-representability of the density cumulant, which is a relatively small part of \(\boldsymbol{\gamma_2}\).
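As an illustrative limiting case (an addition for orientation, not part of the original manual): for a single Slater determinant the two-particle density matrix factorizes into antisymmetrized products of one-particle density matrices, so the cumulant vanishes identically,

\[\gamma_{pq}^{rs} = \gamma_p^r \gamma_q^s - \gamma_p^s \gamma_q^r \quad \Rightarrow \quad \lambda_{pq}^{rs} = 0, \quad \tau_p^q = 0, \quad E_{DCFT} = \frac{1}{2} \left( h_p^q + f_p^q \right) \gamma_q^p\]

i.e. the functional collapses to the mean-field (Hartree–Fock-like) energy, and all electron correlation is carried by \(\boldsymbol{\lambda_2}\) and the non-idempotency correction \(\boldsymbol{\tau}\).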
In order to obtain the DCFT energy, two conditions must be satisfied:

1. The energy must be stationary with respect to a set of orbitals. This can be done by diagonalizing the generalized Fock operator (as in the DC-06 and DC-12 methods, see below), which introduces partial orbital relaxation, or by fully relaxing the orbitals and minimizing the entire energy expression (as in the ODC-06 and ODC-12 methods).

2. The energy must be stationary with respect to the variation of the density cumulant \(\boldsymbol{\lambda_2}\), constrained to N-representability conditions.

Making the energy stationary requires solution of two sets of coupled equations for the orbitals and the density cumulant, respectively (also known as residual equations). At the present moment, three different algorithms for the solution of the system of coupled equations are available (see Iterative Algorithms for details). Publications resulting from the use of the DCFT code should cite contributions listed here.

Methods

Currently five DCFT methods (functionals) are available: DC-06, DC-12, ODC-06, ODC-12, and ODC-13. The first four methods use approximate N-representability conditions derived from second-order perturbation theory and differ in the description of the correlated (non-idempotent) part \(\boldsymbol{\tau}\) of the one-particle density matrix and in the orbital optimization. While in the DC-06 and ODC-06 methods \(\boldsymbol{\tau}\) is derived from the density cumulant in an approximate way (labelled by '06'), the DC-12 and ODC-12 methods derive this contribution exactly, and take full advantage of the N-representability conditions (which is denoted by '12'). The corresponding DC and ODC methods have a similar description of the \(\boldsymbol{\gamma_1}\) N-representability, but differ in describing the orbital relaxation: the former methods account for the relaxation only partially, while the latter fully relax the orbitals. The DC-06 and DC-12 methods have similar computational cost; the same is true when comparing ODC-06 and ODC-12. Meanwhile, the DC methods are generally more efficient than their ODC analogs, because the full orbital optimization in the ODC methods requires a more expensive orbital update step. In the ODC-13 method, the third- and fourth-order N-representability conditions are used for the density cumulant and the correlated contribution \(\boldsymbol{\tau}\), respectively, and the orbitals are variationally optimized. For most of the applications, it is recommended to use the ODC-12 method, which provides an optimal balance between accuracy and efficiency, especially for molecules with open-shell character. If highly accurate results are desired, a combination of the ODC-13 method with a three-particle energy correction [\(\mbox{ODC-13$(\lambda_3)$}\)] can be used (see below). For a detailed comparison of the quality of these methods we refer users to our publications.

The DCFT functional can be specified by the DCFT_FUNCTIONAL option. The default choice is the ODC-12 functional. In addition to the five methods listed above, the DCFT_FUNCTIONAL option can be set to CEPA0 (coupled electron pair approximation zero, equivalent to the linearized coupled cluster doubles method, LCCD). CEPA0 can be considered as a particular case of the DC-06 and DC-12 methods in the limit of zero non-idempotency of \(\boldsymbol{\gamma_1}\). This option has limited functionality and should only be used for test purposes. For the production-level CEPA0 code, see the OCC module.
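For concreteness, a non-default functional can be selected with the usual option syntax (an illustrative sketch following the input conventions shown in the Minimal Input section below; the helium atom and basis set are placeholders):

molecule {
He
}

set basis cc-pvdz
set dcft_functional dc-06    # default is odc-12

energy('dcft')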
The DCFT code can also be used to compute the \((\lambda_3)\) energy correction that perturbatively accounts for three-particle correlation effects, similarly to the (T) correction in coupled cluster theory. Computation of the \((\lambda_3)\) correction can be requested by setting the THREE_PARTICLE option to PERTURBATIVE. A combination of the ODC-13 functional with the \((\lambda_3)\) correction [denoted as \(\mbox{ODC-13$(\lambda_3)$}\)] has been shown to provide highly accurate results for open-shell molecules near equilibrium geometries.

At the present moment, all of the DCFT methods support unrestricted reference orbitals (REFERENCE UHF), which can be used to perform energy and gradient computations for both closed- and open-shell molecules. In addition, the ODC-06 and ODC-12 methods support restricted reference orbitals (REFERENCE RHF) for the energy and gradient computations of closed-shell molecules. Note that in this case restricted reference orbitals are only available for ALGORITHM SIMULTANEOUS.

Iterative Algorithms

As explained in the Theory section, in order to obtain the DCFT energy one needs to solve a system of coupled equations for orbitals and density cumulant. At the present moment three iterative algorithms for the solution of the equations are available. The choice of the algorithm is controlled using the ALGORITHM option.

SIMULTANEOUS [Default]

In the simultaneous algorithm the DCFT equations are solved in macroiterations. Each macroiteration consists of a single iteration of the cumulant update followed by a single iteration of the orbital update and orbital transformation of the integrals. The macroiterations are repeated until the simultaneous convergence of the cumulant and orbitals is achieved. Convergence of the simultaneous algorithm is accelerated using the DIIS extrapolation technique.

TWOSTEP

In the two-step algorithm each macroiteration consists of two sets of microiterations. In the first set, the density cumulant equations are solved iteratively, while the orbitals are kept fixed. After the density cumulant is converged, the second set of microiterations is performed for the self-consistent update of the orbitals with the fixed density cumulant. Each macroiteration is completed by performing the orbital transformation of the integrals. As in the simultaneous algorithm, the DIIS extrapolation is used to accelerate convergence. The two-step algorithm is only available for the DC-06 and DC-12 methods.

QC

In the quadratically-convergent algorithm, the orbital and cumulant update equations are solved using the Newton-Raphson method. Each macroiteration of the quadratically-convergent algorithm consists of a single Newton-Raphson update followed by the orbital transformation of the integrals. The solution of the Newton-Raphson equations is performed iteratively using the preconditioned conjugate gradients method, where only the product of the electronic Hessian with the step vector is computed for efficiency. By default, the electronic Hessian is built for both the cumulant and orbital updates and both updates are performed simultaneously. Setting the QC_TYPE option to TWOSTEP will perform the Newton-Raphson update only for the orbitals, while the equations for the cumulant will be solved using a standard Jacobi update. If requested by the user (set QC_COUPLING to TRUE), the electronic Hessian can include matrix elements that couple the orbitals and the density cumulant.
The computation of these coupling elements increases the cost of the macroiteration, but usually leads to faster convergence and is recommended for open-shell systems. It is important to note that the quadratically-convergent algorithm is not yet fully optimized and often converges slowly when the RMS of the cumulant or the orbital gradient is below \(10^{-7}\).

The choice of the iterative algorithm can significantly affect the cost of the energy computation. While the two-step algorithm requires a small number of disk-intensive \({\cal O}(N^5)\) integral transformations, the simultaneous algorithm benefits from a smaller number of expensive \({\cal O}(N^6)\) cumulant updates. As a result, for small closed-shell systems the two-step algorithm is usually preferred, while for larger systems and molecules with open-shell character it is recommended to use the simultaneous algorithm. Efficiency of the simultaneous algorithm can be greatly increased by avoiding the transformation of the four-index virtual two-electron integrals \((vv|vv)\) and computing the terms that involve these integrals in the AO basis. In order to do that one needs to set the AO_BASIS option to DISK (currently used by default). For more recommendations on the choice of the algorithm see Recommendations.

Analytic Gradients

Analytic gradients are available for the DC-06, ODC-06, ODC-12, and ODC-13 methods. For DC-06, the evaluation of the analytic gradients requires the solution of the coupled response equations. Two algorithms are available for their iterative solution: TWOSTEP (default) and SIMULTANEOUS. These algorithms are similar to those described for the orbital and cumulant updates in the Iterative Algorithms section and usually exhibit similar efficiency. The choice of the algorithm can be made using the RESPONSE_ALGORITHM option. For the DC-12 method the analytic gradients are not yet available; one has to use numerical gradients to perform the geometry optimizations. For the ODC-06, ODC-12 and ODC-13 methods no response equations need to be solved, which makes the computation of the analytic gradients very efficient. Analytic gradients are not available for the three-particle energy correction \((\lambda_3)\).

Methods Summary

The table below summarizes current DCFT code features:

Method                              | Available algorithms      | Energy | Gradient | Reference
ODC-06                              | SIMULTANEOUS, QC          | Y      | Y        | RHF/UHF
ODC-12                              | SIMULTANEOUS, QC          | Y      | Y        | RHF/UHF
ODC-13                              | SIMULTANEOUS, QC          | Y      | Y        | UHF
\(\mbox{ODC-12$(\lambda_3)$}\)      | SIMULTANEOUS, QC          | Y      | N        | UHF
DC-06                               | SIMULTANEOUS, QC, TWOSTEP | Y      | Y        | UHF
DC-12                               | SIMULTANEOUS, QC, TWOSTEP | Y      | N        | UHF

Note that for ODC-06 and ODC-12 REFERENCE RHF is only available for ALGORITHM SIMULTANEOUS. To compute the \((\lambda_3)\) correction, the THREE_PARTICLE option needs to be set to PERTURBATIVE.

Minimal Input

Minimal input for a DCFT single-point computation looks like this:

molecule {
H
H 1 1.0
}

set basis cc-pvdz

energy('dcft')

The energy('dcft') call to energy() executes the DCFT module, which will first call the SCF module and perform the SCF computation with RHF reference to obtain a guess for the DCFT orbitals. After SCF is converged, the program will perform the energy computation using the ODC-12 method. By default, the simultaneous algorithm will be used for the solution of the equations. One can also request a geometry optimization, following the example below:

optimize('dcft')

The optimize('dcft') call will first perform all of the procedures described above to obtain the ODC-12 energy. After that, the ODC-12 analytic gradients code will be executed and the geometry optimization will be performed.
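Putting the preceding options together, a slightly fuller input might look as follows (an illustrative sketch, not from the original documentation; the water geometry is a placeholder, and the option names are those introduced in the sections above):

molecule {
O
H 1 0.96
H 1 0.96 2 104.5
}

set {
  basis           cc-pvdz
  reference       uhf
  dcft_functional odc-12
  algorithm       simultaneous
  ao_basis        disk
}

optimize('dcft')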
Recommendations

Here is a list of recommendations for the DCFT module:

1. Generally, the use of the simultaneous algorithm together with the AO_BASIS DISK option is recommended (set by default).

2. In cases when available memory is insufficient, the use of the AO_BASIS DISK option is recommended. This will significantly reduce the memory requirements. However, when used together with the two-step algorithm, this option can significantly increase the cost of the energy computation.

3. In cases when oscillatory convergence is observed before the DIIS extrapolation is initialized, it is recommended to increase the threshold for the RMS of the density cumulant or orbital update residual below which the DIIS extrapolation starts. This can be done by setting the DIIS_START_CONVERGENCE option to a value greater than \(10^{-3}\) by one or two orders of magnitude (e.g. \(10^{-2}\) or \(10^{-1}\)). This can be particularly useful for computations using the ODC methods, because it can greatly reduce the number of iterations.

4. If oscillatory convergence is observed for atoms or molecules with high symmetry, it is recommended to use the quadratically-convergent algorithm.

5. When using the quadratically-convergent algorithm for closed-shell molecules, it is recommended to set the QC_COUPLING option to FALSE for efficiency reasons (set by default).

6. For the ODC computations, the user has a choice of performing the computation of the guess orbitals and cumulants using the corresponding DC method (set ODC_GUESS to TRUE). This can often lead to significant computational savings, since the orbital update step in the DC methods is cheap. Convergence of the guess orbitals and cumulants can be controlled using the GUESS_R_CONVERGENCE option.
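As an illustration, the last few recommendations translate into input options along these lines (a hypothetical sketch using the option names above; the values shown are examples, not defaults):

set {
  algorithm              qc
  qc_coupling            false    # closed-shell case
  odc_guess              true     # DC guess for an ODC computation
  guess_r_convergence    1.0e-6
  diis_start_convergence 1.0e-2   # delay DIIS if convergence oscillates
}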
B. Parent • AE61280 Convective Heat Transfer 2018

Convective Heat Transfer Final Exam
Monday June 18th 2018

NO NOTES OR BOOKS; USE CONVECTIVE HEAT TRANSFER TABLES THAT WERE DISTRIBUTED; ALL QUESTIONS HAVE EQUAL VALUE; STATE ALL ASSUMPTIONS; ANSWER ALL 6 QUESTIONS.

Question 1. Starting from the energy equation
$$ \rho\frac{\partial E}{\partial t} + \rho u\frac{\partial H}{\partial x} + \rho v\frac{\partial H}{\partial y} = \frac{\partial }{\partial x}\left( k \frac{\partial T}{\partial x} \right) +\frac{\partial }{\partial y}\left( k \frac{\partial T}{\partial y} \right) + \frac{\partial u \tau_{xx}}{\partial x} + \frac{\partial u \tau_{yx}}{\partial y} + \frac{\partial v \tau_{xy}}{\partial x} + \frac{\partial v \tau_{yy}}{\partial y} $$
the $x$ momentum equation
$$ \rho \frac{\partial u}{\partial t} + \rho u \frac{\partial u}{\partial x} + \rho v \frac{\partial u}{\partial y}=-\frac{\partial P}{\partial x}+ \frac{\partial \tau_{xx}}{\partial x}+\frac{\partial \tau_{yx}}{\partial y} $$
and the $y$ momentum equation
$$ \rho \frac{\partial v}{\partial t} + \rho u \frac{\partial v}{\partial x} + \rho v \frac{\partial v}{\partial y}=-\frac{\partial P}{\partial y}+ \frac{\partial \tau_{xy}}{\partial x}+\frac{\partial \tau_{yy}}{\partial y} $$
show that the energy equation for a constant-$\rho$ and constant-$\mu$ fluid corresponds to:
$$ \rho\frac{\partial e}{\partial t}+\rho u\frac{\partial e}{\partial x}+\rho v\frac{\partial e}{\partial y} = \frac{\partial }{\partial x}\left( k \frac{\partial T}{\partial x} \right) +\frac{\partial }{\partial y}\left( k \frac{\partial T}{\partial y} \right) + \phi $$
with $\phi$ the viscous dissipation per unit volume defined as:
$$ \phi\equiv\mu\left(\frac{\partial u}{\partial x} \right)^2 + \mu\left(\frac{\partial u}{\partial y} \right)^2 + \mu\left(\frac{\partial v}{\partial x} \right)^2 + \mu\left(\frac{\partial v}{\partial y} \right)^2 $$

Question 2. An air stream with a speed of $50$ m/s and a density of $\rho=1.0$ kg/m$^3$ flows parallel to a flat plate with a length of 45 cm and a width of 100 cm. Determine the total drag force on the flat plate and calculate the boundary layer thickness 10 and 45 cm from the leading edge. Take the kinematic viscosity as $15\times 10^{-6}$ m$^2$/s.

Question 3. Consider a 30 m long pipe with a diameter of 1 cm and with a smooth interior wall surface. The pipe wall temperature is kept constant at 60$^\circ$C.

(a) Some liquid enters the pipe with a temperature of 20$^\circ$C and exits the pipe with a mixing-cup (bulk) temperature of 57$^\circ$C. Knowing that the mass flow rate of the liquid is $0.015$ kg/s, that the liquid density is 1000 kg/m$^3$, and that the friction force exerted on the pipe due to the motion of the fluid is equal to 0.144 N, determine the viscosity and the Prandtl number of the liquid.

(b) Using the Prandtl number and viscosity found in part (a), estimate the bulk temperature at the exit of the pipe for the same inflow temperature as in (a) but with the mass flow rate increased to 0.15 kg/s.

Hint: When the flow in a pipe is fully developed, the friction factor is equal to:
$$f=\frac{(-{\rm d}P/{\rm d}x)D}{\rho u_{\rm b}^2/2}$$

Question 4. A flow of hot water vapor interacts with a 1-meter-long and 1-meter-wide flat plate and forms a boundary layer. In order to keep the plate temperature below 100$^\circ$C, you decide to cool the plate through film cooling. Film cooling consists of injecting some liquid water through the plate so that it evaporates when in contact with the hot vapor flow.
The flow of water vapor has the following range of properties:
$$10~\frac{\rm m}{\rm s}\le U_\infty\le 100~\frac{\rm m}{\rm s}$$
$$0.2~\frac{\rm kg}{{\rm m}^3}\le \rho_\infty \le 0.6~\frac{\rm kg}{{\rm m}^3}$$
$$400~{\rm K}\le T_\infty \le 800~{\rm K}$$
Knowing that $\Delta H_{\rm vap}=2260$ kJ/kg, $T_{\rm sat}=100^\circ$C, and that the vapor has a viscosity of $\mu_{\rm v}=2\cdot 10^{-5}$ kg/ms, a specific heat at constant pressure of $(c_p)_{\rm v}=2000$ J/kgK, and a thermal conductivity of $k_{\rm v}=0.04$ W/mK, do the following:

(a) Determine the vapor freestream conditions (within the range specified) that will result in the maximum amount of heat transfer to the plate.

(b) For the freestream conditions determined in (a), find the minimum amount of injected liquid water in kg/s that prevents the plate temperature from exceeding 100$^\circ$C.

(c) For the freestream conditions determined in (a) and the mass flow rate of injected liquid water found in (b), find the heat flux at the trailing edge of the flat plate in W/m$^2$K.

Question 5. Consider an electrical cable made of copper with a length of 2 meters and a diameter of 2 mm. The cable stands horizontally and is surrounded by air at a temperature of 300 K and a density of 1.2 kg/m$^3$. Knowing that the electrical resistivity of copper is $15.4\times 10^{-9}~\Omega$m, find the maximum allowed current (in amps) that keeps the surface temperature of the cable below 400 K. Make your design safe by taking into consideration that the convective heat transfer coefficient obtained through the correlations can be off by as much as 30%.

Question 6. Consider a long vertical pipe with smooth walls and with an inner radius of 1 cm. Water fills up the pipe and is pulled downwards by the gravitational acceleration ($g=9.81$ m/s$^2$). Knowing that the pressure gradient is zero within the pipe, do the following:

(a) Find the wall shear stress within the pipe in N/m$^2$.
(b) Find the bulk velocity of the water in m/s within the pipe.
(c) Find the mass flow rate of the water in kg/s within the pipe.
(d) Determine whether the flow in the pipe is laminar or turbulent.

Note: for water, $c=4000$ J/kgK, $\rho=1000$ kg/m$^3$, $\mu=10^{-3}$ kg/ms, $k=0.6$ W/mK.
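For readers checking their numbers on Question 2, the arithmetic is easily scripted. The sketch below is not part of the exam; it assumes the standard Blasius laminar relations, a transition Reynolds number of $5\times 10^5$, and the classical one-seventh-power-law turbulent relations (the distributed tables may prescribe different correlations).

import math

# Hypothetical sanity check for Question 2 (not part of the exam).
U, rho, nu = 50.0, 1.0, 15e-6      # m/s, kg/m^3, m^2/s
L, W = 0.45, 1.0                   # plate length and width, m
Re_tr = 5e5                        # assumed transition Reynolds number

def delta(x):
    """Boundary layer thickness at station x (laminar Blasius or turbulent)."""
    Re_x = U * x / nu
    if Re_x < Re_tr:
        return 5.0 * x / math.sqrt(Re_x)      # Blasius, laminar
    return 0.37 * x / Re_x**0.2               # 1/7th power law, turbulent

for x in (0.10, 0.45):
    print(f"x = {x:.2f} m: Re_x = {U*x/nu:.3g}, delta ~ {delta(x)*1000:.2f} mm")

# Mixed laminar/turbulent average friction coefficient, one common form
# (the 1742 correction corresponds to Re_tr = 5e5):
Re_L = U * L / nu
Cf = 0.074 / Re_L**0.2 - 1742.0 / Re_L
drag = Cf * 0.5 * rho * U**2 * L * W          # one side of the plate
print(f"Re_L = {Re_L:.3g}, Cf = {Cf:.4g}, drag ~ {drag:.3f} N")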
Hawking radiation and reversibility

It's often said that, as long as the information that fell into a black hole comes out eventually in the Hawking radiation (by whatever means), pure states remain pure rather than evolving into mixed states, and "the universe is safe for quantum mechanics." But that can't be the whole story! For quantum-mechanical reversibility doesn't merely say that information must eventually be retrievable, after $10^{70}$ years or whatever; but also that, if $U$ is an admissible transformation of a physical system, then $U^{-1}$ is also an admissible transformation. So, just like it must be consistent with reversible microlaws for smoke and ash to spontaneously reassemble into a book, it must also be consistent for a black hole to spontaneously "uncollapse" into a star, or into whatever configuration of ordinary matter could have collapsed to form the black hole in the first place. And this "white-hole uncollapse process" must be possible in exactly the same amount of time as the black-hole collapse process, rather than an astronomically longer time (as with Hawking radiation). In both cases, the explanation for why we never see these processes must be thermodynamic -- i.e., sure they're allowed, but they involve such a crazy decrease in entropy that they're exponentially suppressed. I get that. But I'm still confused about something, and here's my best attempt to crystallize my confusion:

In order to explain how it could even be possible for information to come out of a black hole, physicists typically appeal to Hawking radiation, which provides a mechanism based on more-or-less understood quantum field theory in curved spacetime. (Granted, QFT also predicts that the radiation should be thermal! But because of AdS/CFT and so forth, today people seem pretty confident that the information, after hanging out near the event horizon, is carried away by the Hawking radiation in some not-yet-understood way.) However, suppose it's objected that a Hawking radiation process seems nothing whatsoever like the time-reverse of an ordinary black-hole formation process. Then the only response I know would be along the lines of, "well, do you believe that QM will survive unaltered in a future quantum theory of gravity, or don't you? If you do, then consider the unitary $U$ corresponding to a black-hole formation process, and invert it to get $U^{-1}$!"

My question is: why couldn't people have made that same straightforward argument even before they knew anything about Hawking radiation? (Or did they make it?) More generally, even if Hawking radiation does carry away the infalling information, that still seems extremely far from implying full quantum-mechanical reversibility. So, how much does the existence of Hawking radiation really have to do with the case for the compatibility between quantum mechanics and black holes?

Scott Aaronson

Hm, interesting. I don't really know the history, but I thought it was originally assumed that GR broke unitarity, probably at the singularity. When Hawking radiation was discovered it perhaps gave some hope of a method to save unitarity at least outside the event horizon, but I'm not sure what the current status is. Inside a black hole the arrow of time actually points spatially inward, and there has been theoretical speculation on white holes, which would be the reversed version of that.
No observations, though, of course ;-) – David Z Oct 9 '12 at 6:42

For anyone wanting details, it sounds like David Z's referring to Nikodem Poplawski's "Cosmology with torsion", "Universe in a black hole", "Non-parametric reconstruction of an inflaton potential", and his other 2009-2019 papers, that use Einstein-Cartan gravity (fermions with spatial extent). Some of the fermions split from partners in virtual pairs (by an EH of a rotating and collapsing star maybe also wobbling thru tidal effects) get their trajectories accelerated & reversed by contact with the larger fermions of the star itself, and form a white hole masking the new BH's inboard side. – Edouard Jul 24 '19 at 2:46

As you said, the case of black holes is conceptually totally analogous to the burning books. In principle, the process is reversible, but the probability of the CPT-conjugated process (CPT being a more accurate symmetry than just time reversal) is different from the original one because
$$ \frac{Prob(A\to B)}{Prob(B^{CPT}\to A^{CPT})} \approx \exp(S_B-S_A ).$$
This is true because the probabilities of evolution between ensembles are obtained by summing over final states but averaging over initial states. The averaging differs from summing by the extra factor of $1/N = \exp(-S)$, and that's why the exponential of the entropy difference quantifies the past-future asymmetry of the evolution.

At the qualitative level, a white hole is exactly as impossible in practice as a burning coal suddenly rearranging into a particular book. Quantitatively speaking, it's more impossible because the drop of entropy would be much greater: black holes have the greatest entropy among all localized or bound objects of the same total mass. However, the Hawking radiation isn't localized or bound and it actually has an even greater entropy – by a significant factor – than the black hole from which it evaporated. That's needed and that's true because even the Hawking evaporation process agrees with the second law of thermodynamics.

At the level of classical general relativity, nothing prevents us from drawing a white hole spacetime. In fact, the spacetime for an eternal black hole is already perfectly time-reversal-symmetric. We still mostly call it a black hole but it's a "white hole" at the same moment. Such solutions don't correspond to the reality in which black holes always come from a lower-entropy initial state – because the initial state of the Universe couldn't have any black holes. So the real issue concerns the realistic diagrams for a star collapsing into a black hole which later evaporates. Such a diagram is clearly time-reversal-asymmetric. The entropy increases during the star collapse as well as during the Hawking radiation. You may flip the diagram upside down and you will get a picture that solves the equations of general relativity. However, it will heavily violate the second law of thermodynamics.

Any consistent classical or quantum theory explains and guarantees the thermodynamic phenomena and laws microscopically, i.e. by statistical physics applied to its phase space or Hilbert space. That's true for burning books but that's true for theories containing black holes, too. So if one has a consistent microscopic quantum theory for this process – but the same comment would hold for a classical theory as well: your question has really nothing to do with quantum mechanics per se – then this theory must predict that the inverted processes that decrease entropy are exponentially unlikely.
Whenever there is a specific model with well-defined microstates and a microscopic T or CPT symmetry, it's easy to prove the equation I started with. A genuine microscopic theory really establishes that the inverted processes (those that lower the total entropy) are possible but very unlikely. A classical theory of macroscopic matter however "averages over many atoms". For solids, liquids, and gases, this is manifested by time-reversal-asymmetric terms in the effective equations - diffusion, heat diffusion, friction, viscosity, all these things that slow things down, heat them up, and transfer heat from warmer bodies to cooler ones. The transfer of heat from warmer bodies to cooler ones may either occur by "direct contact" which really looks classical but it may also proceed via the black body radiation – which is a quantum process and may be found in the first semiclassical corrections to classical physics. The Hawking radiation is an example of the "transfer of heat from warmer to cooler bodies", too. The black hole has a nonzero temperature so it radiates energy away to the empty space whose temperature is zero. Again, it doesn't "realistically" occur in the opposite chronological order because the entropy would decrease and a cooler object would spontaneously transfer its heat to a warmer one. In an approximate macroscopic effective theory that incorporates the microscopic statistical phenomena collectively, much like friction terms in mechanics, those time-reversal-violating terms appear explicitly: they are replacements/results of some statistical physics calculations. In the exact microscopic theory, however, there are no explicit time-reversal-breaking terms. And indeed, according to the full microscopic theory – e.g. a consistent theory of quantum gravity – the entropy-lowering processes aren't strictly forbidden, they may just be calculated to be exponentially unlikely. The probability that we arrange the initial state of the black hole so that it will evolve into a star with some particular shape and composition is extremely tiny. It is hard to describe the state of the black hole microstates explicitly, but even in setups where we know them in principle, it's practically impossible to locate black hole microstates that have evolved from a recent star (or will evolve into a star soon, which is the same mathematical problem). Your $U^{-1}$ transformation undoubtedly exists in a consistent theory of quantum gravity – e.g. in AdS/CFT – but if you want the final state $U^{-1}|initial\rangle$ to have a lower entropy than the initial one, you must carefully cherry-pick the initial one and it's exponentially unlikely that you will be able to prepare such an initial state, whether it is experimental preparation or a theoretical one. For "realistically preparable" initial states, the final states will have a higher entropy. This is true everywhere in physics and has nothing specific in the context of quantum gravity with black holes. Let me also say that the "white hole" microstates exist but they're the same thing as the "black hole microstates". The reason why these microstates almost always behave as black holes and not white holes is the second law of thermodynamics once again: it's just very unlikely for them to evolve to a lower-entropy state (at least if we expect this entropy drop to be imminent: within a long enough, Poincaré recurrence time, such thing may occur at some point). That's true for burned books, too. 
A "white hole" is analogous to a "burned book that will conspire its atomic vibrations and rearrange itself into a nice and healthy book again". But macroscopically, such "books waiting to be revived" don't differ from other piles of ashes; that's the analogous claim to the claim that there is no visible difference between black hole and white hole microstates, and due to their "very likely" future evolution, the whole class should better be called "black hole microstates" and not "white hole microstates" even the microstates that will drop entropy soon represent a tiny fraction of this set. My main punch line is that at the level of general reversibility, there has never been any qualitative difference between black holes and other objects that are subject to thermodynamics and, which is related, there has never been (and there is not) any general incompatibility between the general principles of quantum mechanics, microscopic reversibility, and macroscopic irreversibility, whether black holes are present or not. The only "new" feature of black holes that sparked the decades of efforts and debates was the causality. While a burning book may still transfer the information in both ways, the material inside the black hole should no longer be able to transfer the information about itself to infinity because it's equivalent to superluminal signals forbidden in relativity. However, we know today that the laws of causality aren't this strict in the presence of black holes and the information is leaked, so the qualitative features of a collapsing star and evaporating black hole are literally the same as in a book that is printed by diffusing ink and then burned. Luboš MotlLuboš Motl $\begingroup$ My latest take on this whole discussion is that what needs to preserved is the uncertainty principle (e.g. normally understood quantum complementarity). Information in the Shannon sense is viewed as freedom of choice, although we can distinguish between freedom of choice by the sender and noise (equivocation), the uncertainty is still viewed as information. From a classical point of view, the black hole represents a definite position and momentum state. The quantum argument is that complementarity is still preserved, and it could be potentially argued that (continued) $\endgroup$ – user11547 Oct 9 '12 at 9:59 $\begingroup$ the evaporation processes are driven by complementarity, which becomes more significant as the black hole mass becomes smaller (which also implies a reduction in the number of potential subsystems). If we incorrectly viewed quantum uncertainty as the result of hidden variables, then a loss of information would be viewed as a loss of those hidden variables. QM says no, this is not possible, the complementarity is intrinsic and can not be lost, so the information associated with uncertainty is preserved. $\endgroup$ – user11547 Oct 9 '12 at 10:03 $\begingroup$ The only flaw I can see in all this is the assumption, mentioned in the answer's 4th or 5th paragraph, that the universe had an initial state: It is nevertheless a serious one, given the resemblance between psychotic "ideas of reference" and the idea that we happen to exist at a "special time", whence we can date a beginning more reliably than we can date an end. That's evident in human life, but the notion that the universe modeled itself on us is a little extravagant, and itself implies a notion that both time AND entropy might run backward, with our surroundings working like a mirror. 
– Edouard Jul 27 '19 at 17:50

At arxiv.org/pdf/1907.05292.pdf, I've found a description of what I was getting at in yesterday's comment: The 2019 paper, on astronomically-realistic black holes (although it starts with a description of Schwarzschild BHs), puts what I was saying in terms of cosmology that I believe to be past- (as well as future-) eternal. Its math's rather opaque to me, so other comments on it would be welcome. – Edouard Jul 29 '19 at 12:03

Dear @Edouard, I am pretty sure that my argument doesn't depend on any assumptions about the distant past, e.g. on the existence of the Big Bang and/or a static Universe or something like that. The arrow of time doesn't have anything to do with any assumptions about the cosmological beginnings, it exists in every region of spacetime locally. – Luboš Motl Jul 30 '19 at 13:39

There are two equivalent descriptions for the same process in terms of the time-forward version and the time-reversed version. Externally, both look the same; some matter in a pure state collapses together into a dense state — a gravitational hole — and slowly, over time, it evaporates Hawking radiation until nothing is left of it. The totality of all the Hawking radiation remains pure.

In the time-forward version, matter collapses into a black hole with a future singularity, and ends there. Entangled Hawking pairs are produced just outside the horizon. One particle of each pair falls into the hole and hits the singularity. Postselection to an entangled state of the infalling Hawking radiation and infalling matter is imposed at the future singularity. The outgoing Hawking particles carry information about the infalling matter after postselection. Before postselection, it remains entangled with the infalling Hawking radiation.

In the time-reversed version, a white hole with a past singularity forms with a white hole horizon. Matter can only emerge from the white hole, not enter it, and all matter emerging from inside it was created at the past singularity. Matter just outside the white hole is still attracted gravitationally, but it only accumulates just outside the horizon, unable to penetrate it. Matter emerging from the past white hole singularity and crossing the horizon, but without enough escape velocity to leave the white hole's pull, also accumulates just outside the horizon. The shell of accumulating matter just outside the horizon quickly forms a black hole shell with a future singularity. This is the time-reversed firewall, which is very real in the time-reversed version. Postselection occurs at this shell. All the matter emerging from the past white hole singularity at the same location is entangled with each other. After postselection, matter emerging from the white hole singularity which has enough escape velocity carries information about the infalling matter which accumulates at the shell just outside the white hole horizon, because it was initially entangled with other matter which collides with the infalling matter at the shell, and they are postselected together in an entangled state.

Far away from the horizon outside the hole, both the time-forward and time-reversed versions look identical after their respective postselections. However, around the horizon and inside the hole, they look very different. This is "time reversal complementarity"! The time-reversed version of a time-forward version is a time-reversed version, and vice versa.
However, operationally, the only information one can have about these regions is that carried by concrete physical information emerging from the hole and recorded far outside it. Operationally, one can never tell the difference. Is there a firewall just outside the horizon? In the time-forward picture, no. In the time-reversed picture, yes. Far outside the hole, we can't tell the difference. Sure, we can send a probe to measure the presence/absence of a firewall, and beam the result outside. Then, external observers will see a signal telling them "I'm from the probe, and I don't see any firewall". However, in the time-reversed picture, there is a firewall shell, and the probe is thermalized there. After postselection, the beamed signal that external observers pick up originated from the past white hole singularity itself, which heads straight to the external observers. Prior to postselection, the radiation from the past singularity carries no such information, but after postselection, it does. This process looks conspiratorial, but then, the time-reversed picture works best when entropy is decreasing, i.e. with a reversed thermodynamic arrow of time, which isn't the case here. With a reversed time arrow, such conspiracies are the norm.

See also the related questions Why are white holes the same thing as black holes in quantum gravity? and What are cosmological "firewalls"?.

Harebrained and sophomoric

Think of the Reeh-Schlieder theorem for a vacuum. It states that the vacuum is an entangled state, even between spatially distinct regions. By acting upon a local region here on Earth by a local operator which is appropriately fine-tuned, you can create any arbitrary configuration of matter behind the moon. For a vacuum that is, but we're not living in a vacuum... Anyway, a black hole is filled with entangled Hawking radiation, which isn't exactly a vacuum. But the same principle applies. By a judicious choice of operator acting upon infalling Hawking radiation at the singularity, you can form an arbitrary configuration of matter for the outgoing Hawking radiation. The catch is, the operator acting at the singularity mustn't be unitary.

Udi Pintar

It is possible to realize lower-entropy states through fine-tuned projective measurements, or simple measurements where the environment has minimal degrees of freedom. In such a scenario the black hole microstates are not the same as the white hole microstates, and thermalisation due to a time-reversal scenario can happen.

Suresh Kumar.S
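To give a rough sense of the scales behind the $\exp(S_B-S_A)$ suppression discussed above, the standard Bekenstein–Hawking formulas can be evaluated numerically. The back-of-envelope sketch below is appended for orientation and is not part of the original exchange; it uses only the standard textbook expressions for the Hawking temperature, the horizon entropy, and the usual $5120\pi G^2 M^3/(\hbar c^4)$ evaporation-time estimate.

import math

# Bekenstein-Hawking numbers for a solar-mass black hole (SI units).
G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M = 1.989e30                                            # solar mass, kg

T = hbar * c**3 / (8 * math.pi * G * M * kB)            # Hawking temperature, K
S_over_k = 4 * math.pi * G * M**2 / (hbar * c)          # horizon entropy / k_B
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)   # evaporation time, s

print(f"T_H    ~ {T:.2e} K")            # ~6e-8 K
print(f"S/k_B  ~ {S_over_k:.2e}")       # ~1e77 (a comparable star: ~1e58)
print(f"t_evap ~ {t_evap/3.15e7:.2e} years")  # ~1e67 years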
Foreign connections and the difference they make: how migrant ties influence political interest and attitudes in Mexico

Lauren Duquette-Rury, Roger Waldinger and Nelson Lim

Comparative Migration Studies 2018 6:35
Received: 5 May 2017
Accepted: 6 August 2018

Beyond the economic and social effects of international migration, researchers show that regular exchanges between immigrants and stay-at-homes produce political spillovers in sending countries. As a broad body of literature demonstrates, most migrants maintain at least some form of contact with key connections back home, whether through long-distance communication, remittance sending, or in-person visits. We investigate whether exposure to international migration affects non-migrant citizens' political interest, awareness, and attitudes about the efficacy of elections, using longitudinal survey data from the Mexico 2006 Panel Study. We use a novel statistical approach combining the Doubly Robust estimation technique with propensity score weighting. Our results suggest that Mexican non-migrant citizens exposed to international migration through social connections and remittances are more likely to be politically aware than those without. We also offer theoretical pathways to explain how ideational and material resources embedded in migrant social networks influence the political interest of stay-at-home citizens.

Keywords: Transnational migration, Political interest, Social ties

A network-driven phenomenon, population movements across borders inherently and recurrently generate home country spillovers. While connections linking points of origin and destination cannot trigger migrations, once created they keep migration and information flowing: Cross-border connections enable informational exchanges between migrants and stay-at-homes about opportunities found abroad; support to newcomers; and the adoption of new forms of consumption, behaviors, and attitudes learned in the society of destination. For these reasons, international migration is a self-feeding, path-dependent process, in which initiating causes reinforce feedbacks in both place of origin and place of destination. The transnational social networks that form around these connections between origin and destination are imbued with ideational and material resources affecting social, political, and economic life in origin countries.

Beyond the economic effects of international migration, which have been extensively examined, regular exchanges between migrants and stay-at-homes produce social and political spillovers in sending countries. Most migrants maintain at least some form of contact with key connections back home, whether through long-distance communication or in-person visits (Soehl & Waldinger, 2010). As the capacity for long-distance communication steadily grows – for reasons having to do with cost declines, the growing prevalence of communication technology in places of origin and technological changes making for more intimate contact (e.g. videocast) – and the costs of traveling also fall, these exchanges can yield the transmission of ideas, norms, expectations, skills and contacts acquired in the society of destination.
Capitalizing on the interest in economic remittances, Peggy Levitt advanced the concept of "social" remittances to characterize the transmission of ideas, norms, values, and behaviors transmitted through migrant social networks (Levitt, 1998). More recently, the concept of social remittances has given birth to the cognate idea of "political" remittances, in which the ongoing transnational exchanges between migrants and stay-at-homes serve as vehicles for "remitting" political experiences, ideas, values and expectations (Lacroix, Levitt, & Vari-Lavoisier, 2016; Piper, 2009). Unlike economic remittances – where the impact derives from the wage difference in sending and receiving country, for example – non-material remittances become channeled to the political arena only when the migration entails a move across institutionally distinctive polities. In this light, exposure to the disparate characteristics of the receiving polity, whether more democratic, peaceful, representative, participatory, accountable, or institutionally more predictable (or less), leads migrants to remit implicit and explicit political ideas, preferences, and behaviors related to their new, possibly "enriching" experiences back home to their compatriots (Batista & Vicente, 2011). In this paper, we investigate if migrant transnational network ties affect non-migrant citizens' political interest in and attitudes about Mexican elections. We focus on citizens' interest in, following of, and attitudes about elections, all of which serve as an important precursor to formal political behaviors such as voting (Córdova & Hiskey, 2015; Rosenstone & Hansen, 1993; Verba, Schlozman, & Brady, 1995). While a growing body of research explores whether migration enhances (or stymies) political behaviors and attitudes in origin countries and has yet to reach consensus (Bravo, 2009; Careja & Emmenegger, 2012; Chauvet, Gubert, & Mesplé-Somps, 2016; Chauvet & Mercier, 2014; Córdova & Hiskey, 2015; Goodman & Hiskey, 2008; Meseguer, Lavezzolo, & Aparicio, 2016; Pérez-Armendáriz & Crow, 2010; Rother, 2009; Rüland, Kessler, & Rother, 2009), we add to this growing literature by focusing on the role of migrant transnational social ties and remittances in affecting non-migrants' interest in and following of politics. We assess the political consequences of international migration in Mexico, a democracy with substantial emigration, through an analysis of the Mexican 2006 Panel Study. The Mexican elections study is a high quality, nationally representative, longitudinal survey, fielded at two intervals prior to the national election and then a third time after votes were cast. Any presidential election provides a strategic opportunity for examining political attitudes, as the publicity and mobilization it generates awakens political interest that might otherwise lie dormant. That generalization especially applies to this particular election: the next to occur after the precedent breaking 2000 Mexican presidential election, in which the PRI – Mexico's heretofore ruling party – was swept out of office for the first time since the Mexican Revolution. As a result, the 2006 election entailed intense political competition. Moreover, it provided the very first chance for Mexican citizens living abroad to vote in homeland elections. While a variety of factors kept emigrant participation low (Leal, Lee, & McCann, 2012), the election nonetheless stimulated interest among immigrants in the United States. 
Hence, in analyzing this survey we gain the capacity to identify any migration-related effects at a time when political interest across the electorate is high. Additionally, the survey provides leverage on this question as, in addition to collecting standard political and demographic data, the 2006 survey also asked questions related to contact with migrant kin and receipt of remittances, making it possible to identify micro-level connections between emigrant relatives living abroad and non-migrants residing in Mexico, as well as the transfer of remittances from the former to the latter. Unfortunately, the 2012 survey did not include questions relating to immigration. The availability of data on both remittances and migrant social ties in the 2006 survey permits us to analyze the extent to which these cross-border connections yield independent effects on political interest or if more variation can be explained by considering the weight that these social and economic connections produce jointly.

We use a statistical approach called the Doubly Robust (DR) estimation technique (Bang & Robins, 2005) that blends propensity score (PS) weighting and familiar linear regression models. This approach helps to mitigate the selection effects often associated with emigration and political outcomes, since the DR technique is a rigorous approach that controls for observed confounders using observational data. As the name suggests, the DR technique is superior to either the PS analysis or regression models alone because it yields results that remain consistent even if one of the two underlying models is misspecified.

We find compelling evidence that Mexican citizens with migrant social network ties are more likely to report being interested in politics, talking about politics, and have more critical opinions of Mexican elections than Mexican voters without migrant ties. However, familial social ties and receiving monetary remittances do not impart the same political effects across interest in, awareness of, and attitudes towards Mexican elections. First, Mexican voters with one or more close relatives in the U.S. but who do not report receiving migrant remittances are more likely to report talking about politics than individuals without family ties to migration. Second, receipt of remittances, though it does not appear to affect political talk, does affect the likelihood that respondents report being interested in politics: those non-migrants with relatives in the U.S. and who also receive remittances are more likely to be interested in politics and be critical of Mexican elections than individuals with only a family member abroad or with no migrant network ties at all, all other things equal. In general, we find that individual exposure to international migration drives political interest in and attitudes about Mexican elections, although social ties and remittances have independent and joint effects.

We structure the paper as follows. First, we theorize how migrant cross-border connections affect political awareness of stay-at-home citizens through several potential pathways. Next, we discuss the research design and methodological approach, followed by a presentation of the results and a discussion of their significance. Finally, we examine how well our findings on international migration and political awareness in the Mexican context may travel to other countries with extensive out-migration.

The U.S.–Mexican corridor is the most frequently crossed border in the world. As of 2012, 11.4 million immigrants born in Mexico reside in the United States, accounting for two thirds of the U.S. Hispanic population (Gonzalez-Barrera & Lopez, 2013). As a large body of research shows, in addition to retaining cultural pride for the homeland and maintaining religious practices across borders (Smith, 2006), most migrants also stay connected to Mexico either through return visits, phone calls and video conferencing or sending monetary remittances (Waldinger, Soehl, & Lim, 2012). As of 2016, remittances to Mexico hit a record $27 billion, surpassing oil revenues for the first time (Ratha, 2016).
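For readers unfamiliar with the doubly robust approach described above, its augmented inverse-probability-weighting (AIPW) form can be sketched in a few lines. This is an illustrative sketch only: the function and variable names are placeholders, and the simple logistic/linear models shown are not the authors' actual specification.

import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(X, t, y):
    """Illustrative AIPW (doubly robust) average-treatment-effect estimator.

    X: covariate matrix; t: binary exposure (e.g., migrant ties);
    y: outcome (e.g., a political interest score).
    """
    # 1) Propensity score model: P(t = 1 | X)
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)          # avoid extreme weights
    # 2) Outcome regressions fit separately within each exposure arm
    mu1 = LinearRegression().fit(X[t == 1], y[t == 1]).predict(X)
    mu0 = LinearRegression().fit(X[t == 0], y[t == 0]).predict(X)
    # 3) Augmented estimator: consistent if either (1) or (2) is correct
    return np.mean(mu1 - mu0
                   + t * (y - mu1) / ps
                   - (1 - t) * (y - mu0) / (1 - ps))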
As of 2012, 11.4 million immigrants born in Mexico reside in the United States accounting for two thirds of the U.S. Hispanic population (Gonzalez-Barrera & Lopez, 2013). As a large body of research shows, in addition to retaining cultural pride for the homeland and maintaining religious practices across borders (Smith, 2006), most migrants also stay connected to Mexico either through return visits, phone calls and video conferencing or sending monetary remittances (Waldinger, Soehl, & Lim, 2012). As of 2016, remittances to Mexico hit a record $27 billion surpassing oil revenues for the first time (Ratha, 2016). More recently, researchers have begun to consider the political consequences of international migration for countries of origin. Researchers are assessing how absentee voting affects election outcomes (Leal et al., 2012; McCann, Cornelius, & Leal, 2009; Nyblade & O'Mahony, 2014; O'Mahony, 2013), how migrant remittances affect governance and public goods provision (Adida & Girod, 2010; Burgess, 2012; Duquette-Rury, 2014; Duquette-Rury, 2016; Pfutze, 2012); and how migrant absence and return affect political engagement and attitudes about democracy and public policy (Bravo, 2009; Chauvet & Mercier, 2014; Chauvet et al., 2016; Córdova & Hiskey, 2015; Dionne, Inman, & Montinola, 2014; Duquette-Rury & Chen, 2018; Goodman & Hiskey, 2008; Meseguer et al., 2016; Pérez-Armendáriz, 2014; Pérez-Armendáriz & Crow, 2010). Since researchers have been largely remiss in characterizing the political features of international migration for sending countries, this growing body of research is a welcome addition to the inter-disciplinary literature on the causes and consequences of international migration. Still, there is a great deal more we need to understand about how international migration influences the political landscape in migrant origin countries. We build on this existing research and identify the extent to which non-migrant citizens' social networks ties and remittance transfers translate into political interest during an election campaign. Since, as Verba et al. (1995) argue, political interest is often a prerequisite of political participation, the extent to which citizens are politically interested tells us something about how likely they may be to participate in formal and non-electoral forms of politics (Córdova & Hiskey, 2015). Those indirect channels of influence linking migrants abroad to politics at home, via their non-migrant relatives, are likely to comprise the key vector of migrant political influence, at least in Mexico, as absentee voting of Mexican nationals in the U.S. has been shown to do very little to directly affect elections (Leal et al., 2012; McCann et al., 2009). Thus, we advance the literature by showing how migrant remittances and social networks modify the Mexican electorates' political interest and awareness during presidential elections. The bulk of the comparative scholarship in the U.S. and beyond explains political interest and engagement using the standard socioeconomic model – education, income and occupation. Since interest and attitudes are the property of individual preferences and psychological orientations, much of the literature concentrates on the predictive capacity of different socioeconomic characteristics. 
However, other studies convincingly show how individual political orientations are also affected by information sharing, recruitment, and mobilization occurring through interpersonal social connections among acquaintances, friends, and relatives, and by institutional features of the political system that both constrain and encourage interest and engagement (Holzner, 2007; Rosenstone & Hansen, 1993). In this article, we analyze how interested citizens are in politics, how important politics is in their daily lives, and the degree to which they believe in the efficacy of Mexican elections. While we cannot evaluate every measurable facet of political engagement in this study (for example, political behaviors), we suspect, following the seminal research of Rosenstone and Hansen (1993) and Verba et al. (1995), that non-migrant political interest informs some understanding of political behaviors and personal political efficacy. As Baker (2006) finds in his analysis of regionalized voting behavior in Mexican national politics, voters do not make political decisions in a social vacuum. Rather, the Mexican electorate makes political decisions amidst ongoing interpersonal interactions and exchanges with political discussion partners. Baker contends that the voting public discusses politics and "openly deliberate[s] over their choices with family and friends, accepting advice and new information from others while at times attempting themselves to persuade...citizens are embedded in social networks that sustain politically relevant interpersonal exchanges" (Baker, 2006, p. 6). Those networks can span territorial boundaries because, in opting for life in a new country and leaping over the borders separating home and host societies, international migrants paradoxically knit those societies together (Mouw, Chavez, Edelblute, & Verdery, 2014).

Cross-border communication and political learning

Cross-border exchanges are not universal, but they are nonetheless prevalent, as only a minority of migrants completely severs ties to close associates still living in the country of origin. Although, over time, family networks tend to shift their center of gravity from sending to receiving country, that process is long, uncertain, and rarely complete. Inertia exercises significant influence on the location of kinship ties: older parents are less likely to leave; in turn, their continuing home country residence is a constraint on the movement of others, as demands for parental care can keep adult children in place. Those persisting ties motivate continuing contacts; as noted in a recent study of migrants in Spain, frequent contact is more prevalent among those with immediate kin still in the home country, as opposed to those whose closest family ties had themselves emigrated (Park & Waldinger, 2017). These ongoing, long-distance conversations provide one channel for the flow of political information. As shown by Pérez-Armendáriz (2014) in a study of cross-border conversation among Mexican immigrants in the United States and their relatives at home, those exchanges principally revolved around practical matters related to family well-being, health, everyday life, and future plans. Nonetheless, in-depth analysis of these same conversations showed that they contained political content, as the emigrants conveyed both information about politically relevant experiences in the United States and opinions regarding the political implications for Mexico.
Other forms of cross-border connectedness provide a more proximal basis for the acquisition and transmission of political information. Though visiting is more occasional than communication, home country travel is widespread; those in-person visits yield opportunities for the transmission and acquisition of political information that can only be gleaned in situ, as when a visit coinciding with a homeland political campaign brings the migrant face-to-face with the politics that she had left behind. Moreover, migration itself may trigger homeland responses that directly transmit political signals. Thus, since long-term, large-scale migrations frequently yield return visits that are recurrent and patterned, as in the annual pilgrimages made by countless Mexican migrants for a one-week celebration of their hometown's patron saint (Massey, Alarcón, Durand, & González, 1987, pp. 143–145), they can also lay the basis for institutionalized contact with homeland political leaders, who make their presence known to the otherwise absent sons and daughters (Fitzgerald, 2008). Last, the migratory circuit itself may yield a strong sense of home community membership, as exemplified by the growing number of hometown associations. Though these organizations are locally focused and oriented towards philanthropy, they necessarily connect migrants and their hometown networks with politics (Duquette-Rury, 2016). Thus, the persistence of cross-border ties yields political inputs, providing migrants with multiple opportunities to stay abreast of developments at home, years of physical separation notwithstanding. Those connections also provide the channel for political outputs, via communication of the lessons learned as a result of the experience of movement to a more democratic polity with better functioning institutions. International migrants all begin as foreigners, and therefore spend some significant portion (possibly all) of their lives in the destination country outside the polity. Nonetheless, presence does yield basic personhood rights: foreigners have the capacity to engage in a broad variety of non-electoral, political, and civic activities; as they often do so (though at highly variable rates), these modes of participation can provide instruction in democratic processes and also increase a sense of individual political efficacy. Indeed, as found by Pérez-Armendáriz, Mexican immigrants communicating with their relatives in Mexico shared "their understandings of and experiences with public and political life in the USA, including norms, values, and practices" (2014, p. 75). Fostering the development of those understandings are the efforts maintained by numerous organizations – unions, schools, and civic associations – to reach out to or mobilize immigrant community members regardless of legal status in the destination country (Leal, 2002). Since foreigners are not segregated into migrant-only communities, but rather live alongside (and often with) citizens, they gain exposure to the political messages directed at otherwise similar neighbors who, however, possess the right to vote. As demonstrated by the immigrant rights marches that swept U.S. cities in 2006, those messages can be directed at a non-citizen population with great effect (Zepeda-Millán, 2017). Second, citizenship acquisition and the political opportunities it creates can yield even greater effects.
The very process of acquiring citizenship entails greater attention to the destination polity and its characteristics (Waldinger & Duquette-Rury, 2016). New citizens obtain the right to vote, which, when exercised, entails a significant uptick in political activity, and probably attention, relative to the prior years of exclusion from the polity. Insofar as citizenship acquisition is one component of a broader process of assimilation in which immigrants' exposure to the citizenry progressively grows, so too will their connection to networks of more politically engaged persons from whom more political information may flow. Consequently, we hypothesize that:

H1: In Mexico, persons with ties to relatives living outside the country will exhibit higher levels of interest in politics as compared to those with no ongoing cross-border social ties.

H2: In Mexico, persons with ties to relatives living outside the country will talk about politics more frequently with discussion partners as compared to those with no ongoing cross-border social ties.

H3: In Mexico, persons with ties to relatives living outside the country will exhibit more critical views of Mexican politics as compared to those with no ongoing cross-border ties.

As noted above, the unbundling of cross-border ties is often a highly protracted process, with the result that migrants long retain connections and commitments to the people and places left behind. Those connections provide the channels for the circulation of information, ideas, and opinions going back and forth between place of origin and destination. Yet one still has to ask why migrants retain an interest in home country political matters and why stay-at-homes might heed the preferences of their relatives who have opted for life in another country. The likely answer lies in an insight conveyed by the "new economics of labor migration": namely, that the very decision to leave home was embedded in family-level processes, which subsequently exercise long-term influence (Stark & Bloom, 1985). Cross-border ties frequently advance the ends of both migrants and stay-at-homes since, in developing societies, emigration is often undertaken without the goal of immigration: rather, relocating to a developed society takes place so that emigrants can gain access to the resources that can only be found there. In turn, those gains get channeled back home in order to stabilize, secure, and improve the options of the kin network remaining in place. The stay-at-homes are not just receiving help, but also providing it, whether caring for the children or elderly parents left behind, attending to the house that the migrant has built with her remittances or the property that he owns (or hopes to inherit), or providing assistance when trouble strikes in the host country or some home country document is needed to stabilize the host country situation. These interdependencies give the migrants a stake in political developments on the ground back home, while similarly disposing the stay-at-homes, who receive help from the migrants, to respond to the latter's preferences. The intertwined survival strategies of both migrants and stay-at-homes might explain both why migrants might be motivated to communicate political lessons learned abroad and why stay-at-homes would be inclined to listen to those messages.
If so, the set of persons responding most intently to messages transmitted from the country of immigration will comprise only a subset of all persons with ties to migrants abroad, as, with settlement and the shifting of key kinship ties from home to host societies, cross-border interdependency drops. The sending of remittances lies at the core of those interdependencies. Though cross-border communication – whether by phone, social media, or email – is more common than the materially demanding sending of remittances, remittances comprise a prevalent migrant social practice of huge economic significance. Remittances are typically transmitted electronically and, strictly speaking, convey money, not talk. Nonetheless, the sending of remittances, as Lacroix (2015) has contended, inherently involves a communicative act, one that is embedded in the ongoing, complex negotiations that keep long-distance, cross-border contacts alive. The sending of remittances also comprises but one piece in a set of connected interchanges, as remittances are often triggered by requests for help from recipients, whereas senders are likely to contact receivers to verify that the funds have arrived and to inquire into their use. Moreover, while remittance sending may be stimulated by material concerns, the multi-faceted nature of remittances – involving a broad range of scripts, as contended by Carling (2014) – as well as the likely ambiguities as to how these exchanges are to be understood at both sides of the chain, suggests that communications concerning the flow of money are unlikely to focus on that matter alone. And as noted above, even though those communications are likely to pivot around practical matters, political issues are often present. Distance allows the migrants to exercise influence, but not control, which is why the way in which remittances are spent often emerges as a source of conflict in the cross-border relationship: in particular, migrants prefer that remittances be used for savings or investment, whereas recipients show a preference for consumption. Paradoxically, the response to this dilemma takes the form of the frequent sending of relatively small sums, a strategy that imposes significant material costs (in the form of transaction fees), but may also be one that provides the migrant with greater control (Yang, 2011). Yet, for our concerns, this same strategy suggests that remittance senders and receivers will be in more regular contact than those stay-at-homes who are managing without assistance from relatives abroad. Hence, the relationship between remittance senders and receivers may be one that is particularly conducive to the transmission of political messages from societies of immigration to societies of emigration. On the other hand, just as recipients tend to have the last word over how remittances are spent, so too can they decide how to respond to political signals sent by the immigrants, possibly concluding that the monies coming from abroad let them buy the resources that governments might otherwise provide, making political detachment rather than interest the more rational response.

H4: In Mexico, the combination of cross-border ties and remittance receipt will heighten political interest: persons with relatives living in the United States who also receive remittances will exhibit greater interest in Mexican politics, as compared to those who have relatives in the United States but do not receive remittances, as well as those with no ongoing cross-border ties.
H5: In Mexico, the combination of cross-border kinship ties and remittance receipt will increase political talk: persons belonging to cross-border kinship networks who also receive remittances will talk with discussion partners more frequently about Mexican politics, as compared to those who only belong to cross-border kinship networks but do not receive remittances, as well as those with no ongoing cross-border ties.

We use the Doubly Robust (DR) estimation technique in all our specifications (Bang & Robins, 2005). DR estimation is a technique for estimating treatment effects in observational studies across a treated group and a non-treated control group. Since the treatment, in this case exposure to international migration through migrant social ties and remittances, is not randomly assigned across the two groups, it may be that the two groups differ in pretreatment characteristics, resulting in selection bias. The DR method combines propensity score (PS) weighting with regression analysis to alleviate common concerns about selection effects when using observational data. Using the propensity score weights together with statistical regression models, we are able to approximate, holding all other observed factors constant, what the political interest of respondents without migration exposure (control group) would be if they had had ties to migrants abroad and/or received remittances (treatment). The DR approach thus allows us to create two groups of respondents that are most similar but for their exposure to migration, in order to isolate the potential effects of migrant social ties and remittances on the political interest, talk, and attitudes of non-migrant citizens in Mexico. Additionally, the DR estimation controls for a wide range of potentially confounding variables that relate to both migration and the political outcomes of interest, and the resulting estimates are robust against failures to meet assumptions about the propensity score model or the regression model. This approach gives us greater confidence that the causal inferences we draw from the Mexican elections data are more likely attributable to migration exposure than to individual-level characteristics associated with who migrates. There are several steps in the DR estimation technique. The first step is to compute a propensity score for each respondent in the control group. A respondent's propensity score is her probability of being in the treatment group (having a U.S. social tie or receiving remittances), conditional on the observable characteristics. Rosenbaum and Rubin (1983) show that we can remove the confounding influence of the observable characteristics by comparing outcomes of observations in the treatment and control groups with similar propensity scores. We estimate propensity scores for various comparisons using an approach proposed by McCaffrey, Ridgeway, and Morral called the generalized boosted model (GBM) (McCaffrey, Ridgeway, & Morral, 2004). The GBM is a flexible, nonparametric estimation technique, based on regression trees, that captures the relationship between the respondents' characteristics and the treatment indicator.1 Because of its flexibility and the fewer assumptions it requires compared to linear logistic regression models, the GBM outperforms alternative models in comparative analyses (Westreich, Lessler, & Funk, 2010).
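To make this first step concrete, here is a minimal sketch, in R, of how GBM propensity scores might be computed with the gbm package the authors cite. The data frame panel, the treatment indicator us_tie, and the covariate names are hypothetical placeholders rather than the study's actual variable names, and the tuning values are illustrative only:

library(gbm)

# Treatment indicator: 1 = has a close relative in the U.S., 0 = otherwise.
# Covariates mirror those listed in the design discussion (education, income,
# region, age, gender, marital status, urban residency, church attendance).
ps_model <- gbm(us_tie ~ education + income + region + age + gender +
                  married + urban + church_attendance,
                data = panel,
                distribution = "bernoulli",  # binary treatment indicator
                n.trees = 5000,
                interaction.depth = 3,
                shrinkage = 0.01)

# Estimated probability of being in the treatment group given covariates
panel$pscore <- predict(ps_model, newdata = panel,
                        n.trees = 5000, type = "response")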
Once the GBM estimates the propensity scores, they can be used to weight respondents in the control/comparison group to match the distribution of observed characteristics of the treatment group, or

$$ f(x \mid \text{treatment group}) = w(x)\, f(x \mid \text{control group}), $$

where x is the vector of observed characteristics. McCaffrey et al. (2004) show that solving for w(x) and applying Bayes' theorem produces:

$$ w(x)=\left[\frac{f\left(t=0\right)}{f\left(t=1\right)}\right]\left[\frac{f\left(t=1 \mid x\right)}{1-f\left(t=1 \mid x\right)}\right]. $$

Since the leading factor does not vary with x, the above equation suggests that, in order to remove any difference in observed characteristics between the two groups, respondents in the control group should receive a weight proportional to

$$ \frac{\Pr\left(\text{treatment} \mid x\right)}{1-\Pr\left(\text{treatment} \mid x\right)}, $$

which is the odds of being a Mexican citizen with cross-border connections, given the observed characteristics. Since we are using a survey sample, we multiply these propensity score weights by the survey weights (Dugoff, Schuler, & Stuart, 2014). The effectiveness of using these propensity score weights to balance the treatment and control groups is apparent in Table 1: the distributions of observed characteristics between observed "Migrant Respondents" and weighted "non-Migrant Respondents" are more similar than the distributions between observed "Migrant Respondents" and observed "non-Migrant Respondents."

[Table 1. Distribution of selected characteristics by migration treatment and control groups. Columns: Non-Migrant Families; Migrant Families; Weighted Non-Migrant Families. Rows cover education (no schooling; incomplete primary; complete primary; incomplete middle school/technical school; complete middle school/technical school; incomplete high school; complete high school; incomplete college; complete college or more), income brackets (0 to 1,299; 1,300 to 1,999; 9,200 to 10,499; 10,500 or more; intermediate brackets not recoverable), city type, and church attendance (including "only on special occasions"). Cell values are not recoverable from the source. Source: Lawson et al. (2007); authors' calculations using doubly robust estimation with propensity score weighting in R.]

Although the weighted comparison does attempt to control for the observed characteristics, we take an additional step and perform a DR analysis with a weighted generalized linear model (GLM), here a logistic regression, controlling again for the observed characteristics. As we stated, DR methods are superior to either the propensity-score-weighted comparison or the parametric regression alone because the results remain consistent even if either the propensity score model or the regression is misspecified. In each specification, the estimator computes the average difference between each treatment respondent's outcome and what the GLM predicts the respondent's outcome would have been had they been in the control group.

We assess the role of international migration on political interest, political talk, and attitudes in Mexico using data from the Mexico 2006 Panel Study (Lawson et al., 2007). The Mexico Panel Study is a national survey instrument fielded between 2005 and 2006 by Reforma newspaper's Polling and Research Team. Team members conducted in-person interviews with selected Mexican voters at their residences before and after the presidential election held on July 2, 2006. Respondents selected for the first panel wave (the response rate was 34%) were re-contacted once prior to the national election and once after the election, for a total of three panel waves; the re-contact rates for the second and third panel waves were 74 and 67%, respectively.
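Before turning to the data, the weighting and DR steps just described can be sketched in R as well, continuing the hypothetical variables from the previous sketch; svy_wt stands in for the survey design weights, and interest for a binary outcome defined in the next section:

# Odds weights: treated respondents keep weight 1, control respondents
# receive the odds of treatment given their observed characteristics
panel$w <- ifelse(panel$us_tie == 1, 1,
                  panel$pscore / (1 - panel$pscore))

# Multiply the propensity score weights by the survey weights
panel$w <- panel$w * panel$svy_wt

# Weighted logistic regression of the outcome on treatment and covariates
glm_fit <- glm(interest ~ us_tie + education + income + region + age +
                 gender + married + urban + church_attendance,
               family = binomial, weights = w, data = panel)

# DR estimate: average gap between each treated respondent's outcome and
# the model's prediction had that respondent been in the control group
treated <- subset(panel, us_tie == 1)
pred_if_control <- predict(glm_fit,
                           newdata = transform(treated, us_tie = 0),
                           type = "response")
dr_estimate <- mean(treated$interest) - mean(pred_if_control)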
In addition to the national survey sample (N = 1600), two additional oversamples were collected, including one for Mexico City (N = 500) and one for villages in rural areas in the states of Chiapas, Jalisco, and Oaxaca (N = 300). In total, 2400 interviews were conducted for the first wave, and 1776 and 1594 respondents were successfully re-interviewed in the second and third waves, respectively.2 The multiple panel waves before and after the national election give us an opportunity to assess how migration exposure affected political awareness and attitudes over the course of the electoral campaign.

Measuring international migration

The survey asked participating respondents questions related to candidates, political parties, policy preferences, mass media, and political engagement, as well as a few questions about international migration relevant to our study. In the first wave, participants were asked whether they had a close relative living in the United States, while in the second wave, re-contacted respondents were asked whether they or anyone in the household received money from someone living in the United States. These two questions form the basis of the two related but theoretically distinct "treatment effects" of international migration exposure we evaluate in the analyses.3 In separate estimations, we study both the independent and the joint effects of having a relative living in the U.S. and receiving remittances during the presidential election cycle. While other studies have argued that having a close relative in the U.S. and receiving remittances are correlated, and that including both measures would therefore lead to multicollinearity and inflated standard errors in OLS model estimates (Bravo, 2009), we believe that having a relative in the U.S. and receiving remittances, as opposed to only having a relative in the U.S., might convey different kinds of political information affecting non-migrant political interest. In our sample, receiving remittances and having a U.S. relative are indeed correlated,4 as individuals receiving remittances must know someone in the U.S. in order to receive contributions from migrants, but there is no evidence of multicollinearity. We proceed by studying, first, whether international migration affects the political interest of non-migrants in any capacity and, second, whether there is something systematically different in the political awareness of those respondents who also receive material resources from kin abroad in the U.S. Since we use the doubly robust estimation technique in conjunction with propensity score weighting to avoid selection bias in our estimations, inflated standard errors arising from multicollinearity are less of a concern, and we are able to evaluate the independent and joint effects of having a relative in the U.S. and receiving remittances in the logistic regression. In the present sample, the majority of respondents do not receive remittances from the U.S. (87%), but many do have a close relative living in the U.S. (46%). Of those respondents that report having a U.S. relative, 20% also receive remittances while 80% do not. This provides additional support for analyzing the effects of social ties and material transfers separately.
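As a simple illustration of the association reported here (the endnotes give the Pearson chi-square statistic of 120.91), the dependence between the two exposure indicators can be checked directly; variable names are again hypothetical:

# Cross-tabulate having a U.S. relative against receiving remittances
exposure_tab <- table(panel$us_tie, panel$receives_remit)
chisq.test(exposure_tab)  # the paper reports chi-square = 120.91, p < 0.001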
In addition to controlling for standard socioeconomic predictors of political interest, including income, education, and race, we also account for other individual and contextual covariates which have been shown to affect political behavior.5 First, as Baker (2006) and Klesner (1995) show, Mexico has deep regionalized political cleavages, related to wealth disparities, urban-rural divides, political discussion partners, and religion, that affect voting behavior. We include dummy variables for Mexican region (north, center, metro, south), omitting one region in the analysis. The region variables also provide a control for areas with higher levels of Mexican emigration, as traditional Mexican sending states are concentrated in the central-western part of the country. Second, we include categorical measures of marital status, age, gender, and urban residency to account for the effects of social connectedness, urbanization, and other demographic characteristics on political interest. Finally, we capture a measure of religiosity in the form of frequency of church attendance, which is often a significant predictor of political interest.

Political interest, political talk and the efficacy of Mexican elections

The Mexico 2006 Panel Study provides two distinct measures of political awareness, which we use in our analysis. First, we include a measure of political interest based on the question: How much interest do you have in politics? (a lot, some, a little, or none). We use a dichotomous measure of political interest which takes the value of 1 if the respondent reports having any interest in politics and 0 otherwise. Second, we evaluate the frequency with which respondents report talking about politics with other people, based on the following question: How often do you talk about politics with other people? (daily, a few days a week, a few days a month, rarely, never). Again, we dichotomize the outcome variable and code the value as 1 if respondents report talking about politics at all, and 0 otherwise. We use binary dependent variables in the statistical analysis for ease of interpretation, as the DR approach is challenging to interpret with multi-category dependent variables. However, we also assess the full categories for political interest and political talk as continuous variables and discuss these results in the next section.6 Finally, we evaluate the role of international migration in shaping opinions of Mexican elections. The survey asks respondents if they agree or disagree that elections in Mexico are free and fair. The dependent variable is a dichotomous variable that takes the value of 1 if respondents agree and 0 if they disagree. As described in the data section, the DR estimator first estimates the propensity scores by GBM and then fits weighted generalized linear models using the propensity score weights as observation weights. We also include survey weights in both the PS estimations and the weighted regression models. The DR estimator computes the average difference between each treatment case's outcome (migration ties and remittances) and what the GLM predicts the respondent's outcome would have been had the respondent been in the control group.
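The coding just described amounts to collapsing ordinal survey responses into binary indicators; a sketch, assuming the raw responses are stored in hypothetical factors:

# 1 if the respondent reports any interest in politics
# ("a lot", "some", "a little"), 0 if "none"
panel$interest <- ifelse(panel$interest_raw %in%
                           c("a lot", "some", "a little"), 1, 0)

# 1 if the respondent reports talking about politics at all, 0 if "never"
panel$talk <- ifelse(panel$talk_raw == "never", 0, 1)

# 1 if the respondent agrees that elections are free and fair, 0 if disagrees
panel$fair <- ifelse(panel$fair_raw == "agree", 1, 0)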
We report the DR estimations for political interest, political talk, and efficacy of Mexican elections for each wave, and the effects of migration exposure, in Table 2.7

[Table 2. Doubly robust estimations of migration treatments on political interest, talk, and efficacy of Mexican elections, panel waves 1, 2 & 3. Columns: Unweighted Average Outcome; Weighted Average Outcome for Treated if in Control; Treatment Group; DRϕ; Treated Sample Size. Outcome panels: Interest in Politics; Talk about Politics; Elections are Free and Fair. Cell values are not recoverable from the source. ϕ The doubly robust estimator is the unweighted treatment group average minus the weighted control group average. Note: effective sample size is after weighting. Signif. codes: p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001. Source: Lawson et al. (2007); authors' calculations using doubly robust estimation with propensity score weighting in R.]

In Table 2, we report the unweighted average outcomes for both the treatment and control groups, the weighted average outcome for the treatment group if they had been in the control group, and the DR estimator, which is the difference between the unweighted average outcome for the treatment group and the weighted average outcome for the treatment group if they were in the control group. In Table 3, we present the key findings for the political interest, political talk, and efficacy of Mexican elections models. We also report the effective sample sizes of the treatment and control groups after weighting in Additional file 1: Tables S1–3. The coefficient, standard error, and p-value for the coefficient of the treatment variable and all covariates are reported in Additional file 1: Tables S1–3.

[Table 3. Migration effect of immigrant relatives and remittances on political interest, talk, and efficacy of Mexican elections. Rows: Treatment 1: has immigrant relative in U.S.; Treatment 2: has immigrant relative in U.S. & receives remittances. Outcome panels: Interest in Politics; Talk about Politics; Efficacy of Elections. Cell values are not recoverable from the source. Control variables omitted. *p < 0.05, **p < 0.01, ***p < 0.001. Source: Lawson et al. (2007); authors' calculations using doubly robust estimation with propensity score weighting in R.]

Across the three panel waves, having migrant social ties and receiving remittances is a positive and significant predictor of political interest (treatment 2). In pre-election wave 1, of those that have only a relative living in the U.S. (treatment 1), 73% have at least some interest in politics, whereas slightly fewer of those without a close immigrant relative are interested in politics, at 69%. If those in the treatment group did not have immigrant relatives in the U.S., the percentage would be 78%, reducing the probability of having interest in politics by almost 6%. However, the p-value is not significant in any of the panel waves for those respondents who only have migrant social ties.8 By contrast, having both a close relative in the U.S. and receiving remittances is related to being interested in Mexican politics. Respondents that report both forms of migration exposure are more likely to be interested in politics, at the 10% level in the pre-election wave 1 and at the 5% level in the post-election wave 3. The results show a change in the sign of the DR estimator between the first and subsequent waves, suggesting respondents with a close migrant relative who also receive remittances became more interested in politics over the course of the 2006 election cycle.
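To fix notation for the discussion that follows (ours, not the paper's): letting T denote the n_T treated respondents and \hat{m}_0 the weighted GLM used to predict outcomes under control, the DR estimator reported in Table 2 is

$$ \widehat{\mathrm{DR}} = \frac{1}{n_T}\sum_{i \in T} Y_i \;-\; \frac{1}{n_T}\sum_{i \in T} \hat{m}_0(x_i), $$

i.e., the unweighted treatment-group average minus the propensity-weighted model's prediction of what the treated respondents' outcomes would have been in the control group.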
In wave 3, the unweighted average outcomes of respondents reporting interest in politics for the treatment and control groups were 83 and 82%, respectively, suggesting the treatment group was only slightly more likely to report interest than the control group. However, after the DR estimation, the weighted average outcome for the treatment group, had they been in the control group, decreases to 75%. This tells us that also receiving remittances through the migrant social network increases the probability of being interested in politics by 8%. Having a relative in the U.S. has a different effect on talking about politics than on political interest. While exposure to international migration operationalized as having close relative(s) in the U.S. (treatment 1) does not affect the probability of being interested in politics in any of the panel waves, it does have a substantive and significant effect on talking about Mexican politics in waves 1 and 3. The implicit and/or explicit political information transmitted through respondents' transnational social networks positively affects talking about politics with others, whereas the addition of receiving remittances through kin and friend networks has no joint impact. Having a close relative in the U.S. increases the probability of talking about politics by 4% in wave 1 and by 5% in wave 3.9 Respondents' social connections to U.S. family independently affect political talk, although they do not affect interest in politics unless respondents also receive remittances. We note, though, that in general many fewer respondents report talking about politics than report interest in politics: 30% fewer of the total treatment group sample and 33% fewer of the total control group sample talked about politics at all, compared to levels of political interest. Finally, in the post-election wave, respondents were asked whether they agree or disagree that Mexican elections are free and fair, giving us some insight into how efficacious Mexican voters believe elections to be. The DR estimates show that the unweighted treatment groups are more likely to agree that Mexican elections are free and fair (77% for treatment 1 and 76% for treatment 2) than the unweighted control groups (71 and 74%). However, the DR estimator shows that the weighted average outcome for the treatment group, had they been in the control group, increases the probability that respondents agree that elections are free and fair for treatment 2. Therefore, having a relative in the U.S. and receiving remittances reduces the probability that Mexican voters agree that elections are free and fair (by 8%). In other words, the ideational values and material resources respondents receive through their cross-border social connections to migrants in the U.S. make them 8% more likely to be critical of Mexican elections. Finally, we note that we estimate a third treatment effect, only receiving remittances from abroad but not from close family members, which captures 20% of the sample.
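In terms of the estimator defined above, the wave-3 interest result for treatment 2 is simply

$$ \widehat{\mathrm{DR}} \approx 0.83 - 0.75 = 0.08, $$

the 8-percentage-point effect just described.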
In none of the specifications does only receiving remittances have an effect on our measures of political interest and attitudes.10 In addition to reporting the treatment effect estimations, the DR models allow us to assess the relative influence of the covariates estimated and how well the propensity-score-weighted groups are balanced.11 First, region, education, and income are consistently the top three covariates correlated with both the migrant social ties and the remittances effects across all models, while the urban dummy variable is more strongly correlated with having both social ties and remittances, in addition to the other three variables.12 Second, the DR estimations report the weighted mean of each covariate for both the control and treatment groups, as well as Kolmogorov-Smirnov (K-S) test results.13 Results from the K-S test suggest that the treatment and control groups are well balanced across all covariates, with only very small differences between treatment and control. For example, the average K-S result for the DR models in which the treatment effect is significant is 0.0007.

The doubly robust models provide several key insights regarding the role of international migration in the political interest, talk, and attitudes about Mexican elections of the non-migrant Mexican electorate. The data reveal that cross-border social networks and monetary resources have independent and joint effects on three outcomes: talking about politics; being interested in politics; and believing that elections are free and fair institutions in Mexico. First, having migrant family ties independently explains the frequency with which individuals talk about politics before and after the presidential election. While we cannot describe the content or precise pathway of the political information migrant family members abroad share with their kin, we do know that social exposure to international migration, via kinship ties to relatives living in the United States, has a positive effect on non-migrants' political talk. To probe the role of social ties in political talk a little deeper, we also test whether the density of ties and the type of relationship have direct effects on non-migrant political awareness. The survey includes a question asking respondents to select the kinds of kinship ties they maintain in the U.S., including spouse, parent(s), children, sibling(s), uncle(s) and/or aunt(s), grandparent(s), cousin(s), niece(s) and/or nephew(s). We created two dichotomous variables from this question. First, we estimate the independent effect of relationship type with a variable that takes the value of one if the social tie was "close" (spouse, child, sibling) and zero if the social tie involved a "distant" relative. Relationship type had no effect on any non-migrant political awareness indicator. Second, we estimate whether the density of social ties plays a significant role in the political talk of non-migrants with their political discussion partners. Recall that we estimate the treatment effect of having "any" or "no" migrant relatives in the U.S. in the initial political talk model. DR estimates show that having two or more kinship ties in the U.S. does have a positive and significant effect on political talk in the pre-election and post-election panel waves: Mexican non-migrant voters are 5% more likely to discuss politics if they have two or more migrant social ties in the first wave, and 6% more likely in the last wave.
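The balance diagnostic described above can be illustrated for a single covariate with R's built-in two-sample K-S test. This is a rough, unweighted sketch only: the study's diagnostic compares the propensity-weighted distributions, and discrete covariates with many ties would need a different check:

# Two-sample Kolmogorov-Smirnov test of covariate balance on age
# (illustrative and unweighted; ks.test warns when ties are present)
ks.test(panel$age[panel$us_tie == 1],
        panel$age[panel$us_tie == 0])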
Neither the relationship-type nor the number-of-ties treatment variables had independent effects on any other dependent variable (political interest or perceived efficacy of elections).14 Our results suggest that interpersonal social connections stretching across borders are a necessary, but not sufficient, condition for increasing interest in politics or affecting opinions of Mexican elections. Receiving monetary remittances from migrant relatives abroad yields an additional effect that triggers more criticism of Mexican elections and more interest in politics generally. The acquisition of material resources from migrant kin abroad may reflect tighter social cohesion, and in turn, the implicit and explicit political information conveyed through the network may be of greater salience to the non-migrant recipient. While non-migrants benefit from international migration by receiving additional household revenue, which helps to mitigate risks even without making the sojourn abroad, stay-at-homes surely bear some of the burden of the migration, whether caring for family or property or keeping businesses afloat in migrants' absence. Receiving monetary resources from the U.S., which both enables and constrains current and future prospects for migrant households, may strengthen the quality of the social bonds in the transnational social network, and thus the political information and opinions shared by individuals become more salient to non-migrant voters. Whether the content of the political information is direct or indirect, or whether non-migrant recipients agree or disagree with it, is beyond the scope of this paper. The key insight our data reveal is that international migration influences the political interest of Mexican non-migrants through cross-border social networks when they also receive remittances through their network ties. Additionally, non-migrants with more migration exposure become more critical of Mexican elections and more interested in politics.

The political and social logic of international migration produces international families. Migrants and stay-at-homes pursue entwined survival strategies: migrants relocate to a developed society to gain access to the resources that can only be found there, in turn channeling those gains back home in order to stabilize, secure, and improve the options of the kin network remaining in place. However, the migrants are also dependent on the stay-at-homes, whether in providing care to the elderly or to children, looking after property and other investments, or furnishing assistance when problems in the society of destination compel the migrants to look homeward for help. In today's world, moreover, these decisions to build family economies across borders reflect the additional impact of receiving states' ever intensifying efforts to police national boundaries. While leaving home for life abroad requires both finances and social capital, those resources no longer suffice; migrants need to find a way through or around control systems. Since not every family member can penetrate borders with equal ease, those most able to cross go first. Consequently, other kin members are left home to wait, remaining there until a visa allows for legal passage or resources permit yet another unauthorized crossing. While the locus of the migrant's key connections tends to shift over time, that transition may be highly protracted.
Regardless of the motivation leading any one family or individual to leave home, the core familial network almost always moves gradually, erratically, and incompletely. As the migrant has but limited influence over the locational decisions made by the various persons comprising the kin network, some significant other is usually to be found at home. Because other commitments, such as property ownership, further keep emigrants rooted in the place from which they began, inertia exercises considerable weight. These connections may explain why all politics need not necessarily be local, but can, under the circumstances generated by migration, provide both the mechanisms and the motivations – among both migrants and stay-at-homes – for political signals to cross borders. On the other hand, the possession of ties to some relative abroad may be a necessary, but not sufficient, condition for the activation of political activity from afar. As time goes on, increasingly large portions of the home society are connected through social ties to relatives and friends living abroad. In a country like Mexico, with a century-plus-long history of migration, roughly one out of two persons living in Mexico has a relative living in the United States; a trait so commonly shared is unlikely to produce much political variance. Moreover, to exercise influence, those ties have to function as circuits of exchange, providing the vectors whereby information, ideas, and resources move back and forth from place of origin to place of destination (Lacroix et al., 2016). Yet keeping up those ties requires commitment, which is why migrants maintain cross-border connections in selective fashion. The motivation for doing so may be sapped as the core familial network shifts from country of emigration to country of immigration: the needs associated with life in the high-cost society of residence are likely to reduce the capacity to help out those abroad; over time, distance, separation, and exposure to a different society and way of life make the immigrants increasingly different from those they left behind. However, while cross-border ties are occasionally cut and often attenuated, in other cases migrants' material and affective commitments remain firmly implanted in the country of origin. Under those conditions, migrants and stay-at-homes remain interconnected, and cross-border political effects, including an increase in the political interest and awareness of stay-at-homes, may be more likely in country cases beyond Mexico. As this article shows, those ties provide reasons for migrants residing abroad to pay attention to political matters at home and for relatives remaining at home to attend to the preferences of their migrant kin who have opted for life in a foreign country.

The GBM is a downloadable R package developed by Greg Ridgeway that iteratively forms a collection of simple regression tree models to estimate propensity scores. The GBM's nonparametric nature reduces the chance of model misspecification.

Senior Project Personnel for the Mexico 2006 Panel Study include (in alphabetical order): Andy Baker, Kathleen Bruhn, Roderic Camp, Wayne Cornelius, Jorge Domínguez, Kenneth Greene, Joseph Klesner, Chappell Lawson (Principal Investigator), Beatriz Magaloni, James McCann, Alejandro Moreno, Alejandro Poiré, and David Shirk.
Funding for the study was provided by the National Science Foundation (SES-0517971) and Reforma newspaper; fieldwork was conducted by Reforma newspaper's Polling and Research Team, under the direction of Alejandro Moreno. http://mexicopanelstudy.mit.edu/

We note that the survey did not include a question probing for cross-border exchanges other than those involving remittances.

Pearson chi-square test = 120.91; p = 0.000.

Race serves as a proxy for indigenous respondents, who are the poorest and most marginalized groups in Mexico. Previous research shows that race and ethnicity play a role in shaping political interest, personal efficacy, attitudes, and engagement in the electoral process (see Leighley & Vedlitz, 1999).

We also evaluate how the frequency of following the presidential elections is conditioned by migration exposure, in specifications not reported due to space constraints. A question on the survey asks: How closely are you following the presidential campaign: a lot, some, a little, or none? We evaluate this outcome (positive and statistically significant for treatment 1, no effect for treatment 2), but do not report these effects.

Note: the question asking respondents whether they agree or disagree that Mexican elections are free and fair only appears in panel wave 3.

In the specifications in which we use all categories of response (a lot, some, a little, none) as continuous, we note positive and statistically significant effects for treatment 1 as well. Having a relative in the US increases the likelihood of political interest moving from one category ("some") to another ("a lot") by 20% on average. Some caution is warranted when interpreting ordinal variables as continuous, as doing so may violate some OLS assumptions. However, since the distribution is not skewed, the practical effect is minor, and the simplicity of interpreting an OLS should outweigh the technical correctness of an ordered logit (see Angrist & Pischke, 2008).

The DR results for the political talk models when using the full ordinal values (daily, a few days a week, a few days a month, rarely, never) and interpreting them as continuous yield similar, albeit stronger, results across all three panel waves (from 22 to 24 to 20% across waves 1, 2, and 3, respectively). For example, a respondent with a US social tie (treatment 1) is about 24% more likely, on average, to report talking about politics "a few days a week" rather than "a few days a month" in wave 3. These results are available by request.

We also assess the potential role of media exposure across treatment and control groups using a survey question that asks whether respondents watch news on TV and how frequently. Media exposure across groups was not systematically different and did not affect any of the regression results. We thank reviewer 2 for this suggestion.

The covariates' relative influence on the treatment effects estimated by GBM is not reported here. We also included an additional contextual variable indicating the percentage of international migrants living abroad, but the results were not statistically significant and produced no changes in the other covariates.

The K-S test compares two samples by quantifying the distance between their empirical distribution functions, in this case those of the treatment and control groups. We do not report the full results here due to space limitations, but will provide them on request.

We would like to acknowledge the helpful research assistance of Zhenxiang Chen, and we thank two anonymous reviewers for their insightful feedback.
The dataset supporting the conclusions of this article is available in the MIT Drupal Cloud repository, accessed here: http://mexicopanelstudy.mit.edu/.

All authors made substantial contributions to conception and design, or acquisition of data, and analysis and interpretation of data. LD-R and RW drafted the manuscript and revised it critically for important intellectual content. All authors read and approved the final manuscript.

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Additional file 1: Table S1. Migration effects of immigrant relatives and receiving remittances on interest in politics. Table S2. Migration effects of immigrant relatives and receiving remittances on political talk. Table S3. Migration effects of immigrant relatives and receiving remittances on efficacy of Mexican elections. (DOCX 36 kb)

University of California, Los Angeles, 264 Haines Hall, 375 Portola Plaza, Los Angeles, CA 90095-1551, USA
Fels Institute of Government, University of Pennsylvania, 3814 Walnut Street, Philadelphia, PA 19104, USA

Adida, C. L., & Girod, D. M. (2010). Do migrants improve their hometowns? Remittances and access to public services in Mexico, 1995–2000. Comparative Political Studies, 44(1), 3–27.
Angrist, J. D., & Pischke, J.-S. (2008). Mostly harmless econometrics: An empiricist's companion. Princeton: Princeton University Press.
Baker, A. (2006). Why is voting behavior so regionalized in Mexico? Social networks, political discussion, and electoral choice in the 2006 campaign (pp. 1–31). Paper presented at the 2006 Annual Meeting of the American Political Science Association, Philadelphia. Retrieved from http://scholar.google.com/scholar?oi=bibs&hl=en&q=related:-T3Ll6CWxnsJ:scholar.google.com/#2
Bang, H., & Robins, J. (2005). Doubly robust estimation in missing data and causal inference models. Biometrics, 61, 962–972.
Batista, C., & Vicente, P. (2011). Do migrants improve governance at home? Evidence from a voting experiment. The World Bank Economic Review, 25(1), 77–104. Retrieved from https://academic.oup.com/wber/article-abstract/25/1/77/1677266?redirectedFrom=fulltext
Bravo, J. (2009). Emigración y compromiso político en México [Emigration and political engagement in Mexico]. Política y Gobierno, Temático, XVI(1), 273–310. Retrieved from http://www.politicaygobierno.cide.edu/index.php/pyg/article/view/656/556
Burgess, K. (2012). Collective remittances and migrant-state collaboration in Mexico and El Salvador. Latin American Politics and Society, 54, 119–146.
Careja, R., & Emmenegger, P. (2012). Making democratic citizens: The effects of migration experience on political attitudes in Central and Eastern Europe. Comparative Political Studies, 45(7), 875–902.
Carling, J. (2014). Scripting remittances: Making sense of money transfers in transnational relationships. International Migration Review, 48(1_suppl), 218–262.
Chauvet, L., Gubert, F., & Mesplé-Somps, S. (2016). Do migrants adopt new political attitudes from abroad?
Evidence using a multi-sited exit-poll survey during the 2013 Malian elections. Comparative Migration Studies, 4. https://doi.org/10.1186/s40878-016-0033-z
Chauvet, L., & Mercier, M. (2014). Do return migrants transfer political norms to their origin country? Evidence from Mali. Journal of Comparative Economics, 42(3), 630–651.
Córdova, A., & Hiskey, J. (2015). Shaping politics at home: Cross-border social ties and local-level political engagement. Comparative Political Studies, 48(11), 1454–1487.
Dionne, K., Inman, K. L., & Montinola, G. R. (2014). Another resource curse? The impact of remittances on political participation (Afrobarometer Working Paper No. 145). Retrieved from http://www.afrobarometer.org/publications/wp145-another-resource-curse-impact-remittances-political-participation
Dugoff, E. H., Schuler, M., & Stuart, E. A. (2014). Generalizing observational study results: Applying propensity score methods to complex surveys. Health Services Research, 49, 284–303.
Duquette-Rury, L. (2014). Collective remittances and transnational coproduction: The 3×1 program for migrants and household access to public goods in Mexico. Studies in Comparative International Development, 49(1), 112–139.
Duquette-Rury, L. (2016). Migrant transnational participation: How citizen inclusion and government engagement matter for local democratic development in Mexico. American Sociological Review, 81(4), 771–799.
Duquette-Rury, L., & Chen, Z. (2018). Does international migration affect political participation? Evidence from multiple data sources across Mexican municipalities, 1990–2013. International Migration Review. https://doi.org/10.1177/0197918318774499
Fitzgerald, D. (2008). Colonies of the little motherland: Membership, space, and time in Mexican migrant hometown associations. Comparative Studies in Society and History, 50(1), 145–169.
Gonzalez-Barrera, A., & Lopez, M. H. (2013). A demographic portrait of Mexican-origin Hispanics in the United States. Washington, DC: Pew Hispanic Center.
Goodman, G. L., & Hiskey, J. T. (2008). Exit without leaving: Political disengagement in high migration municipalities in Mexico. Comparative Politics, 40(2), 169–188.
Holzner, C. A. (2007). The poverty of democracy: Neoliberal reforms and political participation of the poor in Mexico. Latin American Politics and Society, 49(2), 87–122.
Klesner, J. L. (1995). The 1994 Mexican elections: Manifestation of a divided society? Mexican Studies/Estudios Mexicanos, 11(1), 137–149.
Lacroix, T., Levitt, P., & Vari-Lavoisier, I. (2016). Social remittances and the changing transnational political landscape. Comparative Migration Studies, 4, 1–5. https://doi.org/10.1186/s40878-016-0032-0
Lacroix, T. (2015). Hometown transnationalism: Long distance villageness among Indian Punjabis and North African Berbers. Basingstoke: Palgrave Macmillan.
Lawson, C., Baker, A., Bruhn, K., Camp, R., Cornelius, W., Domínguez, . . . Shirk, D. (2007). The Mexico 2006 Panel Study. Retrieved from http://mexicopanelstudy.mit.edu/
Leal, D. L., Lee, B.-J., & McCann, J. A. (2012). Transnational absentee voting in the 2006 Mexican presidential election: The roots of participation. Electoral Studies, 31(3), 540–549.
Leal, D. L. (2002).
Political participation by Latino non-citizens in the United States. British Journal of Political Science, 32(2), 353–370.
Leighley, J. E., & Vedlitz, A. (1999). Race, ethnicity, and political participation: Competing models and contrasting explanations. The Journal of Politics, 61(4), 1092–1114.
Levitt, P. (1998). Social remittances: Migration driven local-level forms of cultural diffusion. International Migration Review, 32(4), 926.
Massey, D., Alarcón, R., Durand, J., & González, H. (1987). Return to Aztlan: The social process of international migration from western Mexico. Berkeley: University of California Press.
McCaffrey, D., Ridgeway, G., & Morral, A. (2004). Propensity score estimation with boosted regression for evaluating adolescent substance abuse treatment. Psychological Methods, 9, 403–425.
McCann, J. A., Cornelius, W. A., & Leal, D. (2009). Absentee voting and transnational civic engagement among Mexican expatriates. In J. I. Domínguez, C. H. Lawson, & A. Moreno (Eds.), Consolidating Mexico's democracy: The 2006 presidential campaign in comparative perspective (pp. 89–108). Johns Hopkins University Press.
Meseguer, C., Lavezzolo, S., & Aparicio, J. (2016). Financial remittances, trans-border conversations, and the state. Comparative Migration Studies, 4. https://doi.org/10.1186/s40878-016-0040-0
Mouw, T., Chavez, S., Edelblute, H., & Verdery, A. (2014). Binational social networks and assimilation: A test of the importance of transnationalism. Social Problems, 61(3), 329–359.
Nyblade, B., & O'Mahony, A. (2014). Migrants' remittances and home country elections: Cross-national and subnational evidence. Studies in Comparative International Development, 49, 44–66.
O'Mahony, A. (2013). Political investment, remittances, and elections. British Journal of Political Science, 43, 799–820.
Park, S. S., & Waldinger, R. D. (2017). Bridging the territorial divide: Immigrants' cross-border communication and the spatial dynamics of their kin networks. Journal of Ethnic and Migration Studies, 43(1), 18–40.
Pérez-Armendáriz, C. (2014). Cross-border discussions and political behavior in migrant-sending countries. Studies in Comparative International Development, 49, 67–88.
Pérez-Armendáriz, C., & Crow, D. (2010). Do migrants remit democracy? International migration, political beliefs, and behavior in Mexico. Comparative Political Studies, 43, 119–148.
Pfutze, T. (2012). Does migration promote democratization? Evidence from the Mexican transition. Journal of Comparative Economics, 40, 159–175.
Piper, N. (2009). Temporary migration and political remittances: The role of organisational networks in the transnationalisation of human rights. European Journal of East Asian Studies, 8(2), 215–243.
Ratha, D. (2016). Migration and remittances factbook 2016 (3rd ed.). The World Bank Group. Retrieved from https://openknowledge.worldbank.org/bitstream/handle/10986/23743/9781464803192.pdf
Rosenbaum, P., & Rubin, D. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55.
Rosenstone, S. J., & Hansen, J. M. (1993). Mobilization, participation, and democracy in America (p. 333).
New York: Macmillan.Google Scholar Rother, S. (2009). Changed in migration? Philippine return migrants and (un) democratic remittances. European Journal of East Asian Studies, 8(2), 245–274.View ArticleGoogle Scholar Rüland, J., Kessler, C., & Rother, S. (2009). Democratisation through international migration? Explorative thoughts on a novel research agenda. European Journal of East Asian Studies, 8(2), 161–179.View ArticleGoogle Scholar Smith, R. C. (2006). Mexican New York: Transnational lives of new immigrants. Berkeley: University of California Press.Google Scholar Soehl, T., & Waldinger, R. (2010). Making the connection: Latino immigrants and their cross-border ties. Ethnic and Racial Studies, 33(9), 1489–1510.View ArticleGoogle Scholar Stark, O., & Bloom, D. E. (1985). The new economics of labor migration. The American Economic Review, 75(2), 173–178.Google Scholar Verba, S., Schlozman, K. L., & Brady, H. E. (1995). Voice and equality: Civic voluntarism in American politics. Cambridge: Harvard University Press.Google Scholar Waldinger, R., & Duquette-Rury, L. (2016). Emigrant Politics, Immigrant Engagement: Homeland Ties and Immigrant Political Identity in the United States. The Russell Sage Foundation Journal of the Social Sciences, 2(3), 42-59. Retrieved from https://www.rsfjournal.org/doi/full/10.7758/RSF.2016.2.3.03 Waldinger, R., Soehl, T., & Lim, N. (2012). Emigrants and the body politic left behind: Results from the Latino National Survey. Journal of Ethnic and Migration Studies, 38(5), 711–736.View ArticleGoogle Scholar Westreich, D., Lessler, J., & Funk, M. J. (2010). Propensity score estimation: Neural networks, support vector machines, decision trees (CART), and meta-classifiers as alternatives to logistic regression. Journal of Clinical Epidemiology, 63(8), 826–833.View ArticleGoogle Scholar Yang, D. (2011). Migrant remittances. The Journal of Economic Perspectives, 25(3), 129–151.View ArticleGoogle Scholar Zepeda-Millán, C. (2017). Latino mass mobilization: Immigration, racialization, and activism. New York: Cambridge University Press.View ArticleGoogle Scholar
Recent questions tagged uniform-distribution

GATE2019-47
Suppose $Y$ is distributed uniformly in the open interval $(1,6)$. The probability that the polynomial $3x^2 +6xY+3Y+6$ has only real roots is (rounded off to $1$ decimal place) _______

Ace Test Series: Probability - Uniform Distribution

GATE ECE 2014

Probability of missing the bus
You arrive at a bus stop at a time uniformly distributed between $10:00$ and $10:15$, and the bus leaves the bus stop at a time uniformly distributed between $10:00$ and $10:25$. What is the probability of you missing the bus?

Probability- Gravner- 80.c
After your complaint about their service, a representative of an insurance company promised to call you "between $7$ and $9$ this evening." Assume that this means that the time $T$ of the call is uniformly distributed in the specified interval. (c) ... Let $M$ be the amount of time of the show that you miss because of the call. Compute the expected value of $M$.

Probability- Gravner- 80.b
After your complaint about their service, a representative of an insurance company promised to call you "between $7$ and $9$ this evening." Assume that this means that the time $T$ of the call is uniformly distributed in the specified interval. (b) At $8.30$, the call still hasn't arrived. What is the probability that it arrives in the next $10$ minutes?

Probability- Gravner- 80
After your complaint about their service, a representative of an insurance company promised to call you "between $7$ and $9$ this evening." Assume that this means that the time $T$ of the call is uniformly distributed in the specified interval. (a) Compute the probability that the call arrives between $8.30$ and $8.20$.

Universality of Uniform
According to the universality of the uniform, we can get from the uniform distribution to other distributions, and also from other distributions back to the uniform distribution. Please explain how we would simulate from one distribution to another.

ISI2017-MMA-17
Suppose that $X$ is chosen uniformly from $\{1,2,\ldots,100\}$ and, given $X = x$, $Y$ is chosen uniformly from $\{1,2,\ldots,x\}$. Then $P(Y = 30) =$
$\dfrac{1}{100}$
$\dfrac{1}{100} \times \left(\dfrac{1}{30} + \ldots+\dfrac{1}{100}\right)$
$\dfrac{1}{30}$
$\dfrac{1}{100} \times \left(\dfrac{1}{1} + \ldots +\dfrac{1}{30}\right)$

Mathematics: Gate EE 17
Assume that in a traffic junction, the cycle of the traffic signal lights is 2 minutes of green (vehicle does not stop) and 3 minutes of red (vehicle stops). Consider that the arrival time of vehicles at the junction is uniformly distributed over the 5-minute cycle. The expected waiting time in minutes for the vehicle at the junction is _________

The average number of donuts a nine-year-old child eats per month is uniformly distributed from 0.5 to 4 donuts, inclusive. Let $X$ = the average number of donuts a nine-year-old child eats per month. Then $X \sim U(0.5, 4)$. The probability that a different nine-year-old child eats an average of more than two donuts, given that his or her amount is more than 1.5 donuts, is ________.
4/5
1/5
2/5
3/5

A arrives at the office between 8 and 10 am regularly; B arrives between 9 and 11 am every day. What is the probability that on a given day B arrives before A? [Assume the arrival times of both A and B are uniformly distributed.]

In the cartesian plane, a point P along the y axis is selected uniformly at random in [0,2]. Similarly, a point Q along the x axis is selected uniformly at random in [0,2]. What is the probability that the area of the triangle POQ is less than or equal to 1, where O is the origin?

In a cartesian coordinate system, two points p and q are selected uniformly at random along the x axis in $\left[ 0,L \right]$, where L > 0. What is the probability that $\text{distance}(p,q) \leq \frac{L}{4}$?

Question
A subway train on a certain line runs every half hour between midnight and 6 in the morning. What is the probability that a man entering the station at a random time will have to wait at least 20 minutes? I'm stuck here... It ... in this case what is the upper limit of the integral? URL: https://www.assignmentexpert.com/homework-answers/Math-Answer-40654.pdf

TIFR2015-A-12
Consider two independent and identically distributed random variables $X$ and $Y$ uniformly distributed in $[0, 1]$. For $\alpha \in \left[0, 1\right]$, the probability that $\alpha \max(X, Y) < XY$ is
$1/ (2\alpha)$
$\exp(1 - \alpha)$
$1 - \alpha$
$(1 - \alpha)^{2}$
$1 - \alpha^{2}$

Consider three independent uniformly distributed (taking values between $0$ and $1$) random variables. What is the probability that the middle of the three values (between the lowest and the highest value) lies between $a$ and $b$, where $0 \leq a < b \leq 1$?
$3 (1 - b) a (b - a)$
...
$(1 - b) a (b - a)$
$6 \left((b^{2}- a^{2})/ 2 - (b^{3} - a^{3})/3\right)$

GATE2014-1-2
Suppose you break a stick of unit length at a point chosen uniformly at random. Then the expected length of the shorter stick is ________ .

GATE1998-3a
Two friends agree to meet at a park with the following conditions. Each will reach the park between 4:00 pm and 5:00 pm and will see if the other has already arrived. If not, they will wait for 10 minutes or the end of the hour, whichever is earlier, and leave. What is the probability that the two will not meet?
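The listing above records only the questions, not the answers. As a quick sanity check on the last one (GATE1998-3a), here is a minimal Monte Carlo sketch in Python; it is our own illustration, not part of the original thread. Since both friends arrive within the hour, the end-of-the-hour clause never binds, and "the two will not meet" reduces to $|X - Y| > 10$ with arrival times in minutes after 4:00 pm.

```python
import random

def p_no_meeting(trials=1_000_000):
    """Estimate P(the two friends do not meet).

    Arrival times X, Y ~ Uniform(0, 60) minutes after 4:00 pm.
    Each friend waits 10 minutes (or until the hour ends), so the
    pair fails to meet exactly when |X - Y| > 10.
    """
    misses = 0
    for _ in range(trials):
        x, y = random.uniform(0, 60), random.uniform(0, 60)
        if abs(x - y) > 10:
            misses += 1
    return misses / trials

print(p_no_meeting())  # ~0.694, matching the analytic answer (5/6)^2 = 25/36
```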
Suppose we uniformly and randomly select a permutation from the $20 !$ permutations of $1, 2, 3,\ldots ,20.$ What is the probability that $2$ appears at an earlier position than any other even number in the selected permutation?
$\left(\dfrac{1}{2} \right)$
$\left(\dfrac{1}{10}\right)$
$\left(\dfrac{9!}{20!}\right)$
None of these

A point is randomly selected with uniform probability in the $X$-$Y$ plane within the rectangle with corners at $(0,0), (1,0), (1,2)$ and $(0,2).$ If $p$ is the length of the position vector of the point, the expected value of $p^{2}$ is
$\left(\dfrac{2}{3}\right)$
$1$
$\left(\dfrac{4}{3}\right)$
$\left(\dfrac{5}{3}\right)$

Two $n$-bit binary strings, $S_1$ and $S_2$, are chosen randomly with uniform probability. The probability that the Hamming distance between these strings (the number of bit positions where the two strings differ) is equal to $d$ is
$\dfrac{^{n}C_{d}}{2^{n}}$
$\dfrac{^{n}C_{d}}{2^{d}}$
$\dfrac{d}{2^{n}}$
$\dfrac{1}{2^{d}}$
Transcranial color-coded duplex sonography assessment of cerebrovascular reactivity to carbon dioxide: an interventional study

Stephanie Klinzing (ORCID: orcid.org/0000-0003-2539-6868), Federica Stretti, Alberto Pagnamenta, Markus Bèchir & Giovanna Brandi

Background: The investigation of CO2 reactivity (CO2-CVR) is used in the setting of, e.g., traumatic brain injury (TBI). Transcranial color-coded duplex sonography (TCCD) is a promising bedside tool for monitoring cerebral hemodynamics. This study used TCCD to investigate CO2-CVR in volunteers, in sedated and mechanically ventilated patients without TBI, and in sedated and mechanically ventilated patients in the acute phase after TBI.

Methods: This interventional investigation was performed between March 2013 and February 2016 at the surgical ICU of the University Hospital of Zurich. Ten volunteers (Group 1), ten sedated and mechanically ventilated patients (Group 2), and ten patients in the acute phase (12–36 h) after severe TBI (Group 3) were included. CO2-CVR to moderate hyperventilation (∆CO2 of −5.5 mmHg) was assessed by TCCD.

Results: CO2-CVR was 2.14 (1.20–2.70) %/mmHg in Group 1, 2.03 (0.15–3.98) %/mmHg in Group 2, and 3.32 (1.18–4.48) %/mmHg in Group 3, without significant differences among groups.

Conclusion: Our data did not yield evidence for altered CO2-CVR in the early phase after TBI examined by TCCD.

Trial registration: Part of this trial was performed as preparation for the interventional trial in TBI patients (clinicaltrials.gov NCT03822026, 30.01.2019, retrospectively registered).

Background

Cerebral autoregulation allows the maintenance of stable cerebral blood flow (CBF) despite changes in cerebral perfusion pressure (CPP) through variations of cerebral vascular resistance (CVR) [25]. Carbon dioxide (CO2) is a potent cerebral vasodilator, with a sigmoid relationship between PaCO2 (arterial partial pressure of carbon dioxide) and CBF that can be assumed to be linear during acute changes in normophysiologic states [7] and which is mediated by CO2-related changes in extracellular pH. This CO2-induced mechanism is commonly used in the clinical setting to reduce elevated intracranial pressure (ICP) by application of hyperventilation (HV), leading to hypocapnia. A decrease in PaCO2 leads to a reduction in CBF, thus reducing cerebral blood volume and, consequently, ICP. Changes in CVR and CBF in response to changes in CO2 are termed cerebrovascular reactivity to CO2 (CO2-CVR).

Several invasive and non-invasive techniques are currently available to assess CBF. These include, e.g., arterial and jugular venous tracer-concentration measurements (Kety-Schmidt method), the xenon clearance technique, positron emission tomography, near-infrared spectroscopy (NIRS), and transcranial Doppler (TCD). The choice of technique depends on the clinical scenario. The non-invasive bedside ultrasonography technique of TCD is an attractive tool for determining CBF and CO2-CVR. Reference values for CO2-CVR assessed by TCD in healthy volunteers are reported to range between 2.9 and 3.7 %/mmHg [9, 11, 12, 16, 29]. For patients under general anesthesia, however, the potential effect of anesthetic agents has to be taken into account. Current data suggest maintained CO2-CVR during anesthesia, and values of a 2.5–6% change in cm/s per mmHg for CO2-CVR are generally accepted [5, 8, 15, 19, 27, 28]. In TBI, cerebral circulation may be compromised after injury. Data suggest that CO2-CVR may be preserved or impaired at various stages of TBI [12, 14, 21, 24].
Research concerning the association of impaired CO2-CVR and neurological outcome is ongoing, because conflicting results have been reported [3, 24]. Transcranial color-coded duplex sonography (TCCD) is an ultrasound technique combining Doppler and duplex imaging, thus allowing visualization of the examined vessels. As TCCD is more observer-independent than TCD [18], it could be an attractive tool for serial bedside measurements of flow velocities in the intensive care unit (ICU) setting. In the present interventional study, TCCD was used for assessing CO2-CVR. A systematic investigation of CO2-CVR by TCCD in healthy volunteers, patients on mechanical ventilation, and patients with TBI was conducted to investigate whether there is evidence for altered CO2-CVR in the acute phase of TBI in our study population.

Methods

This study was conducted as an interventional trial in the surgical ICU of the University Hospital of Zurich between March 2013 and February 2016. The Cantonal Ethics Committee of Zurich approved and registered the study (KEK-ZH 2012–0542). Informed written consent was obtained from all participants or next of kin prior to study enrollment and/or from the patient after ICU discharge. Patients in the TBI group were included in a study focusing on the effect of moderate hyperventilation on cerebral metabolism and were thus selected according to previously published inclusion criteria [2]. Part of this trial was performed as preparation for the interventional trial in TBI patients (clinicaltrials.gov NCT03822026, retrospectively registered).

Patient population

The study was conducted in spontaneously breathing volunteers (Group 1), sedated and mechanically ventilated patients with presumed preserved CO2-CVR (Group 2), and sedated and mechanically ventilated patients suffering from severe TBI (Group 3). Inclusion criteria for Group 3 were: adults (≥ 18 years of age) with non-penetrating head injury, an initial Glasgow Coma Scale (GCS) score < 9 prior to sedation and intubation, extended neuromonitoring with ICP, brain tissue oxygenation (PbrO2), and/or microdialysis probes (TBI group), and invasive mechanical ventilation with FIO2 < 60% and PEEP < 15 cmH2O. Exclusion criteria for all groups were decompressive craniectomy, pregnancy, pre-existing neurologic disease, previous TBI, acute cardiovascular disease, severe respiratory failure, acute or chronic liver disease, sepsis, and failure to obtain satisfactory bilateral TCCD signals. Patients with persisting hypovolemia or hemodynamic instability despite previous fluid resuscitation (defined as Global End-Diastolic Volume Index < 680 ml/m2, central venous oxygen saturation (ScvO2) < 60%, and/or an increase in mean arterial blood pressure (MAP) > 15% after a passive leg raising test) were excluded. The study was performed in the acute phase (12–36 h) after severe TBI (Group 3), while patients in Group 2 were investigated within 36 h after the onset of mechanical ventilation. All TBI patients were treated according to a cerebral perfusion-orientated protocol aiming to achieve CPP > 70 mmHg, ICP ≤ 20 mmHg, PbrO2 > 15 mmHg, and PaCO2 between 4.8 and 5.2 kPa. For Group 2, a MAP of 65 mmHg was targeted.

TCCD measurements

TCCD examination of the middle cerebral artery (MCA) was performed bilaterally via the transtemporal acoustic window by two experienced investigators (GB, SK), following standard techniques using a 5–1 MHz probe (Philips CX 50, USA) [17].
Three repeated measurements of the peak systolic (PSV) and end-diastolic (EDV) velocities were performed for each side, and an average value was calculated. The device also automatically calculated the CBF velocity (CBFV) and pulsatility index (PI). In Group 1, ten spontaneously breathing volunteers were examined (Fig. 1, Panel A), using end-tidal carbon dioxide (EtCO2) to monitor ventilation. Subsequently, each volunteer was asked to gradually increase respiratory rate and tidal volume to achieve a reduction in EtCO2 of approximately 5.5 mmHg. Once the desired ∆EtCO2 was achieved, the volunteer maintained stable minute ventilation and EtCO2 for the duration of the TCCD measurements. After the TCCD measurements, the volunteer returned to resting ventilation.

Fig. 1 Study protocol. Panel A: study protocol for Group 1. During baseline conditions (A) and after short-term hyperventilation (B), the following parameters were recorded: end-tidal CO2 (EtCO2), peripheral capillary oxygen saturation (SpO2), heart rate (HR), mean arterial pressure (MAP), and auricular temperature (T). Measurements with transcranial color-coded duplex sonography (TCCD) were performed at time points A and B. Panel B: study protocol for Groups 2 and 3. During baseline conditions (A), after short-term hyperventilation (B), stabilization (C), and sustained hyperventilation (D), several parameters were recorded. EtCO2 remained stable during B, C, and D. TCCD measurements were performed during A, C, and D. An arterial blood gas analysis (ABGA) was obtained during A, C, and D. Only for patients in Group 3 were values for intracranial pressure (ICP) and cerebral perfusion pressure (CPP) noted during A, B, C, and D. MV: minute volume.

Ten sedated and mechanically ventilated ICU patients in Group 2 and ten patients with severe TBI in Group 3 were investigated (Fig. 1, Panel B). Under baseline conditions, a TCCD examination was performed and all variables were recorded (Fig. 1, point A). The minute ventilation was then increased over a 10-min period to obtain moderate HV, with a stepwise increase in tidal volume and respiratory rate until a reduction of EtCO2 of 0.7 kPa was achieved (Fig. 1, point B). After 10 min of stable EtCO2, a second TCCD measurement was undertaken (beginning of HV; Fig. 1, point C). The EtCO2 value was kept stable for 40 min, followed by a third TCCD examination (Fig. 1, point D). Finally, normoventilation was re-established over 10 min and all variables were allowed to return to baseline (Fig. 1, point E). A final TCCD examination was conducted at this time point. At each time point, MAP, SpO2, and EtCO2 were recorded. Arterial blood gas tests (ABG) were obtained at points A, C, D, and E to monitor the changes in pH and PaCO2. For study purposes, the measurements and values obtained at time points A and B were used for Group 1, while those at time points A and D were used for Groups 2 and 3.

Definition of cerebrovascular reactivity to carbon dioxide

CO2-CVR is expressed in terms of absolute and relative reactivity. Absolute CO2-CVR is defined as the change in MFV (cm/s) per mmHg change in CO2. Relative CO2-CVR is defined as the percentage change compared to the baseline value.

$$\text{Absolute}\; \text{CO}_{2}\text{-CVR} = \Delta\text{MFV}/\Delta \text{CO}_{2}$$

$$\text{Relative}\; \text{CO}_{2}\text{-CVR} = (\text{Absolute}\; \text{CO}_{2}\text{-CVR} / \text{baseline}\; \text{MFV}) \times 100$$

As relative reactivity is less dependent on baseline values, it has been proposed as a more valuable indicator of CO2-CVR for analysis [10].
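To make the two definitions concrete, a minimal sketch in Python follows. It is our own illustration, not part of the published analysis, and the example numbers are hypothetical.

```python
def co2_cvr(mfv_baseline, mfv_hv, co2_baseline, co2_hv):
    """Compute absolute and relative CO2-CVR from TCCD measurements.

    mfv_* : mean flow velocity in cm/s (baseline, during hyperventilation)
    co2_* : EtCO2 (Group 1) or PaCO2 (Groups 2 and 3) in mmHg
    Returns (absolute CVR in cm/s per mmHg, relative CVR in %/mmHg).
    """
    delta_mfv = mfv_baseline - mfv_hv   # HV lowers MFV, so this is positive
    delta_co2 = co2_baseline - co2_hv   # HV lowers CO2, so this is positive
    absolute = delta_mfv / delta_co2
    relative = absolute / mfv_baseline * 100
    return absolute, relative

# Hypothetical example: MFV falls from 60 to 52 cm/s as CO2 drops by 5.5 mmHg
print(co2_cvr(60.0, 52.0, 40.0, 34.5))  # -> (~1.45 cm/s/mmHg, ~2.42 %/mmHg)
```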
Relative reactivity was therefore chosen as the indicator for CO2-CVR. ∆MFV is the difference in MFV between baseline and after HV; ∆CO2 is the difference in CO2 between baseline and after HV. In Group 1, EtCO2 was used, while PaCO2 was used in Groups 2 and 3. Hyperventilation constricts distal vessels, so a decrease in the absolute value of MFV is expected in the major intracranial vessels, such as those investigated by TCCD.

Statistical analysis

Descriptive statistics were presented as mean with standard deviation (SD) or as median with interquartile range (IQR) for quantitative data. Categorical data were presented as absolute numbers with percentages. Comparisons of continuous variables among the three groups were performed with one-way analysis of variance or with the Kruskal–Wallis test, as appropriate. For statistically significant p-values, post-hoc tests were performed, taking the multiple comparisons into account. Qualitative data among the three groups were compared with the chi-square test. In cases of statistically significant results, post-hoc comparisons were made with the appropriate critical-level adjustment. Comparisons of quantitative data before and during hyperventilation were conducted with the paired Student's t-test or with the Wilcoxon matched-pairs test, as appropriate. All tests were two-sided, and p-values < 0.05 were considered statistically significant. Stata version 12.1 (StataCorp LP, College Station, TX, USA) was used for all statistical analyses.

Results

Baseline characteristics of Groups 1, 2, and 3 are presented in Table 1. As stated in the exclusion criteria, the patients and volunteers included did not have comorbidities with a known impact on cerebral autoregulation. Patients included in Group 2 were admitted to the ICU after surgical care (otolaryngology (n = 3), plastic surgery (n = 2), thoracic surgery (n = 2), visceral surgery (n = 3)). Patients in Group 3 were under higher dosages of midazolam (p < 0.001), propofol (p = 0.004), fentanyl (p = 0.02), and norepinephrine (p = 0.008) compared to Group 2, while the groups were comparable with respect to age, sex, and BMI.

Table 1 Baseline characteristics

All patients included in Group 3 showed traumatic subarachnoid hemorrhage on the initial CT scan. Seven patients showed bilateral contusional hemorrhage and three patients predominantly left-sided contusional hemorrhage. Seven patients were classified as Marshall 2, one patient as Marshall 3, one patient as Marshall 5, and two patients as Marshall 6. While HR remained stable in all groups, MAP was significantly different between Group 1 and Group 2 (p = 0.001 and p = 0.008) and between Group 2 and Group 3 (p = 0.001 and p = 0.005) at baseline and during HV. HV led to a significant increase in MV and a corresponding decrease in EtCO2 and PaCO2, as well as a significant reduction of MFV in the right and left MCA in all groups (Table 2). Baseline MFV did not differ significantly between Groups 2 and 3 but was significantly higher at baseline in Group 3 compared to Group 1 (p = 0.024 (right), p = 0.032 (left)).

Table 2 Physiological data

Absolute and relative values for CO2-CVR for all groups are presented in Table 3. CO2-CVR was 2.14 (1.20–2.70) %/mmHg in Group 1, 2.03 (0.15–3.98) %/mmHg in Group 2, and 3.32 (1.18–4.48) %/mmHg in Group 3.

Table 3 Cerebrovascular carbon dioxide reactivity

Neither the within-group CO2-CVR (comparing the more- with the less-injured side) nor the between-group CO2-CVR differed significantly.
Discussion

The present study used TCCD to assess CO2-CVR in healthy volunteers, in patients under sedation and mechanical ventilation without TBI, and in patients with severe TBI in the first 12–36 h after trauma. TCCD was conducted in the acute phase after TBI as part of another study [2]. A relative CO2-CVR of 2.14 %/mmHg (95% CI 1.20–2.70) was found in volunteers, 2.03 %/mmHg (95% CI 0.15–3.98) in sedated and mechanically ventilated patients, and 3.32 %/mmHg (95% CI 1.18–4.48) in patients in the acute phase after TBI. CO2-CVR did not differ significantly between groups.

How our data compare to the literature

In our TCCD study, relative CO2-CVR values in healthy volunteers (2.14 %/mmHg; 95% CI 1.20–2.70) were lower than those obtained by Klingelhofer et al. [12], who reported a mean CO2-CVR of 3.7 ± 0.5 %/mmHg. Flow velocities obtained via TCCD might be higher than TCD values due to correction of the angle of incidence in TCCD measurements [1]. This may influence relative CO2-CVR when TCCD is used. For patients under general anesthesia undergoing major surgery, CO2-CVR assessed with TCD was reported to be preserved and largely comparable with that of healthy volunteers [5, 8, 19, 27, 28]. This suggests a negligible influence of routinely used anesthetic agents on CO2-CVR. In our study, patients received intravenous analgosedation with propofol and remifentanil or fentanyl; in accordance with the studies cited above, we did not find evidence of an impact of these agents on CO2-CVR. Values of CO2-CVR around a 2.5–6% change in cm/s per mmHg are currently generally accepted [15]. In accordance with published data, we found a preserved CO2-CVR in our group of sedated and mechanically ventilated patients without TBI [5, 8, 19, 27, 28]. In our TBI patients, CO2-CVR was 3.32 %/mmHg (95% CI 1.18–4.48). However, the increase in CO2-CVR did not reach statistical significance.

Comparing our data with the existing literature, some aspects deserve consideration. Klingelhofer et al. [12] reported a decreased but preserved CO2-CVR of 2.0 ± 1.1 %/mmHg in 40 patients with acute traumatic and spontaneous cerebral hemorrhage, of whom 24 were in barbiturate coma. As barbiturates have been shown to influence CO2-CVR by metabolic suppression [23], this needs to be taken into account. CO2-CVR was reported to be preserved in other studies of TBI patients, although impaired CO2-CVR has been observed, especially in the acute phase after TBI [14, 21, 24, 26]. In comparison with the cumbersome direct measurement of CBF, the non-invasive bedside tool of sonography has the advantage of allowing serial measurements of MFV and CO2-CVR in critically ill patients, although invasive and non-invasive methods complement each other, depending on the clinical scenario. In our opinion, TCCD offers advantages over TCD in the daily setting of an ICU for non-continuous serial measurements, as it has been proven to be less operator-dependent [18]. Furthermore, good interobserver reliability of TCCD measurements in TBI patients has been reported for trained operators, underscoring the value of TCCD for obtaining reliable measurements [4]. This is an important aspect in the ICU setting, where serial measurements are performed by variably skilled operators. We were previously able to demonstrate a steep learning curve for residents introduced to TCCD in healthy volunteers [13].
Depending on the clinical scenario, TCCD seems to be interchangeable with TCD for serial monitoring of CO2-CVR, while TCD offers the advantage of continuous monitoring over time with a fixed probe. TBI patients have been shown to have impaired cerebrovascular reactivity during long periods of their ICU stay, with a limited impact of current ICU treatment and an association between impaired cerebrovascular reactivity and outcome [6, 30]. Our study results do not suggest impaired CO2-CVR. Of note, CO2-CVR is only one of several mechanisms of cerebral autoregulation; thus, preserved CO2-CVR does not imply intact cerebral autoregulation. While on the one hand it is known that prolonged HV can negatively affect outcome [20], on the other hand it has been postulated that hyperventilation, when CO2-CVR is intact, temporarily improves cerebral autoregulation [22]. Thus, our finding of preserved CO2-CVR in the early phase after TBI suggests that cautious hyperventilation under monitoring may be considered a therapeutic option [2]. Furthermore, TCCD may serve as a monitoring tool for serial assessment of CO2-CVR, which may change during the course of TBI, to detect signs of deterioration or recovery of CO2-CVR.

One limitation of this study is the small sample size; our results should be confirmed in larger studies of TBI patients. In addition, the number of volunteers and patients examined in our study is too small to establish reference values. In a larger study, TCCD measurements for the assessment of CO2-CVR should be performed in both the early and later time course after trauma, taking the localization of the insult into account. Finally, a comparison of CO2-CVR obtained by TCCD and TCD would be desirable.

Conclusions

Our data did not yield evidence for altered CO2-CVR in the early phase after TBI, and they suggest that TCCD is a reliable tool for the determination of CO2-CVR.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ABGA: Arterial blood gas analysis
CO2-CVR: CO2 reactivity
CPP: Cerebral perfusion pressure
EtCO2: End-tidal carbon dioxide
HR: Heart rate
HV: Hyperventilation
ICU: Intensive care unit
MAP: Mean arterial pressure
MCA: Middle cerebral artery
MFV: Mean flow velocity
MV: Minute ventilation
SAPS II: Simplified Acute Physiology Score II
TCD: Transcranial Doppler sonography
TCCD: Transcranial color-coded duplex sonography

References

1. Bartels E. Transcranial color-coded duplex ultrasound—possibilities and limits of this method in comparison with conventional transcranial Doppler ultrasound. Ultraschall Med. 1993;14:272–8.
2. Brandi G, Stocchetti N, Pagnamenta A, et al. Cerebral metabolism is not affected by moderate hyperventilation in patients with traumatic brain injury. Crit Care. 2019;23:45.
3. Carmona Suazo JA, Maas AI, Van Den Brink WA, et al. CO2 reactivity and brain oxygen pressure monitoring in severe head injury. Crit Care Med. 2000;28:3268–74.
4. Dupont G, Burnol L, Jospe R, et al. Transcranial color duplex ultrasound: a reliable tool for cerebral hemodynamic assessment in brain injuries. J Neurosurg Anesthesiol. 2016;28:159–63.
5. Eng C, Lam AM, Mayberg TS, et al. The influence of propofol with and without nitrous oxide on cerebral blood flow velocity and CO2 reactivity in humans. Anesthesiology. 1992;77:872–9.
6. Froese L, Batson C, Gomez A, et al. The limited impact of current therapeutic interventions on cerebrovascular reactivity in traumatic brain injury: a narrative overview. Neurocrit Care. 2021;34:325–35.
7. Grubb RL Jr, Raichle ME, Eichling JO, et al. The effects of changes in PaCO2 on cerebral blood volume, blood flow, and vascular mean transit time. Stroke. 1974;5:630–9.
8. Harrison JM, Girling KJ, Mahajan RP. Effects of target-controlled infusion of propofol on the transient hyperaemic response and carbon dioxide reactivity in the middle cerebral artery. Br J Anaesth. 1999;83:839–44.
9. Izumi Y, Tsuda Y, Ichihara S, et al. Effects of defibrination on hemorheology, cerebral blood flow velocity, and CO2 reactivity during hypocapnia in normal subjects. Stroke. 1996;27:1328–32.
10. Kaiser L. Adjusting for baseline: change or percentage change? Stat Med. 1989;8:1183–90.
11. Kastrup A, Thomas C, Hartmann C, et al. Sex dependency of cerebrovascular CO2 reactivity in normal subjects. Stroke. 1997;28:2353–6.
12. Klingelhofer J, Sander D. Doppler CO2 test as an indicator of cerebral vasoreactivity and prognosis in severe intracranial hemorrhages. Stroke. 1992;23:962–6.
13. Klinzing S, Steiger P, Schupbach RA, et al. Competence for transcranial color-coded duplex sonography is rapidly acquired. Minerva Anestesiol. 2015;81:298–304.
14. Lee JH, Kelly DF, Oertel M, et al. Carbon dioxide reactivity, pressure autoregulation, and metabolic suppression reactivity after head injury: a transcranial Doppler study. J Neurosurg. 2001;95:222–32.
15. Mariappan R, Mehta J, Chui J, et al. Cerebrovascular reactivity to carbon dioxide under anesthesia: a qualitative systematic review. J Neurosurg Anesthesiol. 2015;27:123–35.
16. Markwalder TM, Grolimund P, Seiler RW, et al. Dependency of blood flow velocity in the middle cerebral artery on end-tidal carbon dioxide partial pressure—a transcranial ultrasound Doppler study. J Cereb Blood Flow Metab. 1984;4:368–72.
17. McCarville MB. Comparison of duplex and nonduplex transcranial Doppler ultrasonography. Ultrasound Q. 2008;24:167–71.
18. McMahon CJ, McDermott P, Horsfall D, et al. The reproducibility of transcranial Doppler middle cerebral artery velocity measurements: implications for clinical practice. Br J Neurosurg. 2007;21:21–7.
19. Mirzai H, Tekin I, Tarhan S, et al. Effect of propofol and clonidine on cerebral blood flow velocity and carbon dioxide reactivity in the middle cerebral artery. J Neurosurg Anesthesiol. 2004;16:1–5.
20. Muizelaar JP, Marmarou A, Ward JD, et al. Adverse effects of prolonged hyperventilation in patients with severe head injury: a randomized clinical trial. J Neurosurg. 1991;75:731–9.
21. Newell DW, Aaslid R, Stooss R, et al. Evaluation of hemodynamic responses in head injury patients with transcranial Doppler monitoring. Acta Neurochir (Wien). 1997;139:804–17.
22. Newell DW, Weber JP, Watson R, et al. Effect of transient moderate hyperventilation on dynamic cerebral autoregulation after severe head injury. Neurosurgery. 1996;39:35–43; discussion 43–44.
23. Nordstrom CH, Messeter K, Sundbarg G, et al. Cerebral blood flow, vasoreactivity, and oxygen consumption during barbiturate therapy in severe traumatic brain lesions. J Neurosurg. 1988;68:424–31.
24. Overgaard J, Tweed WA. Cerebral circulation after head injury. 1. Cerebral blood flow and its regulation after closed head injury with emphasis on clinical correlations. J Neurosurg. 1974;41:531–41.
25. Paulson OB, Strandgaard S, Edvinsson L. Cerebral autoregulation. Cerebrovasc Brain Metab Rev. 1990;2:161–92.
26. Rangel-Castilla L, Lara LR, Gopinath S, et al. Cerebral hemodynamic effects of acute hyperoxia and hyperventilation after severe traumatic brain injury. J Neurotrauma. 2010;27:1853–63.
27. Sakai K, Cho S, Fukusaki M, et al. The effects of propofol with and without ketamine on human cerebral blood flow velocity and CO(2) response. Anesth Analg. 2000;90:377–82.
28. Strebel S, Kaufmann M, Guardiola PM, et al. Cerebral vasomotor responsiveness to carbon dioxide is preserved during propofol and midazolam anesthesia in humans. Anesth Analg. 1994;78:884–8.
29. Widder B, Paulat K, Hackspacher J, et al. Transcranial Doppler CO2 test for the detection of hemodynamically critical carotid artery stenoses and occlusions. Eur Arch Psychiatry Neurol Sci. 1986;236:162–8.
30. Zeiler FA, Ercole A, Placek MM, et al. Association between physiological signal complexity and outcomes in moderate and severe traumatic brain injury: a CENTER-TBI exploratory analysis of multi-scale entropy. J Neurotrauma. 2021;38(2):272–82.

Acknowledgements

We thank all volunteers for their highly appreciated participation in this study.

Author information

Institute for Intensive Medicine, University Hospital of Zurich, Raemistrasse 100, CH-8091 Zurich, Switzerland: Stephanie Klinzing, Markus Bèchir & Giovanna Brandi
Intensive Care Unit, Westmead Hospital, Westmead, NSW, Australia: Federica Stretti
Intensive Care Unit, Regional Hospital of Mendrisio, Mendrisio, Switzerland; Unit of Clinical Epidemiology, Ente Ospedaliero Cantonale, Bellinzona, Switzerland; Division of Pneumology, University of Geneva, Geneva, Switzerland: Alberto Pagnamenta

Contributions

SK and GB designed and performed the study, collected data, and drafted the paper. FS collected and interpreted data and critically revised a draft version. AP analysed and interpreted data, also carrying out a critical revision of the draft. MB contributed substantial intellectual input to the design and performance of the study, as well as checking the interpretation of data and undertaking a critical revision of the draft. All authors read and approved the manuscript.

Correspondence to Stephanie Klinzing.

Ethics approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee (Kantonale Ethikkommission Zürich, Switzerland) and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The Cantonal Ethics Committee of Zurich approved and registered the study (KEK-ZH 2012–0542). Informed written consent was obtained from all participants or next of kin prior to study enrollment and/or from the patient after ICU discharge.

Citation

Klinzing, S., Stretti, F., Pagnamenta, A. et al. Transcranial color-coded duplex sonography assessment of cerebrovascular reactivity to carbon dioxide: an interventional study. BMC Neurol 21, 305 (2021). https://doi.org/10.1186/s12883-021-02310-9

Keywords: Intensive care ultrasound; Cerebral blood flow measurements
Article | July 2018

Attention alters spatial resolution by modulating second-order processing

Michael Jigo, Center for Neural Science, New York University, New York, NY, USA
Marisa Carrasco, Center for Neural Science and Department of Psychology, New York University, New York, NY, USA ([email protected])

Citation: Michael Jigo, Marisa Carrasco; Attention alters spatial resolution by modulating second-order processing. Journal of Vision 2018;18(7):2. doi: https://doi.org/10.1167/18.7.2.

Endogenous and exogenous visuospatial attention both alter spatial resolution, but they operate via distinct mechanisms. In texture segmentation tasks, exogenous attention inflexibly increases resolution even when detrimental for the task at hand and does so by modulating second-order processing. Endogenous attention is more flexible and modulates resolution to benefit performance according to task demands, but it is unknown whether it also operates at the second-order level. To answer this question, we measured performance on a second-order texture segmentation task while independently manipulating endogenous and exogenous attention. Observers discriminated a second-order texture target at several eccentricities. We found that endogenous attention improved performance uniformly across eccentricity, suggesting a flexible mechanism that can increase or decrease resolution based on task demands. In contrast, exogenous attention improved performance in the periphery but impaired it at central retinal locations, consistent with an inflexible resolution enhancement. Our results reveal that endogenous and exogenous attention both alter spatial resolution by differentially modulating second-order processing.

The visual system's spatial resolution varies systematically with eccentricity: Resolution peaks at the fovea and declines toward the periphery. As a consequence of this inhomogeneity, closely spaced visual objects are best discriminated at the fovea but are cluttered and blurred in the periphery. Covert spatial attention, the selection of visuospatial information in the absence of eye movements, can alleviate this eccentricity-dependent limitation by enhancing spatial resolution (for reviews, see Anton-Erxleben & Carrasco, 2013; Carrasco & Barbot, 2014; Carrasco & Yeshurun, 2009). Conversely, when high-resolution information is sparse or when a global assessment of the scene is required (e.g., viewing the forest rather than individual trees), attention can improve discriminability (Barbot & Carrasco, 2017; Yeshurun, Montagna, & Carrasco, 2008) by reducing spatial resolution (Barbot & Carrasco, 2017). Attention's role in modifying spatial resolution has been demonstrated with tasks that benefit from enhanced resolution: visual search (Carrasco & Yeshurun, 1998; Giordano, McElree, & Carrasco, 2009), acuity and hyperacuity (Carrasco, Williams, & Yeshurun, 2002; Montagna, Pestilli, & Carrasco, 2009; Yeshurun & Carrasco, 1999), and crowding (Grubb et al., 2013; Montaser-Kouhsari & Rajimehr, 2005; Yeshurun & Rashal, 2010). However, enhanced resolution is not always beneficial, as illustrated by texture segmentation tasks in which performance is constrained by the visual system's spatial resolution.
In texture segmentation tasks, observers detect a texture target of a fixed spatial scale that is embedded at various eccentricities within a larger background texture (Gurnsey, Pearson, & Day, 1996; Kehrer, 1989; Morikawa, 2000; Potechin & Gurnsey, 2003). Performance typically varies nonmonotonically with eccentricity, peaking when resolution is optimal and declining when resolution is too high or too low for the texture's spatial scale. For instance, the discriminability of a large-scale texture is poor at the fovea where resolution is high, peaks in the midperiphery where resolution is optimal, and declines farther in the periphery where resolution is low. The advantage of the midperiphery over more central locations is referred to as the central performance drop (CPD), and its magnitude varies with the texture's spatial scale: the larger the scale of the texture, the farther the eccentricity at which performance peaks and the more pronounced the CPD (Gurnsey et al., 1996; Joffe & Scialfa, 1995; Kehrer, 1989; Yeshurun & Carrasco, 1998; Yeshurun, Montagna et al., 2008). The effects of attention on texture segmentation depend on the type of spatial attention that is manipulated. Exogenous covert attention—the stimulus-driven and transient orienting response to a given location (for a review, see Carrasco, 2011)—automatically and inflexibly enhances resolution even if detrimental to the task. When engaged by brief peripheral cues, exogenous attention improves texture segmentation in the periphery where resolution is low but impairs it at central locations where resolution is already high—a pattern referred to as the central attentional impairment (Carrasco, Loula, & Ho, 2006; Talgar & Carrasco, 2002; Yeshurun & Carrasco, 1998, 2008). In contrast, endogenous covert attention—the voluntary and sustained prioritization of information at a given location (Carrasco, 2011)—improves texture segmentation at both peripheral and central locations, suggesting a more flexible mechanism that can either increase or decrease resolution depending on resolution constraints (Barbot & Carrasco, 2017; Yeshurun, Montagna et al., 2008; for reviews, see Carrasco & Barbot, 2014; Carrasco & Yeshurun, 2009). The resolution account of texture segmentation performance has been ascribed to the degree of overlap between the spatial extent of the texture pattern and the spatial characteristics of second-order filters (Kehrer & Meinecke, 2003; Yeshurun & Carrasco, 2000). These filters comprise models of early visual processing that postulate two successive stages of linear filtering: a first stage that detects luminance-defined (first-order) boundaries and a second stage that pools across space to detect texture-defined (second-order) boundaries (e.g., variations in contrast, orientation, or spatial frequency; for reviews, see Landy, 2013; Victor, Conte, & Chubb, 2017). Second-order filters are tuned to orientation and spatial frequency (SF) and mediate the sensitivity to texture patterns of various spatial scales (Ellemberg, Allen, & Hess, 2006; Graham, Sutter, & Venkatesan, 1993; Landy & Oruç, 2002; Sutter, Sperling, & Chubb, 1995). The SF tuning of these filters is related to receptive-field (RF) size. At central locations, RFs are predominantly small, which allows information to be integrated across narrow regions of space and mediates high-SF tuning (DeValois & DeValois, 1988; Jones & Palmer, 1987). 
At more peripheral locations, RFs increase in size and are, consequently, tuned to lower SFs (Freeman & Simoncelli, 2011; Gattass, Gross, & Sandell, 1981; Gattass, Sousa, & Gross, 1988; Hess, Baker, May, & Wang, 2008). Such changes in SF tuning across space have also been linked to performance in texture segmentation tasks (Gurnsey et al., 1996; Kehrer, 1997; Kehrer & Meinecke, 2003; Yeshurun & Carrasco, 2000). Normally, both first- and second-order filters simultaneously contribute to task performance, and therefore, their separate influences cannot be disentangled. Thus, to isolate the influence of second-order filters, texture stimuli have been designed to contain only texture-defined (second-order) boundaries, rendering them invisible to luminance-based (first-order) mechanisms. This has been achieved for both dynamic and static stimuli. For instance, Chubb and Sperling (1988, 1989, 1991) constructed second-order motion stimuli devoid of first-order motion by ensuring that stimuli contained equal Fourier energy (i.e., the output of first-order spatiotemporal mechanisms) in opposite motion directions, thereby precluding first-order filters from signaling any apparent motion. Static second-order stimuli can be constructed by arranging high-frequency Gabor elements (first-order content) in low-frequency patterns (second-order content; Graham et al., 1993) or by modulating the luminance contrast of carrier noise (first-order content) with a Gabor pattern at a particular SF (Sutter et al., 1995); both methods yield second-order textures that are invisible to first-order mechanisms and activate a narrow band of second-order filters. Yeshurun and Carrasco (2000) used these two kinds of static stimuli to investigate whether exogenous attention automatically enhances spatial resolution by modulating first- or second-order filters. They found that manipulating first-order content did not alter the attentional effect whereas increasing the SF of second-order content diminished the central attentional impairment and shifted peak performance closer to the fovea. An open question is whether endogenous attention also modifies spatial resolution by modulating second-order filters. Addressing this question could reveal important mechanistic differences between endogenous and exogenous attention and could help identify potential cortical loci that underlie these attentional effects. In particular, if both endogenous and exogenous attention acted on the same second-order filters, their distinct effects on behavior would suggest distinct attentional mechanisms. Moreover, because the striate cortex is capable of extracting second-order boundaries (Hallum, Landy, & Heeger, 2011; Lamme, 1995; Lamme, van Dijk, & Spekreijse, 1993; Larsson, Landy, & Heeger, 2006; Purpura, Victor, & Katz, 1994), these attentional effects could be mediated by cortical processing in V1. Recently, Barbot and Carrasco (2017) investigated the mechanism by which endogenous attention improves performance across eccentricity (Yeshurun, Montagna et al., 2008). Observers detected the presence of a texture target comprising oriented lines with first- and second-order boundaries after selectively adapting to second-order filters of high or low SF. They found that when observers selectively adapted to a low SF, the attentional effect remained. In contrast, after adaptation to a high SF, the benefit at central locations disappeared. 
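Before turning to the present experiments, the two-stage (filter–rectify–filter) cascade invoked throughout this literature can be made concrete with a small sketch. The Python code below is our own hypothetical illustration — kernel sizes, spatial frequencies, and helper names are assumptions, not drawn from any of the cited models: a first linear stage tuned to a high SF, a pointwise rectification, and a second linear stage tuned to a low SF.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, sf, theta, sigma):
    """Even-symmetric Gabor; sf in cycles/pixel, theta in radians."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * sf * xr)

def frf_response(image):
    """Minimal filter-rectify-filter (FRF) cascade on a 2-D luminance image.

    Stage 1: linear filter tuned to a high SF (first-order, luminance-defined).
    Rectify: squaring exposes contrast modulations as mean-level differences.
    Stage 2: linear filter tuned to a low SF (second-order, texture-defined).
    All parameter values are illustrative, not fitted to any dataset.
    """
    stage1 = fftconvolve(image, gabor_kernel(31, sf=0.25, theta=0.0, sigma=4.0), mode="same")
    rectified = stage1 ** 2
    stage2 = fftconvolve(rectified, gabor_kernel(127, sf=0.02, theta=0.0, sigma=30.0), mode="same")
    return stage2
```

A purely first-order (linear) system would average the rectification away; the squaring step is what lets the second stage detect texture-defined boundaries that carry no net luminance signal.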
These results suggest that endogenous attention operates by modulating sensitivity to high SFs but do not reveal whether these modulations impact first- or second-order processing. Here, we directly investigate endogenous attention's effects on second-order filters by using second-order stimuli in a texture segmentation task. In addition, given that the effects of exogenous and endogenous attention differ for texture targets with first-order content (Yeshurun, Montagna et al., 2008), we assessed whether this difference extends to second-order stimuli. Thus, by independently manipulating both types of attention while keeping the observers, task, and stimuli constant, we provide the first direct comparison of attentional effects on second-order texture processing. Nine New York University (NYU) students (two females, seven males, age range: 23–32) with normal or corrected-to-normal vision participated in this study. All participants were naïve to the purpose of the study (except the author, M.J.) and gave written informed consent under the protocol approved by the Institutional Review Board at NYU. Five participants volunteered, and four participants were remunerated at a rate of $10/hour. Data from one participant were excluded due to below-chance performance in all experimental conditions. Thus, the results reported here are based on eight participants. We based our sample size on previous studies of visual attention and texture segmentation that used a similar experimental design and analytical approach (Yeshurun & Carrasco, 2000, experiment 3, exogenous cueing; Yeshurun, Montagna et al., 2008, experiments 1 and 2, endogenous and exogenous cueing, respectively). We estimated the power in each experiment for a range of sample sizes (two to 10 participants) by drawing random samples from two-dimensional normal distributions. Each dimension's mean and standard deviation were determined from the group-averaged mean and standard error in each cueing condition (neutral, valid) from each experiment. Separate distributions were generated for each eccentricity. For each sample size, we drew a corresponding number of samples at each eccentricity and performed a two-way (cue × eccentricity) repeated-measures ANOVA. This process was repeated 10,000 times, and separate p-value distributions were constructed for the main effects and their interaction. Power was computed as the proportion of significant (p < 0.05) effects in each p-value distribution, and power greater than 0.8 was considered sufficient (Cohen, 1988). Assuming that our study would yield similar cueing effects, we found that a sample size of eight yielded sufficient power for the main effect of cue (endogenous cueing) and the interaction effect with eccentricity (exogenous cueing). Visual stimuli were generated using MGL (http://justingardner.net/mgl), a set of OpenGL libraries running in MATLAB (MathWorks, Natick, MA), and displayed on a 21-in. CRT monitor (1,024 × 768 resolution, 60 Hz). The display was calibrated using a Konica Minolta LS-100 (Ramsey, NJ) to produce linearized look-up tables. Participants sat in a dark and quiet room with their head stabilized by a chin rest placed 57 cm from the monitor. The position of the left eye was monitored at 1000 Hz with an Eyelink 1000 eye tracker (SR research, Ottawa, Ontario, Canada). Second-order texture stimuli (30.5° × 10°) were generated by modulating the luminance contrast of a noise carrier pattern (Figure 1A). 
The carrier was generated by filtering zero mean random noise (with values ranging from −1 to 1) with an isotropic band-pass filter that had a center spatial frequency of 2 c/deg and a bandwidth of one octave. Pixel values were constrained within ±3 standard deviations of the mean and were normalized to span the maximum range of the CRT display (i.e., zero to one). To create the target, the carrier was multiplied with a Gabor function, G(x,y),

\begin{equation}
G\left( x,y \right) = 1 + \alpha \times \cos \left( 2\pi \omega y + \rho \right) \times \exp \left( \frac{ -\left( x - \rho \right)^2 - y^2 }{ 2\sigma^2 } \right){\rm ,}
\end{equation}

whose amplitude, α, was constrained between zero and one to ensure that the function was always positive and, thus, could be rendered on the display. The spatial frequency of the grating, ω, was set to 0.25 c/deg, and its orientation was always vertical. The grating was windowed by a two-dimensional Gaussian whose standard deviation, σ, was 0.8° vertically and horizontally. The location parameter, ρ, determined the phase of the grating and the center of the Gaussian distribution along the horizontal meridian such that the center of the Gaussian function coincided with peak luminance in the grating; this ensured that the appearance of the Gabor modulation was constant at all eccentricities. By constructing the texture stimuli in this way, we ensured that the luminance (first-order content) of the Gabor modulation and the carrier were equivalent while their contrast (second-order content) could differ.

Figure 1. Schema of stimuli and trial sequence for the 2IFC texture segmentation tasks. (A) Schematic of second-order texture construction. The luminance contrast of isotropic carrier noise was modulated by a Gabor function, yielding a second-order texture target. (B) Endogenous attention task trial sequence. Texture stimuli (stim) differed in each interval; one interval contained noise while the other contained the target. Texture stimuli were followed by an eccentricity cue (EC). (C) Exogenous attention task trial sequence. Duplicate images from panel B were omitted.

At the beginning of each trial, two unique textures were generated: One was unmodulated (α = 0, carrier) whereas the other contained a Gabor modulation (target) that was centered at the fovea (0°) or one of six possible eccentricities along the horizontal meridian (1.2°, 2.4°, 3.6°, 4.8°, 6°, 7.2°) to the left or right of fixation (13 total possible locations). All stimuli were displayed on a gray background (31.2 cd/m2).
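For readers who want to visualize such stimuli, a minimal NumPy sketch of the construction above follows. The authors generated stimuli in MGL/MATLAB; this Python version uses hypothetical helper names and an assumed pixels-per-degree value, mirrors the stated parameters (2 c/deg carrier, one-octave bandwidth, 0.25 c/deg modulator, σ = 0.8°), and omits display-specific normalization.

```python
import numpy as np

def bandpass_noise(n, center_cpd=2.0, octave_bw=1.0, px_per_deg=32):
    """Isotropic band-pass filtered zero-mean noise (the carrier)."""
    noise = np.random.uniform(-1, 1, (n, n))
    fx = np.fft.fftfreq(n, d=1 / px_per_deg)          # frequencies in c/deg
    f = np.hypot(*np.meshgrid(fx, fx))                # radial frequency
    lo = center_cpd / 2 ** (octave_bw / 2)
    hi = center_cpd * 2 ** (octave_bw / 2)
    keep = (f >= lo) & (f <= hi)
    carrier = np.real(np.fft.ifft2(np.fft.fft2(noise) * keep))
    carrier = np.clip(carrier, -3 * carrier.std(), 3 * carrier.std())
    return (carrier - carrier.min()) / (carrier.max() - carrier.min())

def second_order_target(n, alpha=0.5, sf=0.25, sigma_deg=0.8,
                        rho_deg=0.0, px_per_deg=32):
    """Contrast-modulate the carrier with the Gabor G(x, y) from the text."""
    x = (np.arange(n) - n / 2) / px_per_deg           # coordinates in degrees
    X, Y = np.meshgrid(x, x)
    gauss = np.exp((-(X - rho_deg) ** 2 - Y ** 2) / (2 * sigma_deg ** 2))
    grating = np.cos(2 * np.pi * sf * Y + rho_deg)    # as in the printed equation
    G = 1 + alpha * grating * gauss
    return bandpass_noise(n, px_per_deg=px_per_deg) * G
```

With α = 0 the function returns an unmodulated carrier, so the same sketch produces both the target and nontarget textures described above.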
Following the fixation period, a precue was presented for 150 ms. On neutral trials, the cue was composed of two long green bars (30.5° × 0.1°, 68.8 cd/m²) that were located 5° above and below the horizontal meridian, respectively. This cue informed participants of the temporal onset of the texture but provided no prior information about the location of the target. On valid trials, endogenous attention was manipulated by a central symbolic cue consisting of a black digit (0.5° × 0.5°) and a green horizontal bar (0.7° × 0.1°) presented on the horizontal meridian.1 This cue provided information about the texture's temporal onset and its location. In the interval containing the target, the digit indicated the target's eccentricity: zero represented the fovea, and one through three represented progressively more eccentric locations (1.2°, 3.6°, and 6° or 2.4°, 4.8°, and 7.2°; see Procedure below). The bar indicated the visual hemifield (left or right) where the target would be displayed; no bar accompanied the cue for the fovea. For nontarget intervals, the cue indicated another possible target location. The precue was followed by a 350-ms interstimulus interval (ISI). A texture stimulus was then presented for 100 ms. Following a 200-ms ISI, an eccentricity cue composed of two white vertical lines (0.03° × 0.7°, 121.2 cd/m², presented 1.6° above and below the horizontal meridian, respectively) was displayed for 200 ms. In the target interval, the eccentricity cue was centered on the target location whereas in the nontarget interval, the eccentricity cue was centered on another possible target location. This cue eliminated location uncertainty in both cueing conditions. During valid trials, the location of the precue and eccentricity cue was identical in each interval. Following the second interval, the fixation cross turned green, and participants used their right hand to press "1" or "2" on a numeric keyboard to report whether the first or second interval contained the target. Participants were instructed to respond as accurately as possible, without time stress, and auditory feedback was provided for correct and incorrect responses.
Exogenous attention task
To allow a direct comparison between exogenous and endogenous attention manipulations, the task design was identical to that described above with the following exceptions regarding the spatial and temporal characteristics of the cue. Such changes were necessary to appropriately manipulate exogenous attention (Figure 1C). Following the fixation period in each temporal interval, the precue was presented for 40 ms followed by a 50-ms ISI. On valid trials, the precue was a short horizontal green bar (0.7° × 0.1°) located 1.6° above the horizontal meridian and centered on a possible target location. The precue was centered on the target location when the interval contained a target and appeared at another possible target location during the nontarget interval. This procedure and cue characteristics were similar to those used in previous studies (e.g., Talgar & Carrasco, 2002; Yeshurun & Carrasco, 1998; Yeshurun & Carrasco, 2000). Within each block, cueing condition (neutral or valid), target interval (first or second), target eccentricity, and target hemifield were randomly interleaved. Each participant performed 24 blocks of 56 trials each (total: 1,344 trials) across four experimental sessions that were completed on separate days.
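For reference, the per-interval cue timings of the two tasks described above can be collected in a small sketch (Python; dictionary and key names are ours, values are taken from the text):

```python
# Per-interval event durations (ms) for the two 2IFC tasks.
ENDOGENOUS = {"fixation": 500, "precue": 150, "cue_stim_isi": 350,
              "stimulus": 100, "stim_ec_isi": 200, "eccentricity_cue": 200}
EXOGENOUS  = {"fixation": 500, "precue": 40,  "cue_stim_isi": 50,
              "stimulus": 100, "stim_ec_isi": 200, "eccentricity_cue": 200}
```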
Task order was counterbalanced across participants; half completed the exogenous task first and then the endogenous task, and the other half followed the opposite order. In each session and as in previous studies (Barbot & Carrasco, 2017; Yeshurun, Montagna et al., 2008), target eccentricity was constrained to one of two sets of eccentricities: 0°, 1.2°, 3.6°, and 6° or 0°, 2.4°, 4.8°, and 7.2°. Because the central location (0°) was common to both sets, the eccentric locations were tested twice as often as the central location within a session. At the beginning of each block, participants were shown an unmodulated carrier with the tested locations indicated by their respective symbolic digits (zero through three). This informed participants of the possible target locations in the upcoming block and allowed them to readily associate each symbolic cue with a specific target location during the endogenous attention task. For each task, the Gabor's modulation amplitude was adjusted using performance on neutral trials via best PEST, an adaptive staircase procedure as implemented in the Palamedes toolbox (Prins & Kingdom, 2009), to maintain an average performance of 75% across the three central eccentricities (0°, 1.2°, and 2.4°). Prior to the first experimental session, participants performed an average of 14 practice blocks, during which the staircase was initialized with a uniform prior between modulation amplitudes 0.1 and 1, in steps of 0.02 (46 total levels). On each trial, the posterior probability for a participant's modulation threshold was updated based on task performance. When participants reached equivalent thresholds (defined as the final threshold estimate on each block) on at least three blocks, their final posterior distribution was saved and served as the prior probability distribution used to initialize the staircase in the main experiment. During the main experiment, the modulation amplitude was updated on each trial to account for any further learning effects and/or fatigue. Although the modulation amplitude was allowed to vary, the narrow prior distribution prevented large changes across trials (average SD = 0.03, approximately one step). A repeated-measures ANOVA revealed that modulation amplitude did not vary systematically with eccentricity, cue, or their interaction (all Fs ≤ 0.2). Performance, d′ = z(hit rate) − z(false-alarm rate), was computed for each observer across experimental sessions and separately for each eccentricity and cue (neutral, peripheral, and central). A hit was (arbitrarily) defined as a first-interval response to a target occurring in the first interval and a false alarm as a first-interval response to a second-interval target. Performance was averaged across hemifields, and repeated-measures ANOVAs were used to assess the effects of cue condition and eccentricity. In all cases in which Mauchly's test of sphericity indicated a violation of the sphericity assumption, Greenhouse–Geisser corrected values were used. ANOVA effect sizes were reported in terms of generalized eta-squared (\(\eta _G^2\); Bakeman, 2005; Olejnik & Algina, 2003). In our formulation of d′, we opted to ignore the \(\sqrt 2\) relation between performance in 2IFC and yes–no tasks because this relation is predicated on the assumption that performance is equal in each interval of a 2IFC task (e.g., Egan, 1975; Green & Swets, 1973; Macmillan & Creelman, 2005).
This assumption, however, has been shown to be invalid for many data sets from different labs (Yeshurun, Carrasco, & Maloney, 2008). To verify our d′ results, we also evaluated performance as proportion correct; our results were not impacted by the performance measure used. Results using d′ are reported below. Although participants were instructed to be as accurate as possible without any time constraint, reaction times (RTs) were analyzed to rule out any speed–accuracy trade-offs. To assess RTs when the target was correctly detected, we collapsed across hit and correct rejection trials (i.e., a second-interval response to a second-interval target), computed geometric means, and performed a two-way, repeated-measures ANOVA with cue condition and eccentricity as factors. Group-averaged performance was fitted with the best-fitting polynomial as determined by a leave-one-out cross-validation approach. For each cueing condition, we removed one data point (i.e., one eccentricity), fit polynomials (orders one through four) to the remaining six data points, and computed the residual sum of squares for the excluded value. This process was repeated seven times with a unique eccentricity excluded each time. We used the mean squared error across iterations as an index of the goodness of fit and observed that performance in both tasks was best fit by third-order polynomials. Individual observer data were then fit with third-order polynomials, which provided an estimate of the peak eccentricity. Within each task, paired t tests were used to compare the peak eccentricity between neutral and valid conditions; effect sizes were reported in terms of Cohen's d (Cohen, 1988; Lakens, 2013). Participants were instructed to maintain fixation until they made their response. If a blink or an eye movement ≥1.5° occurred before the response, the trial was immediately aborted and a tone was played, reminding participants to maintain fixation. These trials were rerun at the end of the block. To verify that participants maintained fixation, off-line eye-position analysis was performed. Raw eye data were converted to eye position in degrees of visual angle. For each temporal interval, the mean eye position during the fixation period served as a baseline and was subtracted from the stimulus presentation interval to compensate for any slow drift within the trial. The average eye position during stimulus presentation was within 1° on 99.9% of trials.
Endogenous attention uniformly improves performance
Performance on the endogenous attention task (Figure 1B) is depicted in Figure 2 for discriminability (top panel) and RT (bottom panel). The effects of cue (neutral vs. central) and eccentricity (seven eccentricities) were assessed with a two-way, repeated-measures ANOVA.
Figure 2. Average performance (top) and reaction time (bottom) as a function of target eccentricity and cue type (neutral vs. central) when endogenous attention was manipulated. In the neutral condition (black), performance peaked at the midperiphery (∼2° of eccentricity) and declined toward more central and peripheral locations, replicating the CPD. Precueing the target location with a central cue (red) improved performance at all eccentricities. Error bars represent ± within-subject SEM (Cousineau, 2005). Performance data were fit with third-order polynomials (solid lines), and shaded regions represent 95% confidence intervals for peak eccentricity estimates as determined by a bootstrapping procedure.
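The third-order fits used throughout were chosen with the leave-one-out procedure described in the analysis section above; a minimal sketch of that order selection (Python; function and variable names are ours, not the authors' code):

```python
import numpy as np

def loo_best_order(ecc, perf, orders=(1, 2, 3, 4)):
    """Pick the polynomial order with the smallest mean squared
    leave-one-out error over the seven eccentricities."""
    ecc, perf = np.asarray(ecc), np.asarray(perf)
    mse = {}
    for k in orders:
        errs = []
        for i in range(len(ecc)):                        # hold out one eccentricity
            keep = np.arange(len(ecc)) != i
            coef = np.polyfit(ecc[keep], perf[keep], k)  # fit the remaining 6 points
            errs.append((np.polyval(coef, ecc[i]) - perf[i]) ** 2)
        mse[k] = np.mean(errs)
    return min(mse, key=mse.get)
```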
There was a significant main effect of eccentricity, F(2.13, 14.88) = 4.41, p < 0.05; \(\eta _G^2\) = 0.24, with performance peaking in the midperiphery and declining at both central and peripheral locations (Figure 2). This pattern is consistent with texture segmentation studies using oriented line stimuli (e.g., Gurnsey et al., 1996; Kehrer, 1989; Morikawa, 2000; Yeshurun & Carrasco, 1998) and extends the occurrence of the CPD to second-order textures. Importantly, there was a significant main effect of attention, F(1, 7) = 18.04, p < 0.005; \(\eta _G^2\) = 0.043, which did not interact with eccentricity (F < 1). That is, endogenous attention uniformly increased performance at all eccentricities for second-order textures. This pattern is consistent with reported effects of endogenous attention on texture segmentation with first-order textures (Barbot & Carrasco, 2017; Yeshurun, Montagna et al., 2008). Group-averaged performance was well described by third-order polynomials (neutral: R² = 0.90; central: R² = 0.86). To assess whether attention altered the eccentricity of the performance peak, third-order polynomials were fit to individual participant data, and estimates of peak eccentricity were obtained. The location of the peak did not differ between cue conditions (neutral: 2.59° ± 0.80°, central: 2.69° ± 0.76°), t(7) = 0.09, p = 0.9, as depicted by the overlapping 95% confidence intervals (Figure 2, shaded regions). Analysis of RT confirmed that there were no speed–accuracy trade-offs (Figure 2, bottom). A significant main effect of cue, F(1, 7) = 26.40, p < 0.005; \(\eta _G^2\) = 0.015, revealed faster RTs with the central cue (mean: 0.15 s) than the neutral cue (mean: 0.17 s). Thus, participants were more sensitive and responded faster with the central cue, ruling out speed–accuracy trade-offs. The main effect of eccentricity and the interaction were nonsignificant (all Fs < 2.5, p > 0.06).
Exogenous attention yields eccentricity-dependent effects on performance
Performance on the exogenous attention task (Figure 1C) is depicted in Figure 3 for discriminability (top panel) and RT (bottom panel). The effects of cue (neutral vs. peripheral) and eccentricity (seven eccentricities) were assessed with a two-way, repeated-measures ANOVA.
Figure 3. Average performance (top) and reaction time (bottom) as a function of target eccentricity and cue type (neutral vs. peripheral) when exogenous attention was manipulated. Precueing the target location with a peripheral cue (red) improved performance in the periphery but impaired performance at central locations. Error bars represent ± within-subject SEM. Performance data were fit with third-order polynomials (solid lines), and shaded regions represent the 95% confidence interval for peak eccentricity estimates as determined by a bootstrapping procedure.
There was a significant main effect of cue, F(1, 7) = 48.38, p < 0.001; \(\eta _G^2\) = 0.10, and eccentricity, F(6, 42) = 3.13, p < 0.05; \(\eta _G^2\) = 0.19. Critically, there was a significant cue × eccentricity interaction, F(2.82, 19.73) = 7.73, p < 0.005; \(\eta _G^2\) = 0.084; peripheral cues improved performance at peripheral locations but impaired it at central locations (Figure 3, top).
This central attentional impairment is consistent with previously reported effects of exogenous attention on texture segmentation with first-order (Carrasco et al., 2006; Talgar & Carrasco, 2002; Yeshurun & Carrasco, 1998) and second-order textures (Yeshurun & Carrasco, 2000). Group-averaged performance was well described by third-order polynomials (neutral: R² = 0.90; peripheral: R² = 0.76), which were used to estimate peak eccentricities for individual participants. Peripheral cues shifted peak eccentricity toward the periphery (neutral: 1.13° ± 0.33°, peripheral: 3.80° ± 0.98°), t(7) = 2.50, p < 0.05, Cohen's d = 0.89 (Figure 3, shaded regions), consistent with an increase in spatial resolution relative to the fixed scale of the texture target (Carrasco et al., 2006; Talgar & Carrasco, 2002; Yeshurun & Carrasco, 1998). Moreover, this result indicates that the peak eccentricity shift by exogenous attention extends to second-order textures. Analysis of RT ruled out speed–accuracy trade-offs (Figure 3, bottom). A significant main effect of cue, F(1, 7) = 12.17, p < 0.05; \(\eta _G^2\) = 0.014, revealed that RTs were faster with the peripheral cue (mean: 0.15 s) than the neutral cue (mean: 0.18 s), ruling out speed–accuracy trade-offs. The main effect of eccentricity and the interaction were nonsignificant (all Fs < 1.3, p > 0.3).
Spatial covert attention yields distinct effects on texture segmentation
Because exogenous and endogenous attention were both within-subject manipulations, their effects on texture segmentation could be directly compared. Cueing effects (valid minus neutral) were computed for each observer, and the group average was fit with straight lines to aid visualization (Figure 4).
Figure 4. Cueing effects (valid minus neutral) as a function of target eccentricity and cue type (central vs. peripheral). Precueing with peripheral cues impaired performance at central locations and improved performance in the periphery. Central cues yielded a performance improvement that was independent of eccentricity. Straight-line fits (solid lines) were used to aid in data visualization.
Whereas endogenous attention produced an approximately uniform benefit across eccentricity, the effect of exogenous attention scaled with eccentricity: Costs became larger toward the fovea, and benefits increased toward the periphery. In accordance with this crossover in cueing effects, a two-way, repeated-measures ANOVA (2 attention types × 7 eccentricities) revealed a significant interaction, F(2.79, 19.54) = 3.53, p < 0.05; \(\eta _G^2\) = 0.18. In addition, we observed a main effect of eccentricity, F(6, 42) = 2.41, p < 0.05; \(\eta _G^2\) = 0.13, and attention, F(1, 7) = 8.26, p < 0.05; \(\eta _G^2\) = 0.03, due to a larger benefit of exogenous attention at peripheral locations. These results highlight the distinct mechanisms of both types of attention: an inflexible exogenous mechanism whose resolution enhancement is most beneficial in the periphery (Carrasco et al., 2002; Carrasco & Yeshurun, 1998; Yeshurun & Carrasco, 1999) and a flexible endogenous mechanism that modifies resolution according to task demands.
Using a second-order texture-segmentation task, this study shows that endogenous attention improves performance at all attended locations, suggesting that it is an adaptive mechanism that can flexibly modulate second-order filters.
Such an adaptive mechanism has been suggested (Yeshurun, Montagna et al., 2008) and demonstrated (Barbot & Carrasco, 2017) before. However, this is the first study to isolate second-order processing while comparing the effects of exogenous and endogenous attention on texture segmentation. Our results reveal that second-order filters mediate the differential effects of exogenous and endogenous attention. Exogenous attention impairs performance at central locations and improves it at peripheral locations, consistent with an automatic resolution enhancement. In contrast, endogenous attention improves performance across eccentricity, consistent with a flexible modulation of resolution. Such flexible control of spatial resolution by endogenous attention could be explained in terms of interfrequency inhibition; small RFs tuned to high SFs inhibit large RFs tuned to low SFs at the same spatial location (Foley & McCourt, 1985; Gurnsey et al., 1996; McCourt, 1982; Yeshurun & Carrasco, 2000). According to this hypothesis, the CPD is the consequence of greater high-SF sensitivity at the fovea inhibiting sensitivity to low SFs, thereby narrowing spatial filters and yielding a spatial resolution that is too high for the texture target. Thus, to alleviate the CPD and improve performance at central locations, endogenous attention reduces sensitivity to high SFs, which effectively reduces the resolution at central locations. Conversely, to improve performance in the periphery, endogenous attention increases high-SF sensitivity and effectively narrows spatial filters to increase resolution. Interfrequency inhibition is consistent with the finding that endogenous attention operates primarily on high-SF filters (Barbot & Carrasco, 2017). In addition, we provide further evidence that exogenous attention inflexibly increases spatial resolution. The improved performance in the periphery but impaired performance in central locations is consistent with studies that demonstrate this effect with oriented line stimuli (Carrasco et al., 2006; Talgar & Carrasco, 2002; Yeshurun & Carrasco, 1998; Yeshurun & Carrasco, 2008) and second-order textures (Yeshurun & Carrasco, 2000). Thus, our results support the notion that exogenous attention automatically modulates second-order processing by inflexibly narrowing spatial filters, which increases resolution. As described by Yeshurun and Carrasco (2000), this narrowing could be explained in terms of interfrequency inhibition. That is, exogenous attention automatically increases high-SF sensitivity and consequently inhibits low SFs, leading to an enhanced spatial resolution at the attended location. Consistent with this hypothesis, selective adaptation to high SFs diminishes the detrimental effects of exogenous attention on texture segmentation (Carrasco et al., 2006). Because both endogenous and exogenous attention operate on second-order filters tuned to high SFs, their distinct effects on spatial resolution highlight their different modes of operation. Whereas endogenous attention enhances or reduces resolution depending on the resolution constraints of the visual system, exogenous attention inflexibly enhances resolution. 
Our findings provide converging evidence for an automatic exogenous mechanism whose effects are invariant to the predictability of the cue (Giordano et al., 2009) and have been attributed to modulations of high-SF neurons that enhance spatial resolution and reduce temporal resolution (Carrasco et al., 2006; Megna, Rocchi, & Baldassi, 2012; Yeshurun & Levy, 2003; Yeshurun & Sabo, 2012). Our findings also provide further evidence for a more adaptive endogenous mechanism that improves both spatial and temporal discriminability (Barbot & Carrasco, 2017; Barbot, Landy, & Carrasco, 2012; Giordano et al., 2009; Hein, Rolke, & Ulrich, 2006; Yeshurun, Montagna et al., 2008). It is unlikely that the effects of attention were mediated by our stimuli's first-order content for the following reasons: First, the discriminability of a second-order stimulus is invariant to the contrast of its first-order carrier (Barbot, Landy, & Carrasco, 2011). Thus, attention-related changes in first-order contrast sensitivity would have a negligible impact on the detection of a second-order target. Second, attentional effects on second-order stimuli are immune to changes in first-order content (Yeshurun & Carrasco, 2000). Taken together, these facts support the interpretation that the attentional effects we report here were mediated by the second-order content of our stimuli. Our exogenous attention effects paralleled those of Yeshurun and Carrasco (2000) on second-order textures, but the overall pattern of performance differed. Specifically, performance in their study peaked near the fovea whereas our results exhibited a pronounced CPD. Additionally, we observed that exogenous attention shifted the peak eccentricity 2.7° toward the periphery whereas the peak shift in that study was negligible. We attribute these differences in the overall pattern of performance to the differences in performance titration. Whereas we controlled performance by adjusting the modulation amplitude (second-order contrast) of the texture, they adjusted its duration, which does not reliably elicit the CPD (Morikawa, 2000). Moreover, by presenting the second-order texture at contrast threshold, we presumably activated second-order filters narrowly tuned to the target, which provided a better probe for their role in texture segmentation. Our findings of endogenous attention's effect on spatial resolution are consistent with neurophysiological and human fMRI evidence (review by Anton-Erxleben & Carrasco, 2013). Yet we note that these studies have not investigated second-order textures directly. In macaques, endogenous attention yields changes in RF size and shifts their position toward the attended location. In particular, RFs shrink around the attended location when attention is directed inside the RF and expand when directed outside (Anton-Erxleben, Stephan, & Treue, 2009; Womelsdorf, Anton-Erxleben, Pieper, & Treue, 2006). Indeed, recent neurophysiological evidence has shown that when attention is allocated near the RFs, they are elongated toward the attentional locus (Obara, O'Hashi, & Tanifuji, 2017). Similarly, converging evidence from human fMRI shows that attention narrows the spatial overlap in blood-oxygen-level dependent responses between adjacent locations (Fischer & Whitney, 2009) whereas withdrawing attention from the periphery enlarges peripheral population receptive fields (pRFs; de Haas, Schwarzkopf, Anderson, & Rees, 2014). 
In addition, attention attracts pRFs toward the attended location across the visual field and throughout cortical areas (Klein, Harvey, & Dumoulin, 2014). Likewise, encoding models that extract the spatial selectivity of individual voxels (vRFs) have shown that vRFs shift toward the attended location, improving spatial discriminability (Vo, Sprague, & Serences, 2017). In sum, the attention-induced shifts and narrowing of spatial selectivity are possible mechanisms by which endogenous attention increases the sensitivity of small RFs at the attended location, leading to enhanced resolution. Conversely, by enlarging RFs, attention could reduce the spatial resolution at the attended location. The neural mechanisms of exogenous attention on spatial resolution, however, have not been well characterized. But given our observed differences between both types of spatial attention, further investigations will likely reveal distinct RF modulations. Our findings provide novel evidence that endogenous attention alters the visual system's spatial resolution by modulating second-order processing. In particular, endogenous attention flexibly enhances or reduces resolution based on the spatial scale of relevant visual information and the constraints of the visual system. In addition, consistent with a previous study (Yeshurun & Carrasco, 2000), we provide converging evidence that exogenous attention inflexibly enhances resolution via second-order filters. Thus, our results highlight the distinct mechanisms of both types of covert spatial attention. M. Carrasco was supported by NIH Grants RO1-EY016200 and RO1-EY019693. We want to thank Antoine Barbot as well as the members of the Carrasco lab for constructive comments on the manuscript. The authors declare no conflicts of interest. Commercial relationships: none. Corresponding author: Marisa Carrasco. Email: [email protected]. Address: Department of Psychology, New York University, New York, NY, USA. Anton-Erxleben, K., & Carrasco, M. (2013). Attentional enhancement of spatial resolution: Linking behavioural and neurophysiological evidence. Nature Reviews Neuroscience, 14 (3), 188–200, https://doi.org/10.1038/nrn3443. Anton-Erxleben, K., Stephan, V. M., & Treue, S. (2009). Attention reshapes center-surround receptive field structure in macaque cortical area MT. Cerebral Cortex, 19 (10), 2466–2478, https://doi.org/10.1093/cercor/bhp002. Bakeman, R. (2005). Recommended effect size statistics for repeated measures designs. Behavior Research Methods, 37 (3), 379–384. Barbot, A., & Carrasco, M. (2017). Attention modifies spatial resolution according to task demands. Psychological Science, 28 (3), 285–296. Barbot, A., Landy, M. S., & Carrasco, M. (2011). Exogenous attention enhances 2nd-order contrast sensitivity. Vision Research, 51 (9), 1086–1098, https://doi.org/10.1016/j.visres.2011.02.022. Barbot, A., Landy, M. S., & Carrasco, M. (2012). Differential effects of exogenous and endogenous attention on second-order texture contrast sensitivity. Journal of Vision, 12 (8): 6, 1–15, https://doi.org/10.1167/12.8.6. [PubMed] [Article] Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51 (13), 1484–1525, https://doi.org/10.1016/j.visres.2011.04.012. Carrasco, M., & Barbot, A. (2014). How attention affects spatial resolution. Cold Spring Harbor Symposia on Quantitative Biology, 79, 149–160, https://doi.org/10.1101/sqb.2014.79.024687. Carrasco, M., Loula, F., & Ho, Y.-X. (2006). 
How attention enhances spatial resolution: Evidence from selective adaptation to spatial frequency. Attention, Perception, & Psychophysics, 68 (6), 1004–1012. Carrasco, M., Williams, P. E., & Yeshurun, Y. (2002). Covert attention increases spatial resolution with or without masks: Support for signal enhancement. Journal of Vision, 2 (6): 4, 467–479, https://doi.org/10.1167/2.6.4. [PubMed] [Article] Carrasco, M., & Yeshurun, Y. (1998). The contribution of covert attention to the set-size and eccentricity effects in visual search. Journal of Experimental Psychology: Human Perception and Performance, 24 (2), 673–692. Carrasco, M., & Yeshurun, Y. (2009). Covert attention effects on spatial resolution. In Srinivasan N. (Ed.), Progress in brain research, vol. 176 ( pp. 65–86). Amsterdam, the Netherlands: Elsevier, https://doi.org/10.1016/S0079-6123(09)17605-7. Chubb, C., & Sperling, G. (1988). Drift-balanced random stimuli: A general basis for studying non-Fourier motion perception. Journal of the Optical Society of America A, 5 (11), 1986–2007. Chubb, C., & Sperling, G. (1989). Second-order motion perception: Space/time separable mechanisms. In Proceedings of the Workshop on Visual Motion, 1989 ( pp. 126–138). IEEE. Chubb, C., & Sperling, G. (1991). Texture quilts: Basic tools for studying motion-from-texture. Journal of Mathematical Psychology, 35 (4), 411–442. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum Associates. Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method. Tutorials in Quantitative Methods for Psychology, 1 (1), 42–45. de Haas, B., Schwarzkopf, D. S., Anderson, E. J., & Rees, G. (2014). Perceptual load affects spatial tuning of neuronal populations in human early visual cortex. Current Biology, 24 (2), R66–R67. DeValois, R. L., & DeValois, K. K. (1988). Spatial vision. New York: Oxford University Press. Egan, J. P. (1975). Signal detection theory and ROC analysis. New York: Academic Press. Ellemberg, D., Allen, H. A., & Hess, R. F. (2006). Second-order spatial frequency and orientation channels in human vision. Vision Research, 46 (17), 2798–2803, https://doi.org/10.1016/j.visres.2006.01.028. Fischer, J., & Whitney, D. (2009). Attention narrows position tuning of population responses in V1. Current Biology, 19 (16), 1356–1361, https://doi.org/10.1016/j.cub.2009.06.059. Foley, J. M., & McCourt, M. E. (1985). Visual grating induction. Journal of the Optical Society of America A, 2 (7), 1220–1230. Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14 (9), 1195–1201, https://doi.org/10.1038/nn.2889. Gattass, R., Gross, C. G., & Sandell, J. H. (1981). Visual topography of V2 in the macaque. Journal of Comparative Neurology, 201 (4), 519–539. Gattass, R., Sousa, A. P., & Gross, C. G. (1988). Visuotopic organization and extent of V3 and V4 of the macaque. Journal of Neuroscience, 8 (6), 1831–1845. Giordano, A. M., McElree, B., & Carrasco, M. (2009). On the automaticity and flexibility of covert attention: A speed-accuracy trade-off analysis. Journal of Vision, 9 (3): 30, 1–10, https://doi.org/10.1167/9.3.30. [PubMed] [Article] Graham, N., Sutter, A., & Venkatesan, C. (1993). Spatial-frequency- and orientation-selectivity of simple and complex channels in region segregation. Vision Research, 33 (14), 1893–1911. Green, D. M., & Swets, J. A. (1973). Signal detection theory and psychophysics. 
Huntington, NY: Krieger Publishing. Grubb, M. A., Behrmann, M., Egan, R., Minshew, N. J., Heeger, D. J., & Carrasco, M. (2013). Exogenous spatial attention: Evidence for intact functioning in adults with autism spectrum disorder. Journal of Vision, 13 (14): 9, 1–13, https://doi.org/10.1167/13.14.9. [PubMed] [Article] Gurnsey, R., Pearson, P., & Day, D. (1996). Texture segmentation along the horizontal meridian: Nonmonotonic changes in performance with eccentricity. Journal of Experimental Psychology: Human Perception and Performance, 22 (3), 738–757. Hallum, L. E., Landy, M. S., & Heeger, D. J. (2011). Human primary visual cortex (V1) is selective for second-order spatial frequency. Journal of Neurophysiology, 105 (5), 2121–2131, https://doi.org/10.1152/jn.01007.2010. Hein, E., Rolke, B., & Ulrich, R. (2006). Visual attention and temporal discrimination: Differential effects of automatic and voluntary cueing. Visual Cognition, 13 (1), 29–50, https://doi.org/10.1080/13506280500143524. Hess, R. F., Baker, D. H., May, K. A., & Wang, J. (2008). On the decline of 1st and 2nd order sensitivity with eccentricity. Journal of Vision, 8 (1): 19, 1–12, https://doi.org/10.1167/8.1.19. [PubMed] [Article] Joffe, K. M., & Scialfa, C. T. (1995). Texture segmentation as a function of eccentricity, spatial frequency and target size. Spatial Vision, 9 (3), 325–342. Jones, J. P., & Palmer, L. A. (1987). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58 (6), 1233–1258. Kehrer, L. (1989). Central performance drop on perceptual segregation tasks. Spatial Vision, 4 (1), 45–62. Kehrer, L. (1997). The central performance drop in texture segmentation: A simulation based on a spatial filter model. Biological Cybernetics, 77 (4), 297–305. Kehrer, L., & Meinecke, C. (2003). A space-variant filter model of texture segregation: Parameter adjustment guided by psychophysical data. Biological Cybernetics, 88 (3), 183–200, https://doi.org/10.1007/s00422-002-0369-3. Klein, B. P., Harvey, B. M., & Dumoulin, S. O. (2014). Attraction of position preference by spatial attention throughout human visual cortex. Neuron, 84 (1), 227–237, https://doi.org/10.1016/j.neuron.2014.08.047. Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4: 863, https://doi.org/10.3389/fpsyg.2013.00863. Lamme, V. A. (1995). The neurophysiology of figure-ground segregation in primary visual cortex. Journal of Neuroscience, 15 (2), 1605–1615. Lamme, V. A. F., van Dijk, B. W., & Spekreijse, H. (1993). Organization of texture segregation processing in primate visual cortex. Visual Neuroscience, 10 (05), 781–790, https://doi.org/10.1017/S0952523800006039. Landy, M. S. (2013). Texture analysis and perception. In Werner J. S. & Chalupa L. M. (Eds.), The new visual neurosciences (pp. 639–652). Cambridge, MA: MIT Press. Landy, M. S., & Oruç, İ. (2002). Properties of second-order spatial frequency channels. Vision Research, 42 (19), 2311–2329. Larsson, J., Landy, M. S., & Heeger, D. J. (2006). Orientation-selective adaptation to first- and second-order patterns in human visual cortex. Journal of Neurophysiology, 95 (2), 862–881, https://doi.org/10.1152/jn.00668.2005. Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user's guide. Mahwah, NJ: Lawrence Erlbaum Associates. McCourt, M. E. (1982). A spatial frequency dependent grating-induction effect. 
Vision Research, 22 (1), 119–134. Megna, N., Rocchi, F., & Baldassi, S. (2012). Spatio-temporal templates of transient attention revealed by classification images. Vision Research, 54, 39–48, https://doi.org/10.1016/j.visres.2011.11.012. Montagna, B., Pestilli, F., & Carrasco, M. (2009). Attention trades off spatial acuity. Vision Research, 49 (7), 735–745, https://doi.org/10.1016/j.visres.2009.02.001. Montaser-Kouhsari, L., & Rajimehr, R. (2005). Subliminal attentional modulation in crowding condition. Vision Research, 45 (7), 839–844, https://doi.org/10.1016/j.visres.2004.10.020. Morikawa, K. (2000). Central performance drop in texture segmentation: The role of spatial and temporal factors. Vision Research, 40 (25), 3517–3526. Obara, K., O'Hashi, K., & Tanifuji, M. (2017). Mechanisms for shaping receptive field in monkey area TE. Journal of Neurophysiology, 118 (4), 2448–2457, https://doi.org/10.1152/jn.00348.2017. Olejnik, S., & Algina, J. (2003). Generalized eta and omega squared statistics: Measures of effect size for some common research designs. Psychological Methods, 8 (4), 434–447, https://doi.org/10.1037/1082-989X.8.4.434. Potechin, C., & Gurnsey, R. (2003). Backward masking is not required to elicit the central performance drop. Spatial Vision, 16 (5), 393–406. Prins, N., & Kingdom, F. A. A. (2009). Palamedes: Matlab routines for analyzing psychophysical data. Retrieved from http://www.palamedestoolbox.org. Purpura, K. P., Victor, J. D., & Katz, E. (1994). Striate cortex extracts higher-order spatial correlations from visual textures. Proceedings of the National Academy of Sciences, USA, 91 (18), 8482–8486. Sutter, A., Sperling, G., & Chubb, C. (1995). Measuring the spatial frequency selectivity of second-order texture mechanisms. Vision Research, 35 (7), 915–924. Talgar, C. P., & Carrasco, M. (2002). Vertical meridian asymmetry in spatial resolution: Visual and attentional factors. Psychonomic Bulletin & Review, 9 (4), 714–722. Victor, J. D., Conte, M. M., & Chubb, C. F. (2017). Textures as probes of visual processing. Annual Review of Vision Science, 3, 275–296. Vo, V. A., Sprague, T. C., & Serences, J. T. (2017). Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex. The Journal of Neuroscience, 37 (12), 3386–3401, https://doi.org/10.1523/JNEUROSCI.3484-16.2017. Womelsdorf, T., Anton-Erxleben, K., Pieper, F., & Treue, S. (2006). Dynamic shifts of visual receptive fields in cortical area MT by spatial attention. Nature Neuroscience, 9 (9), 1156–1160, https://doi.org/10.1038/nn1748. Yeshurun, Y., & Carrasco, M. (1998, November 5). Attention improves or impairs visual performance by enhancing spatial resolution. Nature, 396 (6706), 72–75. Yeshurun, Y., & Carrasco, M. (1999). Spatial attention improves performance in spatial resolution tasks. Vision Research, 39 (2), 293–306. Yeshurun, Y., & Carrasco, M. (2000). The locus of attentional effects in texture segmentation. Nature Neuroscience, 3 (6), 622–627. Yeshurun, Y., & Carrasco, M. (2008). The effects of transient attention on spatial resolution and the size of the attentional cue. Perception & Psychophysics, 70 (1), 104–113, https://doi.org/10.3758/PP.70.1.104. Yeshurun, Y., Carrasco, M., & Maloney, L. T. (2008). Bias and sensitivity in two-interval forced choice procedures: Tests of the difference model. Vision Research, 48 (17), 1837–1851, https://doi.org/10.1016/j.visres.2008.05.008. Yeshurun, Y., & Levy, L. (2003). Transient spatial attention degrades temporal resolution. 
Psychological Science, 14 (3), 225–231. Yeshurun, Y., Montagna, B., & Carrasco, M. (2008). On the flexibility of sustained attention and its effects on a texture segmentation task. Vision Research, 48 (1), 80–95, https://doi.org/10.1016/j.visres.2007.10.015. Yeshurun, Y., & Rashal, E. (2010). Precueing attention to the target location diminishes crowding and reduces the critical distance. Journal of Vision, 10 (10): 16, 1–12, https://doi.org/10.1167/10.10.16. [PubMed] [Article] Yeshurun, Y., & Sabo, G. (2012). Differential effects of transient attention on inferred parvocellular and magnocellular processing. Vision Research, 74, 21–29, https://doi.org/10.1016/j.visres.2012.06.006.
1 For one participant, the cue was placed 1.6° above the fixation cross due to an apparent masking effect when the target appeared in the center of the screen.
Copyright 2018 The Authors
Fast multiplication of highly structured matrix
I want to compute a fast matrix-vector product using a matrix $T$ which has a peculiar quasi-Hankel structure. For example,
\begin{equation} T_2= \left( \begin{array}{c|ccc|cccccc} a & b & c & d & e & f & g & h & i & j\\\hline b & e & f & h & 0 & 0 & 0 & 0 & 0 & 0\\ c & f & g & i & 0 & 0 & 0 & 0 & 0 & 0\\ d & h & i & j & 0 & 0 & 0 & 0 & 0 & 0\\\hline e & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ f & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ g & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ h & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ j & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right). \end{equation}
\begin{equation*} T_3=\left( \begin{array}{c|ccc|cccccc|cccccccccc} a & b & c & d & e & f & g & h & i & j & k & l & m & n & o & p & q & r & s & t\\\hline b & e & f & h & k & l & m & o & p & r\\ c & f & g & i & l & m & n & p & q & s\\ d & h & i & j & o & p & q & r & s & t\\\hline e & k & l & o\\ f & l & m & p\\ g & m & n & q \\ h & o & p & r\\ i & p & q & s\\ j & r & s & t\\\hline k & \\ l & \\ m & \\ n & \\ o & \\ p & \\ q & \\ r & \\ s & \\ t \end{array} \right) \end{equation*}
Larger matrices $(T_4,T_5,\ldots)$ have the same block structure; in fact, each successive $T_i$ is contained as a partial block of all $T_j$, $j>i$ (see the examples above). All of the unique elements are contained in the first column or row, but the matrix does not possess classical Hankel/Toeplitz/etc. structure typical of fast structured matrix-vector multiplication. This matrix arises from a convolution of a certain sort, so I am convinced that the matrix-vector product can be computed in $\mathcal{O}(N\log N)$ time, or something close, rather than $\mathcal{O}(N^2)$. I'd appreciate the input of others, or potentially helpful references.
Edit: The block structure is controlled by the parameter $P$ so that the matrix $T_P$ has a $(P+1)\times(P+1)$ upper left triangular block structure. The $p$th row or column block is of dimension $(p+1)(p+2)/2$.
Tags: matrix, acceleration
Comments:
"The product can be performed in almost linear time. Look at this reference." – The Doctor, Dec 15 '17
"I am aware of those types of algorithms, but the standard algorithms referred to in the reference do not apply here due to the particular structure of these matrices. I think maybe my use of 'quasi-Hankel' might be confusing. All I meant is that the blocks along the upper block antidiagonals contain the same matrix elements, not necessarily that the matrices themselves are identical." – sssssssssssss, Dec 15 '17
"Looking at $T_3$, doesn't it mean your matrix has $<N\times (P+1)$ nonzero elements, $\ll N^2$? So the direct straightforward matrix-vector product will take $N(P+1)$ operations. I'm not sure how $P+1$ compares to $\log N$, but it's not clear to me that you even need any complicated algorithm here." – Kirill, Dec 16 '17
"The complexity is certainly $P^6$ when worked out for $T_P$. The constant is small but the scaling is suboptimal." – sssssssssssss, Dec 29 '17
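To make the sparsity observation in the comments concrete, here is a toy illustration with the 10 × 10 example $T_2$ (Python with NumPy/SciPy; the placeholder values $a,\dots,j = 1,\dots,10$ are ours):

```python
import numpy as np
from scipy.sparse import csr_matrix

# T_2 with placeholder values: only the first row/column and one
# 3x3 inner block are nonzero, so nnz << N^2.
a, b, c, d, e, f, g, h, i, j = map(float, range(1, 11))
T2 = np.zeros((10, 10))
T2[0, :] = [a, b, c, d, e, f, g, h, i, j]   # first row
T2[:, 0] = T2[0, :]                          # first column (symmetric)
T2[1:4, 1:4] = [[e, f, h],
                [f, g, i],
                [h, i, j]]                   # inner quasi-Hankel block

x = np.random.rand(10)
y = csr_matrix(T2) @ x                       # multiply in O(nnz) operations
assert np.allclose(y, T2 @ x)
```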
Late answer, but the matrix can be thought of as the following sum of rank-2 matrices, which allows for smaller block sizes as the number of blocks increases.
$$ c = T_3\,x = \begin{pmatrix} a/2 & 1 \\ b & 0\\ c & 0\\ d & 0\\ e & 0\\ \vdots & \vdots\\ r & 0\\ s & 0\\ t & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ a/2 & b & c & \cdots & t \end{pmatrix} \begin{pmatrix} x_1\\ \vdots\\ x_{20} \end{pmatrix} + \begin{pmatrix} e/2 & 1 \\ f & 0\\ h & 0\\ k & 0\\ l & 0\\ m & 0\\ o & 0\\ p & 0\\ r & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & \cdots & 0\\ e/2 & f & h & k & l & m & o & p & r \end{pmatrix} \begin{pmatrix} x_2\\ \vdots\\ x_{10} \end{pmatrix} + \text{etc.} $$
Note that, by performing the right-hand multiplication first in each term, most of the calculations reduce to scalar-times-vector products and "get the first element" operations, which allows for a significant reduction. Each term thus costs one inner product of decreasing size, two lookups, one scalar-times-vector product, and two scalar additions to an element. I suspect this matrix could have been formulated as an iterative solution instead of the linear set.
– percusse
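A minimal sketch of this rank-2 accumulation, checked against the dense $T_2$ example (Python; zero-padded border vectors are kept for clarity, though in practice they would be truncated):

```python
import numpy as np

def bordered_matvec(borders, x):
    """Accumulate c = T x from the rank-2 border terms above.
    borders[k] is the border vector of the kth leading sub-block;
    term k acts on x[k:k+len(borders[k])] and costs one inner
    product plus one scalar-vector product."""
    c = np.zeros_like(x, dtype=float)
    for k, w in enumerate(borders):
        w = np.asarray(w, dtype=float).copy()
        w[0] *= 0.5                     # halve the shared corner element
        seg = x[k:k + len(w)]
        c[k:k + len(w)] += seg[0] * w   # column part of the rank-2 term
        c[k] += w @ seg                 # row part (inner product)
    return c

# Verify on the 10x10 T_2 with placeholder values a..j = 1..10.
v = np.arange(1.0, 11.0)
T2 = np.zeros((10, 10))
T2[0, :] = v; T2[:, 0] = v
T2[1:4, 1:4] = [[v[4], v[5], v[7]], [v[5], v[6], v[8]], [v[7], v[8], v[9]]]
borders = [v,
           [v[4], v[5], v[7], 0, 0, 0, 0, 0, 0],
           [v[6], v[8], 0, 0, 0, 0, 0, 0],
           [v[9], 0, 0, 0, 0, 0, 0]]
x = np.random.rand(10)
assert np.allclose(bordered_matvec(borders, x), T2 @ x)
```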
Optomechanically induced transparency of x-rays via optical control
Wen-Te Liao & Adriana Pálffy
Scientific Reports volume 7, Article number: 321 (2017)
The search for new control methods over light-matter interactions is one of the engines that advances fundamental physics and applied science alike. A specific class of light-matter interaction interfaces are setups coupling photons of distinct frequencies via matter. Such devices, nontrivial in design, could be endowed with multifunctional tasking. Here we envisage for the first time an optomechanical system that bridges optical and robust, high-frequency x-ray photons, which are otherwise notoriously difficult to control. The x-ray-optical system comprises an optomechanical cavity and a movable microlever interacting with an optical laser and with x-rays via resonant nuclear scattering. We show that optomechanically induced transparency of a broad range of photons (10 eV–100 keV) is achievable in this setup, allowing to tune nuclear x-ray absorption spectra via optomechanical control. This paves ways for metrology applications, e.g., the detection of the 229Thorium clock transition, and an unprecedentedly precise control of x-rays using optical photons.
In cavity optomechanics1, the coupling of electromagnetic radiation to mechanical motion degrees of freedom2 can be used to connect quantum systems with different resonant frequencies. For instance, via a common movable microlever, an optical cavity can be coupled to a microwave resonator to bridge the two frequency regimes3,4,5,6. Going towards shorter photon wavelengths is highly desirable and timely: in addition to improved detection, x-rays are better focusable and carry much larger momenta, potentially facilitating the entanglement of light and matter at a single-photon level. Unfortunately, a direct application of the so-far employed interface concept for a device that mediates an optical and an x-ray photon is bound to fail. First, the required high-performance cavities are not available for x-rays. Second, exactly the potentially advantageous high momentum carried by an x-ray photon renders necessary a different paradigm. We note here that x-rays are resonant to transitions in atomic nuclei, which can be regarded as x-ray cavities with good quality.
The rapidly developing field of x-ray quantum optics7,8,9,10,11,12 has recently reported key achievements and promising predictions for the mutual control of x-rays and nuclei13,14,15,16,17,18,19,20,21,22. Here we present an innovative solution for coupling x-ray quanta to an optomechanical, solid-state device which can serve as a node bridging optical and x-ray photons in a quantum network. We demonstrate theoretically that using resonant interactions of x-rays with nuclear transitions, in conjunction with an optomechanical setup interacting with optical photons, an optical-x-ray interface can be achieved. Such a device would allow to tune x-ray absorption spectra and eventually to shape x-ray wavepackets or spectra for single photons19,23,24,25,26,27,28 by optomechanical control. The role of the x-ray cavity is here adopted by a nuclear transition with long coherence time that eventually stores the high-frequency photon. Our calculations show that optomechanically induced transparency of x-rays can be achieved in the optical-x-ray interface, paving the way for both metrology29 and an unprecedentedly precise control of x-rays using optical photons. In particular, a metrology-relevant application for the nuclear clock transition of 229Th, which lies in the vacuum ultraviolet (VUV) region, is presented.
The optomechanical-nuclear system under investigation is illustrated in Fig. 1a. An optomechanical cavity of length L driven by an optical laser has an embedded layer in the tip of the microlever containing Mössbauer nuclei that interact with certain sharply defined x-ray frequencies. The nuclei in the layer have a stable or very long-lived ground state, and a first excited state that can be reached by a resonant x-ray Mössbauer, i.e., recoilless, transition. Typically, this type of nuclear excitation or decay occurs without individual recoil, leading to a coherent scattering in the forward direction30. Another type of excitation, involving the nuclear transition together with the motion of the microlever, i.e., phonons, can also be driven by red- or blue-detuned x-rays. The nuclear two-level system can therefore be coupled to the mechanical motion of the microlever of mass M. The term "phonon" is used here to describe the vibration of the center of mass of the cantilever, visible in the tip displacement y. According to the specifications1 of various mechanical microlever designs, the phonons in this setup are expected to be in the MHz regime. We choose to label the space coordinate with y since the notation x will be used in the following for the x-ray field. An effective model of a nuclear harmonic oscillator interacting with x-rays can be constructed to describe the hybridization31,32 of the x-ray-nuclei-optomechanical system. To this end, the well-known optomechanical Hamiltonian1,33,34,35,36,37 is extended to include also the x-ray interaction with the nuclear layer embedded in the tip of the microlever. Since the nuclear transition widths are very narrow (10⁻⁹–10⁻¹⁵ eV), we assume that the nuclei interact with a single mode of the x-ray field.
Figure 1. Sketch of the optomechanical interface between optical and x-ray photons. (a) The optical cavity is composed of a fixed mirror and a movable microlever whose oscillating frequency ω_m can be controlled. A layer containing Mössbauer nuclei that can resonantly interact with x-rays is embedded in the tip of the microlever. (b) Level scheme of the effective nuclear harmonic oscillator.
Lower (upper) three states correspond to the ground (excited) state g (e), while v (n) denotes the number of fluctuating cavity photons (number of phonons). Vertical green arrows depict the x-ray absorption by nuclei (with x-ray detuning Δ), and red diagonal arrows illustrate the beam-splitter interaction between cavity photons and the microlever's mechanical motion. The full yellow ellipse indicates the initial state of the system.
The full Hamiltonian of the system sketched in Fig. 1a is a combination of the optomechanical Hamiltonian1,33,36 and the nuclear interaction with x-ray photons, which can be written in the interaction picture and linearized version (see Methods and Supplementary Information for a detailed derivation) as
$$\hat{H} = \hbar\omega_m \hat{b}^\dagger\hat{b} - \hbar\Delta_c \hat{a}^\dagger\hat{a} - \hbar G\left(\hat{a}^\dagger\hat{b} + \hat{a}\hat{b}^\dagger\right) + \hbar\Delta\,|e\rangle\langle e| - \frac{\hbar\Omega}{2}\left[|e\rangle\langle g|\,e^{ik_x Y_{\mathrm{ZPF}}(\hat{b}^\dagger+\hat{b})} + \mathrm{H.c.}\right]. \qquad (1)$$
Here, ω_m is the optomechanically modified oscillation angular frequency of the microlever, Δ_c is the effective optical laser detuning to the cavity frequency obtained after the linearization procedure, and G the coupling constant of the system. The operators \(\hat{a}^\dagger\) (\(\hat{a}\)) and \(\hat{b}^\dagger\) (\(\hat{b}\)) act as cavity photon and phonon creation (annihilation) operators, respectively. As further notations in Eq. (1), Δ = ω_x − ω_n is the x-ray detuning, with ω_n the nuclear transition angular frequency and ω_x (k_x) the x-ray angular frequency (wave vector), respectively. Ω is the Rabi frequency describing the coupling between the nuclear transition currents38 and the x-ray field, Y_ZPF is the zero-point fluctuation, ħ the reduced Planck constant, and e and g denote the nuclear excited and ground state, respectively. The linearization procedure leading to the Hamiltonian in Eq. (1) was performed in the red-detuned regime, namely, cavity detuning Δ_c = −ω_m, which results in the so-called "beam-splitter" interaction1 with the optomechanical coupling strength G. We use the master equation involving the linearized interaction Hamiltonian to determine the dynamics of the interface system and the nuclear x-ray absorption spectra, as detailed in Methods.
Figure 2 demonstrates the x-ray/VUV absorption spectra for several nuclear targets, together with an illustration of the corresponding Lamb-Dicke parameter η = k_x Y_ZPF. We consider nuclear transitions from the ground state to the first excited state in 229Th, 73Ge and 67Zn, with the relevant nuclear and optomechanical parameters presented in Table 1. The chosen optomechanics setup parameters39 are M = 0.14 μg, the inherent phonon frequency ω_0 = 2π × 0.95 MHz, the optomechanical damping rate γ_0 = 2π × 0.14 kHz, the optical cavity decay rate κ = 2π × 0.2 MHz, cavity frequency ω_c ~ 10¹⁵ Hz and the optomechanical coupling constant \(G_0 = \frac{\omega_c}{L} Y_{\mathrm{ZPF}} \sqrt{\bar{n}_{\mathrm{cav}}} = 2\pi \times 3.9\) Hz. These parameters have been experimentally demonstrated39. The required optomechanical system is a 25-mm-long Fabry-Pérot cavity made of a high-reflectivity mirror pad (reflectivity >0.99991) that forms the end-face39.
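As a quick consistency check connecting the setup parameters above with the Lamb-Dicke parameters used in the discussion of Fig. 2, the zero-point fluctuation and η can be evaluated directly (Python; we approximate ω_m by the inherent ω_0 and assume the 93.3 keV transition energy of 67Zn):

```python
import numpy as np

hbar, h, c_light, eV = 1.0546e-34, 6.626e-34, 2.998e8, 1.602e-19  # SI units

M = 0.14e-9                      # kg (0.14 micrograms)
omega_m = 2 * np.pi * 0.95e6     # rad/s (inherent phonon frequency as proxy)

Y_zpf = np.sqrt(hbar / (2 * M * omega_m))        # ~2.5e-16 m
k_x = 2 * np.pi * (93.3e3 * eV) / (h * c_light)  # 67Zn transition wave number
eta = k_x * Y_zpf                # ~1.19e-4, matching the 11.87e-5 quoted for 67Zn
print(f"Y_ZPF = {Y_zpf:.2e} m, eta = {eta:.2e}")
```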
A realistic estimate of the optical thickness values for the nuclear x-ray absorption is presented in the Supplementary Information.
Figure 2. Optomechanically tunable x-ray/VUV absorption spectra and the corresponding ratio of the x-ray wavelength and the zero-point fluctuation Y_ZPF. The microlever has an embedded layer with (a) 229Th, (b) 73Ge, (c) 67Zn nuclei. Further parameters are taken from refs 1, 39 and the phonon number is chosen to be n = 5 × 10⁶. Green solid line illustrates the spectra in the absence of the optomechanical coupling. Red dashed (blue dashed-dotted) lines show the optomechanically modified spectra under the action of an optical laser with about P = 2 nW (P = 5 nW). Red arrows indicate the first phonon lines. (d–f) Illustrations of the corresponding ratio of the x-ray wavelength and the zero-point fluctuation Y_ZPF, which determine the value of the Lamb-Dicke parameter.
Table 1. Suitable nuclear Mössbauer transitions for the optomechanical control of x-ray absorption.
For a comprehensive explanation, we begin with the case in the absence of optomechanical coupling, i.e., G = 0. Green lines in Fig. 2 illustrate a central nuclear absorption line with detuning Δ = 0 corresponding to m = n, and sidebands that occur with excitation or decay of phonons in the system, m = n ± 1, n ± 2, …. The width of the peaks is determined by the value of \(s = \Gamma/2 + \kappa + \gamma_m\) (see Methods), of the order of MHz, similar in scale with the inhomogeneous broadening of the nuclear transition, which we neglect in the following. In order to resolve the sidebands, a constraint has to be imposed on the oscillation frequency of the microlever, i.e., the microlever frequency ω_m > s, and the Franck-Condon coefficients \(|F_n^m| \gtrsim 0.1\) for at least the first phonon lines m = n ± 1 (see Methods). As a consequence, nuclear species with large Lamb-Dicke parameters η allow the observation of x-ray absorption sidebands. For example, compared to 229Th (η = 9.92 × 10⁻⁹) in Fig. 2a, which presents only the zero phonon line, the spectra of 73Ge (η = 1.69 × 10⁻⁵) in Fig. 2b show an observable sideband, as indicated by the red arrow. Moreover, there are several sidebands appearing in the spectrum of 67Zn (η = 11.87 × 10⁻⁵) depicted in Fig. 2c. Further Mössbauer nuclei with suitable first excited states whose decay rates are lower than the phonon angular frequency of around 6 MHz are for instance30 45Sc, 157Gd and 181Ta.
We are now ready to discuss the results including the optomechanical coupling, G > 0, illustrated by the blue and red dashed lines in Fig. 2. Remarkably, the optomechanical coupling introduces a dip at the center of each line. As illustrated also in Fig. 1b, the line splittings are caused by the optomechanical coupling G, which links different phonon Fock states via the beam-splitter interaction1. We stress here that the nuclear x-ray absorption is only modified by the optomechanical coupling and does not have to do with x-ray recoil, which does not occur in our scheme. The depth and the spacing of the dips are proportional to the input optical laser power, which modifies the strength G. Figure 2 shows that the absorption gradually goes to zero with increasing laser power P. The diagonalization of the Hamiltonian shows that the two split peaks around the zero phonon line are approximately positioned at \(\Delta = \pm\sqrt{(G\sqrt{m+v+2mv}+s)^2 - 2s^2}\).
These two eigenvalues correspond to transitions between the ground state |g, v, n〉 and the two eigenstates \(\sqrt{\frac{(1+m)v}{(1+v)m}}|e,v-1,m+1\rangle \mp \sqrt{\frac{m+v+2\,mv}{(1+v)m}}|e,v,m\rangle +|e,v+1,m-1\rangle \) (see Methods). These eigenstates result in an analog of the so-called optomechanically induced transparency [34, 35, 36] in the x-ray domain and offer a means of controlling x-ray spectra. This is a new mechanism compared to typical target-vibration experiments on Mössbauer samples [23, 24, 25, 26], which operate in the classical phonon regime. The width of the splitting indicates that, with sufficient phonon numbers, the required optomechanical coupling can be accomplished by an optical laser. This feature may render control of x-ray quanta by means of weak optical lasers possible. To demonstrate this possibility, laser powers of a few nW are used in the calculation to implement full transparency of x-rays around the nuclear resonance (see blue dashed-dotted and red dashed lines in Fig. 2).

Since the natural nuclear linewidths are far narrower than the bandwidth of present x-ray sources, a suitable solution for resolving the phonon sidebands of the keV x-ray or VUV resonance energies is to employ a Mössbauer drive setup. 67Zn Mössbauer spectroscopy, for instance, is a well-established technique with exceptionally high sensitivity to the gamma-ray energy. This has been exploited [40] for precision measurements of hyperfine interactions in 67Ga decay schemes, which populate excited states in 67Zn. The decay cascade will eventually populate the first excited level, which then releases single photons close to the resonance energy of the nuclear layer on the microlever. Assuming 50 mCi source activity and a solid angle corresponding to a 20 × 20 μm² 67Zn layer placed 10 cm away, the rate of x-ray photons close to the resonance is approx. 40 Hz. The fine tuning for matching the exact resonance energy is achieved by means of the Doppler shift, using a piezoelectric drive with μm/s velocities [40].

While the 7.8 eV transition of 229Th is not traditionally regarded as a Mössbauer case, studies have shown that when embedded in VUV-transparent crystals, thorium nuclei are expected to be confined to the Lamb-Dicke regime [41, 42, 43]. In this regime one expects clear parallels to nuclear forward scattering techniques as known from traditional Mössbauer transitions. The uniquely low-lying state and the very narrow transition width of approx. \(10^{-19}\) eV make 229Th a candidate for a stable and accurate nuclear frequency standard [29]. The most important step in this direction would be a precise measurement of the nuclear transition frequency, at present considered to be 7.8 ± 0.5 eV [44]. However, two major difficulties have been encountered in such measurements. First, the extremely narrow linewidth of \(10^{-5}\) Hz makes both the excitation and the detection of fluorescence for this transition very difficult. Second, the isomeric transition has a disadvantageous signal-to-background ratio, and strong spurious signals from the environment have so far impaired experiments [45, 46, 47]. The VUV spectra of 229Th illustrated in Fig. 2a reveal that our chip-scale system could be used to determine the nuclear clock transition energy [43, 44, 48]. For this exceptional case with VUV nuclear transition energy, the excitation could be achieved with VUV lasers presently in development [49].
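Before turning to the advantages of the VUV case, a rough numerical illustration: the phonon-line structure and the optomechanical dips discussed above can be generated directly from the steady-state coherence quoted as Eq. (6) in the Methods section below, summed over the phonon lines m = n − 6, …, n + 6. In this sketch the decay rates follow the setup values quoted earlier, while the coupling G and the Rabi frequency Ω are illustrative stand-ins rather than the exact numbers behind Fig. 2.

```python
import numpy as np
from math import factorial

# All rates in MHz (angular frequencies carry the 2*pi).
Gamma   = 1e-3                  # nuclear decay rate (illustrative)
kappa   = 2*np.pi*0.2           # cavity decay rate
gamma_m = 2*np.pi*1.4e-4        # effective mechanical damping
omega_m = 2*np.pi*0.95          # microlever frequency
s = Gamma/2 + kappa + gamma_m   # total decoherence rate (see Methods)

eta   = 11.87e-5                # Lamb-Dicke parameter of 67Zn
n     = 5_000_000               # initial phonon number, as in Fig. 2
G     = 2*np.pi*1e-4            # coupling (~2*pi*100 Hz, illustrative)
Omega = 1e-3                    # weak x-ray Rabi frequency

def F(m, k):
    """|F_k^m|: Franck-Condon coefficient in the eta*sqrt(n) < 1 regime."""
    d = abs(m - k)
    prod = 1.0
    for j in range(d):          # sqrt(max!/min!) as a product of d factors
        prod *= max(m, k) - j
    return eta**d / factorial(d) * np.sqrt(prod)

def absorption(Delta):
    """Imaginary part of the steady-state coherence, summed over lines."""
    total = 0.0
    for m in range(n - 6, n + 7):
        det = Delta - (m - n)*omega_m
        num = F(m, n)*(2*s - Gamma)*(1j*s + det) \
              - 2j*G**2*F(m - 1, n - 1)*np.sqrt(float(m)*n)
        den = 2*(2*s - Gamma)*(G**2*m + (s - 1j*det)**2)
        total += (Omega*num/den).imag
    return total

detunings = np.linspace(-3*omega_m, 3*omega_m, 601)
spec = [absorption(d) for d in detunings]
print(f"peak absorption: {max(spec):.3e}")
```

With these numbers the central line already develops the optomechanical dip (the coherent term proportional to \(G^2\sqrt{mn}\) subtracts from the zero-phonon absorption), while the sidebands sit at multiples of \(\omega_m\).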
Two important advantages arise in the VUV-optomechanical interface: (i) the width \(s\gg {\rm{\Gamma }}\) broadens the VUV absorption linewidth by 10 orders of magnitude, namely \(s\sim {10}^{10}\,{\rm{\Gamma }}\), facilitating the excitation and speeding up the nuclear target's decoherence; (ii) the VUV spectra are optomechanically tunable. This can offer a clear signature of nuclear excitation, circumventing false signals that unavoidably appear from either the crystal sample [45] or the surrounding atmosphere [46, 47].

We have put forward the theoretical formalism for optomechanically induced transparency of x-rays via optical control. In particular, our results show that the induced transparency may be achieved for nuclear transitions, with possible relevance for metrological studies, e.g., detection of the nuclear clock transition. The opposite situation, of x-ray photons controlling the optomechanical setup, may open new possibilities for connecting quantum network devices [50] on atomic and mesoscopic scales.

Methods. The full Hamiltonian of the system sketched in Fig. 1a is a combination of the optomechanical Hamiltonian and the nuclear interaction with x-ray photons [1, 33, 36] (see also Supplementary Information), $$\begin{array}{rcl}\widehat{H} & = & \hslash {\omega }_{0}{\widehat{b}}^{ {\dagger } }\widehat{b}+\hslash {\omega }_{c}{\widehat{a}}^{ {\dagger } }\widehat{a}-\hslash {G}_{0}{\widehat{a}}^{ {\dagger } }\widehat{a}({\widehat{b}}^{ {\dagger } }+\widehat{b})\\ & & +\,\hslash {\omega }_{n}|e\rangle \langle e|+\frac{\hslash {\rm{\Omega }}}{2}({e}^{-i{\omega }_{x}t+i{k}_{x}{Y}_{{\rm{ZPF}}}({\widehat{b}}^{ {\dagger } }+\widehat{b})}\,\widehat{x}|e\rangle \langle g|\\ & & +\,{e}^{i{\omega }_{x}t-i{k}_{x}{Y}_{{\rm{ZPF}}}({\widehat{b}}^{ {\dagger } }+\widehat{b})}\,{\widehat{x}}^{ {\dagger } }|g\rangle \langle e|).\end{array}$$ Here, \(\omega_0\) denotes the inherent phonon, \(\omega_c\) the resonant cavity, and \(\omega_n\) the nuclear transition angular frequency, respectively, and Ω is the Rabi frequency describing the coupling between the nuclear transition currents [38] and the x-ray field. The operators \({\widehat{x}}^{ {\dagger } }\) \((\widehat{x})\) act as x-ray photon creation (annihilation) operators, respectively. The optomechanical coupling constant is given by \(G_0 = \omega_c Y_{\rm ZPF}/L\), where \(Y_{\rm ZPF}\) denotes the zero-point fluctuation. Typically, the Hamiltonian above is transformed into the interaction picture and linearized with respect to the cavity photon number at equilibrium [1, 36], i.e., the balance between external pumping and cavity loss. It is therefore convenient to neglect external cavity driving terms by classical optical fields in the Hamiltonian of the system [1, 35, 36]. We will see below that one can effectively attribute the modified properties of the system to the new optomechanical coupling constant G.
By a unitary transformation to the rotating frame [1] (see Supplementary Information), we obtain the Hamiltonian in the interaction picture $$\begin{array}{rcl}\widehat{H} & = & \hslash {\omega }_{0}{\widehat{b}}^{ {\dagger } }\widehat{b}-\hslash {{\rm{\Delta }}}_{c}{\widehat{a}}^{ {\dagger } }\widehat{a}-\hslash {G}_{0}{\widehat{a}}^{ {\dagger } }\widehat{a}({\widehat{b}}^{ {\dagger } }+\widehat{b})\\ & & +\,\hslash {\rm{\Delta }}|e\rangle \langle e|-\frac{\hslash {\rm{\Omega }}}{2}[|e\rangle \langle g|{e}^{i{k}_{x}{Y}_{{\rm{ZPF}}}({\widehat{b}}^{ {\dagger } }+\widehat{b})}+H.c.],\end{array}$$ where \(\Delta_c = \omega_l - \omega_c\) is the optical laser detuning from the cavity frequency, \(\omega_l\) the optical laser angular frequency, and \(\Delta = \omega_x - \omega_n\) the x-ray detuning. The final step is to linearize the Hamiltonian by performing the transformation \(\widehat{a}\to \sqrt{{\overline{n}}_{cav}}+\widehat{a}\), where \({\overline{n}}_{cav}\) is the averaged cavity photon number and \(\widehat{a}\) becomes the photon-number fluctuation [1, 36]. The expression \({\overline{n}}_{cav}+\langle v|{\widehat{a}}^{ {\dagger } }\widehat{a}|v\rangle \) gives the photon number of the full cavity field. We neglect the first-order terms \({\widehat{a}}^{ {\dagger } }{\widehat{b}}^{ {\dagger } }\) and \(\widehat{a}\widehat{b}\) in the rotating-wave approximation, and the second-order terms proportional to \({\widehat{a}}^{ {\dagger } }\widehat{a}\). The zero-order terms \({\overline{n}}_{{\rm{cav}}}({\widehat{b}}^{ {\dagger } }+\widehat{b})\) may be omitted [1] after implementing an averaged cavity length shift \(\delta L=\hslash {\omega }_{c}{\overline{n}}_{cav}/(Lm{\omega }_{0}^{2})\) and an averaged cavity angular frequency shift \(\delta {\omega }_{c}=\hslash {\omega }_{c}^{2}{\overline{n}}_{cav}/({L}^{2}m{\omega }_{0}^{2})\), leading to the effective detuning [1] \(\Delta_c \to \Delta_c + \delta\omega_c\). We focus on the red-detuned regime, namely cavity detuning \(\Delta_c = -\omega_m\), which results in the so-called "beam-splitter" interaction [1]. We obtain the linearized Hamiltonian given in Eq. (1) with the new coupling constant \(G={G}_{0}\sqrt{{\overline{n}}_{{\rm{cav}}}}\). The effective phonon angular frequency \(\omega_m = \omega_0 + \delta\omega_0\) is introduced, where \(\delta {\omega }_{0}=4{G}^{2}(\frac{{\omega }_{0}}{{\kappa }^{2}+16{\omega }_{0}^{2}})\) is the optomechanical modification of the oscillation angular frequency of the microlever [1]. The zero-point fluctuation of the microlever's mechanical motion can then be written as \({Y}_{{\rm{ZPF}}}=\sqrt{\hslash /(2M{\omega }_{m})}\). We use the master equation \({\partial }_{t}\widehat{\rho }=\frac{1}{i\hslash }[\widehat{H},\widehat{\rho }]+{\widehat{\rho }}_{dec}\) involving the linearized interaction Hamiltonian to determine the dynamics of the interface system (see Supplementary Information for the explicit form of each matrix). Decoherence processes are described by \({\widehat{\rho }}_{dec}\), which includes the spontaneous nuclear decay characterized by the rate Γ, the inherent mechanical damping rate of the microlever \(\gamma_0\), and the optical cavity decay rate κ.
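To get a feel for the magnitudes, the snippet below evaluates these relations — \(\bar n_{\rm cav}\) from the input-power expression quoted just below, \(G = G_0\sqrt{\bar n_{\rm cav}}\), and the shifts \(\delta\omega_0\) and \(\delta\gamma_0\) — for the nW-scale powers used in Fig. 2. The cavity frequency, and hence \(\omega_l\), is only known here to order of magnitude, so the outputs are indicative rather than exact.

```python
import numpy as np

hbar = 1.0545718e-34
kappa   = 2*np.pi*0.2e6            # cavity decay rate, rad/s
omega_0 = 2*np.pi*0.95e6           # inherent phonon frequency, rad/s
omega_c = 1e15                     # cavity frequency, rad/s (order of magnitude)
omega_l = omega_c - omega_0        # red-detuned drive: Delta_c ~ -omega_m
G0 = 2*np.pi*3.9                   # coupling constant quoted in the text, rad/s

for P in (2e-9, 5e-9):             # the ~2 nW and ~5 nW powers of Fig. 2
    n_cav = kappa*P / (hbar*omega_l*((omega_l - omega_c)**2 + (kappa/2)**2))
    G = G0*np.sqrt(n_cav)
    d_omega0 = 4*G**2 * omega_0/(kappa**2 + 16*omega_0**2)
    d_gamma0 = 4*G**2 * (1/kappa - kappa/(kappa**2 + 16*omega_0**2))
    print(f"P={P*1e9:.0f} nW: n_cav~{n_cav:.0f}, G/2pi~{G/2/np.pi:.0f} Hz, "
          f"d_omega0={d_omega0:.2e} rad/s, d_gamma0={d_gamma0:.2f} rad/s")
```

With these assumptions the 2 nW drive yields a few hundred intracavity photons and \(G/2\pi\) of order 100 Hz, i.e., a coupling boosted well above \(G_0\) while the frequency and damping shifts stay small.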
The density matrix elements \({\rho }_{\beta d\nu }^{\alpha c\mu }={A}_{\alpha c\mu }^{\ast }{A}_{\beta d\nu }\) correspond to the state vector \(|\psi \rangle={A}_{gv-1n+1}|g,v-1,n+1\rangle+{A}_{gvn}|g,v,n\rangle+{A}_{gv+1n-1}|g,v+1,n-1\rangle+{A}_{ev-1m+1}|e,v-1,m+1\rangle +{A}_{evm}|e,v,m\rangle +{A}_{ev+1m-1}|e,v+1,m-1\rangle \), where the system is initially prepared [39, 51] in the nuclear ground state with \({\overline{n}}_{{\rm{cav}}}\) average cavity photons, cavity-photon fluctuation level v, and n phonons, |g, v, n〉; the nuclear excited state with m phonons, |e, v, m〉, is reached by x-ray absorption, as illustrated in Fig. 1b. Four additional states with n ± 1 and m ± 1 phonons are coupled by the beam-splitter interaction. In the red-detuned regime [1] the mechanics of the optically tunable microlever can be described by \({\partial }_{t}^{2}y+{\gamma }_{m}{\partial }_{t}y+{\omega }_{m}^{2}y=0\), where y(t) denotes the displacement of the microlever as illustrated in Fig. 1a, and the optomechanical damping-rate shift is given by \(\delta {\gamma }_{0}=4{G}^{2}(\frac{1}{\kappa }-\frac{\kappa }{{\kappa }^{2}+16{\omega }_{0}^{2}})\). The effective optomechanical damping rate is \(\gamma_m = \gamma_0 + \delta\gamma_0\). A relevant quantity is the average number of photons inside the cavity, which depends on the optical laser power \(P\) and is given by [1] \({\overline{n}}_{{\rm{cav}}}=\frac{\kappa P}{\hslash {\omega }_{l}[{({\omega }_{l}-{\omega }_{c})}^{2}+{(\kappa \mathrm{/2})}^{2}]}\).

The x-ray absorption spectrum of the interface system is determined by the off-diagonal terms of the Hamiltonian \(\widehat{H}\), i.e., \(\langle e,v,m|\widehat{H}|g,v,n\rangle =\frac{\hslash {\rm{\Omega }}}{2}\langle m|{e}^{i{k}_{x}{Y}_{{\rm{ZPF}}}({\widehat{b}}^{ {\dagger } }+\widehat{b})}|n\rangle \). The phase term \(\eta = k_x Y_{\rm ZPF}\) is the so-called Lamb-Dicke parameter, and for \(\eta \sqrt{n} < 1\), \({F}_{n}^{m}=\langle m|{e}^{i\eta ({\widehat{b}}^{ {\dagger } }+\widehat{b})}|n\rangle \) denotes the Franck-Condon coefficient [33] $${F}_{n}^{m\ge n}=\frac{{(i\eta )}^{|m-n|}}{|m-n|!}\sqrt{\frac{m!}{n!}},$$ $${F}_{n}^{m < n}=\frac{{(i\eta )}^{|m-n|}}{|m-n|!}\sqrt{\frac{n!}{m!}}.$$ Typically, only low nuclear excitation is achieved in nuclear scattering with x-rays, such that the master equation can be used in the perturbative regime \({\rm{\Gamma }}\mathrm{/2}+\kappa +{\gamma }_{m} > G\gg {\rm{\Omega }}\), corresponding to the stable regime. We note here that nuclear scattering experiments and simulations have confirmed the validity of the semi-classical limit for the x-ray-nucleus interaction in this low-excitation regime [52]. The steady-state solution reads $${\rho }_{gn}^{em}({\rm{\Delta }})=\frac{{\rm{\Omega }}\{{F}_{n}^{m}(2s-{\rm{\Gamma }})[is+{\rm{\Delta }}-(m-n){\omega }_{m}]-2i{G}^{2}{F}_{n-1}^{m-1}\sqrt{mn}\}}{2(2s-{\rm{\Gamma }})\{{G}^{2}m+{[s-i({\rm{\Delta }}-(m-n){\omega }_{m})]}^{2}\}},$$ (6) where the total decoherence-rate notation \(s={\rm{\Gamma }}\mathrm{/2}+\kappa +{\gamma }_{m}\) was introduced. By replacing \({F}_{n}^{m}\) with \(|{F}_{n}^{m}|\), the sum of the imaginary parts of Eq. (6) over the corresponding transitions, namely \({\sum }_{m=n-6}^{n+6}{\rm{Im}}[{\rho }_{gvn}^{evm}({\rm{\Delta }})]\), provides the x-ray absorption spectrum. Eq. (6) shows that the x-ray absorption directly depends on the number of photons \({\overline{n}}_{{\rm{cav}}}\), on the averaged phonon numbers m and n, and on their statistics.

References
1. Aspelmeyer, M., Kippenberg, T. J. & Marquardt, F. Cavity optomechanics. Rev. Mod. Phys. 86, 1391–1452 (2014).
2. Brawley, G. et al. Non-linear optomechanical measurement of mechanical motion. Nature Commun. 7, 10988 (2015).
3. Barzanjeh, S., Abdi, M., Milburn, G. J., Tombesi, P. & Vitali, D. Reversible optical-to-microwave quantum interface. Phys. Rev. Lett. 109, 130503 (2012).
4. Bochmann, J., Vainsencher, A., Awschalom, D. D. & Cleland, A. N. Nanomechanical coupling between microwave and optical photons. Nature Phys. 9, 712–716 (2013).
5. Andrews, R. W. et al. Bidirectional and efficient conversion between microwave and optical light. Nature Phys. 10, 321–326 (2014).
6. Barzanjeh, S. et al. Microwave quantum illumination. Phys. Rev. Lett. 114, 080503 (2015).
7. Suckewer, S., Skinner, C. H., Milchberg, H., Keane, C. & Voorhees, D. Amplification of stimulated soft x-ray emission in a confined plasma column. Phys. Rev. Lett. 55, 1753–1756 (1985).
8. Rocca, J. J. et al. Demonstration of a discharge pumped table-top soft-x-ray laser. Phys. Rev. Lett. 73, 2192–2195 (1994).
9. Lemoff, B. E., Yin, G. Y., Gordon, C. L. III, Barty, C. P. J. & Harris, S. E. Demonstration of a 10-Hz femtosecond-pulse-driven XUV laser at 41.8 nm in Xe IX. Phys. Rev. Lett. 74, 1574–1577 (1995).
10. Glover, T. E. et al. Controlling x-rays with light. Nature Phys. 6, 69–74 (2010).
11. Rohringer, N. et al. Atomic inner-shell x-ray laser at 1.46 nanometres pumped by an x-ray free-electron laser. Nature 481, 488–491 (2012).
12. Adams, B. W. et al. X-ray quantum optics. J. Mod. Opt. 60, 2–21 (2013).
13. Röhlsberger, R., Schlage, K., Sahoo, B., Couet, S. & Rüffer, R. Collective Lamb shift in single-photon superradiance. Science 328, 1248–1251 (2010).
14. Röhlsberger, R., Wille, H. C., Schlage, K. & Sahoo, B. Electromagnetically induced transparency with resonant nuclei in a cavity. Nature 482, 199–203 (2012).
15. Liao, W.-T., Pálffy, A. & Keitel, C. H. Coherent storage and phase modulation of single hard-x-ray photons using nuclear excitons. Phys. Rev. Lett. 109, 197403 (2012).
16. Heeg, K. P. et al. Vacuum-assisted generation and control of atomic coherences at x-ray energies. Phys. Rev. Lett. 111, 073601 (2013).
17. Liao, W.-T. & Pálffy, A. Proposed entanglement of x-ray nuclear polaritons as a potential method for probing matter at the subatomic scale. Phys. Rev. Lett. 112, 057401 (2014).
18. Liao, W.-T. Coherent Control of Nuclei and X-Rays (Springer, 2014).
19. Vagizov, F., Antonov, V., Radeonychev, Y., Shakhmuratov, R. & Kocharovskaya, O. Coherent control of the waveforms of recoilless γ-ray photons. Nature 508, 80–83 (2014).
20. Liao, W.-T. & Ahrens, S. Gravitational and relativistic deflection of x-ray superradiance. Nature Photon. 9, 169–173 (2015).
21. Heeg, K. P. et al. Interferometric phase detection at x-ray energies via Fano resonance control. Phys. Rev. Lett. 114, 207401 (2015).
22. Heeg, K. P. et al. Tunable subluminal propagation of narrow-band x-ray pulses. Phys. Rev. Lett. 114, 203601 (2015).
23. Perlow, G. J. Quantum beats of recoil-free γ radiation. Phys. Rev. Lett. 40, 896–899 (1978).
24. Mketchyan, A., Arutyunyan, G., Arakelyan, A. & Gabrielyan, R. Modulation of Mössbauer radiation by coherent ultrasonic excitation in crystals. Phys. Stat. Sol. B 92, 23–29 (1979).
25. Helistö, P., Ikonen, E. & Katila, T. Enhanced transient effects due to saturated absorption of recoilless γ radiation. Phys. Rev. B 34, 3458–3461 (1986).
26. Popov, S. L., Smirnov, G. V. & Shvyd'ko, Y. V. Observed strengthening of radiative mechanism for a nuclear reaction in the interaction of radiation with nuclei in a vibrating absorber. JETP Lett. 49, 747–750 (1989).
27. Kocharovskaya, O., Kolesov, R. & Rostovtsev, Y. Coherent optical control of Mössbauer spectra. Phys. Rev. Lett. 82, 3593–3596 (1999).
28. Vagizov, F., Kolesov, R., Olariu, S., Rostovtsev, Y. & Kocharovskaya, O. Experimental observation of vibrations produced by pulsed laser beam in MgO:57Fe. Hyperfine Interact. 167, 917–921 (2006).
29. Peik, E. & Tamm, C. Nuclear laser spectroscopy of the 3.5 eV transition in Th-229. Europhys. Lett. 61, 181–186 (2003).
30. Röhlsberger, R. Nuclear Condensed Matter Physics With Synchrotron Radiation: Basic Principles, Methodology and Applications (Springer-Verlag, 2004).
31. Shkarin, A. B. et al. Optically mediated hybridization between two mechanical modes. Phys. Rev. Lett. 112, 013602 (2014).
32. Sete, E. A. & Eleuch, H. Controllable nonlinear effects in an optomechanical resonator containing a quantum well. Phys. Rev. A 85, 043824 (2012).
33. Eschner, J., Morigi, G., Schmidt-Kaler, F. & Blatt, R. Laser cooling of trapped ions. J. Opt. Soc. Am. B 20, 1003–1015 (2003).
34. Weis, S. et al. Optomechanically induced transparency. Science 330, 1520–1523 (2010).
35. Agarwal, G. S. & Huang, S. Electromagnetically induced transparency in mechanical effects of light. Phys. Rev. A 81, 041803 (2010).
36. Agarwal, G. S. Quantum Optics (Cambridge University Press, 2012).
37. Faust, T., Rieger, J., Seitner, M. J., Kotthaus, J. P. & Weig, E. M. Coherent control of a classical nanomechanical two-level system. Nature Phys. 9, 485–488 (2013).
38. Shvyd'ko, Y. V. et al. Storage of nuclear excitation energy through magnetic switching. Phys. Rev. Lett. 77, 3232–3235 (1996).
39. Gröblacher, S., Hammerer, K., Vanner, M. R. & Aspelmeyer, M. Observation of strong coupling between a micromechanical resonator and an optical cavity field. Nature 460, 724–727 (2009).
40. Long, G. J. & Grandjean, F. (eds) Mössbauer Spectroscopy Applied to Magnetism and Materials Science (Springer Science + Business Media, New York, 1993).
41. Rellergert, W. G. et al. Constraining the evolution of the fundamental constants with a solid-state optical frequency reference based on the 229Th nucleus. Phys. Rev. Lett. 104, 200802 (2010).
42. Kazakov, G. A. et al. Performance of a 229-Thorium solid-state nuclear clock. New J. Phys. 14, 083019 (2012).
43. Liao, W.-T., Das, S., Keitel, C. H. & Pálffy, A. Coherence-enhanced optical determination of the 229Th isomeric transition. Phys. Rev. Lett. 109, 262502 (2012).
44. Beck, B. R. et al. Energy splitting of the ground-state doublet in the nucleus 229Th. Phys. Rev. Lett. 98, 142501 (2007).
45. Stellmer, S., Schreitl, M. & Schumm, T. Radioluminescence and photoluminescence of Th:CaF2 crystals. Sci. Rep. 5, 15580 (2015).
46. Shaw, R. W., Young, J. P., Cooper, S. P. & Webb, O. F. Spontaneous ultraviolet emission from 233Uranium/229Thorium samples. Phys. Rev. Lett. 82, 1109–1111 (1999).
47. Utter, S. B. et al. Reexamination of the optical gamma ray decay in 229Th. Phys. Rev. Lett. 82, 505–508 (1999).
48. Jeet, J. et al. Results of a direct search using synchrotron radiation for the low-energy 229Th nuclear isomeric transition. Phys. Rev. Lett. 114, 253001 (2015).
49. Nomura, Y. et al. Coherent quasi-cw 153 nm light source at 33 MHz repetition rate. Optics Lett. 36, 1758–1760 (2011).
50. Azuma, K., Tamaki, K. & Lo, H.-K. All-photonic quantum repeaters. Nature Commun. 6, 6787 (2015).
51. Chan, J. et al. Laser cooling of a nanomechanical oscillator into its quantum ground state. Nature 478, 89–92 (2011).
52. Kong, X., Liao, W.-T. & Pálffy, A. Field control of single x-ray photons in nuclear forward scattering. New J. Phys. 16, 013049 (2014).

Acknowledgements. The authors would like to thank Markus Aspelmeyer for fruitful discussions. W.T.L. is supported by the Ministry of Science and Technology, Taiwan (Grant No. MOST 105-2112-M-008-001-MY3), and by the National Center for Theoretical Sciences, Taiwan. A.P. gratefully acknowledges funding by the EU FET-Open project 664732.

Author information. Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg, Germany: Wen-Te Liao & Adriana Pálffy. Department of Physics, National Central University, 32001 Taoyuan City, Taiwan: Wen-Te Liao.

Author contributions. W.T.L. and A.P. contributed equally to this work. W.T.L. performed the numerical calculations. W.T.L. and A.P. discussed the results and wrote the manuscript text. Correspondence to Wen-Te Liao or Adriana Pálffy.

License. This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third-party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

Cite this article: Liao, W.-T. & Pálffy, A. Optomechanically induced transparency of x-rays via optical control. Sci. Rep. 7, 321 (2017). https://doi.org/10.1038/s41598-017-00428-w
Electronic Journal of Statistics, Volume 13, Number 2 (2019), 4573–4595.

Asymptotic hypotheses testing for the colour blind problem
Laura Dumitrescu and Estate V. Khmaladze

Within a nonparametric framework, we consider the problem of testing the equality of marginal distributions for a sequence of independent and identically distributed bivariate data, with unobservable order in each pair. In this case, it is not possible to construct the corresponding empirical distribution functions, and yet this article shows that a systematic approach to hypothesis testing is possible and provides an empirical process on which inference can be based. Furthermore, we identify the linear statistics that are asymptotically optimal for testing the hypothesis of equal marginal distributions against contiguous alternatives. Finally, we exhibit an interesting property of the proposed stochastic process: local alternatives of dependence can also be detected.

Received: March 2019. First available in Project Euclid: 12 November 2019 (https://projecteuclid.org/euclid.ejs/1573527691). doi:10.1214/19-EJS1634. MR4029803.
Subject classifications: Primary 62G10 (hypothesis testing); Secondary 62G20 (asymptotic properties).
Keywords: asymptotically optimal test, contiguous alternatives, dependence alternatives, empirical process, goodness of fit, Kolmogorov–Smirnov statistics, unordered pairs.
Citation: Dumitrescu, Laura; Khmaladze, Estate V. Asymptotic hypotheses testing for the colour blind problem. Electron. J. Statist. 13 (2019), no. 2, 4573–4595. doi:10.1214/19-EJS1634.
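The modelling difficulty behind the colour blind problem is easy to reproduce numerically. The sketch below (an illustration added here, not taken from the paper) generates unordered pairs and shows that, once the within-pair order is destroyed, the naive "first versus second coordinate" comparison degenerates into a min-versus-max comparison, which rejects even under the null of equal marginals — hence the need for a specially constructed empirical process.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000

# Each i.i.d. pair (X_i, Y_i) is observed colour-blind: only the unordered
# set {X_i, Y_i} is recorded, here encoded as (min, max).
x = rng.normal(0.0, 1.0, n)          # marginal F
y = rng.normal(0.0, 1.0, n)          # marginal G; F = G, so H0 is true
unordered = np.sort(np.stack([x, y], axis=1), axis=1)

# Treating the recorded coordinates naively as "the X sample" and "the Y
# sample" compares min with max, and rejects even though H0 holds:
print(stats.ks_2samp(unordered[:, 0], unordered[:, 1]).pvalue)   # ~ 0
```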
Journal of Dynamics & Games, October 2021, 8(4): 403–443. doi: 10.3934/jdg.2021023

Linear-quadratic zero-sum mean-field type games: Optimality conditions and policy optimization
René Carmona, Kenza Hamidouche, Mathieu Laurière and Zongjun Tan
Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08540, USA
Received July 2020; Revised July 2021; Published October 2021; Early access August 2021.
Fund Project: A preliminary version of this work was submitted to the 59th Conference on Decision and Control.

In this paper, zero-sum mean-field type games (ZSMFTG) with linear dynamics and quadratic cost are studied under an infinite-horizon discounted utility function. ZSMFTG are a class of games in which two decision makers, whose utilities sum to zero, compete to influence a large population of indistinguishable agents. In particular, the case in which the transition and utility functions depend on the state, the actions of the controllers, and the means of the state and the actions is investigated. The optimality conditions of the game are analysed for both open-loop and closed-loop controls, and explicit expressions for the Nash equilibrium strategies are derived. Moreover, two policy optimization methods that rely on policy gradient are proposed for both model-based and sample-based frameworks. In the model-based case, the gradients are computed exactly using the model, whereas they are estimated using Monte-Carlo simulations in the sample-based case. Numerical experiments are conducted to show the convergence of the utility function as well as the two players' controls.

Keywords: Mean field games, mean field control, mean field type games, zero sum games.
Mathematics Subject Classification: Primary: 91A05, 91A07, 93E20, 49N80.
Citation: René Carmona, Kenza Hamidouche, Mathieu Laurière, Zongjun Tan. Linear-quadratic zero-sum mean-field type games: Optimality conditions and policy optimization. Journal of Dynamics & Games, 2021, 8 (4): 403-443. doi: 10.3934/jdg.2021023
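The experiments reported below decompose the utility into a part $ C_y $ controlled through $ (K_1,K_2) $ and a part $ C_z $ controlled through $ (L_1,L_2) $ (see the figure captions), which suggests linear feedbacks acting separately on the state deviation y = x − x̄ and on the mean z = x̄. As a rough, self-contained illustration of the sample-based variant, the following Python sketch runs descent-gradient-ascent with finite-difference gradient estimates on a scalar instance; the coefficient values and the exact stage cost are illustrative assumptions, not the paper's benchmark model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar ZSMFTG instance. Names mirror the paper's notation
# (A, Abar, B1, B2, Q, R1, R2, gamma); values and stage cost are assumptions.
A, Abar = 0.8, 0.1
B1, B2 = 1.0, 0.5
Q, R1, R2 = 1.0, 1.0, 4.0
gamma, T, M = 0.9, 50, 256        # discount, horizon, population sample size

def cost(theta):
    """Monte-Carlo estimate of the discounted cost under linear feedbacks
    u_i = -K_i (x - xbar) - L_i xbar; player 1 minimises, player 2 maximises."""
    K1, L1, K2, L2 = theta
    x = rng.uniform(-1, 1, size=M)            # initial states ~ U([-1, 1])
    total = 0.0
    for t in range(T):
        xbar = x.mean()
        y, z = x - xbar, xbar                 # deviation / mean decomposition
        u1 = -K1 * y - L1 * z
        u2 = -K2 * y - L2 * z
        stage = Q * y**2 + R1 * u1**2 - R2 * u2**2   # zero-sum stage cost
        total += gamma**t * (stage.mean() + Q * z**2)
        x = A * x + Abar * xbar + B1 * u1 + B2 * u2 \
            + 0.1 * rng.normal(size=M)        # noise std 0.1, i.e. N(0, 0.01)
    return total

def grad(theta, eps=1e-2):
    """Two-sided finite differences, standing in for the paper's
    sample-based gradient estimator."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (cost(theta + e) - cost(theta - e)) / (2 * eps)
    return g

theta = np.zeros(4)                           # (K1, L1, K2, L2), all start at 0
eta1 = eta2 = 0.05                            # learning rates
for it in range(201):
    g = grad(theta)
    theta[:2] -= eta1 * g[:2]                 # player 1: gradient descent
    theta[2:] += eta2 * g[2:]                 # player 2: gradient ascent
    if it % 50 == 0:
        print(it, theta.round(3), round(cost(theta), 3))
```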
Figure 1. Model-based policy optimization: convergence of each part of the utility. (a) $ C_y $ as a function of $ (K_1,K_2) $. (b) $ C_z $ as a function of $ (L_1,L_2) $.
Figure 2. Model-based policy optimization: convergence of the control parameters in (a) and of the relative error on the utility in (b).
Figure 3. Sample-based policy optimization: convergence of each part of the utility. (a) $ C_y $ as a function of $ (K_1,K_2) $.
(b) $ C_z $ as a function of $ (L_1,L_2) $.
Figure 4. Sample-based policy optimization: convergence of the control parameters in (a) and of the relative error on the utility in (b).

Table 1. Simulation parameters.
Model parameters $ A $, $ \overline{A} $, $ B_1=\overline{B}_1 $, $ B_2=\overline{B}_2 $, $ Q $, $ \overline{Q} $, $ R_1=\overline{R}_1 $, $ R_2=\overline{R}_2 $, $ \gamma $ (numerical values not preserved in this copy).
Initial distribution and noise processes: $ \epsilon_0^0 \sim \mathcal{U}([-1, 1]) $, $ \epsilon^1_0 \sim \mathcal{U}([-1, 1]) $, $ \epsilon^0_t \sim \mathcal{N}(0, 0.01) $, $ \epsilon^1_t \sim \mathcal{N}(0, 0.01) $.
AG and DGA methods parameters: $ \mathcal{N}^{max}_1 = 10 $, $ \mathcal{N}^{max}_2 = 200 $, $ T = 2000 $, $ \eta_1 = 0.1 $, $ \eta_2 = 0.1 $, $ K_1^0 = 0.0 $, $ L_1^0 = 0.0 $, $ K_2^0 = 0.0 $, $ L_2^0 = 0.0 $.
Gradient estimation algorithm parameters: $ \mathcal{T} = 50 $, $ M = 10000 $, $ \tau = 0.1 $.
4 Integral Calculus

Use basic integration formulas to compute the following antiderivatives.
1. \(\displaystyle ∫(\sqrt{x}−\frac{1}{\sqrt{x}})dx\)
\(\displaystyle ∫(\sqrt{x}−\frac{1}{\sqrt{x}})dx=∫x^{1/2}dx−∫x^{−1/2}dx=\frac{2}{3}x^{3/2}+C_1−2x^{1/2}+C_2=\frac{2}{3}x^{3/2}−2x^{1/2}+C\)
2. \(\displaystyle ∫(e^{2x}−\frac{1}{2}e^{x/2})dx\)
3. \(\displaystyle ∫\frac{dx}{2x}\)
\(\displaystyle ∫\frac{dx}{2x}=\frac{1}{2}ln|x|+C\)
4. \(\displaystyle ∫\frac{x−1}{x^2}dx\)
5. \(\displaystyle ∫^π_0(sinx−cosx)dx\)
\(\displaystyle ∫^π_0sinxdx−∫^π_0cosxdx=−cosx|^π_0−(sinx)|^π_0=(−(−1)+1)−(0−0)=2\)
6. \(\displaystyle ∫^{π/2}_0(x−sinx)dx\)

Write an integral for the following problems:
1. Write an integral that expresses the increase in the perimeter \(\displaystyle P(s)\) of a square when its side length s increases from 2 units to 4 units and evaluate the integral.
\(\displaystyle P(s)=4s,\) so \(\displaystyle \frac{dP}{ds}=4\) and \(\displaystyle ∫^4_24ds=8.\)
2. Write an integral that quantifies the change in the area \(\displaystyle A(s)=s^2\) of a square when the side length doubles from S units to 2S units and evaluate the integral.
3. A regular N-gon (an N-sided polygon with sides that have equal length s, such as a pentagon or hexagon) has perimeter Ns. Write an integral that expresses the increase in perimeter of a regular N-gon when the length of each side increases from 1 unit to 2 units and evaluate the integral.
\(\displaystyle ∫^2_1Nds=N\)
4. The area of a regular pentagon with side length \(\displaystyle a>0\) is \(\displaystyle pa^2\) with \(\displaystyle p=\frac{1}{4}\sqrt{5+\sqrt{5+2\sqrt{5}}}\). The Pentagon in Washington, DC, has inner sides of length 360 ft and outer sides of length 920 ft. Write an integral to express the area of the roof of the Pentagon according to these dimensions and evaluate this area.
5. A dodecahedron is a Platonic solid with a surface that consists of 12 pentagons, each of equal area. By how much does the surface area of a dodecahedron increase as the side length of each pentagon doubles from 1 unit to 2 units?
With p as in the previous exercise, each of the 12 pentagons increases in area from p to 4p units, so the net increase in the area of the dodecahedron is 36p units.
6. An icosahedron is a Platonic solid with a surface that consists of 20 equilateral triangles. By how much does the surface area of an icosahedron increase as the side length of each triangle doubles from a unit to 2a units?
7. Write an integral that quantifies the change in the area of the surface of a cube when its side length doubles from s unit to 2s units and evaluate the integral.
\(\displaystyle 18s^2=6∫^{2s}_s2xdx\)
8. Write an integral that quantifies the increase in the volume of a cube when the side length doubles from s unit to 2s units and evaluate the integral.
9. Write an integral that quantifies the increase in the surface area of a sphere as its radius doubles from R unit to 2R units and evaluate the integral.
\(\displaystyle 12πR^2=8π∫^{2R}_Rrdr\)
10. Write an integral that quantifies the increase in the volume of a sphere as its radius doubles from R unit to 2R units and evaluate the integral.

Solve the following particle problems:
1. Suppose that a particle moves along a straight line with velocity \(\displaystyle v(t)=4−2t,\) where \(\displaystyle 0≤t≤2\) (in meters per second). Find the displacement at time t and the total distance traveled up to \(\displaystyle t=2.\)
\(\displaystyle d(t)=∫^t_0v(s)ds=4t−t^2\). The total distance is \(\displaystyle d(2)=4\) m.
2.
Suppose that a particle moves along a straight line with velocity defined by \(\displaystyle v(t)=t^2−3t−18,\) where \(\displaystyle 0≤t≤6\) (in meters per second). Find the displacement at time t and the total distance traveled up to \(\displaystyle t=6.\)
3. Suppose that a particle moves along a straight line with velocity defined by \(\displaystyle v(t)=|2t−6|,\) where \(\displaystyle 0≤t≤6\) (in meters per second). Find the displacement at time t and the total distance traveled up to \(\displaystyle t=6.\)
\(\displaystyle d(t)=∫^t_0v(s)ds.\) For \(\displaystyle t<3, d(t)=∫^t_0(6−2s)ds=6t−t^2\). For \(\displaystyle t>3, d(t)=d(3)+∫^t_3(2s−6)ds=9+(t^2−6t+9)\). The total distance is \(\displaystyle d(6)=18\) m.
4. Suppose that a particle moves along a straight line with acceleration defined by \(\displaystyle a(t)=t−3,\) where \(\displaystyle 0≤t≤6\) (in meters per second). Find the velocity and displacement at time t and the total distance traveled up to \(\displaystyle t=6\) if \(\displaystyle v(0)=3\) and \(\displaystyle d(0)=0.\)

Solve the following word problems:
1. A ball is thrown upward from a height of 1.5 m at an initial speed of 40 m/sec. Acceleration resulting from gravity is \(\displaystyle −9.8 m/sec^2\). Neglecting air resistance, solve for the velocity \(\displaystyle v(t)\) and the height \(\displaystyle h(t)\) of the ball t seconds after it is thrown and before it returns to the ground.
\(\displaystyle v(t)=40−9.8t\) m/sec; \(\displaystyle h(t)=1.5+40t−4.9t^2\) m
2. A ball is thrown upward from a height of 3 m at an initial speed of 60 m/sec. Acceleration resulting from gravity is \(\displaystyle −9.8 m/sec^2\). Neglecting air resistance, solve for the velocity \(\displaystyle v(t)\) and the height \(\displaystyle h(t)\) of the ball t seconds after it is thrown and before it returns to the ground.
3. The area \(\displaystyle A(t)\) of a circular shape is growing at a constant rate. If the area increases from 4π units to 9π units between times \(\displaystyle t=2\) and \(\displaystyle t=3,\) find the net change in the radius during that time.
The net increase is 1 unit.
4. A spherical balloon is being inflated at a constant rate. If the volume of the balloon changes from \(\displaystyle 36π in.^3\) to \(\displaystyle 288π in.^3\) between time \(\displaystyle t=30\) and \(\displaystyle t=60\) seconds, find the net change in the radius of the balloon during that time.
5. Water flows into a conical tank with cross-sectional area \(\displaystyle πx^2\) at height x and volume \(\displaystyle \frac{πx^3}{3}\) up to height x. If water flows into the tank at a rate of 1 \(\displaystyle m^3/min\), find the height of water in the tank after 5 min. Find the change in height between 5 min and 10 min.
At \(\displaystyle t=5\), the height of water is \(\displaystyle x=(\frac{15}{π})^{1/3}\) m. The net change in height from \(\displaystyle t=5\) to \(\displaystyle t=10\) is \(\displaystyle (\frac{30}{π})^{1/3}−(\frac{15}{π})^{1/3}\) m.
6. A horizontal cylindrical tank has cross-sectional area \(\displaystyle A(x)=4(6x−x^2)m^2\) at height x meters above the bottom when \(\displaystyle x≤3.\)
a. The volume V between heights a and b is \(\displaystyle ∫^b_aA(x)dx.\) Find the volume at heights between 2 m and 3 m.
b. Suppose that oil is being pumped into the tank at a rate of 50 L/min. Using the chain rule, \(\displaystyle \frac{dx}{dt}=\frac{dx}{dV}\frac{dV}{dt}\), at how many meters per minute is the height of oil in the tank changing, expressed in terms of x, when the height is at x meters?
c.
How long does it take to fill the tank to 3 m starting from a fill level of 2 m?

The following table lists the electrical power in gigawatts—the rate at which energy is consumed—used in a certain city for different hours of the day, in a typical 24-hour period, with hour 1 corresponding to midnight to 1 a.m.
[Table: Hour vs. Power; the hourly values are not preserved in this copy.]
Find the total amount of power in gigawatt-hours (gW-h) consumed by the city in a typical 24-hour period.
The total daily power consumption is estimated as the sum of the hourly power rates, or 911 gW-h.

The average residential electrical power use (in hundreds of watts) per hour is given in the following table.
a. Compute the average total energy used in a day in kilowatt-hours (kWh).
b. If a ton of coal generates 1842 kWh, how long does it take for an average residence to burn a ton of coal?
c. Explain why the data might fit a plot of the form \(\displaystyle p(t)=11.5−7.5sin(\frac{πt}{12}).\)

The data in the following table (Average Power Output; source: sportsexercisengineering.com) are used to estimate the average power output produced by Peter Sagan for each of the last 18 sec of Stage 1 of the 2012 Tour de France.
Second vs. Watts (partial data): 1 → 600; 7 → 1050; 10 → 1200; 16 → 950. (The remaining rows are not preserved in this copy.)
Estimate the net energy used in kilojoules (kJ), noting that 1 W = 1 J/s, and the average power output by Sagan during this time interval.

The data in the following table are used to estimate the average power output produced by Peter Sagan for each 15-min interval of Stage 1 of the 2012 Tour de France.
[Table: Minutes vs. Watts; the values are not preserved in this copy.]
Estimate the net energy used in kilojoules, noting that 1 W = 1 J/s.

The distribution of incomes as of 2012 in the United States in $5000 increments is given in the following table (source: http://www.census.gov/prod/2013pubs/p60-245.pdf). The kth row denotes the percentage of households with incomes between $5000k and $5000k + 4999. The row \(\displaystyle k=40\) contains all households with income between $200,000 and $250,000, and \(\displaystyle k=41\) accounts for all households with income exceeding $250,000.
k vs. percentage (partial data): 0 → 3.5; 6 → 5.5; 10 → 4.3; 11 → 3.5; 20 → 2.1; 21 → 1.5; 27 → 0.75; 31 → 0.6; 32 → 0.5; 41 → (value not preserved).
a. Estimate the percentage of U.S. households in 2012 with incomes less than $55,000.
b. What percentage of households had incomes exceeding $85,000?
c. Plot the data and try to fit its shape to that of a graph of the form \(\displaystyle a(x+c)e^{−b(x+e)}\) for suitable \(\displaystyle a,b,c.\)
a. 54.3%; b. 27.00%; c. The curve in the following plot is \(\displaystyle 2.35(t+3)e^{−0.15(t+3)}.\)
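Several of the worked answers in this exercise set can be checked mechanically. The SymPy snippet below (an addition, not part of the original set) verifies the first antiderivative, the square-perimeter integral, the displacement in the first particle problem, and the ball-trajectory answer h(t) = 1.5 + 40t − 4.9t².

```python
import sympy as sp

x, t, s = sp.symbols('x t s', positive=True)

# Antiderivative of sqrt(x) - 1/sqrt(x)
print(sp.integrate(sp.sqrt(x) - 1/sp.sqrt(x), x))   # 2*x**(3/2)/3 - 2*sqrt(x)

# Perimeter increase of a square, side 2 -> 4
print(sp.integrate(4, (s, 2, 4)))                   # 8

# Displacement for v(t) = 4 - 2t on [0, 2]
print(sp.integrate(4 - 2*t, (t, 0, 2)))             # 4 (metres)

# Ball thrown from 1.5 m at 40 m/sec: recover h(t) from a(t) = -9.8
a = -sp.Rational(98, 10)
vel = 40 + sp.integrate(a, (t, 0, t))               # v(t) = 40 - 9.8t
h = sp.Rational(3, 2) + sp.integrate(vel.subs(t, s), (s, 0, t))
print(sp.simplify(h))                               # 1.5 + 40t - 4.9t**2
```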
Link Prediction Algorithm for Signed Social Networks Based on Local and Global Tightness

Miao-Miao Liu*, Qing-Cui Hu*, Jing-Feng Guo** and Jing Chen

Corresponding Author: Qing-Cui Hu*, [email protected]
Miao-Miao Liu*, Dept. of Computer Science, School of Computer and Information Technology, Northeast Petroleum University, Daqing, Heilongjiang, China, [email protected]
Qing-Cui Hu*, Dept. of Computer Science, School of Computer and Information Technology, Northeast Petroleum University, Daqing, Heilongjiang, China, [email protected]
Jing-Feng Guo**, Dept. of Computer Science and Engineering, College of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei, China, [email protected]
Jing Chen, Dept. of Computer Science and Engineering, College of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei, China, [email protected]

Received: November 7, 2020
Accepted: December 4, 2020

Abstract: Given that most link prediction algorithms for signed social networks can only perform sign prediction, a novel algorithm is proposed that achieves both link prediction and sign prediction in signed networks. Based on the structural balance theory, the local link tightness and the global link tightness are defined using the structural information of paths with step sizes of 2 and 3 between the two nodes. The total similarity of the node pair is then obtained by combining them. Its absolute value measures the possibility that the two nodes establish a link, and its sign is the sign prediction result for the predicted link. The effectiveness and correctness of the proposed algorithm are verified on six typical datasets. Comparison and analysis are also carried out with classical prediction algorithms for signed networks such as CN-Predict, ICN-Predict, and PSNBS (prediction in signed networks based on balance and similarity), using evaluation indexes such as the area under the curve (AUC), Precision, the improved AUC′, the improved Accuracy′, and so on. Results show that the proposed algorithm achieves good performance in both link prediction and sign prediction, and its accuracy is higher than that of the other algorithms. Moreover, it achieves a good balance between prediction accuracy and computational complexity.

Keywords: Link Prediction, Sign Prediction, Signed Social Networks, Similarity, Structural Balance Theory, Tightness

1. Introduction

Nowadays, users typically take to online social media to express their opinions, which can inherently be both positive and negative. Such social networks can therefore be represented by signed social networks. Namely, signed networks are social networks with positive and negative sign attributes. The signs "+" and "–" indicate, respectively, positive relationships (such as support) and negative relationships (such as opposition) between entities. Research on prediction in signed networks is of great significance for understanding the interaction between positive and negative relations, the formation principle and evolution mechanism of signs, and the analysis of network structure balance. It has important application value in node classification, attitude prediction, and recommendation systems in the information field, as well as in the analysis of intermolecular promotion or inhibition in the biological field. Regarding prediction in signed networks, most existing algorithms can only complete the prediction of missing signs; few of them can achieve the double task of link prediction and sign prediction.
Though the research of Liu et al. [1] realized both sign prediction and link prediction in signed networks, the accuracy of the algorithm depended to a certain extent on the value of the influence factor of the adjustable step size. Besides, relevant research on mainstream similarity-based link prediction methods shows that, when the step size is greater than 3, the computational complexity of the algorithm increases greatly while its prediction accuracy is not significantly improved. Given the above problems, and in order to achieve a better balance between prediction accuracy and computational complexity, a novel algorithm PSSN_TLG (prediction in signed social networks based on tightness of local and global, hereinafter referred to as TLG) is proposed, aiming to complete both link prediction and sign prediction. Based on the structural balance theory, the local link tightness and the global link tightness are defined using the structural information of paths with step sizes of 2 and 3 between the two nodes. The total similarity is then obtained by fusing the two results to perform link and sign prediction.

2. Related Work

This paper focuses on link prediction and sign prediction in undirected signed networks. At present, most related work has focused mainly on sign prediction. These methods can be roughly divided into supervised learning and unsupervised learning. Prediction methods with supervised learning are mainly based on the existing sign attributes of links: they select structural features and use decision trees and other classifiers to complete sign prediction. According to the information used in the selection of network structure features, they are further divided into sign prediction algorithms based on structural balance theory and sign prediction algorithms based on user interaction behavior and context information [2]. For example, in [3], the authors mined the characteristics of unlabeled links and improved the sign prediction accuracy through transfer learning of four network features, namely node degree, structural balance, status, and potential state. Prediction methods with unsupervised learning mainly use the network structure attributes and the interactive information between nodes to complete sign prediction. They can be roughly divided into two categories, namely, sign prediction based on matrix decomposition or matrix completion, and sign prediction based on node similarity. The representative research achievements are as follows. Su and Song [4] proposed a low-rank matrix decomposition model with offset, which introduced the signs of the out-edges and in-edges of neighbor nodes as offset information to improve the accuracy of sign prediction. Shen et al. [5] mainly focused on the prediction of negative links and proposed a framework based on projected non-negative matrix decomposition through unsupervised learning embedded in the network structure and user attributes. Although the sign prediction algorithms based on supervised learning and on matrices can achieve high prediction accuracy, they usually have high computational complexity, and these models are hard to evaluate. Moreover, owing to their poor prediction performance on sparse networks, they are not conducive to wide adoption in practical applications. Therefore, algorithms based on node similarity are still the mainstream link prediction methods, and the representative works are as follows. Girdhar et al.
[6] proposed a link prediction model based on local and global information, in which the connection strength between entities is distinguished through a fuzzy computational model of trust and distrust. Chen et al. [7] used set pair theory to treat the signed network as an identity-discrepancy-reverse system and proposed a sign prediction algorithm fusing certainty and uncertainty relations as well as local and global information. Zhu and Ma [8] identified a highly symmetrical quadrilateral structure by analyzing the sign generation mechanism; based on the statistical characteristics of the local structure, they extracted the similarities and divergences of node pairs and the structural characteristics reflecting the positive and negative attitudes of nodes, and then completed the sign prediction. In addition to the above methods, related scholars have also studied link prediction in sparse signed networks. For the problem of the sparseness of signed link data in signed networks, i.e., only a small percentage of signed links are given and the number of negative links is much smaller than that of positive links, Beigi et al. [9] proposed a novel signed link prediction model which enabled the empirical exploration of user personalities via social media data. Based on psychology theories, they extracted additional information about users' personalities, such as optimism and pessimism, that can help determine their propensity to establish positive and negative links, in order to compensate for data sparsity. Generally, in social networks, users tend to establish positive (or negative) links with those whose generated content they frequently interact with positively (or negatively) online. Based on the verification of these assumptions, a framework for solving the link and interaction polarity prediction problem in signed networks was proposed [10] by understanding the correlation between these two types of opinions from both a local and a global perspective. The algorithm was demonstrated to be helpful for both the data sparsity and cold-start problems found in signed networks. In recent years, some scholars have studied link representation and prediction methods based on convolutional neural networks and recurrent neural networks with a deep learning mechanism. An ordered node sequence is constructed using the local topological structure between nodes, and a matrix representation of potential links is generated from the node vector expressions. Finally, the multi-layer implicit relations of node pairs in the node sequence are extracted by the neural network operations and used to realize link prediction. However, most of these algorithms focus on link prediction in traditional social networks, and few studies on link prediction and sign prediction for signed networks are found.
3. Problem Statement

3.1 Theoretical Foundation

An undirected and unweighted signed network is usually formalized as G = (V, E, S), in which $$V=\{v_{1}, v_{2}, \ldots, v_{n}\}$$ represents the node set, $$E=\{e(i, j) \mid v_{i}, v_{j} \in V, i \neq j\}$$ represents the edge set, and $$S=\{\operatorname{sign}(i, j) \mid v_{i}, v_{j} \in V, i \neq j\}$$ represents the set of signs, with values as follows:

$$e(i, j)=\begin{cases} 1, & (v_{i}, v_{j}) \in E \\ 0, & (v_{i}, v_{j}) \notin E \end{cases}$$

$$\operatorname{sign}(i, j)=\begin{cases} +1, & e(i, j)=1 \wedge \text{the link is positive} \\ -1, & e(i, j)=1 \wedge \text{the link is negative} \\ 0, & (e(i, j)=1 \wedge \text{the sign is unknown}) \vee e(i, j)=0 \end{cases}$$

The structural balance theory provides a theoretical basis for the analysis of undirected signed networks, as shown in Fig. 1. According to this theory, a loop consisting of k edges (k ≥ 3) is structurally balanced if and only if the product of the signs of all its edges is positive. Relevant studies show that the number of balanced rings is far greater than that of unbalanced rings and that, as the network develops dynamically, unbalanced structures evolve towards balanced structures. Therefore, the structural balance theory has been widely used for sign prediction in undirected signed networks.

Fig. 1. Sketch of the structural balance theory.
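As a concrete illustration of the balance criterion (a minimal sketch of ours, not code from the paper), the following Python fragment checks whether a ring is balanced given the list of edge signs along it:

```python
def is_balanced(ring_signs):
    """Check structural balance of a ring (cycle) in a signed network.

    ring_signs: list of +1/-1 edge signs along a cycle of length >= 3.
    A ring is balanced iff the product of all its edge signs is positive.
    """
    product = 1
    for s in ring_signs:
        product *= s
    return product > 0

# Balanced triads (+,+,+) and (+,-,-) versus an unbalanced one:
print(is_balanced([+1, +1, +1]))  # True
print(is_balanced([+1, -1, -1]))  # True
print(is_balanced([+1, +1, -1]))  # False
```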
3.2 Problem Definition

The prediction research in signed networks includes two aspects. On the one hand, it is necessary to analyze the probability that a link is established between two nodes that are not yet connected, namely, the link prediction problem. It is generally believed that the higher the similarity between two nodes, the greater the possibility of establishing a link between them. On the other hand, it is necessary to determine the sign type of new links or of existing links without sign attributes, namely, the sign prediction problem. Generally, the algorithm should strive to ensure that the rings containing the predicted link maximize the structural balance of the network as much as possible. Therefore, in this paper, we define the similarity between two nodes based on the characteristics of the network topology and the structural balance theory, aiming to achieve the two tasks of link prediction and sign prediction. The goal of the algorithm can be expressed as follows. Given G = (V, E, S) and $$\forall v_{i}, v_{j} \in V$$, we aim to predict the possibility of establishing a link between nodes $$v_{i}$$ and $$v_{j}$$ where e(i, j) = 0, together with the corresponding sign type of the link. The algorithm also predicts the missing sign attribute of existing links where e(i, j) = 1. As shown in Fig. 2, solid lines represent known edges and dotted lines represent unknown edges (which do not exist, or exist but have not yet been observed). Nodes $$v_{1}$$ and $$v_{10}$$ are not directly connected at present, but owing to the existence of common neighbors, $$v_{1}$$ can reach $$v_{10}$$ through multiple paths; that is to say, the probability of establishing a link between them is very high. However, because the prediction results for sign(1,10) based on the individual balanced rings differ (they can be positive or negative), the algorithm should effectively integrate the local and global structural features that affect the similarity of the two nodes, defining the total similarity based on the concept of the k-balanced ring. Its ultimate goal is to obtain the probability of establishing a link between $$v_{1}$$ and $$v_{10}$$ and to predict the corresponding sign type, namely, sign(1,10).

Fig. 2. Diagram of the problem definition of the TLG algorithm.

4. Proposed Algorithm

4.1 Core Ideas and Related Definitions

According to the local structure information of the social network graph, the classical CN index takes the number of first-order common neighbors of the node pair into account, and the Jaccard index considers the number of non-common neighbor nodes, while the AA index and the RA index consider the degrees of the first-order common neighbors. Each of the four indicators focuses on one of the factors that affect the similarity and ignores the influence of the other local features on the similarity of the node pair. Herein, sim(x, y) represents the similarity value of the node pair <$$v_{x}$$, $$v_{y}$$>. According to the CN and Jaccard indexes, sim(1,10) in Fig. 3(a) is greater than sim(1,10) in Fig. 3(b). According to the AA and RA indexes, sim(1,10) in Fig. 3(a) is greater than sim(1,10) in Fig. 3(c).

Fig. 3. Diagram of similarity definition based on local tightness.

Definition 4.1 (Local link tightness). To improve the prediction accuracy, the TLG algorithm considers the degrees of the two nodes, their first-order common neighbors and non-common neighbors, the signs of the edges, and other local structural characteristics. $$\forall v_{x}, v_{y} \in V$$ with sign(x, y) = 0, based on the concept of the structural balanced ring, the local link tightness of the node pair is defined using the information of paths connecting the two nodes with a step size of 2, and is recorded as TLSim(x, y). In this paper, k(x) denotes the degree of node $$v_{x}$$; $$N_{1}(x)$$ and $$N_{2}(x)$$ denote the first-order and second-order neighbor sets of node $$v_{x}$$, respectively; and $$\left|N_{1}(x)\right|$$ denotes the size of the set $$N_{1}(x)$$.

$$\operatorname{TLSim}(x, y)=\frac{\sum_{z \in N_{1}(x) \cap N_{1}(y)} \frac{\operatorname{sign}(x, z) \times \operatorname{sign}(z, y)}{k(z)}}{\left|N_{1}(x) \cup N_{1}(y)\right|}$$

Definition 4.2 (Global link tightness). According to the global structure information of the network, the LP and Katz indexes consider the influence on the similarity of paths with step sizes of 2 and 3, or of all paths, respectively, and assign weight decay factors to paths of different step sizes. However, the LP index ignores the impact of local information on the similarity, while the Katz index needs to compute all path information, which leads to high complexity. Given this, to achieve a better balance between accuracy and complexity, the TLG algorithm takes into account the influence on the similarity of the structural information of paths with a step size of 3 connecting the two nodes. First, the link strength of two directly connected nodes is defined.
Then, based on the concept of the structural balanced ring, the global link tightness is defined from the link strengths of the three pairs of intermediate transmission nodes on each path connecting the two nodes with a step size of 3, written as TGSim(x, y).

$$\operatorname{TGSim}(x, y)=\sum_{l_{k}} \frac{\operatorname{sign}(x, z_{k}^{\prime})}{\log k(x)+\log k(z_{k}^{\prime})+1} \times \frac{\operatorname{sign}(z_{k}^{\prime}, z_{k}^{\prime\prime})}{\log k(z_{k}^{\prime})+\log k(z_{k}^{\prime\prime})+1} \times \frac{\operatorname{sign}(z_{k}^{\prime\prime}, y)}{\log k(z_{k}^{\prime\prime})+\log k(y)+1}$$

In the definition of TGSim(x, y), $$l_{k}$$ is the kth path with a step size of 3 connecting $$v_{x}$$ and $$v_{y}$$, namely, $$l_{k}=v_{x}\, e(x, z_{k}^{\prime})\, v_{z_{k}^{\prime}}\, e(z_{k}^{\prime}, z_{k}^{\prime\prime})\, v_{z_{k}^{\prime\prime}}\, e(z_{k}^{\prime\prime}, y)\, v_{y}$$, while $$v_{z_{k}^{\prime}}$$ and $$v_{z_{k}^{\prime\prime}}$$ are the two intermediate nodes on the path $$l_{k}$$. That is to say, $$v_{z_{k}^{\prime}} \in N_{1}(x) \cap N_{2}(y)$$, $$v_{z_{k}^{\prime\prime}} \in N_{1}(y) \cap N_{2}(x)$$, and $$e(x, z_{k}^{\prime})=1$$, $$e(z_{k}^{\prime}, z_{k}^{\prime\prime})=1$$, $$e(z_{k}^{\prime\prime}, y)=1$$.

Definition 4.3 (Total similarity between two nodes). The total similarity between two unconnected nodes is defined as the sum of the local and the global link tightness of the two nodes, denoted TLGSim(x, y), as follows. |TLGSim(x, y)| represents the possibility that nodes $$v_{x}$$ and $$v_{y}$$ establish a link, and the prediction result for sign(x, y) is the same as the sign of TLGSim(x, y).

$$\operatorname{TLGSim}(x, y)=\operatorname{TLSim}(x, y)+\operatorname{TGSim}(x, y)$$

4.2 Description and Implementation

Algorithm TLG
Input: G = (V, E, S), the static snapshot graph of an undirected signed network.
Output: the Top-k links that are the most likely to be established in G, their corresponding signs, and the missing sign attributes of existing edges in G.
$$\forall v_{x}, v_{y} \in V$$, if e(x, y) = 0, or e(x, y) = 1 but sign(x, y) = 0, perform the following operations:
1) Read the dataset file and obtain the adjacency matrix and adjacency table of the graph G.
2) Find all such node pairs <$$v_{x}$$, $$v_{y}$$>, get the adjacency lists of $$v_{x}$$ and $$v_{y}$$, and calculate TLSim(x, y).
3) Get all the second-order neighbors of $$v_{x}$$ and $$v_{y}$$, and calculate TGSim(x, y).
4) Calculate TLGSim(x, y).
5) Do the prediction. For the missing sign of an existing edge, if TLGSim(x, y) > 0, then sign(x, y) = +1, while if TLGSim(x, y) < 0, then sign(x, y) = −1. For unknown links, |TLGSim(x, y)| is sorted in descending order and the node pairs corresponding to the Top-k values, which have the highest probability of establishing links, are output; the predicted sign(x, y) of an unknown link is the same as the sign of TLGSim(x, y).
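To make Definitions 4.1–4.3 and the algorithm concrete, the following Python sketch is our own illustration (the authors' implementation is not given in the paper; the graph representation and all names are ours):

```python
import math

# Signed graph as adjacency dict {node: [neighbors]} plus a symmetric
# sign dict: sign[(u, v)] == sign[(v, u)] in {+1, -1}.

def tl_sim(adj, sign, x, y):
    """Local link tightness (Definition 4.1): paths of step size 2."""
    nx_, ny = set(adj[x]), set(adj[y])
    union = nx_ | ny
    if not union:
        return 0.0
    s = sum(sign[(x, z)] * sign[(z, y)] / len(adj[z]) for z in nx_ & ny)
    return s / len(union)

def tg_sim(adj, sign, x, y):
    """Global link tightness (Definition 4.2): paths of step size 3."""
    total = 0.0
    for z1 in adj[x]:                       # first intermediate node
        if z1 == y:
            continue
        for z2 in adj[z1]:                  # second intermediate node
            if z2 in (x, y) or y not in adj[z2]:
                continue
            w1 = sign[(x, z1)] / (math.log(len(adj[x])) + math.log(len(adj[z1])) + 1)
            w2 = sign[(z1, z2)] / (math.log(len(adj[z1])) + math.log(len(adj[z2])) + 1)
            w3 = sign[(z2, y)] / (math.log(len(adj[z2])) + math.log(len(adj[y])) + 1)
            total += w1 * w2 * w3
    return total

def tlg_sim(adj, sign, x, y):
    """Total similarity (Definition 4.3): |value| scores the link,
    its sign is the predicted sign."""
    return tl_sim(adj, sign, x, y) + tg_sim(adj, sign, x, y)

# Tiny example: a 4-node signed graph (symmetric storage).
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
sign = {}
for (a, b, s) in [(1, 2, +1), (1, 3, -1), (2, 4, +1), (3, 4, -1)]:
    sign[(a, b)] = sign[(b, a)] = s
print(tlg_sim(adj, sign, 1, 4))  # score for the candidate link <v1, v4>
```

Sorting |tlg_sim| over all candidate pairs then yields the Top-k recommendations of step 5.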
5. Experiments

Six representative datasets are selected, and the training and test sets are divided using the 10-fold cross-validation method. The improved AUC′ and Accuracy′ are used as evaluation indexes, and the results of the proposed algorithm are compared with those of the CN (common neighbor), ICN (improved common neighbor), and PSNBS (prediction in signed networks based on balance and similarity) [1] algorithms.

5.1 Dataset

Three classical large-scale real datasets and three small datasets are used for the experiments. The basic information is shown in Table 1. Among them, the FEC and CRA networks are two simulation datasets, and the Gahuku–Gama Sub-tribes signed network, written as GGS, is a real dataset.

5.2 Evaluation Index

The classical AUC index [2] is suitable for evaluating the link prediction accuracy in traditional social networks with only positive links. However, the total similarity score between two nodes in the TLG algorithm may be positive or negative: its absolute value represents the probability that the two nodes establish a link, and its sign represents the sign type of the predicted link. Therefore, in our experiments, the classical AUC index is adjusted to obtain a new index, written as AUC′, which can correctly evaluate link prediction results in signed networks. It is defined as follows. In each experiment, the total similarity is calculated for the node pair corresponding to an edge selected randomly from the test set and for an edge selected from the nonexistent edge set, and the comparison is made only when the signs of the two similarity values are the same. If the absolute value of the total similarity for the edge from the test set is greater than that for the edge from the nonexistent edge set, $$\bar{n}^{\prime}$$ is incremented by 1. If the two absolute values are equal, $$\bar{n}^{\prime\prime}$$ is incremented by 1. If the signs of the two differ, the comparison is discarded and edges are drawn again. In each group of experiments, it is ensured that there are 20,000 comparisons in which the signs of the two similarity values agree; that is, $$\bar{n}$$ = 20,000.

$$\mathrm{AUC}^{\prime}=\frac{\bar{n}^{\prime}+0.5 \times \bar{n}^{\prime \prime}}{\bar{n}}$$

For sign prediction in signed networks, a series of evaluation indexes can be obtained, such as TP, FP, TN, FN, Recall, Precision, Accuracy, F1-score, etc. Sign prediction needs to evaluate the comprehensive prediction accuracy of both positive and negative signs. Related research has shown that in most real signed networks the ratio of the number of positive links to the number of negative links is greater than 3:1; that is, positive links are far more likely to be selected in the experiment than negative links. Therefore, we improve the Accuracy index: the index Accuracy′ is obtained by giving a weight of 1 to the sign prediction results of positive edges and a weight of 0.5 to those of negative edges, and it is used as the comprehensive evaluation index, as follows.

$$\text{Accuracy}^{\prime}=\frac{TP+0.5 \times TN}{(TP+FN)+0.5 \times(TN+FP)}$$
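A minimal sketch of both indexes follows (our illustration; the edge-sampling protocol is simplified relative to the paper's full procedure):

```python
import random

def auc_prime(test_scores, nonexistent_scores, n_target=20000, rng=random):
    """AUC': compare |score| of a random test edge vs. a random nonexistent
    edge, counting only draws whose two scores share the same sign."""
    n1 = n2 = done = 0
    while done < n_target:
        s_test = rng.choice(test_scores)
        s_none = rng.choice(nonexistent_scores)
        if s_test * s_none <= 0:          # signs differ (or zero): redraw
            continue
        if abs(s_test) > abs(s_none):
            n1 += 1
        elif abs(s_test) == abs(s_none):
            n2 += 1
        done += 1
    return (n1 + 0.5 * n2) / n_target

def accuracy_prime(tp, fp, tn, fn):
    """Accuracy' with weight 1 for positive edges, 0.5 for negative edges."""
    return (tp + 0.5 * tn) / ((tp + fn) + 0.5 * (tn + fp))
```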
5.3 Experimental Results and Analysis

(1) Analysis of link prediction accuracy: The experimental results of the link prediction accuracy of the TLG algorithm based on the AUC′ evaluation index are shown in Fig. 4. The prediction accuracy and the average value of ten independent experiments (abbreviated as AVG) are reported. The number of links effectively predicted and compared in each independent experiment is 20,000, namely, $$\bar{n}$$ = 20,000. As can be seen from Fig. 4, the TLG algorithm shows high link prediction accuracy for the three large real signed networks and for the small simulation network CRA (shown in Figs. 5 and 6). These four datasets are signed networks with conventional topology, in which the number of positive links is much larger than that of negative links and the degree distribution of the nodes has no particularity.

Fig. 4. Prediction results of TLG based on AUC′.
Fig. 5. Topology of the CRA network.
Fig. 6. Degree distribution of the CRA network.

For the GGS network, the accuracy of the proposed algorithm is lower than for the first four datasets. This network describes the real political alliances and enmities among 16 sub-tribes, as shown in Fig. 7. The proportion of positive to negative links is 1:1, and the degree distribution of the 16 nodes is shown in Fig. 8. For this dataset with the same number of positive and negative links, the prediction accuracy of the TLG algorithm can still reach 67%, which shows its good robustness.

Fig. 7. Topology of the GGS network.
Fig. 8. Degree distribution of the GGS network.

For the FEC network, the accuracy of the TLG algorithm remains 0.5. The topology and node degree distribution of this dataset are shown in Figs. 9 and 10. It can be seen that the 28 nodes are divided into two types with the same degree distribution. When calculating AUC′, the topological structure of the links obtained from the test set and from the nonexistent edge set is almost the same in most cases. In other words, the probability that the degree distributions of the node pairs corresponding to these two links are not the same is $$C_{24}^{2} C_{4}^{2} / (C_{28}^{2} C_{26}^{2})$$, which equals 0.0135. Namely, $$\bar{n}^{\prime} \approx 0$$ and $$\bar{n}^{\prime\prime} \approx \bar{n}$$. Therefore, the value of AUC′ should be 0.5. The experimental results also verify the correctness of the TLG algorithm.
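The quoted probability can be checked directly (our arithmetic): with $$C_{24}^{2}=276$$, $$C_{4}^{2}=6$$, $$C_{28}^{2}=378$$ and $$C_{26}^{2}=325$$,

$$\frac{C_{24}^{2}\,C_{4}^{2}}{C_{28}^{2}\,C_{26}^{2}}=\frac{276\times 6}{378\times 325}=\frac{1656}{122850}\approx 0.0135$$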
Fig. 9. Topology of the FEC network.
Fig. 10. Degree distribution of the FEC network.

(2) Analysis of sign prediction accuracy: Similarly, the sign prediction accuracy of the proposed algorithm is verified on the six datasets. The experimental results are shown in Table 2 and Fig. 11.

Fig. 11. Prediction results of TLG based on Accuracy′.

(3) Comparison with other algorithms: Experimental comparisons are carried out with the CN-Predict, ICN-Predict, and PSNBS algorithms. AUC′, Accuracy′, and AUC [1] are used as evaluation indexes. Results are shown in Figs. 12, 13, and 14, respectively. Note that the results reported for the PSNBS algorithm are its highest accuracy, obtained when the step-size influence factor $$\lambda$$ is assigned its optimal value. Results show that the TLG algorithm does not need a preset influence factor, and it achieves a good prediction effect both for conventional large-scale signed networks and for small-scale signed networks with special topology. Moreover, its accuracy is higher than that of the other algorithms.

Fig. 12. Link prediction results based on AUC′.
Fig. 13. Sign prediction results based on Accuracy′.
Fig. 14. Comparison results based on AUC [1].

(4) The Top-k recommended links: According to the TLG algorithm, the Top-100 recommended links are given for the first three large-scale networks, as shown in Figs. 15–17. Meanwhile, the Top-50 recommended links are also given for the two small-scale datasets, as shown in Fig. 18. In these four figures, the serial numbers of the node pairs and their similarities are displayed, and the predicted positive and negative links are represented by different colors. Besides, the link recommendation results of the TLG and PSNBS algorithms are compared on the two small datasets, as shown in Fig. 19. The results of the proposed algorithm are consistent with those of the PSNBS algorithm, which further verifies the correctness and effectiveness of the TLG algorithm.

Fig. 15. Recommended links for Epinions.
Fig. 16. Recommended links for Slashdot.
Fig. 17. Recommended links for Wikipedia.
Fig. 18. Recommended links for GGS and CRA.
Fig. 19. Comparison of results of recommended links.

6. Conclusion

A link prediction algorithm for signed networks has been proposed. It effectively exploits the local and global structural features that affect the similarity of nodes and achieves the dual goals of link prediction and sign prediction. Experimental results show a better prediction effect than other methods. However, for large-scale signed networks, how to reduce the computational complexity remains a key problem. Besides, the proposed algorithm takes a static snapshot of the signed network as its research object, while real social networks are dynamic. Therefore, in future work, we may integrate deep learning methods to mine more structural properties and complete link prediction in signed networks.

Acknowledgement

This work was supported by the Natural Science Foundation of China (No. 42002138 and 61871465), the Natural Science Foundation of Heilongjiang Province (No. LH2019F042 and LH2020F003), the Youth Science Foundation of Northeast Petroleum University (No. 2018QNQ-01), and the Science & Technology Program of Hebei (No. 20310301D).

Miao-Miao Liu: She was born in 1982 and is currently a full professor and Master's supervisor at Northeast Petroleum University in China. She received her doctorate in computer science and technology from Yanshan University in 2017. Her main research interests include community discovery and link prediction in social networks.

Qing-Cui Hu

Jing-Feng Guo: He is currently a full professor and doctoral supervisor at Yanshan University in China. His research interests include database theory, data mining, and social network analysis.

Jing Chen: She is currently an associate professor and Master's supervisor at Yanshan University in China. Her research interests include community discovery and information dissemination in social networks.

References
[1] M. M. Liu, J. F. Guo, J. Chen, "Link prediction in signed networks based on the similarity and structural balance theory," Journal of Information Hiding and Multimedia Signal Processing, vol. 8, no. 4, pp. 831-846, 2017.
[2] M. M. Liu, Q. C. Hu, J. F. Guo, J. Chen, "Survey of link prediction algorithms in signed networks," Computer Science, vol. 47, no. 2, pp. 21-30, 2020.
[3] D. Li, D. Shen, Y. Kou, Y. Shao, T. Nie, R. Mao, "Exploiting unlabeled ties for link prediction in incomplete signed networks," in Proceedings of the 2019 3rd IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 2019, pp. 538-543.
[4] X. Su, Y. Song, "Local labeling features and a prediction method for a signed network," CAAI Transactions on Intelligent Systems, vol. 13, no. 3, pp. 437-444, 2018.
[5] P. Shen, S. Liu, Y. Wang, L. Han, "Unsupervised negative link prediction in signed social networks," Mathematical Problems in Engineering, vol. 2019, article ID 7348301, 2019.
[6] N. Girdhar, S. Minz, K. K.
Bharadwaj, "Link prediction in signed social networks based on fuzzy computational model of trust and distrust," Soft Computing, vol. 23, no. 22, pp. 12123-12138, 2019.custom:[[[-]]] 7 X. Chen, J. F. Guo, X. Pan, C. Zhang, "Link prediction in signed networks based on connection degree," Journal of Ambient Intelligence and Humanized Computing, vol. 10, no. 5, pp. 1747-1757, 2019.custom:[[[-]]] 8 X. Zhu, Y. Ma, "Sign prediction on social networks based nodal features," Complexity, vol. 2020, no. 4353567, 2020.doi:[[[10.1155//4353567]]] 9 G. Beigi, S. Ranganath, H. Liu, "Signed link prediction with sparse data: the role of personality information," in Companion Proceedings of the 2019 W orld Wide W eb Conference, San Francisco, CA, 2019;pp. 1270-1278. https://doi.org/10.1145/3308560.3316469. custom:[[[-]]] 10 T. Derr, Z. Wang, J. Dacon, J. Tang, "Link and interaction polarity predictions in signed networks," Social Network Analysis and Mining, vol. 10, no. 18, 2020.doi:[[[10.1007/s13278-020-0630-6]]] |V| |E| Percentage of edges (%) Epinions 131828 840799 85.00 15.00 Slashdot 82144 549202 77.40 22.60 Wikipedia 138592 740106 78.70 21.30 FEC 28 42 71.40 28.60 GGS 16 58 50.00 50.00 CRA 36 74 93.20 6.800 Epinions TP(+/+) 0.8218 0.6867 0.8659 0.6000 0.5000 0.8750 FP(+/–) 0.0623 0.0989 0.0617 0.2000 0.1667 0.0000 TN(–/v) 0.1048 0.1511 0.0593 0.0000 0.3333 0.0000 FN(–/+) 0.0111 0.0633 0.0131 0.2000 0.0000 0.1250 Recall 0.6533 0.9156 0.9851 0.7500 1.0000 0.8750 Precision 0.9295 0.8741 0.9335 0.7500 0.7500 1.0000 F1-score 0.9573 0.8944 0.9586 0.7500 0.8571 0.9333 Accuracy 0.9266 0.8378 0.9252 0.6000 0.8888 0.8750
Benefits of radar-derived surface current assimilation for South of Africa ocean circulation

Xavier Couvelard (ORCID: orcid.org/0000-0003-4868-714X)1, Christophe Messager1, Pierrick Penven2, Sébastien Smet3 & Philippe Lattes4

Geoscience Letters volume 8, Article number: 5 (2021)

Abstract

The oceanic circulation south of Africa is characterised by complex dynamics with a strong variability due to the presence of the Agulhas Current and numerous eddies. This area of interest is also the location of several natural gas fields under the seafloor which are targeted for drilling and exploitation. The complex and powerful ocean currents induce significant issues for ship operations at the surface as well as for deep sea operations below it. Therefore, knowledge of the state of the currents and the ability to forecast them realistically could greatly enhance the safety of various marine operations. Following this objective, an array of HF radar systems was deployed to provide a detailed knowledge of the Agulhas Current and its associated eddy activity. It is shown in this study that assimilation of HF radar data allows the surface circulation to be represented more realistically. Two kinds of experiments have been performed: a one-month analysis and nine consecutive forecasts of two days each. The one-month 4DVAR experiment has been compared to geostrophic currents derived from altimeters and highlights an important improvement of the geostrophic currents. Furthermore, despite the restricted size of the area covered by HF radar, we show that the solution is improved almost over the whole domain, mainly upstream and downstream of the HF radar coverage area. We also show that, while the benefit of the assimilation on the surface current intensity is significantly reduced during the second day of forecast, the correction in direction persists after 48 h.

Introduction

The oceanic circulation south of Africa is characterised by complex dynamics with a strong variability due to the presence of the Agulhas Current and numerous mesoscale eddies from the Mozambique Channel (Penven et al. 2006; Halo et al. 2014). More recently, a high resolution modelling study by Tedesco et al. (2019) has highlighted the existence of numerous submesoscale eddies along the Agulhas cyclonic front. Lutjeharms et al. (2003) observed the presence of cyclonic eddies embedded in the landward border of the southern Agulhas Current. These eddies have a diameter of about 50 km and are associated with a warm surface signature. Simulations suggest that these eddies remain trapped in the Agulhas Bank shelf bight and that the eddies travelling downstream of the current represent leakages from the resident shear eddy, occurring at a frequency of roughly every 20 days. The intensity of the mesoscale activity in this key region for the retroflection modulates the exchanges of heat and salt between oceans (Lutjeharms 1981; Reason et al. 2003; Van-Aken et al. 2013; Guerra et al. 2018) as well as towards the atmosphere (Messager and Stuart 2016). This region furthermore exhibits a dynamical upwelling induced by the Agulhas Current (Arnone et al. 2017), as observed by Goschen et al. (2015) during Natal Pulses. This upwelling has been shown by Lutjeharms et al. (2000) to occur on the landward side of the Agulhas Current and affects the nutrient availability, stratification and primary productivity on the eastern Agulhas Bank. It has also been shown by Meyer and Niekerk (2016) that implementing an ocean current power plant in this region would outperform onshore wind power plants and could increase the load-carrying capacity of the country.
It as also been shown by Meyer and Niekerk (2016) that implementing an ocean current power plant in this region would outperforms onshore wind power plants and could increase the load carrying capacity of the country. The area of interest of this paper, represented on Fig. 1 is also the location of several natural gas fields under seafloor which are targeted for drilling and exploitation. The complex and powerful ocean currents induces significant issues for ship operations at the surface as well as under the surface for deep sea operations. Strong ocean currents can also modify the height and direction of ocean waves, causing dangerous sea states (Quilfen et al. 2018). The risk of extreme waves is an important hazard for the shipping activity and off shore industry when crossing the main current systems. Therefore, knowledge of the currents state and the ability to forecast it in a realistic manners could greatly enforce the safety of various marine operations. Following this objective an array of HF radar was deployed along the coast to allow a detailed knowledge of the Agulhas currents and its associated eddy activity. The purpose of the present document is to present and evaluate the impact of the 4DVAR assimilation of those radar data on ocean model simulation and forecast of the sea surface currents. Data used for assimilation and validation are described in the following section. The model setup and the assimilation procedure are described in a third section while results are presented in section four and further discuss in the conclusion. Area of interest of this study. Main interest Focus on area 11b/12b and Brulpadda point. Credit: www.total.com To monitor the variability of the Agulhas currents during offshore operations, three WERA HF radars, manufactured by Helzel Messtechnik GmbH, were installed by ACTIMAR and LWANDLE companies on the south coast of South Africa. The location of the radar system and the averaged area of measurement during April 2020 is represented on Fig. 2. The radial velocities are estimated by using the conventional method of Beam Forming with an extra filtering of the residual artefacts. Then the radial velocities are combined on a Cartesian grid at 6km resolution using the method describe by Barth et al. (2010) and made available every 30 min. Comparisons with mobile and fixed ADCP measurements have been performed (cf. Fig. 3). For the fixed ADCP, differences intensity are observed for weak current (\(\le\)1.3 m/s) and a better matching is observed for stronger values. Current directions derived from radar are well correlated with ADCP measurements. The differences in intensity are explained by the low angular resolution of the BeamForming compared to the grid resolution (by a factor of about 4) at the position of the ADCP. For the mobile ADCP, the differences in intensity are lower compared to the mobile ADCP, while the differences in direction may be due to a poor calibration of the hull ADCP. Therefore, the currents provided with the Beam Forming method seems to be robust enough to be assimilated in ROMS simulations. Nevertheless, to overcome some inaccuracies, a hybrid Beam Forming/Direction Finding method developed by ACTIMAR called HYDDOA (cf. patent : FR 1562550) has been used for marine operations with better performances. Unfortunately, these data could not be used for this study. Black squares represent the emplacement of the HF radar installed to monitor the area delimited by the black contour (cf Fig. 1). 
Fig. 2: Black squares represent the emplacement of the HF radars installed to monitor the area delimited by the black contour (cf. Fig. 1). The colored area represents the intensity of the average current measured by the radars during April 2020 and the white arrows are representative of the average direction of the surface currents during the same period.

Fig. 3: Quantile-quantile plots of speed and direction for the comparison with the fixed ADCP (a, c) and the mobile ADCP (b, d).

In addition, altimeter data were generated by a processing system including data from several altimeter missions: Sentinel-3A/B, Jason-3, HY-2A, Saral[-DP]/AltiKa, Cryosat-2, OSTM/Jason-2, Jason-1, Topex/Poseidon, Envisat, GFO, ERS-1/2, and delivered by the E.U. Copernicus Marine Service. Being at a significantly lower resolution than both model experiments, these data were excluded from the assimilation process (although they are to some extent assimilated in the Mercator ocean simulation used as boundary forcing) and may be considered as an independent source of observations for our validation process. Nonetheless, an important bias in the representation of the Agulhas Current by altimeter data has been highlighted by Rouault et al. (2010).

Model setup & assimilation method (4DVAR)

The ocean circulation model used in this study is the Regional Oceanic Modeling System (ROMS), described in detail in Shchepetkin and McWilliams (2003, 2005). ROMS is a split-explicit, free-surface, terrain-following vertical coordinate oceanic model with 4DVAR capabilities (Moore et al. 2011c). Tracer and momentum advection use a third-order upstream-biased advection scheme with no additional explicit horizontal dissipation/diffusion, while on the vertical a GLS scheme is used to determine the vertical mixing coefficients (Warner et al. 2005). The model grid, the atmospheric forcing, and the initial and boundary conditions were all built using the pyroms package freely available at http://www.myroms.org (doi: 10.5281/zenodo.3727272). The bottom topography is derived from Etopo1 (doi: 10.7289/V5C8276M). To ensure a useful resolution in the upper ocean, 35 vertical levels with stretched s-coordinates and the improved double stretching function (Shchepetkin and McWilliams 2005, 2009) were used, with surface and bottom stretching parameters \(\theta_s = 4\) and \(\theta_b = 1\), respectively. ROMS was initialized and forced at the lateral boundaries by temperature, salinity and velocity profiles extracted from the Mercator ocean GLOBAL_ANALYSES_FORECAST_PHY_001_024 product, which provides weekly analyses and daily forecasts. Atmospheric fluxes (heat and water) were extracted from ERA5 (the fifth generation of ECMWF atmospheric reanalyses of the global climate; Copernicus Climate Change Service (C3S) 2017) and introduced in the ocean model through bulk formulae (Fairall et al. 2003). The model domain extends from 21°E to 26°E and from 37°S to 33°S, on a 1.8 km regular grid. The ocean model has been run without data assimilation from January 2019 to April 2020 (called the FREE experiment hereafter), and 4DVAR data assimilation of the HF radar surface currents was performed during April 2020 only and compared to the April output of FREE. A detailed description and evaluation of the 4DVAR data assimilation can be found in Di Lorenzo et al. (2007), Powell et al. (2008), Powell and Moore (2009), Broquet et al. (2009, 2011), Moore et al. (2011a, 2011b, 2011c), and Song et al. (2016).
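As background for readers unfamiliar with variational assimilation (this is the standard incremental 4DVAR formulation, summarized by us rather than reproduced from the references above), each assimilation cycle minimizes a quadratic cost function of the state increment \(\delta\mathbf{x}\):

$$J(\delta \mathbf{x})=\frac{1}{2}\,\delta \mathbf{x}^{T}\mathbf{B}^{-1}\delta \mathbf{x}+\frac{1}{2}\,(\mathbf{G}\,\delta \mathbf{x}-\mathbf{d})^{T}\mathbf{R}^{-1}(\mathbf{G}\,\delta \mathbf{x}-\mathbf{d})$$

where \(\mathbf{d}\) is the innovation vector (here, the HF radar surface currents minus their model equivalents), \(\mathbf{G}\) the tangent-linear model sampled at the observation points, and \(\mathbf{B}\) and \(\mathbf{R}\) the background and observation error covariance matrices. In the dual formulation, the minimization is carried out in the space spanned by the observations, which is efficient when the observations are far fewer than the model state variables.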
In the present work, the dual formulation approach (Moore et al. 2011b; Gürol et al. 2014; Levin et al. 2019) has been used, with 6 inner loops and 2 outer loops, where the inner loop corresponds to the iterative linear minimization of the cost function and the outer loop corresponds to an updated nonlinear estimate of the circulation (for a complete description, refer to Moore et al. (2011a)). This setting was determined after several experiments as an optimum between accuracy of the results and computational time. It is furthermore coherent with Levin et al. (2019), who used 7 inner loops and 2 outer loops in their Mid-Atlantic Bight configuration. The data were assimilated using 1-day assimilation windows, and the 4D-Var analysis produced at the end of each day was used as the initial condition for the next assimilation cycle. The computational cost of this choice is around 90 min per simulated day (using 224 CPUs).

Hindcast

As detailed in the "Model setup & assimilation method (4DVAR)" section, the 4DVAR simulation has been run for April 2020 and our analysis therefore targets this period. In the following, the ROMS geostrophic velocities are derived from the surface elevation following Eq. (1) (where g is the acceleration due to gravity, f the Coriolis parameter and \(\eta\) the surface elevation) for both the 4DVAR and FREE simulations, and are compared to the altimeter-derived geostrophic currents after interpolation onto the ROMS grid.

$$u=-\frac{g}{f}\frac{\partial \eta}{\partial y}, \qquad v=\frac{g}{f}\frac{\partial \eta}{\partial x} \qquad (1)$$
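As an illustration of Eq. (1), a minimal sketch with centered finite differences on a regular grid follows (our illustration; the grid handling and variable names are assumptions, not the actual processing chain):

```python
import numpy as np

def geostrophic_uv(eta, lat, dx, dy, g=9.81, omega=7.2921e-5):
    """Geostrophic velocities from sea surface height, Eq. (1).

    eta : 2-D SSH field (m), shape (ny, nx)
    lat : 2-D latitudes (degrees), same shape
    dx, dy : grid spacing (m)
    """
    f = 2.0 * omega * np.sin(np.radians(lat))    # Coriolis parameter
    deta_dy, deta_dx = np.gradient(eta, dy, dx)  # centered differences
    u = -(g / f) * deta_dy
    v = (g / f) * deta_dx
    return u, v
```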
Spatially averaged root mean square errors (RMSE) between both simulations and the altimeter-derived geostrophic currents are shown in Fig. 4. The top panel of Fig. 4 represents the spatial average of the RMSE over the whole domain (hereafter the global area), while the bottom panel represents the RMSE averaged over the area corresponding to the HF radar observation zone (hereafter the local area). Both panels show a significant decrease of the RMSE for the geostrophic currents with data assimilation, but while the local improvement is almost immediate, it takes about 15 days to propagate over the whole domain. Figure 5 represents the temporal and spatial evolution of the RMSE differences between the FREE and 4DVAR simulations during April 2020. April has been split into three periods: days 1 to 10 (panel a), days 11 to 20 (panel b) and days 21 to 30 (panel c). As expected from Fig. 4, panel (a) shows an improvement (blue color) mostly located in the HF radar area. Panel (b) shows an extension of the RMSE decrease to the south west and to the east, and panel (c) a strengthening of this improvement when compared to the altimetry data, highlighting the positive impact of the data assimilation outside the area where the HF radar data are assimilated. While the RMSE time series (Fig. 4) and maps (Fig. 5) illustrate a global improvement of the geostrophic currents over the whole month of April, Fig. 6 focuses on a daily average (23 April 2020). Panels (a) and (b) show the daily averaged geostrophic currents for FREE and 4DVAR, respectively. Panels (c) and (d) show the averaged geostrophic currents derived from the altimeter data and the surface currents measured by the HF radars and assimilated in the model during this day. This figure illustrates the improvement between the FREE and 4DVAR experiments. Indeed, west of 24°E the current curves southward in FREE, while 4DVAR is able to reproduce the northward bend seen by the altimeters (panel (c)). Also, the area east of 24°E and north of 36°S is characterised by the presence of an anticyclonic eddy matching the eddy detected by the altimeters. Finally, 4DVAR also reproduces the intensity of the current in the southern branch of the eddy. It is furthermore interesting to note once again that, despite the scarcity of the assimilated data (depicted in panel (d)), the 4DVAR assimilation is able to correct the circulation almost over the whole simulated domain.

Fig. 4: FREE and 4DVAR geostrophic velocity RMSE (against altimeter-derived velocities) for the whole domain (a) and over the area of the HF radar observations (b).

Fig. 5: Maps of the differences between the FREE and 4DVAR geostrophic velocity RMSE (against altimeter-derived velocities) during April 2020. (a) corresponds to the 1st to the 10th of April, (b) to the 11th to the 20th of April and (c) to the 21st to the 30th of April. Blue means RMSE reduction in the 4DVAR solution. The dotted contour shows where the HF radar data are assimilated.

Fig. 6: Daily averaged geostrophic currents for the 23 April 2020 for: (a) the reference simulation, (b) the assimilated simulation, (c) the altimeters and (d) the assimilated HF radar data. White arrows represent the current direction.

Forecast

In the previous section, it was shown that assimilating HF radar currents improves the geostrophic circulation when compared to satellite-derived velocities. However, those altimeter data are at low resolution (25 km) with daily data only, while the HF radar data are available at 6 km resolution every 30 min. Since the HF radar currents are assimilated in the model and therefore cannot be used for further validation, forecasts have been made starting from both the assimilated initial conditions and the FREE conditions. This allows the forecasts to be validated against the HF radar data and the benefits of the assimilation to be explored at smaller scales, together with the forecast capabilities of the current configuration. To achieve this goal, 48 h forecasts were performed during 9 consecutive days, from the 21 to the 29 of April, and were compared with the same forecasts initiated from the FREE run. RMSE time series of both forecasts against the HF radar data are presented in Fig. 7. They show a strong improvement in intensity (top panel) and direction (bottom panel) when the forecast is initiated from the 4DVAR simulation, with a better performance during the first 24 h of forecast (yellow curve) than during the next ones (green curve). When considering the first day of forecast, a reduction of more than 20% for both surface current intensity and direction is achieved for 87% and 99% of the time, respectively. For the second day of forecast, those values change to 51% and 91% for intensity and direction, respectively. The averaged improvement of the current intensity RMSE drops from 39% to 19% from the first to the second day of forecast; for direction, this RMSE improvement changes from 42% to 28%. Therefore, while the averaged forecast improvement is similar, around 40%, during the first day of forecast, during the second day the averaged improvement is greater for the current direction.
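A sketch of how such speed and direction RMSEs can be computed against the radar data follows (our illustration; the wrap-around of angular differences is the one detail that needs care):

```python
import numpy as np

def speed_direction_rmse(u_mod, v_mod, u_obs, v_obs):
    """RMSE of current speed and direction between model and HF radar."""
    spd_mod = np.hypot(u_mod, v_mod)
    spd_obs = np.hypot(u_obs, v_obs)
    rmse_speed = np.sqrt(np.nanmean((spd_mod - spd_obs) ** 2))

    # Direction difference wrapped to [-180, 180] degrees.
    dir_mod = np.degrees(np.arctan2(u_mod, v_mod))   # bearing convention
    dir_obs = np.degrees(np.arctan2(u_obs, v_obs))
    ddir = (dir_mod - dir_obs + 180.0) % 360.0 - 180.0
    rmse_dir = np.sqrt(np.nanmean(ddir ** 2))
    return rmse_speed, rmse_dir
```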
Figure 8 represents the maps of the RMSE differences between the forecasts issued from 4DVAR and from FREE for the surface current intensity (Fig. 8a, b) and direction (Fig. 8c, d) over the nine forecasts. Left panels (a, c) represent the first day of forecast and right panels (b, d) the second day. It confirms the forecast improvement of both the intensity and the direction of the surface current. It also shows that the current intensity and direction are significantly improved in the center of the area covered by the HF radar measurements. By comparing forecast day 1 and day 2, this figure also confirms that the faster degradation of the forecast concerns the current intensity rather than the direction. Figure 9 shows the surface circulation of the FREE, 4DVAR and HF radar currents averaged for two distinct first days of forecast, as a superposition of arrows. It illustrates that the use of an assimilated initial condition dramatically corrects the path of the surface currents. Indeed, in the forecast of the 21st of April 2020 (Fig. 9, left panel), while the FREE forecast is characterized by a southward deviation and a cyclonic circulation between 23°E and 24°E, the 4DVAR forecast surface circulation is almost the opposite: it is characterized by a northward shift inducing an anticyclonic circulation between 22°E and 24°E. While the whole eddy cannot be depicted by the HF radar measurements, the northward shift of the Agulhas corresponds to what is observed by the HF radars. In the forecast of the 25th of April 2020 (Fig. 9, right panel), while the eddy previously depicted seems to be dissipating, the northward bending of the 4DVAR forecast is once again coherent with the HF radar data and opposite to the southward bending of the FREE solution. This southward shift, seen in the FREE forecast, is therefore an artefact of the model which can be corrected by using an assimilated initial condition.

Fig. 7: FREE and 4DVAR surface velocity (a) and direction (b) RMSE (against HF radar). 4DVAR-d1 and 4DVAR-d2 represent the first and the second day of forecast, respectively.

Fig. 8: RMSE maps between the two forecasts, initiated from FREE and from 4DVAR, and the HF radar data. Top panels show the surface current intensity RMSE differences for the first (a) and the second (b) day of forecast; bottom panels show the surface current direction RMSE differences for the first (c) and the second (d) day of forecast. Blue means that the RMSE is reduced for the forecast issued from the 4DVAR initial condition.

Fig. 9: Daily averaged surface currents for the first day of forecast for the 21st and the 25th of April 2020. Blue arrows correspond to the FREE experiment, red arrows to the forecast initiated from 4DVAR and green arrows to the HF radar measured currents.

Conclusion

In this study, the benefits of the 4DVAR assimilation of surface currents derived from HF radars in one of the most highly dynamic regions of the world are presented. While the intense dynamics of the region makes it difficult for most numerical ocean models to realistically reproduce the position and intensity of the Agulhas Current and the associated eddies, it has been shown that 4DVAR assimilation of HF radar data allows the surface circulation to be represented more realistically. Two kinds of experiments have been performed: a one-month analysis (April 2020) and nine consecutive forecasts of 48 h each (21 to 29 of April 2020). The one-month 4DVAR experiment has been compared to geostrophic currents derived from altimeters and highlights an important improvement of the geostrophic currents in 4DVAR when compared to FREE.
Furthermore, despite the size of the area covered by the HF radars, it has been shown that the solution is improved almost over the whole domain, mainly upstream and downstream of the HF radar coverage area. To evaluate the forecast capability of such a configuration, and to be able to use the HF radar surface currents as an independent set of data, nine consecutive forecasts of 48 h each, starting from the analysed simulation (4DVAR), were realised and compared to FREE. It has been shown that during the first day of forecast the averaged RMSE improvement is around 40% for both the surface current intensity and its direction. During the second day of forecast, those improvements are reduced by roughly 50% for the intensity and by 25% for the direction. This highlights a stronger persistence of the correction in direction than in intensity of the surface currents. This improvement of the simulation, thanks to the 4DVAR assimilation of HF radar surface currents, could have many impacts at all scales. Indeed, long-term reanalyses could provide better insight into the position of the Agulhas retroflection and the resulting ocean leakage from the Indian to the Atlantic ocean, or into the carbon uptake by the biological activity, owing to a better representation of the upwelling areas and their variability. Furthermore, in this region of strong maritime activity, a realistic forecast of the surface currents would increase marine safety and allow drilling campaigns to be planned or suspended depending on the future surface currents. Moreover, although it is out of the scope of this study, since small-scale currents have a strong impact on wave height variability (Ardhuin et al. 2017), the use of assimilated surface currents is also expected to improve wave forecasts and therefore marine safety.

Availability of data and materials

ERA5 data were downloaded from the Copernicus Climate Change Service (C3S) (2017): ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate. Copernicus Climate Change Service Climate Data Store (CDS), https://cds.climate.copernicus.eu/cdsapp#!/home

The Mercator ocean GLOBAL_ANALYSES_FORECAST_PHY_001_024 product was obtained from https://resources.marine.copernicus.eu

The altimeter data SEALEVEL_GLO_PHY_L4_NRT_OBSERVATIONS_008_046 were obtained from https://resources.marine.copernicus.eu

The HF radar data that support the findings of this study are available from TOTAL, but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are however available from the authors upon reasonable request and with the permission of TOTAL.

References

Ardhuin F, Rascle N, Chapron B, Gula J, Molemaker J, Gille ST, Menemenlis D, Rocha C (2017) Small scale currents have large effects on wind wave heights. J Geophys Res 122(C6):4500–4517. https://doi.org/10.1002/2016JC012413

Arnone V, González-Dávila M, Santana-Casiano JM (2017) CO2 fluxes in the South African coastal region. Marine Chem 195:41–49. https://doi.org/10.1016/j.marchem.2017.07.008

Barth A, Alvera-Azcárate A, Gurgel KW, Staneva J, Port A, Beckers JM, Stanev EV (2010) Ensemble perturbation smoother for optimizing tidal boundary conditions by assimilation of high-frequency radar surface currents - application to the German Bight. Ocean Sci 6(1):161–178. https://doi.org/10.5194/os-6-161-2010

Broquet G, Edwards C, Moore A, Powell B, Veneziani M, Doyle J (2009) Application of 4d-variational data assimilation to the California Current System. Dyn Atmos Oceans 48(1):69–92.
https://doi.org/10.1016/j.dynatmoce.2009.03.001

Broquet G, Moore A, Arango H, Edwards C (2011) Corrections to ocean surface forcing in the California Current System using 4d variational data assimilation. Ocean Modelling 36(1):116–132. https://doi.org/10.1016/j.ocemod.2010.10.005

Copernicus Climate Change Service (C3S) (2017) ERA5: fifth generation of ECMWF atmospheric reanalyses of the global climate. Copernicus Climate Change Service Climate Data Store (CDS). https://cds.climate.copernicus.eu/cdsapp#!/home

Di Lorenzo E, Moore AM, Arango HG, Cornuelle BD, Miller AJ, Powell B, Chua BS, Bennett AF (2007) Weak and strong constraint data assimilation in the inverse regional ocean modeling system (ROMS): development and application for a baroclinic coastal upwelling system. Ocean Modelling 16(3):160–187. https://doi.org/10.1016/j.ocemod.2006.08.002

Fairall CW, Bradley EF, Hare JE, Grachev AA, Edson JB (2003) Bulk parameterization of air-sea fluxes: updates and verification for the COARE algorithm. J Climate 16(4):571–591

Goschen W, Bornman T, Deyzel S, Schumann E (2015) Coastal upwelling on the far eastern Agulhas Bank associated with large meanders in the Agulhas Current. Continental Shelf Res 101:34–46. https://doi.org/10.1016/j.csr.2015.04.004

Guerra L, Paiva A, Chassignet E (2018) On the translation of Agulhas rings to the western South Atlantic Ocean. Deep-Sea Res I 139:104–113. https://doi.org/10.1016/j.dsr.2018.08.005

Gürol S, Weaver AT, Moore AM, Piacentini A, Arango HG, Gratton S (2014) B-preconditioned minimization algorithms for variational data assimilation with the dual formulation. Quart J R Meteorol Soc 140(679):539–556. https://doi.org/10.1002/qj.2150

Halo I, Backeberg B, Penven P, Ansorge I, Reason C, Ullgren J (2014) Eddy properties in the Mozambique Channel: a comparison between observations and two numerical ocean circulation models. Deep-Sea Res II 100:38–53. https://doi.org/10.1016/j.dsr2.2013.10.015

Levin J, Arango HG, Laughlin B, Wilkin J, Moore AM (2019) The impact of remote sensing observations on cross-shelf transport estimates from 4d-var analyses of the Mid-Atlantic Bight. Adv Space Res. https://doi.org/10.1016/j.asr.2019.09.012

Lutjeharms J, Cooper J, Roberts M (2000) Upwelling at the inshore edge of the Agulhas Current. Continental Shelf Res 20(7):737–761. https://doi.org/10.1016/S0278-4343(99)00092-8

Lutjeharms J, Penven P, Roy C (2003) Modelling the shear edge eddies of the southern Agulhas Current. Continental Shelf Res 23(11):1099–1115. https://doi.org/10.1016/S0278-4343(03)00106-7

Lutjeharms JRE (1981) Spatial scales and intensities of circulation in the ocean areas adjacent to South Africa. Deep-Sea Res 28A:1289–1302

Messager C, Stuart S (2016) Significant atmospheric boundary layer change observed above an Agulhas Current warm cored eddy. Adv Meteorol. https://doi.org/10.1155/2016/3659657

Meyer I, Niekerk JLV (2016) Towards a practical resource assessment of the extractable energy in the Agulhas ocean current. Int J Mar Energy 16:116–132. https://doi.org/10.1016/j.ijome.2016.05.010

Moore AM, Arango HG, Broquet G, Edwards C, Veneziani M, Powell B, Foley D, Doyle JD, Costa D, Robinson P (2011a) The regional ocean modeling system (ROMS) 4-dimensional variational data assimilation systems: Part II - performance and application to the California Current System. Progr Oceanogr 91(1):50–73.
Moore AM, Arango HG, Broquet G, Edwards C, Veneziani M, Powell B, Foley D, Doyle JD, Costa D, Robinson P (2011b) The Regional Ocean Modeling System (ROMS) 4-dimensional variational data assimilation systems: Part III - observation impact and observation sensitivity in the California Current System. Progr Oceanogr 91(1):74–94. https://doi.org/10.1016/j.pocean.2011.05.005
Moore AM, Arango HG, Broquet G, Powell BS, Weaver AT, Zavala-Garay J (2011) The Regional Ocean Modeling System (ROMS) 4-dimensional variational data assimilation systems: Part I - system overview and formulation. Progr Oceanogr 91(1):34–49. https://doi.org/10.1016/j.pocean.2011.05.004
Penven P, Lutjeharms JRE, Florenchie P (2006) Madagascar: a pacemaker for the Agulhas Current system? Geophys Res Lett. https://doi.org/10.1029/2006GL026854
Powell B, Arango H, Moore A, Di Lorenzo E, Milliff R, Foley D (2008) 4DVar data assimilation in the Intra-Americas Sea with the Regional Ocean Modeling System (ROMS). Ocean Modelling 23(3):130–145. https://doi.org/10.1016/j.ocemod.2008.04.008
Powell BS, Moore AM (2009) Estimating the 4DVar analysis error of GODAE products. Ocean Dynamics 59(1):121–138
Quilfen Y, Yurovskaya M, Chapron B, Ardhuin F (2018) Storm waves focusing and steepening in the Agulhas Current: satellite observations and modeling. Rem Sens Environ 216:561–571
Reason C, Lutjeharms JRE, Biastoch A, Roman R (2003) Inter-ocean fluxes south of Africa in an eddy-permitting model. Deep-Sea Res II 50:281–298
Rouault MJ, Mouche A, Collard F, Johannessen JA, Chapron B (2010) Mapping the Agulhas Current from space: an assessment of ASAR surface current velocities. J Geophys Res
Shchepetkin AF, McWilliams JC (2003) A method for computing horizontal pressure-gradient force in an oceanic model with a nonaligned vertical coordinate. J Geophys Res Oceans. https://doi.org/10.1029/2001JC001047
Shchepetkin AF, McWilliams JC (2005) The Regional Oceanic Modeling System (ROMS): a split-explicit, free-surface, topography-following-coordinate oceanic model. Ocean Modelling 9(4):347–404
Shchepetkin AF, McWilliams JC (2009) Correction and commentary for "Ocean forecasting in terrain-following coordinates: formulation and skill assessment of the Regional Ocean Modeling System" by Haidvogel et al., J. Comput. Phys. 227, pp. 3595–3624. J Comput Phys 228(24):8985–9000. https://doi.org/10.1016/j.jcp.2009.09.002
Song H, Edwards CA, Moore AM, Fiechter J (2016) Data assimilation in a coupled physical-biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 1 - model formulation and biological data assimilation twin experiments. Ocean Modelling 106:131–145
Tedesco P, Gula J, Ménesguen C, Penven P, Krug M (2019) Generation of submesoscale frontal eddies in the Agulhas Current. J Geophys Res Oceans 124(11):7606–7625
Van-Aken H, Lutjeharms J, Rouault M, Whittle C, de Ruijter W (2013) Observations of an early Agulhas Current retroflection event in 2001: a temporary cessation of inter-ocean exchange south of Africa? Deep-Sea Res I 72:1–8
Warner JC, Sherwood CR, Arango HG, Signell RP (2005) Performance of four turbulence closure models implemented using a generic length scale method. Ocean Modelling 8(1):81–113
The authors thank the two anonymous reviewers for their criticisms and suggestions, which helped produce an improved final version of this manuscript. All the simulations have been performed on the Datarmor HPC facility at Ifremer (Brest, France). HF radar installation and data distribution were carried out by the ACTIMAR and LWANDLE companies. This study has been conducted using E.U. Copernicus Marine Service Information.
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
This work was funded by both TOTAL and EXWEXs.
EXWEXs, 2 Rue de Keraliou, 29200, Brest, France
Xavier Couvelard & Christophe Messager
Univ. Brest, CNRS, IRD, Ifremer, Laboratoire d'Océanographie Physique et Spatiale (LOPS), IUEM, Brest, France
Pierrick Penven
Actimar, 36 Quai de la Douane, 29200, Brest, France
Sébastien Smet
Total E&P South Africa B.V., 27 Willie Van Schoor Avenue, Bellville, 7530, Western Cape, South Africa
Philippe Lattes
XC built the numerical setup, performed the simulations and the analyses, and wrote this letter. CM and PP contributed to the definition of the numerical setup and made substantial revisions of the letter. SS carried out the evaluation of the HF radar accuracy and wrote the corresponding paragraph. PL contributed to the conception, design and motivation of the present work.
Correspondence to Xavier Couvelard.
The authors declare that they have no competing interests.
Couvelard, X., Messager, C., Penven, P. et al. Benefits of radar-derived surface current assimilation for South of Africa ocean circulation. Geosci. Lett. 8, 5 (2021). https://doi.org/10.1186/s40562-021-00174-y
Keywords: 4DVAR, Agulhas Current
5: Polynomial and Rational Functions
Draft Custom Version MAT 131 College Algebra
5.1: Quadratic Functions
5.2: Power Functions and Polynomial Functions
5.3: Graphs of Polynomial Functions
5.4: Rational Functions
5.1: Quadratic Functions
Western Connecticut State University
Recognizing Characteristics of Parabolas
Understanding How the Graphs of Parabolas are Related to Their Quadratic Functions
Finding the Domain and Range of a Quadratic Function
Determining the Maximum and Minimum Values of Quadratic Functions
Finding the x- and y-Intercepts of a Quadratic Function
Rewriting Quadratics in Standard Form
Recognize characteristics of parabolas.
Understand how the graph of a parabola is related to its quadratic function.
Determine a quadratic function's minimum or maximum value.
Solve problems involving a quadratic function's minimum or maximum value. Curved antennas, such as the ones shown in Figure \(\PageIndex{1}\), are commonly used to focus microwaves and radio waves to transmit television and telephone signals, as well as satellite and spacecraft communication. The cross-section of the antenna is in the shape of a parabola, which can be described by a quadratic function. Figure \(\PageIndex{1}\): An array of satellite dishes. (credit: Matthew Colvin de Valle, Flickr) In this section, we will investigate quadratic functions, which frequently model problems involving area and projectile motion. Working with quadratic functions can be less complex than working with higher degree functions, so they provide a good opportunity for a detailed study of function behavior. The graph of a quadratic function is a U-shaped curve called a parabola. One important feature of the graph is that it has an extreme point, called the vertex. If the parabola opens up, the vertex represents the lowest point on the graph, or the minimum value of the quadratic function. If the parabola opens down, the vertex represents the highest point on the graph, or the maximum value. In either case, the vertex is a turning point on the graph. The graph is also symmetric with a vertical line drawn through the vertex, called the axis of symmetry. These features are illustrated in Figure \(\PageIndex{2}\). Figure \(\PageIndex{2}\): Graph of a parabola showing where the \(x\) and \(y\) intercepts, vertex, and axis of symmetry are. The y-intercept is the point at which the parabola crosses the \(y\)-axis. The x-intercepts are the points at which the parabola crosses the \(x\)-axis. If they exist, the x-intercepts represent the zeros, or roots, of the quadratic function, the values of \(x\) at which \(y=0\). Example \(\PageIndex{1}\): Identifying the Characteristics of a Parabola Determine the vertex, axis of symmetry, zeros, and y-intercept of the parabola shown in Figure \(\PageIndex{3}\). Figure \(\PageIndex{3}\). The vertex is the turning point of the graph. We can see that the vertex is at \((3,1)\). Because this parabola opens upward, the axis of symmetry is the vertical line that intersects the parabola at the vertex. So the axis of symmetry is \(x=3\). This parabola does not cross the x-axis, so it has no zeros. It crosses the \(y\)-axis at \((0,7)\) so this is the y-intercept. The general form of a quadratic function presents the function in the form \[f(x)=ax^2+bx+c\] where \(a\), \(b\), and \(c\) are real numbers and \(a{\neq}0\). If \(a>0\), the parabola opens upward. If \(a<0\), the parabola opens downward. We can use the general form of a parabola to find the equation for the axis of symmetry. The axis of symmetry is defined by \(x=−\frac{b}{2a}\). If we use the quadratic formula, \(x=\frac{−b{\pm}\sqrt{b^2−4ac}}{2a}\), to solve \(ax^2+bx+c=0\) for the x-intercepts, or zeros, we find the value of \(x\) halfway between them is always \(x=−\frac{b}{2a}\), the equation for the axis of symmetry. Figure \(\PageIndex{4}\) represents the graph of the quadratic function written in general form as \(y=x^2+4x+3\). In this form, \(a=1\), \(b=4\), and \(c=3\). Because \(a>0\), the parabola opens upward. The axis of symmetry is \(x=−\frac{4}{2(1)}=−2\). This also makes sense because we can see from the graph that the vertical line \(x=−2\) divides the graph in half. The vertex always occurs along the axis of symmetry. 
For a parabola that opens upward, the vertex occurs at the lowest point on the graph, in this instance, \((−2,−1)\). The x-intercepts, those points where the parabola crosses the x-axis, occur at \((−3,0)\) and \((−1,0)\). Figure \(\PageIndex{4}\): Graph of a parabola showing where the \(x\) and \(y\) intercepts, vertex, and axis of symmetry are for the function \(y=x^2+4x+3\). The standard form of a quadratic function presents the function in the form \[f(x)=a(x−h)^2+k\] where \((h, k)\) is the vertex. Because the vertex appears in the standard form of the quadratic function, this form is also known as the vertex form of a quadratic function. As with the general form, if \(a>0\), the parabola opens upward and the vertex is a minimum. If \(a<0\), the parabola opens downward, and the vertex is a maximum. Figure \(\PageIndex{5}\) represents the graph of the quadratic function written in standard form as \(y=−3(x+2)^2+4\). Since \(x–h=x+2\) in this example, \(h=–2\). In this form, \(a=−3\), \(h=−2\), and \(k=4\). Because \(a<0\), the parabola opens downward. The vertex is at \((−2, 4)\). Figure \(\PageIndex{5}\): Graph of a parabola showing where the \(x\) and \(y\) intercepts, vertex, and axis of symmetry are for the function \(y=-3(x+2)^2+4\). The standard form is useful for determining how the graph is transformed from the graph of \(y=x^2\). Figure \(\PageIndex{6}\) is the graph of this basic function. Figure \(\PageIndex{6}\): Graph of \(y=x^2\). If \(k>0\), the graph shifts upward, whereas if \(k<0\), the graph shifts downward. In Figure \(\PageIndex{5}\), \(k>0\), so the graph is shifted 4 units upward. If \(h>0\), the graph shifts toward the right and if \(h<0\), the graph shifts to the left. In Figure \(\PageIndex{5}\), \(h<0\), so the graph is shifted 2 units to the left. The magnitude of \(a\) indicates the stretch of the graph. If \(|a|>1\), the point associated with a particular x-value shifts farther from the x-axis, so the graph appears to become narrower, and there is a vertical stretch. But if \(|a|<1\), the point associated with a particular x-value shifts closer to the x-axis, so the graph appears to become wider, but in fact there is a vertical compression. In Figure \(\PageIndex{5}\), \(|a|>1\), so the graph becomes narrower. The standard form and the general form are equivalent methods of describing the same function. We can see this by expanding out the general form and setting it equal to the standard form. \[\begin{align*} a(x−h)^2+k &= ax^2+bx+c \\[4pt] ax^2−2ahx+(ah^2+k)&=ax^2+bx+c \end{align*} \] For the linear terms to be equal, the coefficients must be equal. \[–2ah=b \text{, so } h=−\dfrac{b}{2a}. \nonumber\] This is the axis of symmetry we defined earlier. Setting the constant terms equal: \[\begin{align*} ah^2+k&=c \\ k&=c−ah^2 \\ &=c−a\cdot\Big(-\dfrac{b}{2a}\Big)^2 \\ &=c−\dfrac{b^2}{4a} \end{align*}\] In practice, though, it is usually easier to remember that \(k\) is the output value of the function when the input is \(h\), so \(f(h)=k\). Definitions: Forms of Quadratic Functions A quadratic function is a function of degree two. The graph of a quadratic function is a parabola. The general form of a quadratic function is \(f(x)=ax^2+bx+c\) where \(a\), \(b\), and \(c\) are real numbers and \(a{\neq}0\). The standard form of a quadratic function is \(f(x)=a(x−h)^2+k\). 
The vertex \((h,k)\) is located at \[h=–\dfrac{b}{2a},\;k=f(h)=f\Big(−\dfrac{b}{2a}\Big).\]
HOWTO: Write a quadratic function in general form
Given a graph of a quadratic function, write the equation of the function in general form.
Identify the horizontal shift of the parabola; this value is \(h\). Identify the vertical shift of the parabola; this value is \(k\).
Substitute the values of the horizontal and vertical shift for \(h\) and \(k\) in the function \(f(x)=a(x–h)^2+k\).
Substitute the values of any point, other than the vertex, on the graph of the parabola for \(x\) and \(f(x)\).
Solve for the stretch factor, \(|a|\). If the parabola opens up, \(a>0\). If the parabola opens down, \(a<0\) since this means the graph was reflected about the x-axis.
Expand and simplify to write in general form.
Example \(\PageIndex{2}\): Writing the Equation of a Quadratic Function from the Graph
Write an equation for the quadratic function \(g\) in Figure \(\PageIndex{7}\) as a transformation of \(f(x)=x^2\), and then expand the formula, and simplify terms to write the equation in general form.
Figure \(\PageIndex{7}\): Graph of a parabola with its vertex at \((-2, -3)\).
We can see the graph of \(g\) is the graph of \(f(x)=x^2\) shifted to the left 2 and down 3, giving a formula in the form \(g(x)=a(x+2)^2–3\). Substituting the coordinates of a point on the curve, such as \((0,−1)\), we can solve for the stretch factor. \[\begin{align} −1&=a(0+2)^2−3 \\ 2&=4a \\ a&=\dfrac{1}{2} \end{align}\] In standard form, the algebraic model for this graph is \(g(x)=\dfrac{1}{2}(x+2)^2–3\). To write this in general polynomial form, we can expand the formula and simplify terms. \[\begin{align} g(x)&=\dfrac{1}{2}(x+2)^2−3 \\ &=\dfrac{1}{2}(x+2)(x+2)−3 \\ &=\dfrac{1}{2}(x^2+4x+4)−3 \\ &=\dfrac{1}{2}x^2+2x+2−3 \\ &=\dfrac{1}{2}x^2+2x−1 \end{align}\] Notice that the horizontal and vertical shifts of the basic graph of the quadratic function determine the location of the vertex of the parabola; the vertex is unaffected by stretches and compressions.
We can check our work using the table feature on a graphing utility. First enter \(\mathrm{Y1=\dfrac{1}{2}(x+2)^2−3}\). Next, select \(\mathrm{TBLSET}\), then use \(\mathrm{TblStart=–6}\) and \(\mathrm{ΔTbl = 2}\), and select \(\mathrm{TABLE}\). See Table \(\PageIndex{1}\).
Table \(\PageIndex{1}\)
\(x\): −6, −4, −2, 0, 2
\(y\): 5, −1, −3, −1, 5
The ordered pairs in the table correspond to points on the graph.
Exercise \(\PageIndex{2}\)
A coordinate grid has been superimposed over the quadratic path of a basketball in Figure \(\PageIndex{8}\). Find an equation for the path of the ball. Does the shooter make the basket?
Figure \(\PageIndex{8}\): Stop motioned picture of a boy throwing a basketball into a hoop to show the parabolic curve it makes. (credit: modification of work by Dan Meyer)
The path passes through the origin and has vertex at \((−4, 7)\), so \(h(x)=–\frac{7}{16}(x+4)^2+7\). To make the shot, \(h(−7.5)\) would need to be about 4 but \(h(–7.5){\approx}1.64\); he doesn't make it.
Given a quadratic function in general form, find the vertex of the parabola.
Identify \(a\), \(b\), and \(c\).
Find \(h\), the x-coordinate of the vertex, by substituting \(a\) and \(b\) into \(h=–\frac{b}{2a}\).
Find \(k\), the y-coordinate of the vertex, by evaluating \(k=f(h)=f\Big(−\frac{b}{2a}\Big)\).
Example \(\PageIndex{3}\): Finding the Vertex of a Quadratic Function
Find the vertex of the quadratic function \(f(x)=2x^2–6x+7\). Rewrite the quadratic in standard form (vertex form).
The horizontal coordinate of the vertex will be at \[\begin{align} h&=–\dfrac{b}{2a} \\ &=-\dfrac{-6}{2(2)} \\ &=\dfrac{6}{4} \\ &=\dfrac{3}{2}\end{align}\] The vertical coordinate of the vertex will be at \[\begin{align} k&=f(h) \\ &=f\Big(\dfrac{3}{2}\Big) \\ &=2\Big(\dfrac{3}{2}\Big)^2−6\Big(\dfrac{3}{2}\Big)+7 \\ &=\dfrac{5}{2} \end{align}\] Rewriting into standard form, the stretch factor will be the same as the \(a\) in the original quadratic. \[f(x)=ax^2+bx+c \\ f(x)=2x^2−6x+7\] Using the vertex to determine the shifts, \[f(x)=2\Big(x–\dfrac{3}{2}\Big)^2+\dfrac{5}{2}\] One reason we may want to identify the vertex of the parabola is that this point will inform us what the maximum or minimum value of the function is, \((k)\), and where it occurs, \((h)\).
Given the equation \(g(x)=13+x^2−6x\), write the equation in general form and then in standard form.
\(g(x)=x^2−6x+13\) in general form; \(g(x)=(x−3)^2+4\) in standard form.
Any number can be the input value of a quadratic function. Therefore, the domain of any quadratic function is all real numbers. Because parabolas have a maximum or a minimum point, the range is restricted. Since the vertex of a parabola will be either a maximum or a minimum, the range will consist of all y-values greater than or equal to the y-coordinate at the turning point or less than or equal to the y-coordinate at the turning point, depending on whether the parabola opens up or down.
Definition: Domain and Range of a Quadratic Function
The domain of any quadratic function is all real numbers. The range of a quadratic function written in general form \(f(x)=ax^2+bx+c\) with a positive \(a\) value is \(f(x){\geq}f\Big(−\frac{b}{2a}\Big)\), or \(\left[f\left(−\frac{b}{2a}\right),\infty\right)\); the range of a quadratic function written in general form with a negative \(a\) value is \(f(x){\leq}f\Big(−\frac{b}{2a}\Big)\), or \(\left(−\infty,f\left(−\frac{b}{2a}\right)\right]\). The range of a quadratic function written in standard form \(f(x)=a(x−h)^2+k\) with a positive \(a\) value is \(f(x){\geq}k\); the range of a quadratic function written in standard form with a negative \(a\) value is \(f(x){\leq}k\).
Given a quadratic function, find the domain and range.
Identify the domain of any quadratic function as all real numbers. Determine whether \(a\) is positive or negative. If \(a\) is positive, the parabola has a minimum. If \(a\) is negative, the parabola has a maximum. Determine the maximum or minimum value of the parabola, \(k\). If the parabola has a minimum, the range is given by \(f(x){\geq}k\), or \(\left[k,\infty\right)\). If the parabola has a maximum, the range is given by \(f(x){\leq}k\), or \(\left(−\infty,k\right]\).
Example \(\PageIndex{4}\): Finding the Domain and Range of a Quadratic Function
Find the domain and range of \(f(x)=−5x^2+9x−1\).
As with any quadratic function, the domain is all real numbers. Because \(a\) is negative, the parabola opens downward and has a maximum value. We need to determine the maximum value. We can begin by finding the x-value of the vertex. \[\begin{align} h&=−\dfrac{b}{2a} \\ &=−\dfrac{9}{2(-5)} \\ &=\dfrac{9}{10} \end{align}\] The maximum value is given by \(f(h)\). \[\begin{align} f\Big(\dfrac{9}{10}\Big)&=−5\Big(\dfrac{9}{10}\Big)^2+9\Big(\dfrac{9}{10}\Big)-1 \\&= \dfrac{61}{20}\end{align}\] The range is \(f(x){\leq}\frac{61}{20}\), or \(\left(−\infty,\frac{61}{20}\right]\).
Find the domain and range of \(f(x)=2\Big(x−\frac{4}{7}\Big)^2+\frac{8}{11}\).
The domain is all real numbers. The range is \(f(x){\geq}\frac{8}{11}\), or \(\left[\frac{8}{11},\infty\right)\).
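The vertex formulas above translate directly into a short computation. The sketch below is our own illustration (not part of the OpenStax text; the function names are arbitrary), and it reproduces the vertex and range found in Example \(\PageIndex{4}\).

```python
# Sketch: vertex and range of f(x) = a*x^2 + b*x + c from h = -b/(2a), k = f(h).

def vertex(a, b, c):
    """Return the vertex (h, k) of f(x) = a x^2 + b x + c."""
    h = -b / (2 * a)
    k = a * h**2 + b * h + c
    return h, k

def range_of(a, b, c):
    """Describe the range: [k, oo) if a > 0, (-oo, k] if a < 0."""
    _, k = vertex(a, b, c)
    return f"[{k}, oo)" if a > 0 else f"(-oo, {k}]"

# Example 4: f(x) = -5x^2 + 9x - 1 has vertex (9/10, 61/20) and range (-oo, 61/20].
print(vertex(-5, 9, -1))    # (0.9, 3.05)
print(range_of(-5, 9, -1))  # (-oo, 3.05]
```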
The output of the quadratic function at the vertex is the maximum or minimum value of the function, depending on the orientation of the parabola. We can see the maximum and minimum values in Figure \(\PageIndex{9}\).
Figure \(\PageIndex{9}\): Minimum and maximum of two quadratic functions.
There are many real-world scenarios that involve finding the maximum or minimum value of a quadratic function, such as applications involving area and revenue.
Example \(\PageIndex{5}\): Finding the Maximum Value of a Quadratic Function
A backyard farmer wants to enclose a rectangular space for a new garden within her fenced backyard. She has purchased 80 feet of wire fencing to enclose three sides, and she will use a section of the backyard fence as the fourth side. Find a formula for the area enclosed by the fence if the sides of fencing perpendicular to the existing fence have length \(L\). What dimensions should she make her garden to maximize the enclosed area?
Let's use a diagram such as Figure \(\PageIndex{10}\) to record the given information. It is also helpful to introduce a temporary variable, \(W\), to represent the width of the garden and the length of the fence section parallel to the backyard fence.
Figure \(\PageIndex{10}\): Diagram of the garden and the backyard.
We know we have only 80 feet of fence available, and \(L+W+L=80\), or more simply, \(2L+W=80\). This allows us to represent the width, \(W\), in terms of \(L\). \[W=80−2L\] Now we are ready to write an equation for the area the fence encloses. We know the area of a rectangle is length multiplied by width, so \[\begin{align} A&=LW=L(80−2L) \\ A(L)&=80L−2L^2 \end{align}\] This formula represents the area of the fence in terms of the variable length \(L\). The function, written in general form, is \[A(L)=−2L^2+80L\] The quadratic has a negative leading coefficient, so the graph will open downward, and the vertex will be the maximum value for the area. In finding the vertex, we must be careful because the equation is not written in standard polynomial form with decreasing powers. This is why we rewrote the function in general form above. Since \(a\) is the coefficient of the squared term, \(a=−2\), \(b=80\), and \(c=0\). To find the vertex: \[\begin{align} h&=−\dfrac{80}{2(−2)}=20 \\ k&=A(20)=80(20)−2(20)^2=800 \end{align}\] The maximum value of the function is an area of 800 square feet, which occurs when \(L=20\) feet. When the shorter sides are 20 feet, there is 40 feet of fencing left for the longer side. To maximize the area, she should enclose the garden so the two shorter sides have length 20 feet and the longer side parallel to the existing fence has length 40 feet.
This problem also could be solved by graphing the quadratic function. We can see where the maximum area occurs on a graph of the quadratic function in Figure \(\PageIndex{11}\).
Figure \(\PageIndex{11}\): Graph of the parabolic function \(A(L)=-2L^2+80L\)
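As a quick sanity check on Example \(\PageIndex{5}\) (our own sketch, not part of the OpenStax text), the vertex formula gives the maximizing length directly, and a brute-force search over feasible lengths agrees.

```python
# Sketch: maximize A(L) = 80L - 2L^2 two ways and confirm they agree.

def area(L):
    """Enclosed area when the two sides perpendicular to the fence have length L."""
    return 80 * L - 2 * L**2

# Vertex formula: L* = -b/(2a) with a = -2, b = 80.
L_star = -80 / (2 * -2)
print(L_star, area(L_star))  # 20.0 800.0

# Brute-force check over a fine grid of feasible lengths 0 <= L <= 40.
best = max((area(L / 100), L / 100) for L in range(0, 4001))
print(best)  # (800.0, 20.0)
```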
Given an application involving revenue, use a quadratic equation to find the maximum.
Write a quadratic equation for revenue.
Find the vertex of the quadratic equation.
Determine the y-value of the vertex.
Example \(\PageIndex{6}\): Finding Maximum Revenue
The unit price of an item affects its supply and demand. That is, if the unit price goes up, the demand for the item will usually decrease. For example, a local newspaper currently has 84,000 subscribers at a quarterly charge of $30. Market research has suggested that if the owners raise the price to $32, they would lose 5,000 subscribers. Assuming that subscriptions are linearly related to the price, what price should the newspaper charge for a quarterly subscription to maximize their revenue?
Revenue is the amount of money a company brings in. In this case, the revenue can be found by multiplying the price per subscription times the number of subscribers, or quantity. We can introduce variables, \(p\) for price per subscription and \(Q\) for quantity, giving us the equation \(\text{Revenue}=pQ\).
Because the number of subscribers changes with the price, we need to find a relationship between the variables. We know that currently \(p=30\) and \(Q=84,000\). We also know that if the price rises to $32, the newspaper would lose 5,000 subscribers, giving a second pair of values, \(p=32\) and \(Q=79,000\). From this we can find a linear equation relating the two quantities. The slope will be \[\begin{align} m&=\dfrac{79,000−84,000}{32−30} \\ &=−\dfrac{5,000}{2} \\ &=−2,500 \end{align}\] This tells us the paper will lose 2,500 subscribers for each dollar they raise the price. We can then solve for the y-intercept. \[\begin{align} Q&=−2500p+b &\text{Substitute in the point $Q=84,000$ and $p=30$} \\ 84,000&=−2500(30)+b &\text{Solve for $b$} \\ b&=159,000 \end{align}\] This gives us the linear equation \(Q=−2,500p+159,000\) relating price and subscribers. We now return to our revenue equation. \[\begin{align} \text{Revenue}&=pQ \\ \text{Revenue}&=p(−2,500p+159,000) \\ \text{Revenue}&=−2,500p^2+159,000p \end{align}\] We now have a quadratic function for revenue as a function of the subscription charge. To find the price that will maximize revenue for the newspaper, we can find the vertex. \[\begin{align} h&=−\dfrac{159,000}{2(−2,500)} \\ &=31.8 \end{align}\] The model tells us that the maximum revenue will occur if the newspaper charges $31.80 for a subscription. To find what the maximum revenue is, we evaluate the revenue function. \[\begin{align} \text{maximum revenue}&=−2,500(31.8)^2+159,000(31.8) \\ &=2,528,100 \end{align}\] This could also be solved by graphing the quadratic as in Figure \(\PageIndex{12}\). We can see the maximum revenue on a graph of the quadratic function.
Figure \(\PageIndex{12}\): Graph of the parabolic function
Much as we did in the application problems above, we also need to find intercepts of quadratic equations for graphing parabolas. Recall that we find the y-intercept of a quadratic by evaluating the function at an input of zero, and we find the x-intercepts at locations where the output is zero. Notice in Figure \(\PageIndex{13}\) that the number of x-intercepts can vary depending upon the location of the graph.
Figure \(\PageIndex{13}\): Number of x-intercepts of a parabola.
Given a quadratic function \(f(x)\), find the y- and x-intercepts.
Evaluate \(f(0)\) to find the y-intercept.
Solve the quadratic equation \(f(x)=0\) to find the x-intercepts.
Example \(\PageIndex{7}\): Finding the y- and x-Intercepts of a Parabola
Find the y- and x-intercepts of the quadratic \(f(x)=3x^2+5x−2\).
We find the y-intercept by evaluating \(f(0)\). \[\begin{align} f(0)&=3(0)^2+5(0)−2 \\ &=−2 \end{align}\] So the y-intercept is at \((0,−2)\).
For the x-intercepts, we find all solutions of \(f(x)=0\). \[0=3x^2+5x−2\] In this case, the quadratic can be factored easily, providing the simplest method for solution.
\[0=(3x−1)(x+2)\] \[\begin{align} 0&=3x−1 & 0&=x+2 \\ x&= \frac{1}{3} &\text{or} \;\;\;\;\;\;\;\; x&=−2 \end{align}\] So the x-intercepts are at \((\frac{1}{3},0)\) and \((−2,0)\).
By graphing the function, we can confirm that the graph crosses the \(y\)-axis at \((0,−2)\). We can also confirm that the graph crosses the x-axis at \(\Big(\frac{1}{3},0\Big)\) and \((−2,0)\). See Figure \(\PageIndex{14}\).
Figure \(\PageIndex{14}\): Graph of a parabola.
In Example \(\PageIndex{7}\), the quadratic was easily solved by factoring. However, there are many quadratics that cannot be factored. We can solve these quadratics by first rewriting them in standard form.
Given a quadratic function, find the x-intercepts by rewriting in standard form.
Substitute \(a\) and \(b\) into \(h=−\frac{b}{2a}\).
Substitute \(x=h\) into the general form of the quadratic function to find \(k\).
Rewrite the quadratic in standard form using \(h\) and \(k\).
Solve for when the output of the function will be zero to find the x-intercepts.
Example \(\PageIndex{8}\): Finding the x-Intercepts of a Parabola
Find the x-intercepts of the quadratic function \(f(x)=2x^2+4x−4\).
We begin by solving for when the output will be zero. \[0=2x^2+4x−4 \nonumber\] Because the quadratic is not easily factorable in this case, we solve for the intercepts by first rewriting the quadratic in standard form. \[f(x)=a(x−h)^2+k\nonumber\] We know that \(a=2\). Then we solve for \(h\) and \(k\). \[\begin{align*} h&=−\dfrac{b}{2a} & k&=f(−1) \\ &=−\dfrac{4}{2(2)} & &=2(−1)^2+4(−1)−4 \\ &=−1 & &=−6 \end{align*}\] So now we can rewrite in standard form. \[f(x)=2(x+1)^2−6\nonumber\] We can now solve for when the output will be zero. \[\begin{align*} 0&=2(x+1)^2−6 \\ 6&=2(x+1)^2 \\ 3&=(x+1)^2 \\ x+1&={\pm}\sqrt{3} \\ x&=−1{\pm}\sqrt{3} \end{align*}\] The graph has x-intercepts at \((−1−\sqrt{3},0)\) and \((−1+\sqrt{3},0)\). We can check our work by graphing the given function on a graphing utility and observing the x-intercepts. See Figure \(\PageIndex{15}\).
Figure \(\PageIndex{15}\): Graph of a parabola which has the following x-intercepts: \((-2.732, 0)\) and \((0.732, 0)\).
In Try It \(\PageIndex{1}\), we found the standard and general form for the function \(g(x)=13+x^2−6x\). Now find the y- and x-intercepts (if any).
y-intercept at \((0, 13)\), no x-intercepts
Example \(\PageIndex{9}\): Solving a Quadratic Equation with the Quadratic Formula
Solve \(x^2+x+2=0\).
Let's begin by writing the quadratic formula: \(x=\frac{−b{\pm}\sqrt{b^2−4ac}}{2a}\). When applying the quadratic formula, we identify the coefficients \(a\), \(b\) and \(c\). For the equation \(x^2+x+2=0\), we have \(a=1\), \(b=1\), and \(c=2\). Substituting these values into the formula we have: \[\begin{align*} x&=\dfrac{−b{\pm}\sqrt{b^2−4ac}}{2a} \\ &=\dfrac{−1{\pm}\sqrt{1^2−4⋅1⋅(2)}}{2⋅1} \\ &=\dfrac{−1{\pm}\sqrt{1−8}}{2} \\ &=\dfrac{−1{\pm}\sqrt{−7}}{2} \\ &=\dfrac{−1{\pm}i\sqrt{7}}{2} \end{align*}\] The solutions to the equation are \(x=\frac{−1+i\sqrt{7}}{2}\) and \(x=\frac{−1-i\sqrt{7}}{2}\) or \(x=−\frac{1}{2}+\frac{i\sqrt{7}}{2}\) and \(x=\frac{-1}{2}−\frac{i\sqrt{7}}{2}\).
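The sketch below is our own addition (not part of the OpenStax text). It applies the quadratic formula with complex arithmetic and confirms the roots of Example \(\PageIndex{9}\) through their sum and product.

```python
# Sketch: quadratic formula with complex arithmetic, checked on x^2 + x + 2 = 0.
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a x^2 + b x + c = 0; cmath.sqrt handles b^2 - 4ac < 0."""
    disc = cmath.sqrt(b**2 - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = quadratic_roots(1, 1, 2)
print(r1, r2)   # (-0.5+1.3228756555322954j) (-0.5-1.3228756555322954j)
print(r1 + r2)  # sum     = -b/a = -1 (up to rounding)
print(r1 * r2)  # product =  c/a =  2 (up to rounding)
```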
Example \(\PageIndex{10}\): Applying the Vertex and x-Intercepts of a Parabola
A ball is thrown upward from the top of a 40 foot high building at a speed of 80 feet per second. The ball's height above ground can be modeled by the equation \(H(t)=−16t^2+80t+40\). When does the ball reach the maximum height? What is the maximum height of the ball? When does the ball hit the ground?
The ball reaches the maximum height at the vertex of the parabola. \[\begin{align} h &= −\dfrac{80}{2(−16)} \\ &=\dfrac{80}{32} \\ &=\dfrac{5}{2} \\ & =2.5 \end{align}\] The ball reaches a maximum height after 2.5 seconds.
To find the maximum height, find the y-coordinate of the vertex of the parabola. \[\begin{align} k &=H\Big(−\dfrac{b}{2a}\Big) \\ &=H(2.5) \\ &=−16(2.5)^2+80(2.5)+40 \\ &=140 \end{align}\] The ball reaches a maximum height of 140 feet.
To find when the ball hits the ground, we need to determine when the height is zero, \(H(t)=0\). We use the quadratic formula. \[\begin{align} t & =\dfrac{−80±\sqrt{80^2−4(−16)(40)}}{2(−16)} \\ & = \dfrac{−80±\sqrt{8960}}{−32} \end{align}\] Because the square root does not simplify nicely, we can use a calculator to approximate the values of the solutions. \[t=\dfrac{−80-\sqrt{8960}}{−32} ≈5.458 \text{ or } t=\dfrac{−80+\sqrt{8960}}{−32} ≈−0.458\] The second answer is outside the reasonable domain of our model, so we conclude the ball will hit the ground after about 5.458 seconds. See Figure \(\PageIndex{16}\).
Figure \(\PageIndex{16}\)
Exercise \(\PageIndex{5}\)
A rock is thrown upward from the top of a 112-foot high cliff overlooking the ocean at a speed of 96 feet per second. The rock's height above ocean can be modeled by the equation \(H(t)=−16t^2+96t+112\). When does the rock reach the maximum height? What is the maximum height of the rock? When does the rock hit the ocean?
a. 3 seconds b. 256 feet c. 7 seconds
general form of a quadratic function: \(f(x)=ax^2+bx+c\)
the quadratic formula: \(x=\dfrac{−b{\pm}\sqrt{b^2−4ac}}{2a}\)
standard form of a quadratic function: \(f(x)=a(x−h)^2+k\)
A polynomial function of degree two is called a quadratic function. The graph of a quadratic function is a parabola. A parabola is a U-shaped curve that can open either up or down.
The axis of symmetry is the vertical line passing through the vertex. The zeros, or x-intercepts, are the points at which the parabola crosses the x-axis. The y-intercept is the point at which the parabola crosses the \(y\)-axis.
Quadratic functions are often written in general form. Standard or vertex form is useful to easily identify the vertex of a parabola. Either form can be written from a graph. The vertex can be found from an equation representing a quadratic function.
The domain of a quadratic function is all real numbers. The range varies with the function.
A quadratic function's minimum or maximum value is given by the y-value of the vertex.
The minimum or maximum value of a quadratic function can be used to determine the range of the function and to solve many kinds of real-world problems, including problems involving area and revenue.
Some quadratic equations must be solved by using the quadratic formula.
The vertex and the intercepts can be identified and interpreted to solve real-world problems.
axis of symmetry
a vertical line drawn through the vertex of a parabola around which the parabola is symmetric; it is defined by \(x=−\frac{b}{2a}\).
general form of a quadratic function
the function that describes a parabola, written in the form \(f(x)=ax^2+bx+c\), where \(a,b,\) and \(c\) are real numbers and \(a{\neq}0\).
standard form of a quadratic function
the function that describes a parabola, written in the form \(f(x)=a(x−h)^2+k\), where \((h, k)\) is the vertex.
vertex
the point at which a parabola changes direction, corresponding to the minimum or maximum value of the quadratic function
vertex form of a quadratic function
another name for the standard form of a quadratic function
zeros
in a given function, the values of \(x\) at which \(y=0\), also called roots
This page titled 5.1: Quadratic Functions is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
source@https://openstax.org/details/books/precalculus
Profillic
Models, code, and papers for "S. Shankar Sastry":
Persistency of Excitation for Robustness of Neural Networks
Kamil Nar, S. Shankar Sastry
When an online learning algorithm is used to estimate the unknown parameters of a model, the signals interacting with the parameter estimates should not decay too quickly for the optimal values to be discovered correctly. This requirement is referred to as persistency of excitation, and it arises in various contexts, such as optimization with stochastic gradient methods, exploration for multi-armed bandits, and adaptive control of dynamical systems. While training a neural network, the iterative optimization algorithm involved also creates an online learning problem, and consequently, correct estimation of the optimal parameters requires persistent excitation of the network weights. In this work, we analyze the dynamics of the gradient descent algorithm while training a two-layer neural network with two different loss functions, the squared-error loss and the cross-entropy loss; and we obtain conditions to guarantee persistent excitation of the network weights. We then show that these conditions are difficult to satisfy when a multi-layer network is trained for a classification task, because the signals in the intermediate layers of the network become low-dimensional during training and fail to remain persistently exciting. To provide a remedy, we delve into the classical regularization terms used for linear models, reinterpret them as a means to ensure persistent excitation of the model parameters, and propose an algorithm for neural networks by building an analogy. The results in this work shed some light on why adversarial examples have become a challenging problem for neural networks, why merely augmenting training data sets will not be an effective approach to address them, and why there may not exist a data-independent regularization term for neural networks, which involve only the model parameters but not the training data.
Step Size Matters in Deep Learning
Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. Consequently, behaviors that are typically observed in these systems emerge during training, such as convergence to an orbit but not to a fixed point or dependence of convergence on the initialization. Step size of the algorithm plays a critical role in these behaviors: it determines the subset of the local optima that the algorithm can converge to, and it specifies the magnitude of the oscillations if the algorithm converges to an orbit. To elucidate the effects of the step size on training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm. The results provide an explanation for several phenomena observed in practice, including the deterioration in the training error with increased depth, the hardness of estimating linear mappings with large singular values, and the distinct performance of deep residual networks.
* Advances in Neural Information Processing Systems (NIPS) 2018
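As a quick illustration of the step-size effect this abstract describes (our own sketch, not code from the paper), gradient descent on the one-dimensional quadratic f(x) = 0.5*lam*x^2 converges exactly when the step size eta stays below 2/lam:

```python
# Sketch (not code from the paper): gradient descent on f(x) = 0.5 * lam * x**2.
# The update x <- x - eta*lam*x contracts iff |1 - eta*lam| < 1, i.e. eta < 2/lam.

def run_gd(eta, lam=4.0, x0=1.0, steps=25):
    x = x0
    for _ in range(steps):
        x -= eta * lam * x  # gradient of 0.5*lam*x^2 is lam*x
    return x

for eta in (0.1, 0.4, 0.49, 0.6):  # stability threshold is 2/lam = 0.5
    print(f"eta={eta}: x_25 = {run_gd(eta):+.3e}")
# eta=0.1 and 0.4 decay toward 0; eta=0.49 oscillates with slowly shrinking
# amplitude; eta=0.6 (> 0.5) diverges with alternating sign.
```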
Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning
Dexter R. R. Scobee, S. Shankar Sastry
While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent's policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly described by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent's behavior. Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent's demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.
Towards Verified Artificial Intelligence
Sanjit A. Seshia, Dorsa Sadigh, S. Shankar Sastry
Verified artificial intelligence (AI) is the goal of designing AI-based systems that are provably correct with respect to mathematically-specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.
Dissimilarity-based Sparse Subset Selection
Ehsan Elhamifar, Guillermo Sapiro, S. Shankar Sastry
Finding an informative subset of a large collection of data points or models is at the center of many problems in computer vision, recommender systems, bio/health informatics as well as image and natural language processing. Given pairwise dissimilarities between the elements of a 'source set' and a 'target set,' we consider the problem of finding a subset of the source set, called representatives or exemplars, that can efficiently describe the target set. We formulate the problem as a row-sparsity regularized trace minimization problem. Since the proposed formulation is, in general, NP-hard, we consider a convex relaxation. The solution of our optimization finds representatives and the assignment of each element of the target set to each representative, hence, obtaining a clustering. We analyze the solution of our proposed optimization as a function of the regularization parameter. We show that when the two sets jointly partition into multiple groups, our algorithm finds representatives from all groups and reveals clustering of the sets. In addition, we show that the proposed framework can effectively deal with outliers. Our algorithm works with arbitrary dissimilarities, which can be asymmetric or violate the triangle inequality. To efficiently implement our algorithm, we consider an Alternating Direction Method of Multipliers (ADMM) framework, which results in quadratic complexity in the problem size. We show that the ADMM implementation allows us to parallelize the algorithm, hence further reducing the computational time.
Finally, by experiments on real-world datasets, we show that our proposed algorithm improves the state of the art on the two problems of scene categorization using representative images and time-series modeling and segmentation using representative models.
On Finding Local Nash Equilibria (and Only Local Nash Equilibria) in Zero-Sum Games
Eric V. Mazumdar, Michael I. Jordan, S. Shankar Sastry
We propose a two-timescale algorithm for finding local Nash equilibria in two-player zero-sum games. We first show that previous gradient-based algorithms cannot guarantee convergence to local Nash equilibria due to the existence of non-Nash stationary points. By taking advantage of the differential structure of the game, we construct an algorithm for which the local Nash equilibria are the only attracting fixed points. We also show that the algorithm exhibits no oscillatory behaviors in neighborhoods of equilibria and show that it has the same per-iteration complexity as other recently proposed algorithms. We conclude by validating the algorithm on two numerical examples: a toy example with multiple Nash equilibria and a non-Nash equilibrium, and the training of a small generative adversarial network (GAN).
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples
Kamil Nar, Orhan Ocal, S. Shankar Sastry, Kannan Ramchandran
State-of-the-art neural networks are vulnerable to adversarial examples; they can easily misclassify inputs that are imperceptibly different from their training and test data. In this work, we establish that the use of the cross-entropy loss function and the low-rank features of the training data are responsible for the existence of these inputs. Based on this observation, we suggest that addressing adversarial examples requires rethinking the use of the cross-entropy loss function and looking for an alternative that is more suited for minimization with low-rank features. In this direction, we present a training scheme called differential training, which uses a loss function defined on the differences between the features of points from opposite classes. We show that differential training can ensure a large margin between the decision boundary of the neural network and the points in the training dataset. This larger margin increases the amount of perturbation needed to flip the prediction of the classifier and makes it harder to find an adversarial example with small perturbations. We test differential training on a binary classification task with the CIFAR-10 dataset and demonstrate that it radically reduces the ratio of images for which an adversarial example could be found -- not only in the training dataset, but in the test dataset as well.
Robust Subspace System Identification via Weighted Nuclear Norm Optimization
Dorsa Sadigh, Henrik Ohlsson, S. Shankar Sastry, Sanjit A. Seshia
Subspace identification is a classical and very well studied problem in system identification. The problem was recently posed as a convex optimization problem via the nuclear norm relaxation. Inspired by robust PCA, we extend this framework to handle outliers. The proposed framework takes the form of a convex optimization problem with an objective that trades off fit, rank and sparsity. As in robust PCA, it can be problematic to find a suitable regularization parameter. We show how the space in which a suitable parameter should be sought can be limited to a bounded open set of the two dimensional parameter space.
In practice, this is very useful since it restricts the parameter space that needs to be surveyed.
* Submitted to the IFAC World Congress 2014
Provably Safe and Robust Learning-Based Model Predictive Control
Anil Aswani, Humberto Gonzalez, S. Shankar Sastry, Claire Tomlin
Controller design faces a trade-off between robustness and performance, and the reliability of linear controllers has caused many practitioners to focus on the former. However, there is renewed interest in improving system performance to deal with growing energy constraints. This paper describes a learning-based model predictive control (LBMPC) scheme that provides deterministic guarantees on robustness, while statistical identification tools are used to identify richer models of the system in order to improve performance; the benefits of this framework are that it handles state and input constraints, optimizes system performance with respect to a cost function, and can be designed to use a wide variety of parametric or nonparametric statistical tools. The main insight of LBMPC is that safety and performance can be decoupled under reasonable conditions in an optimization framework by maintaining two models of the system. The first is an approximate model with bounds on its uncertainty, and the second model is updated by statistical methods. LBMPC improves performance by choosing inputs that minimize a cost subject to the learned dynamics, and it ensures safety and robustness by checking whether these same inputs keep the approximate model stable when it is subject to uncertainty. Furthermore, we show that if the system is sufficiently excited, then the LBMPC control action probabilistically converges to that of an MPC computed using the true dynamics.
Statistical Results on Filtering and Epi-convergence for Learning-Based Model Predictive Control
Learning-based model predictive control (LBMPC) is a technique that provides deterministic guarantees on robustness, while statistical identification tools are used to identify richer models of the system in order to improve performance. This technical note provides proofs that elucidate the reasons for our choice of measurement model, as well as giving proofs concerning the stochastic convergence of LBMPC. The first part of this note discusses simultaneous state estimation and statistical identification (or learning) of unmodeled dynamics, for dynamical systems that can be described by ordinary differential equations (ODEs). The second part provides proofs concerning the epi-convergence of different statistical estimators that can be used with the learning-based model predictive control (LBMPC) technique. In particular, we prove results on the statistical properties of a nonparametric estimator that we have designed to have the correct deterministic and stochastic properties for numerical implementation when used in conjunction with LBMPC.
Policy-Gradient Algorithms Have No Guarantees of Convergence in Continuous Action and State Multi-Agent Settings
Eric Mazumdar, Lillian J. Ratliff, Michael I. Jordan, S. Shankar Sastry
We show by counterexample that policy-gradient algorithms have no guarantees of even local convergence to Nash equilibria in continuous action and state space multi-agent settings. To do so, we analyze gradient-play in $N$-player general-sum linear quadratic games. In such games the state and action spaces are continuous and the unique global Nash equilibrium can be found by solving coupled Riccati equations.
Further, gradient-play in LQ games is equivalent to multi-agent policy gradient. We first prove that the only critical point of the gradient dynamics in these games is the unique global Nash equilibrium. We then give sufficient conditions under which policy gradient will avoid the Nash equilibrium, and generate a large number of general-sum linear quadratic games that satisfy these conditions. The existence of such games indicates that one of the most popular approaches to solving reinforcement learning problems in the classic reinforcement learning setting has no guarantee of convergence in multi-agent settings. Further, the ease with which we can generate these counterexamples suggests that such situations are not mere edge cases and are in fact quite common.
Competitive Statistical Estimation with Strategic Data Sources
Tyler Westenbroek, Roy Dong, Lillian J. Ratliff, S. Shankar Sastry
In recent years, data has played an increasingly important role in the economy as a good in its own right. In many settings, data aggregators cannot directly verify the quality of the data they purchase, nor the effort exerted by data sources when creating the data. Recent work has explored mechanisms to ensure that the data sources share high quality data with a single data aggregator, addressing the issue of moral hazard. Oftentimes, there is a unique, socially efficient solution. In this paper, we consider data markets where there is more than one data aggregator. Since data can be cheaply reproduced and transmitted once created, data sources may share the same data with more than one aggregator, leading to free-riding between data aggregators. This coupling can lead to non-uniqueness of equilibria and social inefficiency. We examine a particular class of mechanisms that have received study recently in the literature, and we characterize all the generalized Nash equilibria of the resulting data market. We show that, in contrast to the single-aggregator case, there are either infinitely many generalized Nash equilibria or none. We also provide necessary and sufficient conditions for all equilibria to be socially inefficient. In our analysis, we identify the components of these mechanisms which give rise to these undesirable outcomes, showing the need for research into mechanisms for competitive settings with multiple data purchasers and sellers.
* accepted in the IEEE Transactions on Automatic Control
Compressive Shift Retrieval
Henrik Ohlsson, Yonina C. Eldar, Allen Y. Yang, S. Shankar Sastry
The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
* Submitted to IEEE Transactions on Signal Processing. Accepted to the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, Canada, May 2013
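For context, the classical cross-correlation baseline this abstract refers to fits in a few lines. The sketch below is our own illustration of that uncompressed baseline, not the paper's compressive method:

```python
# Sketch: classical shift retrieval by maximizing the circular cross-correlation
# (computed via FFT). This is the baseline, not the paper's compressed method.
import numpy as np

def estimate_shift(x, y):
    """Estimate s such that y ~= roll(x, s), as the argmax of the cross-correlation."""
    xcorr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x))).real
    return int(np.argmax(xcorr))

rng = np.random.default_rng(1)
x = rng.normal(size=64)
y = np.roll(x, 11) + 0.05 * rng.normal(size=64)  # shifted copy plus mild noise
print(estimate_shift(x, y))  # 11
```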
Scalable Anomaly Detection in Large Homogenous Populations
Henrik Ohlsson, Tianshi Chen, Sina Khoshfetrat Pakazad, Lennart Ljung, S. Shankar Sastry
Anomaly detection in large populations is a challenging but highly relevant problem. The problem is essentially a multi-hypothesis problem, with a hypothesis for every division of the systems into normal and anomalous systems. The number of hypotheses grows rapidly with the number of systems, and approximate solutions become a necessity for any problem of practical interest. In the current paper we take an optimization approach to this multi-hypothesis problem. We first observe that the problem is equivalent to a non-convex combinatorial optimization problem. We then relax the problem to a convex problem that can be solved distributively on the systems and that stays computationally tractable as the number of systems increases. An interesting property of the proposed method is that it can under certain conditions be shown to give exactly the same result as the combinatorial multi-hypothesis problem, and the relaxation is hence tight.
Quadratic Basis Pursuit
Henrik Ohlsson, Allen Y. Yang, Roy Dong, Michel Verhaegen, S. Shankar Sastry
In many compressive sensing problems today, the relationship between the measurements and the unknowns could be nonlinear. Traditional treatment of such nonlinear relationships has been to approximate the nonlinearity via a linear model and the subsequent un-modeled dynamics as noise. The ability to more accurately characterize nonlinear models has the potential to improve the results in both existing compressive sensing applications and those where a linear approximation does not suffice, e.g., phase retrieval. In this paper, we extend the classical compressive sensing framework to a second-order Taylor expansion of the nonlinearity. Using a lifting technique and a method we call quadratic basis pursuit, we show that the sparse signal can be recovered exactly when the sampling rate is sufficiently high. We further present efficient numerical algorithms to recover sparse signals in second-order nonlinear systems, which are considerably more difficult to solve than their linear counterparts in sparse optimization.
Fast L1-Minimization Algorithms For Robust Face Recognition
Allen Y. Yang, Zihan Zhou, Arvind Ganesh, S. Shankar Sastry, Yi Ma
L1-minimization refers to finding the minimum L1-norm solution to an underdetermined linear system b=Ax. Under certain conditions as described in compressive sensing theory, the minimum L1-norm solution is also the sparsest solution. In this paper, our study addresses the speed and scalability of its algorithms. In particular, we focus on the numerical implementation of a sparsity-based classification framework in robust face recognition, where sparse representation is sought to recover human identities from very high-dimensional facial images that may be corrupted by illumination, facial disguise, and pose variation. Although the underlying numerical problem is a linear program, traditional algorithms are known to suffer poor scalability for large-scale applications. We investigate a new solution based on a classical convex optimization framework, known as Augmented Lagrangian Methods (ALM). The new convex solvers provide a viable solution to real-world, time-critical applications such as face recognition. We conduct extensive experiments to validate and compare the performance of the ALM algorithms against several popular L1-minimization solvers, including the interior-point method, Homotopy, FISTA, SESOP-PCD, approximate message passing (AMP) and TFOCS.
To aid peer evaluation, the code for all the algorithms has been made publicly available.

On the Lagrangian Biduality of Sparsity Minimization Problems
Dheeraj Singaraju, Ehsan Elhamifar, Roberto Tron, Allen Y. Yang, S. Shankar Sastry
Recent results in Compressive Sensing have shown that, under certain conditions, the solution to an underdetermined system of linear equations with sparsity-based regularization can be accurately recovered by solving convex relaxations of the original problem. In this work, we present a novel primal-dual analysis on a class of sparsity minimization problems. We show that the Lagrangian bidual (i.e., the Lagrangian dual of the Lagrangian dual) of the sparsity minimization problems can be used to derive interesting convex relaxations: the bidual of the $\ell_0$-minimization problem is the $\ell_1$-minimization problem; and the bidual of the $\ell_{0,1}$-minimization problem for enforcing group sparsity on structured data is the $\ell_{1,\infty}$-minimization problem. The analysis provides a means to compute per-instance non-trivial lower bounds on the (group) sparsity of the desired solutions. In a real-world application, the bidual relaxation improves the performance of a sparsity-based classification framework applied to robust face recognition.

Segmentation of Natural Images by Texture and Boundary Compression
Hossein Mobahi, Shankar R. Rao, Allen Y. Yang, Shankar S. Sastry, Yi Ma
We present a novel algorithm for segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes as multi-scale texture features. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. We test our algorithm on the publicly available Berkeley Segmentation Dataset. It achieves state-of-the-art segmentation results compared to other existing methods.

People as Sensors: Imputing Maps from Human Actions
Oladapo Afolabi, Katherine Driggs-Campbell, Roy Dong, Mykel J. Kochenderfer, S. Shankar Sastry
Despite growing attention in autonomy, there are still many open problems, including how autonomous vehicles will interact and communicate with other agents, such as human drivers and pedestrians. Unlike most approaches that focus on pedestrian detection and planning for collision avoidance, this paper considers modeling the interaction between human drivers and pedestrians and how it might influence map estimation, as a proxy for detection. We take a mapping inspired approach and incorporate people as sensors into mapping frameworks. By taking advantage of other agents' actions, we demonstrate how we can impute portions of the map that would otherwise be occluded. We evaluate our framework in human driving experiments and on real-world data, using occupancy grids and landmark-based mapping approaches. Our approach significantly improves overall environment awareness and outperforms standard mapping techniques.
* 7 pages, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Sparse Illumination Learning and Transfer for Single-Sample Face Recognition with Image Corruption and Misalignment
Liansheng Zhuang, Tsung-Han Chan, Allen Y. Yang, S. Shankar Sastry, Yi Ma
Single-sample face recognition is one of the most challenging problems in face recognition. We propose a novel algorithm to address this problem based on a sparse representation based classification (SRC) framework. The new algorithm is robust to image misalignment and pixel corruption, and is able to reduce the required gallery images to one sample per class. To compensate for the missing illumination information traditionally provided by multiple gallery images, a sparse illumination learning and transfer (SILT) technique is introduced. The illumination in SILT is learned by fitting illumination examples of auxiliary face images from one or more additional subjects with a sparsely-used illumination dictionary. By enforcing a sparse representation of the query image in the illumination dictionary, the SILT can effectively recover and transfer the illumination and pose information from the alignment stage to the recognition stage. Our extensive experiments have demonstrated that the new algorithms significantly outperform the state of the art in the single-sample regime and with fewer restrictions. In particular, the single-sample face alignment accuracy is comparable to that of the well-known Deformable SRC algorithm using multiple gallery images per class. Furthermore, the face recognition accuracy exceeds those of the SRC and Extended SRC algorithms using hand-labeled alignment initialization.
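The core problem in the L1-minimization abstract above, min ||x||_1 subject to b = Ax, can be recast as a linear program by splitting x into nonnegative parts. The following sketch is not from any of the papers listed; it uses a generic LP solver (scipy) rather than the specialized algorithms they benchmark, and the random instance is purely illustrative.

    # Basis pursuit as a linear program: min ||x||_1 s.t. Ax = b,
    # with x = u - v and u, v >= 0, so the objective is sum(u) + sum(v).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 30))                 # underdetermined: 10 equations, 30 unknowns
    x_true = np.zeros(30)
    x_true[[3, 17]] = [1.0, -2.0]                     # a sparse ground-truth signal
    b = A @ x_true

    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),                   # minimize sum(u) + sum(v)
                  A_eq=np.hstack([A, -A]), b_eq=b,    # A(u - v) = b
                  bounds=[(0, None)] * (2 * n))
    x_hat = res.x[:n] - res.x[n:]
    print(np.allclose(x_hat, x_true, atol=1e-6))      # typically True at this sparsity level

For a sparse enough signal and a generic A, the minimum L1-norm solution coincides with the sparsest solution, which is exactly the compressive sensing condition the abstracts refer to.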
Making sense of Euler's formula
The supposed mysteriousness of Euler's formula is overrated, and actually harmful if you want to get through the light and smoke and actually learn the basic math. Euler's formula relates exponentials to periodic functions. Although the two kinds of functions look superficially very different (exponentials diverge really quickly, periodic functions keep oscillating back and forth), any serious math student would have noted a curious relation between the two -- periodic functions arise whenever you do some negative number-ish stuff with exponentials. For instance --
* Simple harmonic motion -- the differential equation $F=kx$ represents exponential motion when $k>0$, periodic motion when $k<0$. This is just a special case of the idea that the derivatives of the trigonometric functions match up with what you'd expect from $e^{ix}$
* Negative exponential bases -- although exponential functions like $e^x$ and $a^x$ for any positive $a$ would seem to diverge nuttily at some infinity, it turns out that $(-1)^x$ is actually a periodic function, at least for integer $x$ (other negative bases give you a periodic function times a crazy diverging function).
* Conic sections -- Trigonometric functions are defined on the unit circle; if you define similar functions on the unit rectangular hyperbola, you get linear combinations of exponentials.
There are others, based on trigonometric identities, which I'll cover below, but the point is that this relationship is really natural, something you should expect, not some bizarre coincidence that arises from manipulating Taylor series around. (Above edited on 28 April 2018 -- older text with further identities follows.)
From complex multiplication
Suppose we haven't yet heard of the identity $e^{i\theta} = \cos\theta + i\sin\theta$, but only know that the right-hand-side is the polar-co-ordinate representation of a unit complex number in the complex plane (i.e. the form taken by $\frac{z}{|z|}$ for any complex number z), and are playing around with it for a geometric understanding of multiplication on the complex plane. For simplicity, we'll stick to unit complex numbers $z_1 = \cos\theta +i\sin\theta$ and $z_2=\cos\phi+i\sin\phi$. Multiplying these two gives us:
$$\begin{array}{c}{{\hat z}_1}{{\hat z}_2} = (\cos \theta + i\sin \theta )(\cos \phi + i\sin \phi )\\ = \left( {\cos \theta \cos \phi - \sin \theta \sin \phi } \right) + i\left( {\cos \theta \sin \phi + \sin \theta \cos \phi } \right)\end{array}$$
Well, they're all wearing real nice hats, but if you notice, the real part is equal to $\cos (\theta + \phi )$, and the imaginary part is equal to $\sin (\theta + \phi )$, by the angle addition identities from trigonometry.
$${\hat z_1}{\hat z_2} = \cos (\theta + \phi ) + i\sin (\theta + \phi )$$
This just gives us another unit complex number, of direction (argument) $\theta+\phi$. This gives us our nice geometric interpretation of complex number multiplication -- dilate by the length of the other complex number, and rotate by its angle. More interestingly, though, this means that ${\hat z}(\theta) = \cos\theta+i\sin\theta$ as a function of its argument satisfies the relation ${\hat z}(\theta){\hat z}(\phi)={\hat z}(\theta+\phi)$, and considering non-unit complex numbers, $z(r,\theta)z(s,\phi)=z(rs,\theta+\phi)$. If you recall, this identity is satisfied by functions of the form $re^{a\theta}$.
Since $e^{a\theta}$ yields real values for all real values of $a$, while $\hat z(\theta)$ does not, it makes sense to expect the value of $a$ to be complex (it happens to be $i$). The identity is also satisfied by zero, but obviously, all complex numbers can't be zero. Note that this is not a proof of Euler's formula -- which is something you should already know from learning calculus and Taylor series -- but an attempt to explain why we shouldn't be surprised that it holds.
Derivative on the unit circle
You can find more proofs of the identity by considering other properties of $e^{i\theta}$. For instance, its derivative is $i$ times itself, which also holds for $\operatorname{cis}\theta$. Since $f(x)=e^{ax}$ is the only solution to $f'(x)=af(x)$ among the reals (with $f(0)=1$), if we want this (defining) property of $e^{ax}$ to remain true for complex $a$, we would set $e^{i\theta}=\operatorname{cis}\theta$. A geometric interpretation of this is found in Needham, pg. 13 -- we let $\theta$ be the time of a particle moving around the unit circle in the complex plane. The instantaneous velocity of the particle at any point is $\frac{\pi}2$ counterclockwise to the complex number itself, and of the same magnitude. (Don't just listen to me -- prove it!)
This is another way to prove that the angles add up when you multiply two complex numbers -- in the standard Cartesian system, we write complex multiplication as (a + ib)(c + id) = (ac - bd) + i(ad + bc). To show that the arguments add up, you must show that $\arctan\frac{b}{a}+\arctan\frac{d}{c}=\arctan\frac{ad+bc}{ac-bd}$, which is reduced to a plain-old trigonometry problem. Since we're talking about the sum of two angles and their tangent, we're inclined to think about something like two triangles combined in one or something like that. We could pile one triangle on the hypotenuse of another, but then we'd have to write the dimensions of the piled-up triangle things in a surd-y way (try it), so we'd rather pile the triangles with the bases matching. For this, we'd need the common denominator $ac$ for the angles on the left-hand side. In other words, we need to prove that the angle on the left, subtended by $bc + ad$, has a tangent of $\frac{bc+ad}{ac-bd}$. To do so, we could construct another point such that the angle at the point subtended by the same line segment is the same, but the resulting triangle would be a right-angled triangle. For the angles to be the same, the points must lie on the same segment of a circle for which $bc + ad$ is a chord [diagram omitted here; a reader was so outraged at this diagram that he sent me an alternate version made in PowerPoint]. Thus our problem is reduced to proving that the horizontal line segment at the top has length $ac - bd$. Well, prove it!
Many trigonometric identities can be shown through Euler's formula. For example,
$$\begin{array}{c}\cos \left( {2\theta } \right) + i\sin \left( {2\theta } \right) = {e^{2i\theta }}\\ = {\left( {{e^{i\theta }}} \right)^2}\\ = {\left( {\cos \theta + i\sin \theta } \right)^2}\\ = \left( {{{\cos }^2}\theta - {{\sin }^2}\theta } \right) + \left( {2\cos \theta \sin \theta } \right)i\end{array}$$
which carries both double-angle identities, for the cosine and the sine, in the real and imaginary parts respectively. Try to derive all the trigonometric identities you know this way. If you define the trigonometric functions in the traditional way, some of these proofs would be circular arguments. Which ones?
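Before moving on, here is a quick numerical sanity check of the two claims above -- that arguments add under multiplication, and that squaring $e^{i\theta}$ reproduces the double-angle identities. (This snippet is mine, not part of the original derivation.)

    # Numerical check: multiplying unit complex numbers adds their arguments,
    # and (e^{i*theta})^2 reproduces the double-angle identities.
    import cmath, math

    theta, phi = 0.7, 0.4
    z1 = cmath.exp(1j * theta)          # cos(theta) + i sin(theta)
    z2 = cmath.exp(1j * phi)

    assert math.isclose(cmath.phase(z1 * z2), theta + phi)

    z_sq = z1 ** 2                      # should equal e^{2i*theta}
    assert math.isclose(z_sq.real, math.cos(theta)**2 - math.sin(theta)**2)  # cos(2 theta)
    assert math.isclose(z_sq.imag, 2 * math.sin(theta) * math.cos(theta))    # sin(2 theta)
    print("all identities check out numerically")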
Related, interesting: the usage of $a-bi$ (as opposed to $a+bi$) is linked to the minus sign in the expansion of $\cos(\theta +\phi)$ and the plus sign in the expansion of $\cos(\theta - \phi)$ -- this gives us another explanation for the sign reversal in the cosine-of-a-sum formulae: because $i\cdot -i=1$.
Written by Abhimanyu Pallavi Sudhir on November 25, 2016
Tags -- complex analysis, complex numbers, euler's formula, euler's identity, intuition
Numerical Algebra, Control & Optimization, September 2017, 7(3): 223-250. doi: 10.3934/naco.2017016
Bridging the gap between variational homogenization results and two-scale asymptotic averaging techniques on periodic network structures
Erik Kropat 1,*, Silja Meyer-Nieberg 1 and Gerhard-Wilhelm Weber 2,+
1 University of the Bundeswehr Munich, Faculty of Informatics, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
2 Middle East Technical University, Institute of Applied Mathematics, 06531 Ankara, Turkey
* Corresponding author: Erik Kropat
+ Honorary positions: Faculty of Economics, Business and Law, University of Siegen, Germany; Center for Research on Optimization and Control, University of Aveiro, Portugal; University of North Sumatra, Indonesia
Received August 2016 Revised July 2017 Published July 2017
In modern material sciences and multi-scale physics, homogenization approaches provide a global characterization of physical systems that depend on the topology of the underlying microgeometry. Purely formal approaches such as averaging techniques can be applied for an identification of the averaged system. For models in variational form, two-scale convergence for network functions can be used to derive the homogenized model. The sequence of solutions of the variational microscopic models and the corresponding sequence of tangential gradients converge toward limit functions that are characterized by the solution of the variational macroscopic model. Here, a further extension of this result is proved. The variational macroscopic model can be equivalently represented by a homogenized model on the superior domain and a certain number of reference cell problems. In this way, the results obtained by averaging strategies are supported by notions of convergence for network functions on varying domains.
Keywords: Homogenization theory, two-scale convergence, two-scale transform, variational problems on graphs and networks, diffusion-advection-reaction systems, microstructures, periodic graphs.
Mathematics Subject Classification: Primary: 34B45, 34E13, 34E05, 34E10.
Citation: Erik Kropat, Silja Meyer-Nieberg, Gerhard-Wilhelm Weber. Bridging the gap between variational homogenization results and two-scale asymptotic averaging techniques on periodic network structures. Numerical Algebra, Control & Optimization, 2017, 7 (3) : 223-250. doi: 10.3934/naco.2017016
Figure 1. The homogenization process: The sequence of solutions of the microscopic model as well as the corresponding sequence of tangential gradients are weakly two-scale convergent. The limits of these sequences can be represented by the solution of the homogenized model
Figure 2. Two-scale transform: The function $x:\Omega \times {\mathscr{Y}} \rightarrow {\cal{N}}^\Omega_\varepsilon$ is surjective, but not injective
Figure 3. Two-scale transform: Mapping from ${\cal{N}}^\Omega_\varepsilon$ to the product $\Omega \times {\mathscr{Y}}$
Perl Weekly Challenge 122: Basketball Points
You are given a score $S. You can score basketball points in units of 1 point, 2 points, or 3 points. Write a script to find out the different ways you can score $S.
Input: $S = 4
Output:
1 1 1 1
1 1 2
1 2 1
1 3
2 1 1
2 2
3 1
Input: $S = 5
Output:
1 1 1 1 1
... (13 ways in total)
The Tribonacci Numbers are defined as follows:
\[ \mathcal{T}(n) = \begin{cases} 0 & \text{if } n \leq 1 \\ 1 & \text{if } n = 2 \\ \mathcal{T}(n - 3) + \mathcal{T}(n - 2) + \mathcal{T}(n - 1) & \text{if } n > 2 \end{cases} \]
This sequence is found at the OEIS as A000073. There is a formula to calculate \(\mathcal{T}(n)\) directly, in a similar way as there is one for the Fibonacci numbers:
\[ \mathcal{T}(n) = \left\lfloor \frac{3 \left(\sqrt[3]{586 + 102\sqrt{33}}\right) \left(\frac{1}{3}(\sqrt[3]{19 + 3\sqrt{33}} + \sqrt[3]{19 - 3\sqrt{33}} + 1)\right)^n} {\left(\sqrt[3]{586 + 102\sqrt{33}}\right)^2 - 2\sqrt[3]{586 + 102\sqrt{33}} + 4} \right\rceil \]
Now, the number of ways to decompose a non-negative integer \(N\) as a sum of 1s, 2s, and 3s is equal to \(\mathcal{T}(N + 2)\). But we don't have to calculate the number of ways to decompose a score, we actually have to calculate all the different ways to decompose a given score. The definition of the Tribonacci Numbers gives us a way to calculate all the different decompositions. Let \(\mathcal{S}(n)\) be the set of all decompositions of a score of \(n - 2\). Then
\[ \mathcal{S}(n) = \begin{cases} \emptyset & \text{if } n \leq 1, \\ \{ \text{""} \} & \text{if } n = 2 \\ \{ \forall x \in \mathcal{S}(n - 1): \text{"1"} \odot x \} \; \cup & \\ \{ \forall x \in \mathcal{S}(n - 2): \text{"2"} \odot x \} \; \cup & \text{if } n > 2 \\ \{ \forall x \in \mathcal{S}(n - 3): \text{"3"} \odot x \} & \end{cases} \]
where \(\odot\) is the concatenation operator. That is, we can decompose a score of \(n\) by either first scoring a \(1\) and then decomposing a score of \(n - 1\), or first scoring a \(2\) and then decomposing a score of \(n - 2\), or first scoring a \(3\) and then decomposing a score of \(n - 3\).
The definition given above suggests using a recursive solution. This is possible, but instead, we will be using an iterative solution. We will be reading the number \(n\) from standard input. We start off by initializing the first three sets, \(\mathcal{S}(0), \mathcal{S}(1), \mathcal{S}(2)\):
    my @s = ([], [], [""]);
Thus two empty sets, and a set consisting of an empty string. We now repeatedly (n times) add a next set, using the last three sets:
    map {push @s => [map {my $s = $_; map {"$s $_"} @{$s [-$s]}} 1 .. 3]} 1 .. <>;
We can rewrite this using nested for loops instead of maps to make it clear what is happening:
    for (1 .. <>) {
        my @new;
        for my $s (1 .. 3) {
            for my $decomposition (@{$s [-$s]}) {
                push @new => "$s $decomposition";
            }
        }
        push @s => \@new;
    }
A new set is created by taking all the decompositions for the last three sets, and prepending them with 1, 2, or 3. At the end, we have to print the elements of the last set:
    say for @{$s [-1]};
For the AWK solution, we will be using two arrays, c and s. s [i] will contain all the decompositions of a score of i - 2, while c [i] will contain the number of such decompositions. First, the initialization:
    c [0] = 0
    c [1] = 0
    c [2] = 1
    s [2, 0] = ""
We can now repeatedly add new entries to s and c. Note that we have n in $1:
    for (i = 3; i < $1 + 3; i ++) {
        c [i] = 0
        for (j = 1; j <= 3; j ++) {
            for (k = 0; k < c [i - j]; k ++) {
                s [i, c [i]] = j " " s [i - j, k]
                c [i] ++
            }
        }
    }
Finally, we print the result:
    for (k = 0; k < c [$1 + 2]; k ++) {
        print s [$1 + 2, k]
    }
For our Bash solution, we need a trick.
Bash doesn't have multidimensional arrays. We could have used a concatenated key and an associative array, but we do something else instead. Instead of having an array of sets, we use an array of strings; each string has all the decompositions concatenated together, and each decomposition starts with a newline. (So, the newline acts as a separator, but there is an extra newline at the beginning). The initialization:
    declare scores
    l=$'\n'
    scores[2]=$l
l=$'\n' is a little trick to get a string consisting of just a newline into a variable. We can now repeatedly add a new entry to scores:
    for ((i = 3; i < n + 3; i ++))
    do
        for ((j = 1; j <= 3; j ++))
        do
            scores[$i]=${scores[$i]}${scores[$((i - j))]//$l/$l$j }
        done
    done
The interesting part is: ${scores[$((i - j))]//$l/$l$j }. i is the index of the new entry, and j is 1, 2, or 3, so ${scores[$((i - j))]} is one of the last three entries. We then use a substitution to prepend j (followed by a space) to each of the decompositions. The general syntax is: ${word//pattern/replacement}. This takes $word, and replaces each non-overlapping occurrence of pattern with replacement, returning the result. The Perl equivalent would be: $word =~ s/pattern/replacement/gr. Printing the result is now simple; we don't have to loop, as the decompositions are already separated by newlines. But we have to remove the first newline:
    echo "${scores[$((n + 2))]/$l/}"
Note that we have only a single slash before the pattern; this means we only replace the first occurrence.
In C, we have to work hard! Since C doesn't have a way to find out the size of an array, nor an efficient method to find the length of a string, we will be using three arrays:
scores, a two dimensional array of strings (which are pointers to char)
count, which counts the number of decompositions for a specific score
lengths, a two dimensional array, with the length of each of the strings in scores.
We'll declare this as:
    typedef long long number;
    char *** scores;
    number * count;
    size_t ** lengths;
Then we read our number n, and allocate space for the arrays:
    if (scanf ("%d", &n) != 1) {
        perror ("Unexpected input");
        exit (1);
    }
    if ((scores = (char ***) malloc ((n + 3) * sizeof (char **))) == NULL) {
        perror ("Malloc scores failed");
        exit (1);
    }
    if ((count = (number *) malloc ((n + 3) * sizeof (number))) == NULL) {
        perror ("Malloc count failed");
        exit (1);
    }
    if ((lengths = (size_t **) malloc ((n + 3) * sizeof (size_t *))) == NULL) {
        perror ("Malloc lengths failed");
        exit (1);
    }
We can now initialize the first three entries, which requires more allocating of memory:
    count [0] = 0;
    count [1] = 0;
    count [2] = 1;
    if ((scores [2] = (char **) malloc (sizeof (char *))) == NULL) {
        perror ("Malloc failed");
        exit (1);
    }
    if ((scores [2] [0] = (char *) malloc (sizeof (char))) == NULL) {
        perror ("Malloc failed");
        exit (1);
    }
    if ((lengths [2] = (size_t *) malloc (sizeof (size_t))) == NULL) {
        perror ("Malloc failed");
        exit (1);
    }
    scores [2] [0] [0] = '\0';
    lengths [2] [0] = 0;
We now start the loop in which we add new sets to scores:
    for (int i = 3; i < n + 3; i ++) {
We start with calculating how many entries there will be in that set (which is the sum of the sizes of the previous three sets), and allocate memory for scores [i] and lengths [i]:
        count [i] = count [i - 1] + count [i - 2] + count [i - 3];
        if ((scores [i] = (char **) malloc (count [i] * sizeof (char *))) == NULL) {
            perror ("Malloc failed");
            exit (1);
        }
        if ((lengths [i] = (size_t *) malloc (count [i] * sizeof (size_t))) == NULL) {
            perror ("Malloc failed");
            exit (1);
        }
It's only now that we can create the entries in the set. Each entry is two characters longer than its "parent" entry: first a 1, 2, or 3, then a space, then a copy of an entry from one of the three previous sets.
        number l = 0;
        for (int j = 1; j <= 3; j ++) {
            for (int k = 0; k < count [i - j]; k ++) {
                lengths [i] [l] = 2 + lengths [i - j] [k];
                if ((scores [i] [l] = (char *) malloc ((lengths [i] [l] + 1) * sizeof (char))) == NULL) {
                    perror ("Malloc failed");
                    exit (1);
                }
                scores [i] [l] [0] = j + '0';
                scores [i] [l] [1] = ' ';
                strncpy (scores [i] [l] + 2, scores [i - j] [k], lengths [i - j] [k]);
                scores [i] [l] [lengths [i] [l]] = '\0';
                l ++;
            }
        }
At the end of the loop, we can release some memory; we don't need the entry three steps back anymore:
        if (i - 3 > 1) {
            for (int k = 0; k < count [i - 3]; k ++) {
                free (scores [i - 3] [k]);
            }
            free (scores [i - 3]);
            free (lengths [i - 3]);
        }
    }
After we have created the final set, we can print its entries:
    for (int i = 0; i < count [n + 2]; i ++) {
        printf ("%s\n", scores [n + 2] [i]);
    }
All that now needs to be done is freeing memory:
    for (int i = n; i <= n + 2; i ++) {
        for (int k = 0; k < count [i]; k ++) {
            free (scores [i] [k]);
        }
        free (scores [i]);
        free (lengths [i]);
    }
    free (scores);
    free (lengths);
    free (count);
We have also solutions in Lua, Node.js, Python and Ruby, which are all similar to our Perl solution; a sketch of the Python variant follows.
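As a rough idea of what such a solution might look like, here is my own reconstruction of the same iterative scheme in Python (not the author's published code; s[i] again holds the decompositions of a score of i - 2):

    import sys

    n = int(sys.stdin.readline())
    s = [[], [], [""]]                                   # S(0), S(1), S(2)
    for _ in range(n):
        # Build the next set from the last three sets, prepending 1, 2, or 3.
        s.append([f"{j} {d}" for j in (1, 2, 3) for d in s[-j]])
    print("\n".join(d.strip() for d in s[-1]))           # strip the trailing space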
Mean Value Theorem Formula
In mathematics, the mean value theorem states, roughly, that given a planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. The Mean Value Theorem states that if f(x) is continuous on [a,b] and differentiable on (a,b) then there exists a number c between a and b such that:
\[\large {f}'(c)=\frac{f(b)-f(a)}{b-a}\]
Question 1: Evaluate f(x) = $x^{2} + 2$ in the interval [1, 2] using the mean value theorem.
Solution: The given function is f(x) = $x^{2} + 2$ and the interval is [1, 2]. The mean value theorem gives f'(c) = $\frac{f(b)-f(a)}{b-a}$. Here, f(b) = f(2) = $2^{2}$ + 2 = 6 and f(a) = f(1) = $1^{2}$ + 2 = 3. So, f'(c) = $\frac{6-3}{2-1}$ = 3. Since f'(x) = 2x, this gives 2c = 3, i.e. c = 1.5, which indeed lies in the interval (1, 2).
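A small symbolic check of this example (not part of the original page; it assumes the sympy library is available):

    import sympy as sp

    x, c = sp.symbols('x c')
    f = x**2 + 2
    slope = (f.subs(x, 2) - f.subs(x, 1)) / (2 - 1)            # (6 - 3) / 1 = 3
    sol = sp.solve(sp.Eq(sp.diff(f, x).subs(x, c), slope), c)  # solve 2c = 3
    print(slope, sol)                                          # 3 [3/2], i.e. c = 1.5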
Insight into the evolution of the proton concentration during autohydrolysis and dilute-acid hydrolysis of hemicellulose
Nuwan Sella Kapu1, Zhaoyang Yuan1, Xue Feng Chang1,2, Rodger Beatson2,3, D. Mark Martinez1 & Heather L. Trajano1
Biotechnology for Biofuels volume 9, Article number: 224 (2016)
During pretreatment, hemicellulose is removed from biomass via proton-catalyzed hydrolysis to produce soluble poly- and mono-saccharides. Many kinetic models have been proposed but the dependence of rate on proton concentration is not well-defined; autohydrolysis and dilute-acid hydrolysis models apply very different treatments despite having similar chemistries. In this work, evolution of proton concentration is examined during both autohydrolysis and dilute-acid hydrolysis of hemicellulose from green bamboo. An approximate mathematical model, or "toy model", to describe proton concentration based upon conservation of mass and charge during deacetylation and ash neutralization coupled with a number of competing equilibria, was derived. The model was qualitatively compared to experiments where pH was measured as a function of time, temperature, and initial acid level. Proton evolution was also examined at room temperature to decouple the effect of ash neutralization from deacetylation. The toy model predicts the existence of a steady-state proton concentration dictated by equilibrium constants, initial acetyl groups, and initial added acid. At room temperature, it was found that pH remains essentially constant both at low initial pH and autohydrolysis conditions. Acid is likely in excess of the neutralization potential of the ash in the former case, while in the latter case the kinetics of neutralization become exceedingly small due to the low proton concentration. Finally, when the hydrolysis reaction proceeded at elevated temperatures, one case of non-monotonic behavior was found in which the pH initially increased and then decreased at longer times. This is likely due to the difference in rates between neutralization and deacetylation. The model and experimental work demonstrate that the evolution of proton concentration during hydrolysis follows complex behavior that depends upon the acetyl group and ash content of biomass, initial acid levels and temperature. In the limit of excess added acid, pH varies very weakly with time. Below this limit, complex schemes are found primarily related to the selectivity of deacetylation in comparison to neutralization. These findings indicate that a more rigorous approach to models of hemicellulose hydrolysis is needed. Improved models will lead to more efficient acid utilization and facilitate process scale-up.
In this work, the kinetics of proton generation during prehydrolysis of bamboo chips in a batch reactor is examined. Bamboo grows rapidly to become harvest-ready in approximately three years, and has a chemical composition similar to wood [1, 2]. It is also an abundant natural resource in many Asian countries. For example, China is reported to be the home for over 500 species of bamboo covering more than seven million hectares [3]. Moreover, bamboo is considered a promising species for cultivation on marginal land for biofuels and bio-products [3]. Despite these advantages, it is only recently that bamboo has garnered research focus in the area of pulping and biorefinery applications [1, 4], and it can still be considered an underutilized feedstock.
Prehydrolysis, which is also referred to as 'pretreatment', refers to the reaction pathway to remove hemicelluloses from lignocellulosic material during the production of high-purity dissolving pulps or biofuels [3, 5, 6]. Here, an acid catalyzes the breakdown of long hemicellulose chains to form shorter-chain oligomers and sugar monomers in the presence of water or steam. Kinetic modeling still remains at the forefront, and the evolution of the concentration of the acid catalyst \({[\mathrm{H}^+]}\) is one of the longstanding unanswered questions [7]. Prehydrolysis is different from torrefaction, wherein biomass is treated at 200–300 °C in an inert gas environment [8]. Hemicellulose is hydrolyzed into mostly soluble sugars during prehydrolysis, while during torrefaction, it is degraded, depending on process temperature, into volatile organic compounds including CO2 and CO, and char [9]. Prehydrolysis is typically performed by treating biomass at 140–180 °C with either water/steam (autohydrolysis) or dilute acid solutions [6]. Autohydrolysis is an industrially practiced step in dissolving pulp production, and both autohydrolysis and dilute-acid hydrolysis are considered viable pretreatment options in the production of lignocellulosic ethanol. However, the chemical complexity of biomass and the lack of refined kinetic models continue to hamper process optimization and scale-up efforts.
The literature on the kinetics of the removal of hemicellulose is substantial. The modeling approach was built upon the approach used for dilute-acid hydrolysis of cellulose [10]. For hemicellulose, complex behavior is evident and numerous groups consider that two fractions of hemicellulose are distributed spatially over two separate domains in the solid matrix to help simplify the analysis [11]. Each fraction reacts with the available protons at differing rates due to differences in reaction activation energy. This model has been adopted widely and is commonly referred to as the "biphasic model". It consists of two solid species, fast and slow hemicelluloses, denoted as \({X_i}( {\rm{s}})\), which hydrolyze following first-order kinetics
$$\begin{aligned} {X}_{i}(\mathrm{s}) \xrightarrow [{\mathrm{H}^{+}}]{{\mathrm{k}_\mathrm{i}}} {X}(\mathrm{aq}) \qquad \qquad r_i = k_i[{X}_{i}], \end{aligned}$$
where \(r_i\) and \(k_i\) are defined as the rate of reaction and rate constant, respectively, to form a set of soluble products, X(aq), which are susceptible to further hydrolysis or decomposition reactions. The subscript i represents either fast or slow. The initial values for \({X}_{i}\) are considered to be intrinsic for the biomass [12, 13]. Variations on this approach are available in the literature to describe subtle effects such as the formation of oligomeric intermediates or mass transfer rates [1, 14–28]. However, no physical or chemical attributes have been identified to differentiate fast and slow hemicelluloses. One of the remaining open questions in this literature is an understanding of the evolution of the concentration of the acid catalyst. What makes this problem particularly challenging is that there are competing pathways governing proton evolution and neutralization. Although difficult to substantiate, a number of authors have advanced rate constants \(k_i\) of the form
$$\begin{aligned} k_i = {k_o}_i \exp \left(-\frac{{E_a}_i}{RT}\right)f\left(t,[\mathrm{H}^+]\right), \end{aligned}$$
where \({k_o}_i\) is the pre-exponential factor, and \({E_a}_i\) is the activation energy.
The function \(f(t,{[\mathrm{H}^+]})\) is determined empirically and is found to vary greatly in the literature. This function is included to allow for different reaction rates with different acid levels. In one extreme, we find that this function varies linearly in time, while in the other extreme it is considered as a constant and set to its initial value. We summarize these forms as
$$\begin{aligned} f\left( t,\left[ {\mathrm{H}^{+}} \right] \right) =\begin{cases} a+bt & \text{autohydrolysis} \\ {\left[ {\mathrm{H}^{+}} \right] _0^{n}} & \text{dilute acid} \end{cases} \end{aligned}$$
depending upon whether the experiment is conducted under dilute-acid or autohydrolysis conditions. Here, a, b, and n are empirical constants and \({[\mathrm{H}^{+}]_0}\) is the initial concentration of the acid catalyst. n is typically found to be between 0.8 and 1.3, and we note that Shen and Wyman [24] set \(n=1\) for corn stover. The utility of this functional form has been questioned and it is evident that there is no theoretical basis for the form of the assumed functions [12, 15, 22, 29–35]. In this work, we attempt to gain insight into the assumed form of Eq. 3 by examining the evolution of the proton concentration during reaction through experiment and mathematical modeling.
The analysis presented in this section is aimed at understanding the evolution of \({[\mathrm{H}^+]}\) during reaction. The goal is to develop a qualitative understanding of this form by posing a hypothetical reaction scheme which, at some level of approximation, represents the true reaction scheme. It is done at a level in which the analysis is mathematically transparent and of sufficient detail to capture the dominant mechanisms. As a result, we refer to our approach in the subsequent comparison to the experimental data as "the toy model". One of the many complicating factors hindering the modeling process is that there is a large number of chemical species (Table 1 highlights this), which are distributed throughout the cell wall in a complex manner. To simplify, classes of species which behave similarly are grouped together and represented as one hypothetical species. For example, we represent the ash constituents as a lumped parameter MO, that is, the ash is an oxide of the species M with a valence state of \(2^+\); this hypothetical species serves to neutralize the available protons. This can be reposed at another valence state or with secondary effects, such as precipitation from solution, included. In a similar manner, the hemicellulose constituents have been reduced to a linear xylose polymer, denoted by X (fast and slow), having arabinose (Ar) side chains (Fig. 1). Protons are represented by \({\mathrm{H}}^+\) and the hydroxyl groups by \({\mathrm{OH}}^-\); both of these species are considered to be in the aqueous phase and the aq notation has been dropped. We have included the potential of an acid being added to the system and denote this species as \({\mathrm{H}_2\mathrm{A}}\) because sulfuric acid is most commonly used in the literature. The acetyl group Ac is defined as \({\mathrm{H}_{3}\mathrm{C}{-}\mathrm{C}({=}\mathrm{O}){-}}\). Mass transfer effects are neglected.
Table 1 Representative composition of the bamboo chips
Fig. 1 A schematic of the idealized hemicellulose (X)–lignocellulose (LC) substrate considered in this work. Although xylan is hypothesized to be comprised of fast- and slow-reacting fractions, we do not distinguish these in this figure.
The species Ac and Ar, which represent the acetyl and arabinose groups, are initially bound to the xylan chain but are released through acid hydrolysis. The ash (MO) is not shown in this figure but is considered to be physically embedded in the LC portion of the matrix.
We consider four primary reactive pathways in Fig. 2 and each individual reaction is assumed to follow elementary kinetics. In the first of these, shown on the far left of Fig. 2, we consider deacetylation, where Ac is cleaved from the hemicellulose backbone through an acid hydrolysis of the ester
$$\begin{aligned} \mathrm{XOAc} + {\mathrm{H}_{2}\mathrm{O}}\xrightarrow [{\mathrm{H}^{+}}]{{\mathrm{k}_{1}}} \mathrm{XOH}(\mathrm{s}) + \mathrm{AcOH}(\mathrm{aq}) \\ r_1 = k_1[\mathrm{XOAc}][\mathrm{H}^+]. \end{aligned}$$
This reaction may occur with acetyl groups which are attached to either soluble or solid phases of the hemicellulose. For simplicity, any differences in rate between the deacetylation reaction occurring in the solid or liquid phases are ignored. As the product AcOH(aq), acetic acid, behaves as a weak acid, it adopts the following equilibrium in solution
$$\begin{aligned} \mathrm{AcOH}(\mathrm{aq}) & {\overset{{{K}_\mathrm{AcOH}}}{\rightleftharpoons }} \mathrm{AcO}^{-} + \mathrm{H}^{+},\\ K_{\mathrm{AcOH}} &= \frac{[\mathrm{AcO}^-][\mathrm{H}^{+}]}{[\mathrm{AcOH}]}= 1.8 \times 10^{-5}\, M, \end{aligned}$$
where \(K_{i}\), from this point forward, is defined as the equilibrium constant and the value quoted is at room temperature. Both Garrote et al. [18] and Aguilar et al. [26] have used similar modeling approaches to describe deacetylation. Aguilar et al., for example, explicitly indicated that this reaction follows first-order kinetics [26]. We build upon these studies by including the effects of the weak-acid behavior of acetic acid (see Eq. 5). Water dissociation
$$\begin{aligned} \mathrm{H}_{2}\mathrm{O} & {\overset{K_{\rm {w}}}{\rightleftharpoons }} \mathrm{OH}^{-} + \mathrm{H}^{+}\\{K_{\rm {w}}} &= [\mathrm{OH}^-][\mathrm{H}^+]=1\times 10^{-14}\,M^2 \end{aligned}$$
is an additional source of H+. Because of these equilibria, H+ is available for both the neutralization and hydrolysis reactions.
Fig. 2 A schematic of the idealized reaction scheme. The chemical reactions shown form the basis of the toy mathematical model of the proton concentration.
In addition to this, protons may also be available if acid is added to the system. We capture the reaction scheme as if the added acid is sulfuric acid, as this is the most common addition in the literature:
$$\begin{aligned}&{\mathrm H}_{2}{\mathrm A}({\mathrm {aq}}) \xrightarrow {\mathrm {fast}} {\mathrm {HA}}^{-} + {\mathrm H}^{+} \end{aligned}$$
$$\begin{aligned}{\mathrm {HA}}^{-} & {\overset{ K_{\mathrm{a}}}{\rightleftharpoons }} {\mathrm A}^{2-} + {\mathrm H}^{+}\\ & { K_{\rm{a}}} = \frac{{[{\mathrm {A}}^{2-}][{\mathrm {H}}^{+}]}}{{[{\mathrm {HA}}^{-}]}}= 1\times 10^{-2}\,M \end{aligned}$$
Like others in the literature, we consider the dissociation given in Eq. 7 to be instantaneous. The final aspect to consider is the neutralization of the protons by the ash. As mentioned above, the reaction scheme depends upon the species involved.
Here, we considered a hypothetical oxide MO which reacts according to the following scheme
$$\begin{aligned} \mathrm{MO}(\mathrm{s}) + \mathrm{2H}^{+} \xrightarrow {k_2} \mathrm{M}^{2+}(\mathrm{aq}) + \mathrm{H}_2\mathrm{O} \\ r_2 = k_2 [\mathrm{MO}(\mathrm{s})][\mathrm{H}^+] \end{aligned}$$
$$\begin{aligned}\mathrm{M}^{2+} + 2\mathrm{OH}^- &{\overset{K_{\mathrm {m}}}{\rightleftharpoons }} \mathrm{M}(\mathrm{OH})_2(\mathrm{aq})\\{K_{\rm {m}}}&= \frac{[\mathrm{M}^{2+}(\mathrm{aq})][\mathrm{OH}^-]^2}{[\mathrm{M(OH)}_2]} \rightarrow 0 \end{aligned}$$
As the equilibrium constant \({K}_\mathrm{m}\) is unknown, we simply assign this value to be a very small number to reduce the number of free parameters. It should be noted that we do not characterize a number of the potential secondary reactions in solution, even though they may affect the proton levels to a small degree. For example, we ignore the potential reaction between \({M^{2+}}\) and \({A^{2-}}\) for mathematical transparency, as these do not affect the proton concentration. Having established the chemistry of the toy model, we now construct the mathematical model. We build the model upon two conservation laws: conservation of mass of each of the species found in solution and an overall charge neutrality of the solution. Conservation of mass expresses that the initial moles of a certain species must sum to the total moles of the species in the reaction products. For example, the initial moles of M in \({[\mathrm{{MO}}]_{0}}\) must balance the number of moles of M in the species [MO], \({[\mathrm{M}^{2+}]}\), and \({[\mathrm{{M(OH)}}_2]}\) at any time throughout the course of the reaction. This can be expressed as
$$\begin{aligned}{}[\mathrm{MO}]_0 = [\mathrm{MO}]+ [\mathrm{M}^{2+}] + [\mathrm{M(OH)}_2] = [\mathrm{MO}]+ [\mathrm{M}^{2+}]\left( 1 + \frac{[\mathrm{OH}^-]^2}{K_{\rm {m}}}\right) \end{aligned}$$
through use of the equilibrium relationship given in Eq. (10). In a similar manner, conservation of mass for the species Ac can be expressed as
$$\begin{aligned}{}[\mathrm{XOAc}]_0 = [\mathrm{XOAc}]+ [\mathrm{AcOH}] + [\mathrm{AcO}^-] =[\mathrm{XOAc}]+[\mathrm{AcO}^{-}]\left( 1+{\frac{[\mathrm{H}^{+}]}{K_{\mathrm{AcOH}}}}\right) \end{aligned}$$
and A as
$$\begin{aligned}{}[\mathrm{H}_{2}\mathrm{A}]_0 = [\mathrm{HA^{-}}]+ [\mathrm{A}^{2-}] = [\mathrm{A}^{2-}]\left( 1+\frac{[\mathrm{H}^{+}]}{K_{\rm {a}}}\right) \end{aligned}$$
with use of Eqs. (5) and (8), respectively. To continue, the charge neutrality conservation equation is invoked, i.e.
$$\begin{aligned}{}[\mathrm{AcO}^{-}]+[\mathrm{OH}^{-}]+[\mathrm{HA}^{-}]+2[\mathrm{A}^{2-}] = [\mathrm{H}^{+}]+ 2[\mathrm{M}^{2+}] \end{aligned}$$
which can be expressed as
$$\begin{aligned} \left( \frac{[\mathrm{XOAc}]_0-[\mathrm{XOAc}]}{ 1+\frac{[\mathrm{H}^{+}]}{K_{\mathrm{AcOH}}} }\right) +\frac{K_{\rm {w}}}{[\mathrm{H}^{+}]}+[\mathrm{H}_{2}\mathrm{A}]_0\left( \frac{[\mathrm{H}^{+}]+2K_{\rm {a}}}{[\mathrm{H}^{+}]+K_{\rm {a}}} \right) =[\mathrm{H}^{+}] + 2\left( \frac{[\mathrm{MO}]_0-[\mathrm{MO}]}{ 1+\frac{K_{\rm {w}}^{2}}{K_{\rm {m}}[\mathrm{H}^{+}]^{2}} }\right) \end{aligned}$$
through use of Eqs. (11)–(13).
This equation indicates that the proton concentration in the solution is governed by charge neutralization and is related to the moles of acetic acid formed (first term on the LHS of the equation), the amount of ash neutralized (second term on the RHS of the equation), three different equilibria found in solution \((K_{\rm {m}},\,K_{\rm {a}},\, K_{\rm{AcOH}})\), and the amount of acid initially added (\({[\mathrm{H}_{2}\mathrm{A}]_0}\)). To complete this description, we use the rate expressions given in Eqs. (4) and (9)
$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}[\mathrm{XOAc}] = -k_1[\mathrm{XOAc}][\mathrm{H}^{+}] \qquad [\mathrm{XOAc}(0)] = [\mathrm{XOAc}]_{0} \end{aligned}$$
$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}[\mathrm{MO}] = -k_{2}[\mathrm{MO}][\mathrm{H}^{+}] \qquad [\mathrm{MO}(0)] = [\mathrm{MO}]_{0}, \end{aligned}$$
where the subscript 0 represents the initial concentration of the species. Eqs. (15)–(17) represent the toy model to describe the proton concentration during reaction. The utility of this set of equations was tested in three separate experiments:
1. At long reaction times, where the reactions with XOAc and MO are nearly complete.
2. With the reaction occurring at room temperature, to examine the proposed ash neutralization scheme.
3. At typical reaction temperatures found for prehydrolysis, as a function of initial pH.
Bamboo chips, provided by the Lee & Man Paper Manufacturing Ltd., were stored at 4 °C until used for experimentation. The chips were air dried for approximately 24 h, re-chipped using a Wiley mill (Thomas Scientific, NJ, USA) and screened with a 45–16–9.5 mm stacked sieve system. Chips retained on the 9.5 mm pan were designated as accepts for experimentation. The accepts were washed twice (6 and 4 min, respectively) with distilled water at a liquid to wood (based on the oven dry weight of wood) ratio of L:W = 20:1 using a laboratory mixer. The washed chips were air dried for approximately 24 h and stored at 4 °C until used. Before starting, the chemical composition of the accept chips was analyzed following National Renewable Energy Laboratory (NREL) standard protocols [36]; see Table 1 for a summary of the results. Briefly, the chips were air-dried and ground to pass through 40-mesh using a Wiley mill. The powdered samples were then digested by a two-step H2SO4 hydrolysis protocol. For polysaccharide analysis, acid hydrolysates (liquid samples) were recovered by filtration through medium-porosity filtering crucibles (Fisher Scientific Co., ON, Canada), and an internal standard, fucose, was added. These samples were re-filtered using 0.2 μm syringe filters (Chromatographic Specialties, Inc., ON, Canada) for HPLC. A Dionex ICS 5000+ HPLC system fitted with an AS-AP autosampler was used to separate the monomeric sugars in the samples at 45 °C, against sugar standards, on a Dionex CarboPac SA10 analytical column. 1 mM NaOH at 1 mL/min flow was the mobile phase, and the sugars were quantified using electrochemical detection and Chromeleon software (Thermo Fisher Scientific, MA, USA). High-purity monomeric sugar standards, arabinose, galactose, glucose, xylose and mannose, were purchased from Sigma-Aldrich (ON, Canada). A portion of the filtrate recovered after the two-step acid hydrolysis was analyzed for acid soluble lignin following [37]. Acid insoluble lignin was determined gravimetrically according to Sluiter et al. [36]. TAPPI test method T211 om-02 was followed to determine the total ash content.
Detailed analysis of the metal composition of ash was done using inductively coupled plasma time-of-flight mass spectrometry (ICP-TOFMS) [38]. The α- and β-cellulose content of bamboo were determined according to TAPPI test method T203 om-09. Four separate studies were conducted in this work, as summarized in Table 2. In all cases, bamboo chips and water were mixed at defined liquor-to-wood ratios (L:W, see Table 2) and placed in a 300-mL stainless steel reactor. The total mass of the chips and water for all L:W ratios was kept constant at 217 g; this slurry filled about 80% of the available volume of the reactor. The purpose of the first study (series 1–10) was to characterize the reactor temperature response over time. The reactor was immersed in an oil bath set at a defined temperature, $T_{\mathrm{b}}$. The temperature of the mixture was continually monitored with two thermocouples mounted in the middle of the reactor, on the central plane but at two different radial positions. Upon completion of a run, the reactor was cooled by immersion in an ice-water bath. In the second study (series 11 and 12), conducted to investigate the equilibrium proton concentration after a long period of time, the pH of the bamboo chip–liquid mixture having a liquor-to-wood ratio of L:W = 6.5:1 (wt/wt) was measured as a function of initial acid content after a minimum of 6 h (in some cases 10 h). In the third study, this was repeated but conducted at room temperature for more than 10 h (series 13–16). In the fourth study (series 17–21), the evolution of the proton concentration was measured as a function of time, temperature, and initial acid addition. In this case, L:W = 6.5:1 (wt/wt). Before proceeding to the main findings, it is instructive to first examine the temperature profile in the reactor after immersion in the oil bath. For each experimental condition, the temperature of the reaction mixture (chip and liquid phases) was recorded using two temperature transducers located at different radial positions in the reactor. In all conditions (series 1–21) there was no significant difference between the two transducer signals, and the reactor behaved as if there were no spatial gradients in the system, i.e. it was at a uniform temperature. Two results representative of all runs are shown in Fig. 3. The trend with all data sets was that the heat-up period was 15–20 min, i.e. the heat-up rate was essentially the same. The cool-down rate was approximately 25 °C.
Fig. 3 Temperature evolution for two representative cases. Both thermocouple signals are presented and replicate runs are shown, but the differences between them are not perceptible on this scale. The thermocouples are placed at the same elevation in the reactor but at two different radial positions. The experimental conditions for each series are given in Table 2
It is curious that there are no strong radial temperature gradients in the system. This result is evident both in the pure-water case (series 1–2) and in cases with L:W ratios as low as 6.5:1 (series 3–6). We offer two speculative arguments to explain this. In the first case, we argue that the thermal mass of the steel reactor, i.e. the product of its mass and heat capacity, is significantly larger than that of the reactants. As a result, the temperature response of the reactants is dictated by the heating or cooling of the reactor. The second argument is somewhat more delicate.
It is also possible that convection occurs due to the difference in density of the fluid near the outer wall in comparison to the bulk. Convective currents in the reactor would tend to diminish the radial gradients. Having established that the temperature field is spatially uniform, we pose a second toy model to understand the temperature evolution throughout the reaction. We propose that the temperature profile follows an equation of the form
$$c\frac{\mathrm{d}T}{\mathrm{d}t} = h(T_{\mathrm{b}}-T) \;\Rightarrow\; \bar{T} = \frac{T_{\mathrm{b}}-T}{T_{\mathrm{b}}-T_0} = \exp\left(-\frac{h}{c}t\right) \qquad (18)$$
where c is the product of the effective mass and heat capacity of the reactor and reactants, and h is the overall heat transfer coefficient. The utility of this equation is tested by plotting series 1–10, listed in Table 2, in Fig. 4 using the scalings indicated in Eq. (18). What is evident in this figure is that the system displays nearly exponential behavior, as the experimental data (the red dotted lines) approximately follow Eq. (18), shown as the thick black line. However, we were unable to achieve a similar scaling during the cool-down period.
Table 2 A summary of the experimental conditions tested
Fig. 4 The temperature evolution during the heat-up period for series 1–10. The results have been scaled using the form advanced in Eq. (18) with h/c set to 0.15/min
At this point, we begin to explore the utility of the toy model. The first aspect of the model that we explore is the long-time behavior, examining whether a steady-state proton concentration is possible. Experimentally, the pH was measured at long time by simply allowing the reaction to proceed for at least 315 min at an elevated temperature. From the toy model, we see that a steady-state concentration for [H+] exists and can only be achieved when both the deacetylation and neutralization reactions are complete, i.e.
$$\frac{\mathrm{d}[\mathrm{XOAc}]}{\mathrm{d}t} = \frac{\mathrm{d}[\mathrm{MO}]}{\mathrm{d}t} = 0, \quad \text{thus} \quad [\mathrm{XOAc}] = [\mathrm{MO}] = 0. \qquad (19)$$
Indeed, at steady state the proton concentration may be estimated directly from Eq. (15), i.e.
$$\frac{[\mathrm{XOAc}]_{0}}{1+\frac{[\mathrm{H}^{+}]}{K_{\mathrm{AcOH}}}} + \frac{K_{\mathrm{w}}}{[\mathrm{H}^{+}]} + [\mathrm{H_2A}]_{0}\left(\frac{[\mathrm{H}^{+}]+2K_{\mathrm{a}}}{[\mathrm{H}^{+}]+K_{\mathrm{a}}}\right) = [\mathrm{H}^{+}] + 2\,\frac{[\mathrm{MO}]_{0}}{1+\frac{K_{\mathrm{w}}^{2}}{K_{\mathrm{m}}[\mathrm{H}^{+}]^{2}}} \qquad (20)$$
which is a sixth-order polynomial in [H$^{+}$]. The steady-state concentration is given by the roots of this polynomial, and the behavior of this function is shown in Fig. 5. This equation was solved for [H$^{+}$] in MATLAB using the built-in root-finding procedure. Superimposed on this is the experimental data given as series 11 and 12. Two observations are clearly evident. The first is that the toy model reproduces a remarkably similar trend to the data. The second is that there are two distinct regions. Under autohydrolysis conditions, i.e. the right-hand portion of the graph, the steady-state (or long-time) pH is independent of the initial pH. Here, the steady-state pH is governed by the weak-acid equilibrium and by ash neutralization or buffering. With increasing levels of added acid, we find that the long-time pH approximately equals the initial pH. This is shown on the left-hand portion of Fig. 5.
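Before turning to Fig. 5, the root-finding step for Eq. (20) can be sketched as follows. This is a minimal Python version of the MATLAB procedure described above; the equilibrium constants are illustrative placeholders ($K_{\mathrm{w}}$ and $K_{\mathrm{AcOH}}$ are room-temperature textbook values, $K_{\mathrm{a}}$ for the added acid H2A is assumed), while $K_{\mathrm{m}}$, $[\mathrm{XOAc}]_0$, and $[\mathrm{MO}]_0$ use the regressed values quoted in the Fig. 5 caption.

```python
# Minimal sketch of solving the steady-state charge balance, Eq. (20), for [H+].
# Constants marked "assumed" are illustrative, not the values used in the paper.
import math
from scipy.optimize import brentq

K_w, K_a, K_AcOH, K_m = 1e-14, 1.2e-2, 1.75e-5, 1e-17  # K_a assumed (M units)
XOAc0, MO0 = 0.025, 0.001                              # M, from the Fig. 5 caption

def charge_balance(H, H2A0):
    """Residual of Eq. (20): LHS minus RHS as a function of [H+]."""
    lhs = XOAc0 / (1 + H / K_AcOH) + K_w / H + H2A0 * (H + 2 * K_a) / (H + K_a)
    rhs = H + 2 * MO0 / (1 + K_w**2 / (K_m * H**2))
    return lhs - rhs

# Sweep the initial acid charge and solve for the steady-state proton level.
for H2A0 in (0.0, 1e-4, 1e-3, 1e-2):
    H_ss = brentq(charge_balance, 1e-12, 1.0, args=(H2A0,))
    print(f"[H2A]_0 = {H2A0:.0e} M  ->  steady-state pH = {-math.log10(H_ss):.2f}")
```

With no added acid the root sits near pH 3.5–4, reproducing the plateau seen on the right-hand side of Fig. 5; with increasing acid the steady-state pH tracks the initial pH, as on the left-hand side.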
Fig. 5 The effect of initial pH on the steady-state pH measured after long-time prehydrolysis. The dashed line represents the toy model with the equilibrium constants given previously in the text. The value of $K_{\mathrm{m}}$ is taken to be small; for practical purposes, $K_{\mathrm{m}} = 10^{-17}\,\mathrm{M}$ was used for this calculation. The two remaining parameters, set to $[\mathrm{XOAc}]_0 = 0.025\,\mathrm{M}$ and $[\mathrm{MO}]_0 = 0.001\,\mathrm{M}$, were determined through regression. Series 11 represents the reaction at 160 °C and series 12 at 180 °C
These results support the kinetic modeling for xylan removal under dilute-acid conditions. As discussed in the introduction, a number of authors have assigned the proton concentration to be constant and equal to its initial value (see [11, 12, 24, 31], for example). However, under autohydrolysis conditions this does not hold: there is a vast difference between the initial and the steady-state pH of the system. We continue the discussion of the toy model and examine a second limiting case in which the reaction proceeds at room temperature; see Fig. 6. Here, four cases were examined in which the amount of acid initially added was varied. At room temperature, it can be assumed that the deacetylation reaction proceeds at a much slower rate than the ash neutralization scheme. Under this assumption, the toy model reduces to
$$\frac{\mathrm{d}}{\mathrm{d}t}[\mathrm{MO}] = -k_2[\mathrm{MO}][\mathrm{H}^{+}] \qquad (21)$$
$$\frac{K_{\mathrm{w}}}{[\mathrm{H}^{+}]} + [\mathrm{H_2A}]_{0}\left(\frac{[\mathrm{H}^{+}]+2K_{\mathrm{a}}}{[\mathrm{H}^{+}]+K_{\mathrm{a}}}\right) = [\mathrm{H}^{+}] + 2\,\frac{[\mathrm{MO}]_{0}-[\mathrm{MO}]}{1+\frac{K_{\mathrm{w}}^{2}}{K_{\mathrm{m}}[\mathrm{H}^{+}]^{2}}} \qquad (22)$$
which has been solved numerically in MATLAB using the stiff solver ode23s coupled with a root-finding procedure for the proton concentration; the equations are solved simultaneously. As shown in Fig. 6, at low initial pH (series 13) the pH is constant, as the concentration of added acid is in excess of the neutralization potential of the ash. With decreasing initial added acid (series 14–15), the neutralization reaction proceeds until all the ash is reacted. With series 16, where no acid was added, the pH varies weakly with time. We interpret this result through the toy model and advance the argument that the neutralization reaction proceeds but its kinetics are extremely slow due to the low proton concentration.
Fig. 6 Examination of the pH when the reaction proceeds at room temperature. To evaluate the toy model, the same values for the constants given in the caption of the previous figure are used. In addition, $k_2 = 10\,\mathrm{M}^{-1}\,\mathrm{min}^{-1}$, as determined by regression
We now move to perhaps the main findings of this work and examine the evolution of pH during prehydrolysis treatment. In our final set of experiments, we examine the evolution of the proton concentration at elevated temperatures. Here we must include the effect of deacetylation and, as a result, the full toy model must be solved numerically using MATLAB. We treat Eqs. (16)–(17) as a system of equations and solve this initial value problem in conjunction with a root-finding procedure to estimate [H$^{+}$] from Eq. (15); a sketch of this coupled procedure is given below. The results are shown in Fig. 7. Again, at low initial pH the proton concentration varies weakly with time (cf. series 17 and 18) and remains essentially at its initial value.
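The coupled integration and root-finding procedure can be sketched as below, again in Python rather than the authors' MATLAB (ode23s). The rate constants k1 and k2 are illustrative assumptions, and the equilibrium constants and initial concentrations reuse the placeholder values from the previous sketch.

```python
# Sketch of the full toy model: integrate Eqs. (16)-(17) while recovering [H+]
# at each state from the charge balance, Eq. (15). Constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

K_w, K_a, K_AcOH, K_m = 1e-14, 1.2e-2, 1.75e-5, 1e-17
XOAc0, MO0, H2A0 = 0.025, 0.001, 1e-4        # M
k1, k2 = 0.5, 10.0                            # 1/(M min), assumed

def proton(XOAc, MO):
    """Solve the charge balance, Eq. (15), for [H+] at the current state."""
    def f(H):
        lhs = ((XOAc0 - XOAc) / (1 + H / K_AcOH) + K_w / H
               + H2A0 * (H + 2 * K_a) / (H + K_a))
        rhs = H + 2 * (MO0 - MO) / (1 + K_w**2 / (K_m * H**2))
        return lhs - rhs
    return brentq(f, 1e-12, 1.0)

def rhs(t, y):
    """Rate expressions, Eqs. (16)-(17), with [H+] from the charge balance."""
    XOAc, MO = y
    H = proton(XOAc, MO)
    return [-k1 * XOAc * H, -k2 * MO * H]

sol = solve_ivp(rhs, (0.0, 300.0), [XOAc0, MO0], method="LSODA", max_step=1.0)
pH = [-np.log10(proton(x, m)) for x, m in zip(sol.y[0], sol.y[1])]
print(f"pH: initial {pH[0]:.2f} -> final {pH[-1]:.2f}")
```

Depending on the ratio k2/k1, this sketch reproduces the qualitative behaviors discussed in the text: a near-constant pH in excess acid, and non-monotonic pH when neutralization initially outpaces deacetylation.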
This was expected: as demonstrated earlier through the steady-state analysis, under excess-acid conditions the pH should remain essentially constant. Below this limit, complex behavior is observed. Under autohydrolysis conditions (series 20 and 21), there is a rapid initial drop in pH followed by a diminished rate at longer times. However, the most curious result is given by series 21, where non-monotonic behavior is observed, i.e. the pH initially rises and then falls. We base our interpretation on the toy model, which indicates that the neutralization reaction initially proceeds faster than deacetylation. At longer times, the ash is completely reacted and the pH diminishes through deacetylation.
Fig. 7 Evolution of pH under prehydrolysis conditions. To help interpret this figure, we relate the lowest-initial-pH experiments to the amount of added acid: series 17, 0.25% (wt/wt), and series 18, 0.5% (wt/wt)
These results can now be used to interpret the form of the rate constant used for xylan removal. As seen in Eq. (3), the rate constant under dilute-acid conditions is related to the initial pH. This is quite reasonable, as we have shown that the pH should be essentially constant during the course of the reaction. We, however, cannot make any comment on the value of n in this equation. Below this limit, the behavior of [H$^{+}$] is very complex. Simple linear functions may indeed apply for particular systems of interest. However, this cannot be generalized, as the pH response depends strongly on the rate of ash neutralization in comparison to the rate of deacetylation. In this work, the evolution of the proton concentration was examined during the hydrolysis of bamboo chips. At issue was the seemingly disparate model descriptions in the literature, which treat dilute-acid conditions differently than autohydrolysis conditions. We have attempted to address this issue by posing a "toy model" in which we have included a number of chemical components to help describe the reaction. We advance that the proton concentration is governed by charge neutrality in the solution and influenced by:
- the weak-acid equilibrium formed from the deacetylation of the acetyl group from the xylan,
- the equilibrium created by water dissociation,
- ash neutralization and the associated equilibrium in solution,
- the added acid.
There are a number of outcomes from the toy model which have been tested experimentally. The first, and perhaps most significant, outcome is that the toy model predicts the existence of a steady-state solution. The steady-state value is dictated by the equilibrium constants and the initial added acid and acetyl group levels. The model qualitatively follows the trend given by experiment. It is difficult to perform a quantitative comparison, as auxiliary relationships, such as the variation of the equilibrium constants with temperature, are not known. The model was also tested at room temperature to examine the changes in pH when ash neutralization is the dominant mechanism. Under these conditions we find, surprisingly, that the pH remains essentially constant both at low initial pH and under autohydrolysis conditions. Acid is likely in excess of the neutralization potential of the ash in the former case, and the kinetics of neutralization become exceedingly slow in the latter case due to the low proton concentration. Finally, when the hydrolysis reaction proceeded at elevated temperatures, we found one case of non-monotonic behavior in which the pH initially increased and then decreased at longer times.
This is attributed to the difference in rates between the neutralization and deacetylation reactions. As described in the introduction, the evolution of the proton concentration during prehydrolysis is poorly modeled using empirical functions [Eq. (5)] which are not rooted in a proper chemical reaction scheme. With our toy model, we propose a chemical reaction pathway that satisfactorily describes the experimentally determined proton concentration under both autohydrolysis and dilute-acid hydrolysis conditions. Accurate modeling of the proton concentration would significantly improve the existing kinetic models of hemicellulose hydrolysis and facilitate more efficient process optimization and scale-up.
List of symbols
- acetyl group: H3C–C(=O)−
- arabinose
- empirical constant in Eq. (5)
- c: product of the effective mass and heat capacity of the reactor and reactants
- E_ai: activation energy of reaction i
- H2A: acid species
- h: overall heat transfer coefficient
- K_i: equilibrium constant at room temperature for reaction i
- k_i: rate constant for reaction i
- k_oi: pre-exponential factor for rate constant k_i
- LHS: left-hand side
- L:W: liquid-to-wood ratio
- MO: oxide of ash species M with a valence state of 2+
- n: empirical constant describing the power-law dependence of k_i on the proton concentration
- RHS: right-hand side
- T_b: oil bath temperature
- xylose
References
1. Luo X, Ma X, Hu H, Li C, Cao S, Huang L, Chen L. Kinetic study of pentosan solubility during heating and reacting processes of steam treatment of green bamboo. Bioresour Technol. 2013;130:769–76.
2. Scurlock J, Dayton D, Hames B. Bamboo: an overlooked biomass resource? Biomass Bioeng. 2000;19:229–44.
3. Littlewood J, Wang L, Turnbull C, Murphy RJ. Techno-economic potential of bioethanol from bamboo in China. Biotechnol Biofuels. 2013;6:173–86.
4. He J, Cui S, Wang SY. Preparation and crystalline analysis of high-grade bamboo dissolving pulp for cellulose acetate. J Appl Polym Sci. 2008;107(2):1029–38.
5. Sixta H. Handbook of pulp. New York: John Wiley and Sons Ltd; 2006.
6. Trajano HL, Wyman CE. Fundamentals of biomass pretreatment at low pH. In: Wyman CE, editor. Aqueous pretreatment of plant biomass for biological and chemical conversion to fuels and chemicals. New York: John Wiley and Sons, Ltd.; 2013. p. 103–28.
7. Sella-Kapu N, Trajano HL. Review of hemicellulose hydrolysis in softwoods and bamboo. Biofuels Bioprod Bioref. 2014;8:857–70.
8. Neupane S, Adhikari S, Wang Z, Ragauskas A, Pu Y. Effect of torrefaction on biomass structure and hydrocarbon production from fast pyrolysis. Green Chem. 2015;17:2406–17.
9. Shoulaifar TK. Chemical changes in biomass during torrefaction. Ph.D. thesis, Åbo Akademi University; 2016.
10. Saeman JF. Kinetics of wood saccharification: hydrolysis of cellulose and decomposition of sugars in dilute acid at high temperature. Ind Eng Chem Res. 1945;37:42–52.
11. Kobayashi T, Sakai Y. Hydrolysis rate of pentosan of hardwood in dilute sulfuric acid. Bull Agr Chem Soc Jpn. 1956;20:1–7.
12. Esteghlalian A, Hashimoto AG, Fenske JJ, Penner MH. Modeling and optimization of the dilute sulfuric acid pretreatment of corn stover, poplar and switchgrass. Bioresour Technol. 1997;59:129–36.
13. Ma X, Huang L, Chen Y, Chen L. Preparation of bamboo dissolving pulp for textile production. Part 1: study on prehydrolysis of green bamboo for producing dissolving pulp. BioResources. 2011;6:1428–39.
14. Cahela DR, Lee YY, Chambers RP. Modeling of percolation process in hemicellulose hydrolysis. Biotechnol Bioeng. 1983;25:3–17.
15. Conner AH, Lorenz LF. Kinetic modeling of hardwood prehydrolysis. Part III.
Water and dilute acetic acid prehydrolysis of southern red oak. Wood Fiber Sci. 1986;18:248–63.
16. Tillman LM, Lee YY, Torget R. Effect of transient acid diffusion on pretreatment/hydrolysis of hardwood hemicellulose. Appl Biochem Biotechnol. 1990;24–25:103–13.
17. Carrasco F, Roy C. Kinetic study of dilute acid prehydrolysis of xylan-containing biomass. Biotechnol Bioeng. 1992;26:189–208.
18. Garrote G, Dominguez H, Parajo J. Study on the deacetylation of hemicelluloses during the hydrothermal processing of eucalyptus wood. Holz als Roh- und Werkstoff. 2001;59:53–9.
19. Kim SB, Lee YY. Diffusion of sulfuric acid within lignocellulosic biomass particles and its impact on dilute-acid pretreatment. Bioresour Technol. 2002;83:165–71.
20. Brennan M, Wyman C. Initial evaluation of simple mass transfer models to describe hemicellulose hydrolysis in corn stover. Appl Biochem Biotechnol. 2004;113–116:965–76.
21. Hosseini SA, Shah N. Multiscale modelling of hydrothermal biomass pretreatment for chip size optimization. Bioresour Technol. 2009;100:2621–8.
22. Morinelly JE, Jensen JR, Browne M, Co TB, Shonnard DR. Kinetic characterization of xylose monomer and oligomer concentrations during dilute acid pretreatment of lignocellulosic biomass from forests and switchgrass. Ind Eng Chem Res. 2009;48:9877–84.
23. Mittal A, Chatterjee SG, Scott GM, Amidon TE. Modeling xylan solubilization during autohydrolysis of sugar maple and aspen wood chips: reaction kinetics and mass transfer. Chem Eng Sci. 2009;64:3031–41.
24. Shen J, Wyman CE. A novel mechanism and kinetic model to explain enhanced xylose yields from dilute sulfuric acid compared to hydrothermal pretreatment of corn stover. Bioresour Technol. 2011;102:9111–20.
25. Visuri JA, Song T, Kuitunen S, Alopaeus V. Model for degradation of galactoglucomannan in hot water extraction conditions. Ind Eng Chem Res. 2012;51:10338–44.
26. Aguilar R, Ramirez J, Garrote G, Vázquez M. Kinetic study of the acid hydrolysis of sugar cane bagasse. J Food Eng. 2002;55:309–18.
27. Liu X, Lu M, Ai N, Yu F, Ji J. Kinetic model analysis of dilute sulfuric acid-catalyzed hemicellulose hydrolysis in sweet sorghum bagasse for xylose production. Ind Crops Prod. 2012;38:81–6.
28. Zhao X, Zhou Y, Liu D. Kinetic model for glycan hydrolysis and formation of monosaccharides during dilute acid hydrolysis of sugarcane bagasse. Bioresour Technol. 2012;105:160–8.
29. Maloney MT, Chapman TW, Baer AJ. Dilute acid hydrolysis of paper birch: kinetics studies of xylan and acetyl-group hydrolysis. Biotechnol Bioeng. 1985;27:355–61.
30. Malester I, Green M, Shelef G. Kinetics of dilute acid hydrolysis of cellulose originating from municipal solid wastes. Ind Eng Chem Res. 1992;31:1998–2003.
31. Wyman C, Decker S, Himmel ME, Brady J, Skopec CE, Viikari L. Hydrolysis of cellulose and hemicellulose. In: Dumitriu S, editor. Polysaccharides: structural diversity and functional versatility. 2nd ed. Boca Raton: CRC Press; 2004. p. 995–1034.
32. Lloyd T, Wyman CE. Predicted effects of mineral neutralization and bisulfate formation on hydrogen ion concentration for dilute sulfuric acid pretreatment. Appl Biochem Biotechnol. 2004;113–116:1013–22.
33. Lloyd T, Wyman CE. Combined sugar yields for dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis of the remaining solids. Bioresour Technol. 2005;96:1967–77.
34. Hu H-C, Chai X-S, Zhan H-Y, Barnes D, Huang L-L, Chen L-H. Hydrogen ion catalytic kinetic model of hot water pre-extraction for production of biochemicals derived from hemicellulose using moso bamboo (Phyllostachys pubescens). Ind Eng Chem Res.
2014;53(29):11684–90.
35. Hong B, Xue G, Guo X, Weng L. Kinetic study of oxalic acid pretreatment of moso bamboo for textile fiber. Cellulose. 2013;20:645–53.
36. Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, Crocker D. Determination of structural carbohydrates and lignin in biomass: laboratory analytical procedure. Golden: National Renewable Energy Laboratory; 2012.
37. Dence CW. The determination of lignin. In: Methods in lignin chemistry. Berlin: Springer; 1992. p. 33–61.
38. Benkhedda K, Infante HG, Ivanova E, Adams FC. Trace metal analysis of natural waters and biological samples by axial inductively coupled plasma time of flight mass spectrometry (ICP-TOFMS) with flow injection on-line adsorption preconcentration using a knotted reactor. J Anal At Spectrom. 2000;15(10):1349–56.
Authors' contributions: NSK, ZY and XC carried out the experimental studies. NSK, ZY, RB, HLT and DMM posed the toy model. RB and DMM conceived the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
Acknowledgements: Financial support from Vantage Dragon and the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged.
Author affiliations: Department of Chemical and Biological Engineering, University of British Columbia, 2360 East Mall, Vancouver, V6T 1Z3, Canada (Nuwan Sella Kapu, Zhaoyang Yuan, Xue Feng Chang, D. Mark Martinez, Heather L. Trajano); Chemical and Environmental Technology, British Columbia Institute of Technology, 3700 Willingdon Ave, Burnaby, V5G 3H2, Canada (Xue Feng Chang, Rodger Beatson); Department of Wood Science, University of British Columbia, 2424 Main Mall, Vancouver, V6T 1Z4, Canada (Rodger Beatson). Correspondence to Heather L. Trajano.
Citation: Kapu NS, Yuan Z, Chang XF, et al. Insight into the evolution of the proton concentration during autohydrolysis and dilute-acid hydrolysis of hemicellulose. Biotechnol Biofuels. 2016;9:224. https://doi.org/10.1186/s13068-016-0619-6
Keywords: autohydrolysis; dilute-acid hydrolysis; hemicellulose; proton concentration
Urban Cooling Model
Contents
- Urban Cooling Model
- Cooling Capacity Index
- Urban Heat Mitigation Index (Effect of Large Green Spaces)
- Air Temperature Estimates
- Value of Heat Reduction Service
- Appendix: Data Sources and Guidance for Parameter Selection
- Reference Evapotranspiration
- Green Area Maximum Cooling Distance
- Baseline Air Temperature
- Magnitude of the UHI Effect
- Air Temperature Maximum Blending Distance
- Energy Consumption Table
- Average Relative Humidity
Urban heat mitigation (HM) is a priority for many cities that have undergone heat waves in recent years. Vegetation can help reduce the urban heat island (UHI) effect by providing shade, modifying thermal properties of the urban fabric, and increasing cooling through evapotranspiration. This has consequences for the health and wellbeing of citizens through reduced mortality and morbidity, increased comfort and productivity, and the reduced need for air conditioning (A/C). The InVEST urban cooling model calculates an index of heat mitigation based on shade, evapotranspiration, and albedo, as well as distance from cooling islands (e.g. parks). The index is used to estimate a temperature reduction by vegetation. Finally, the model estimates the value of the heat mitigation service using two (optional) valuation methods: energy consumption and work productivity.
UHIs affect many cities around the world, with major consequences for human health and wellbeing: high mortality or morbidity during heat waves, high A/C consumption, and reduced comfort or work productivity. The UHI effect, i.e. the difference between rural and urban temperatures, is a result of the unique characteristics of cities due to two main factors: the thermal properties of materials used in urban areas (e.g. concrete, asphalt), which store more heat, and the reduction of the cooling effect (through shade and evapotranspiration) of vegetation. Natural infrastructure therefore plays a role in reducing UHIs in cities. Using the rapidly-growing literature on urban heat modeling (Deilami et al. 2018), the InVEST urban cooling model estimates the cooling effect of vegetation based on commonly available data on climate, land use/land cover (LULC), and (optionally) A/C use.
Cooling Capacity Index
The model first computes the cooling capacity (CC) index for each pixel based on local shade, evapotranspiration, and albedo. This approach is based on the indices proposed by Zardo et al. 2017 and Kunapo et al. 2018, to which we add albedo, an important factor for heat reduction. The shade factor ('shade') represents the proportion of tree canopy (≥2 m in height) associated with each land use/land cover (LULC) category. Its value lies between 0 and 1. The evapotranspiration index (ETI) represents a normalized value of potential evapotranspiration, i.e., the evapotranspiration from vegetation (or evaporation from soil, for unvegetated areas). It is calculated for each pixel by multiplying the reference evapotranspiration (\(ET0\), provided by the user) and the crop coefficient (\(Kc\), associated with the pixel's LULC type), and dividing by the maximum value of the \(ET0\) raster in the area of interest, \(ETmax\):
(96)\[ETI = \frac{K_c \cdot ET0}{ET_{max}}\]
Note that this equation assumes that vegetated areas are sufficiently irrigated (although Kc values can be reduced to represent water-limited evapotranspiration). The albedo factor is a value between 0 and 1 representing the proportion of solar radiation reflected by the LULC type (Phelan et al. 2015).
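Before the three factors are combined, the ETI term of equation (96) can be illustrated with a small sketch. This is not InVEST's actual implementation; the raster sizes, class codes, and Kc values below are illustrative assumptions.

```python
# Small sketch of the ETI computation in equation (96), assuming a LULC raster
# of integer class codes and a per-class Kc lookup from the Biophysical Table.
import numpy as np

lulc = np.array([[1, 1, 2],
                 [2, 3, 3]])                  # hypothetical LULC class codes
kc_by_class = {1: 1.0, 2: 0.6, 3: 0.1}        # hypothetical crop coefficients
et0 = np.array([[3.0, 3.2, 3.1],
                [2.9, 3.0, 3.3]])             # reference ET (mm), per pixel

kc = np.vectorize(kc_by_class.get)(lulc)      # map each pixel's class to Kc
eti = kc * et0 / et0.max()                    # equation (96), normalized to [0, 1]
print(eti.round(2))
```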
The model combines the three factors in the CC index:
(97)\[CC_i = 0.6 \cdot shade + 0.2 \cdot albedo + 0.2 \cdot ETI\]
The recommended weighting (0.6; 0.2; 0.2) is based on empirical data and reflects the higher impact of shading compared to evapotranspiration. For example, Zardo et al. 2017 report that "in areas smaller than two hectares [evapotranspiration] was assigned a weight of 0.2 and shading of 0.8. In areas larger than two hectares the weights were changed to 0.6 and 0.4, for [evapotranspiration] and shading respectively". In the present model, we propose to disaggregate the effects of shade and albedo in equation (97), and give albedo equal weight to ETI based on the results by Phelan et al. 2015 (see Table 2 in their study, showing that vegetation and albedo have similar coefficients). Note: alternative weights can be manually entered by the user to test the sensitivity of model outputs to this parameter (or if local knowledge is available). Optionally, the model can consider another factor, intensity (\(building.intensity\) for a given landcover classification), which captures the vertical dimension of built infrastructure. Building intensity is an important predictor of nighttime temperature, since heat stored by buildings during the day is released during the night. To predict nighttime temperatures, users need to provide the building intensity factor for each land use class in the Biophysical Table, and the model will change equation (97) to:
(98)\[CC_i = 1 - building.intensity\]
Urban Heat Mitigation Index (Effect of Large Green Spaces)
To account for the cooling effect of large green spaces (>2 ha) on surrounding areas (see discussion in Zardo et al. 2017 and McDonald et al. 2016), the model calculates the urban HM index: HM is equal to CC if the pixel is unaffected by any large green spaces, but otherwise set to a distance-weighted average of the CC values from the large green spaces and the pixel of interest. To do so, the model first computes the area of green spaces within a search distance \(d_{cool}\) around each pixel (\(GA_i\)), and the CC provided by each park (\(CC_{park_i}\)):
(99)\[GA_{i} = cell_{area} \cdot \sum_{j \in d \text{ radius from } i} g_{j}\]
(100)\[CC_{park_i} = \sum_{j \in d \text{ radius from } i} g_j \cdot CC_j \cdot e^{\left(\frac{-d(i,j)}{d_{cool}}\right)}\]
where \(cell_{area}\) is the area of a cell in ha, \(g_j\) is 1 if pixel \(j\) is green space and 0 otherwise, \(d(i,j)\) is the distance between pixels \(i\) and \(j\), \(d_{cool}\) is the distance over which a green space has a cooling effect, and \(CC_{park_i}\) is the distance-weighted average of the CC values attributable to green spaces. (Note that LULC classes that qualify as "green spaces" are determined by the user with the parameter 'green_area' in the Biophysical Table; see the Input table in Section 3.) Next, the HM index is calculated as:
(101)\[HM_i = \begin{cases} CC_i & \text{if } CC_i \geq CC_{park_i} \text{ or } GA_i < 2\ \text{ha} \\ CC_{park_i} & \text{otherwise} \end{cases}\]
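A simplified sketch of the green-space adjustment in equations (99)–(101) is given below. A real implementation would use convolution kernels for speed; the brute-force loop here is only meant to make the math concrete, and all names are illustrative.

```python
# Sketch of equations (99)-(101): distance-weighted park cooling around each pixel.
import numpy as np

def heat_mitigation(cc, green, cell_area_ha, d_cool_px):
    """cc: CC raster; green: 0/1 green-space mask; d_cool_px: d_cool in pixels."""
    rows, cols = cc.shape
    hm = cc.copy()
    r = int(np.ceil(d_cool_px))
    for i in range(rows):
        for j in range(cols):
            ga, cc_park = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    dist = np.hypot(di, dj)
                    if 0 <= ii < rows and 0 <= jj < cols and dist <= d_cool_px:
                        ga += cell_area_ha * green[ii, jj]          # Eq. (99)
                        cc_park += (green[ii, jj] * cc[ii, jj]
                                    * np.exp(-dist / d_cool_px))    # Eq. (100)
            if ga >= 2.0 and cc_park > cc[i, j]:                    # Eq. (101)
                hm[i, j] = cc_park
    return hm
```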
Air Temperature Estimates
To estimate heat reduction throughout the city, the model uses the (city-scale) UHI magnitude, \(UHI_{max}\). Users can obtain UHI values from local literature or global studies: for example, the Global Surface UHI Explorer developed by Yale University provides estimates of annual, seasonal, daytime, and nighttime UHI (https://yceo.users.earthengine.app/view/uhimap). Note that UHI magnitude is defined for a specific period (e.g. current or future climate) and time (e.g. nighttime or daytime temperatures). The selection of period and time will affect the service quantification and valuation. Air temperature without air mixing, \(T_{air_{nomix}}\), is calculated for each pixel as:
(102)\[T_{air_{nomix},i} = T_{air,ref} + (1 - HM_i) \cdot UHI_{max}\]
where \(T_{air,ref}\) is the rural reference temperature and \(UHI_{max}\) is the maximum magnitude of the UHI effect for the city (or, more precisely, the difference between \(T_{air,ref}\) and the maximum temperature observed in the city). Due to air mixing, these temperatures average spatially. Actual air temperature (with mixing), \(T_{air}\), is derived from \(T_{air_{nomix}}\) using a Gaussian function with kernel radius \(r\), defined by the user. For each area of interest (a vector GIS layer provided by the user), we calculate the average temperature and the temperature anomaly \((T_{air,i} - T_{air,ref})\).
Value of Heat Reduction Service
The value of temperature reduction can be assessed in at least three ways: energy savings from reduced A/C electricity consumption; gain in work productivity for outdoor workers; and decrease in heat-related morbidity and mortality. The model provides estimates of (i) energy savings and (ii) work productivity based on global regression analyses or local data. Energy savings: the model uses a relationship between energy consumption and temperature (e.g. summarized by Santamouris et al. 2015) to calculate energy savings and associated costs for a building \(b\):
(103)\[Energy.savings(b) = consumption.increase(b) \cdot (\overline{T_{air,MAX} - T_{air,i}})\]
where \(consumption.increase(b)\) (kWh/°C/\(m^2\)) is the local estimate of the energy consumption increase per degree of temperature per square meter of the building footprint, for building category \(b\); \(T_{air,MAX}\) (°C) is the maximum temperature over the landscape \((T_{air,ref} + UHI_{max})\); and \(\overline{T_{air,MAX} - T_{air,i}}\) (°C) is the average difference in air temperature for building \(b\), with \(T_{air,i}\) modeled in the previous steps. If costs are provided for each building category, equation (103) is replaced by equation (104):
(104)\[Energy.savings(b) = consumption.increase(b) \cdot (\overline{T_{air,MAX} - T_{air,i}}) \cdot cost(b)\]
where \(cost(b)\) is the estimate of energy cost per kWh for building category \(b\). Note that this is very likely to be equal for all buildings. To calculate total energy savings, we sum the pixel-level values over the area of interest.
Work Productivity: the model converts air temperature into Wet Bulb Globe Temperature (WBGT) to calculate the impacts of heat on work productivity. WBGT takes into account humidity and can be estimated from standard meteorological data in the following way (American College of Sports Medicine, 1984, Appendix I):
(105)\[WBGT_i = 0.567 \cdot T_{air,i} + 0.393 \cdot e_i + 3.94\]
where \(T_{air}\) is the temperature provided by the model (dry-bulb temperature, °C) and \(e_i\) is the water vapor pressure (hPa). Vapor pressure is calculated from temperature and relative humidity using the equation:
(106)\[e_i = \frac{RH}{100} \cdot 6.105 \cdot e^{\left(17.27 \cdot \frac{T_{air,i}}{237.7 + T_{air,i}}\right)}\]
where \(RH\) is the average relative humidity (%) provided by the user.
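Equations (105)–(106) translate directly into a short helper. This sketch simply mirrors the formulas above; it is not InVEST source code.

```python
# WBGT from modeled air temperature and relative humidity, per Eqs. (105)-(106).
import math

def wbgt(t_air_c, rh_percent):
    """Wet Bulb Globe Temperature (degC) from dry-bulb T (degC) and RH (%)."""
    e = rh_percent / 100.0 * 6.105 * math.exp(
        17.27 * t_air_c / (237.7 + t_air_c))   # Eq. (106), vapor pressure in hPa
    return 0.567 * t_air_c + 0.393 * e + 3.94  # Eq. (105)

print(f"WBGT at 32 degC, 60% RH: {wbgt(32.0, 60.0):.1f} degC")  # ~33.2 degC
```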
For each pixel, the model computes the estimated loss in productivity (%) for two work intensities, "light work" and "heavy work", based on the rest time needed at different work intensities, as per Table 2 in Kjellstrom et al. 2009:
(107)\[Loss.light.work_i = \begin{cases} 0 & \text{if } WBGT < 31.5 \\ 25 & \text{if } 31.5 \leq WBGT < 32.0 \\ 50 & \text{if } 32.0 \leq WBGT < 32.5 \\ 75 & \text{if } 32.5 \leq WBGT \end{cases}\]
(108)\[Loss.heavy.work_i = \begin{cases} 0 & \text{if } WBGT < 27.5 \\ 25 & \text{if } 27.5 \leq WBGT < 29.5 \\ 50 & \text{if } 29.5 \leq WBGT < 31.5 \\ 75 & \text{if } 31.5 \leq WBGT \end{cases}\]
Here, "light work" corresponds to approximately 200 W metabolic rate, i.e. office desk work and service industries, and "heavy work" corresponds to 400 W, i.e. construction or agricultural work. If city-specific data on the distribution of gross labor sectors are not available, the user can estimate the working population of the city in three sectors (service, industry, agriculture) using national-level World Bank data (e.g. "employment in industry, male (%)" and similar). The loss of work time for a given temperature can be calculated using the resting times in Table 2 (Kjellstrom et al. 2009) and the proportion of the working population in the different sectors. If local data are available on average hourly salaries for the different sectors, these losses in work time can be translated into monetary losses. Finally, for "light work", note that A/C prevalence can play a role: if most office buildings are equipped with A/C, the user might want to reduce the loss of work time for the service sector by the same proportion as A/C prevalence.
Due to the simplifications described above, the model presents a number of limitations, which are summarized here.
CC index: the CC index relies on empirical weights, derived from a limited number of case studies, which modulate the effect of the key factors contributing to the cooling effect (equation (97)). This weighting step carries high uncertainties, as reviewed in Zardo et al. 2017. To characterize and reduce this uncertainty, users can test the sensitivity of the model to these parameters or conduct experimental studies that provide insights into the relative effects of shade, albedo, and evapotranspiration.
Effect of large parks and air mixing: two parameters capture the effect of large green spaces and air mixing (\(d_{cool}\) and \(r\)). The values of these parameters are difficult to derive from the literature, as they vary with vegetation properties, climate (effect of large green spaces), and wind patterns (air mixing). Similar to CC, users can characterize and reduce these uncertainties by testing the sensitivity of the model to these parameters and comparing the spatial patterns of temperature estimated by the model with observed or modeled data (see Bartesaghi et al. 2018 and Deilami et al. 2018 for additional insights into such comparisons).
Valuation options: the valuation options currently supported by the model are related to A/C energy consumption and outdoor work productivity. For A/C energy consumption, users need to assess A/C prevalence and reduce estimates accordingly (i.e. reduce energy consumption proportionally to the actual use of A/C). Valuation of the health effects of urban heat is not currently included in the model, despite their importance (McDonald et al. 2016). This is because these effects vary dramatically across cities, and it is difficult to extrapolate current knowledge, based predominantly in the Global North (Campbell et al. 2018). Possible options to obtain health impact estimates include: using global data from McMichael et al.
2003, who use a linear relationship above a threshold temperature to estimate the annual attributable fraction of deaths due to hot days or, for applications in the US, a methodology was developed based on national-scale relationships between mortality and temperature change: see McDonald et al. 2016. Gasparrini et al. 2014 break down the increase in mortality attributable to heat for 384 cities in 13 countries. \(T_air\) output from the InVEST model could be used to determine the mortality fraction attributable to heat (first determine in which percentile \(T_{air,i}\) falls, then use Table S3 or Table S4 in the appendix). Workspace (required): Folder where model outputs will be written. Make sure that there is ample disk space and that write permissions are correct. Results suffix (optional): Text string that will be appended to the end of output file names, as "_Suffix". Use a suffix to differentiate model runs, for example by providing a short name for each scenario. If a suffix is not provided, or is unchanged between model runs, the tool will overwrite previous results. Land Cover Map (required): Raster of LULC for each pixel, where each unique integer represents a different LULC class. All values in this raster MUST have corresponding entries in the Biophysical Table. The model will use the resolution of this layer to resample all outputs. The resolution should be small enough to capture the effect of green spaces in the landscape, although LULC categories can comprise a mix of vegetated and non-vegetated covers (e.g. "residential", which may have 30% canopy cover). Biophysical Table (required): A .csv (Comma Separated Values) table containing model information corresponding to each of the land use classes in the LULC. All classes in the LULC raster MUST have corresponding values in this table. Each row is an LULC class and columns must be named and defined as follows: lucode: Required. LULC class code. Codes must match the 'value' column in the LULC raster and must be unique integer or floating point values, in consecutive order. Shade: A value between 0 and 1, representing the proportion of tree cover (0 for no tree; 1 for full tree cover with canopy ≥2 m in height). Required if using the weighted factor approach to CC calculations. Kc: Required. Crop coefficient, a value between 0 and 1 (see Allen et al. 1998). Albedo: A value between 0 and 1, representing the proportion of solar radiation directly reflected by the LULC class. Required if using the weighted factor approach to CC calculations. Green_area: Required. A value of either 0 or 1, 1 meaning that the LULC class qualifies as a green area (green areas >2 ha have an additional cooling effect), and 0 meaning that the class is not counted as a green area. Building_intensity: A floating-point value between 0 and 1. This is calculated by dividing the floor area by the land area, normalized between 0 and 1. Required if using the weighted factor approach to CC calculations. Reference Evapotranspiration: A raster representing reference evapotranspiration (units of millimeters) for the period of interest (could be a specific date or monthly values can be used as a proxy). Area of interest: Polygon vector delineating areas of interest (city boundaries or neighborhoods boundaries). Results will be aggregated within each shape contained in this vector. Green Area Maximum Cooling Distance (\(d_{cool}\)): Distance (in meters) over which large urban parks (>2 ha) will have a cooling effect (recommended value: 450 m). 
Baseline air temperature (\(T_{ref}\)): Rural reference air temperature (where the urban heat island effect is not observed) for the period of interest. This could be nighttime or daytime temperature, for a specific date or an average over several days. The results will be given for the same period of interest. Magnitude of the UHI effect (\(UHI_{max}\)): Magnitude of the UHI effect (in ° C), i.e. the difference between the rural reference (baseline air) temperature and the maximum temperature observed in the city. Air Temperature Maximum Blending Distance: Search radius (in meters) used in the moving average to account for air mixing (recommended value range for initial run: 500 m to 600 m; see Schatz et al. 2014 and Londsdorf et al. 2021). Cooling Capacity Calculation Method: Either "Weighted Factors" or "Building Intensity". The method selected here determines the predictor used for air temperature. If "Weighted Factors" is selected, the CC calculations will use the weighted factors for shade, albedo, and ETI as a predictor for daytime temperatures. Alternatively, if "Building Intensity" is selected, building intensity will be used as a predictor for nighttime temperature instead of shade, albedo, and ETI. Building Footprints (required if doing energy savings valuation): Vector with built infrastructure footprints. The attribute table must contain a column named 'Type', containing integers referencing the building type (e.g. 1 = residential, 2 = office, etc.). Energy Consumption Table (required if doing energy savings valuation): A .csv (Comma Separated Values) table containing information on energy consumption for each building type, in kWh/degC/\(m^2\). The table must contain the following columns: "Type": Building type defined in the vector above. "Consumption": Energy consumption per building type, in kWh/degC/\(m^2\), where the \(m^2\) refers to the area of the polygon footprint of the building in \(m^2\). This consumption value must be adjusted for the average number of stories for structures of this type. "RH" (optional): Average relative humidity (%) during the period of interest, which is used to calculate the WBGT for the work productivity module. "cost" (optional): The cost per kWh (\(currency/kWh\)) of electricity for each building type. (Any monetary unit may be used.) If this column is provided in the Energy Consumption Table, the energy_sav field of the output vector buildings_with_stats.shp will be in monetary units rather than in kWh. The values in this column are very likely to be the same for all building types. Average relative humidity (required if performing work productivity valuation): The average relative humidity (0-100%) over the time period of interest. CC index Shade weight: The relative weight to apply to shade when calculating the CC index. Recommended value: 0.6. CC index Albedo weight: The relative weight to apply to albedo when calculating the CC index. Recommended value: 0.2. CC index Evapotranspiration weight: The relative weight to apply to ETI when calculating the CC index. Recommended value: 0.2. hm_[Suffix].tif: The calculated HMI. uhi_results_[Suffix].shp: A copy of the input vector "Area of Interest" with the following additional fields: "avg_cc" - Average CC value (-). "avg_tmp_v" - Average temperature value (degC). "avg_tmp_an" - Average temperature anomaly (degC). "avd_eng_cn" - (optional) Avoided energy consumption (kWh or $ if optional energy cost input column was provided in the Energy Consumption Table). 
"avg_wbgt_v" - (optional) Average WBGT (degC). "avg_ltls_v" - (optional) Light work productivity loss (%). "avg_hvls_v" - (optional) Heavy work productivity loss (%). buildings_with_stats_[Suffix].shp: A copy of the input vector "Building Footprints" with the following additional fields: "energy_sav" - Energy savings value (kWh or currency if optional energy cost input column was provided in the Energy Consumption Table). Savings are relative to a theoretical scenario where the city contains NO natural areas nor green spaces; where CC = 0 for all LULC classes. "mean_t_air" - Average temperature value in building (degC). The intermediate folder contains additional model outputs: cc_[Suffix].tif: Raster of CC values. T_air_[Suffix].tif: Raster of estimated air temperature values. T_air_nomix_[Suffix].tif: Raster of estimated air temperature values prior to air mixing (i.e. before applying the moving average algorithm). eti_[Suffix].tif: Raster of values of actual evapotranspiration (reference evapotranspiration times crop coefficient "Kc"). wbgt_[Suffix].tif: Raster of the calculated WBGT. reprojected_aoi_[Suffix].shp: The user-defined Area of Interest, reprojected to the Spatial Reference of the LULC. reprojected_buildings_[Suffix].shp: The user-defined buildings vector, reprojected to the Spatial Reference of the LULC. Appendix: Data Sources and Guidance for Parameter Selection¶ Kc¶ Reference Evapotranspiration¶ Building Footprints¶ Albedo¶ Albedo for urban built infrastructure can be found in local microclimate literature. Deilami et al. 2018 and Bartesaghi et al. 2018 provide a useful review. Stewart and Oke (2012) provide value ranges for typical LULC categories. Green Area Maximum Cooling Distance¶ Distance (meters) over which large urban parks (>2 ha) have a cooling effect. See a short review in Zardo et al. 2017, including a study that reports a cooling effect at a distance five times tree height. In the absence of local studies, an estimate of 450m can be used. Baseline Air Temperature¶ Rural reference temperature (°C) can be obtained from local temperature stations or global climate data. Magnitude of the UHI Effect¶ i.e. the difference between the maximum temperature in the city and the rural reference (baseline) air temperature. In the absence of local studies, users can obtain values from a global study conducted by Yale: https://yceo.users.earthengine.app/view/uhimap Air Temperature Maximum Blending Distance¶ Search radius (meters) used in the moving average to account for air mixing. A recommended initial value range of 500m to 600m can be used based on preliminary tests in pilot cities (Minneapolis-St Paul, USA and Paris, France). This parameter can be used as a calibration parameter if observed or modeled temperature data are available. Energy Consumption Table¶ Energy consumption (kWh/°C) varies widely across countries and cities. Santamouris et al. 2015 provide estimates of the energy consumption per °C for a number of cities worldwide. For the United States (US), EPA EnergyStar Portfolio Manager data may provide categorical averages as well as data for specific buildings: https://www.energystar.gov/buildings/facility-owners-and-managers/existing-buildings/use-portfolio-manager/understand-metrics/what-energy Note: If A/C prevalence is low, this valuation metric should not be used as it assumes that energy costs will increase with higher temperatures (and greater A/C use). 
A/C prevalence data for the US can be obtained from the American Housing Survey: https://www.census.gov/programs-surveys/ahs.html
Average Relative Humidity
Average relative humidity (%) during heat waves can be obtained from local temperature stations or global climate data.
FAQs
What is the output resolution? Model outputs are of two types: rasters and vectors. Rasters have the same resolution as the LULC input (all other raster inputs are resampled to the same resolution).
Why aren't the health impacts calculated by the model? The effects of heat on human health vary dramatically across cities, and it is difficult to develop a generic InVEST model that accurately captures and quantifies these for all cities. See the point about "Valuation of the health effects of urban heat" in the model Limitations section for additional details and pathways to assess the health impacts of urban heat mitigation.
References
Allen, R. G., Pereira, L. S., Raes, D., & Smith, M. (1998). Crop evapotranspiration - Guidelines for computing crop water requirements - FAO Irrigation and drainage paper 56. FAO, Rome, Italy.
American College of Sports Medicine (1984). Prevention of thermal injuries during distance running. Medicine and Science in Sports & Exercise, 16(5), ix-xiv. https://doi.org/10.1249/00005768-198410000-00017
Bartesaghi, C., Osmond, P., & Peters, A. (2018). Evaluating the cooling effects of green infrastructure: A systematic review of methods, indicators and data sources. Solar Energy, 166, 486-508. https://doi.org/10.1016/j.solener.2018.03.008
Campbell, S., Remenyi, T. A., White, C. J., & Johnston, F. H. (2018). Heatwave and health impact research: A global review. Health & Place, 53, 210-218. https://doi.org/10.1016/j.healthplace.2018.08.017
Deilami, K., Kamruzzaman, M., & Liu, Y. (2018). Urban heat island effect: A systematic review of spatio-temporal factors, data, methods, and mitigation measures. International Journal of Applied Earth Observation and Geoinformation, 67, 30-42. https://doi.org/10.1016/j.jag.2017.12.009
Gasparrini, A., Guo, Y., Hashizume, M., Lavigne, E., Zanobetti, A., Schwartz, J., Tobias, A., Tong, S., Rocklöv, J., Forsberg, B., Leone, M., De Sario, M., Bell, M. L., Guo, Y. L., Wu, C., Kan, H., Yi, S., Coelho, M. d., Saldiva, P. H., Honda, Y., Kim, H., & Armstrong, B. (2015). Mortality risk attributable to high and low ambient temperature: a multicountry observational study. The Lancet, 386(9991), 369-375. https://doi.org/10.1016/S0140-6736(14)62114-0
Kjellstrom, T., Holmer, I., & Lemke, B. (2009). Workplace heat stress, health and productivity - an increasing challenge for low and middle-income countries during climate change. Global Health Action, 2, 2047. https://doi.org/10.3402/gha.v2i0.2047
Kunapo, J., Fletcher, T. D., Ladson, A. R., Cunningham, L., & Burns, M. J. (2018). A spatially explicit framework for climate adaptation. Urban Water Journal, 15(2), 159-166. https://doi.org/10.1080/1573062X.2018.1424216
Londsdorf, E. V., Nootenboom, C., Janke, B., & Horgan, B. P. (2021). Assessing urban ecosystem services provided by green infrastructure: Golf courses in the Minneapolis-St. Paul metro area. Landscape and Urban Planning, 208. https://doi.org/10.1016/j.landurbplan.2020.104022
McDonald, R. I., Kroeger, T., Boucher, T., Wang, L., & Salem, R. (2016). Planting healthy air: A global analysis of the role of urban trees in addressing particulate matter pollution and extreme heat. CAB International, 128-139.
McMichael, A.
J., Campbell-Lendrum, D. H., Corvalán, C. F., Ebi, K. L., Githeko, A. K., Scheraga, J. D., & Woodward, A. (2003). Climate change and human health: risks and responses. World Health Organization, Geneva, Switzerland.
Phelan, P. E., Kaloush, K., Miner, M., Golden, J., Phelan, B., Silva, H., III, & Taylor, R. A. (2015). Urban heat island: mechanisms, implications, and possible remedies. Annual Review of Environment and Resources, 285-309. https://doi.org/10.1146/annurev-environ-102014-021155
Santamouris, M., Cartalis, C., Synnefa, A., & Kolokotsa, D. (2015). On the impact of urban heat island and global warming on the power demand and electricity consumption of buildings - A review. Energy & Buildings, 98, 119-124. https://doi.org/10.1016/j.enbuild.2014.09.052
Schatz, J., & Kucharik, C. J. (2014). Seasonality of the urban heat island effect in Madison, Wisconsin. Journal of Applied Meteorology and Climatology, 53(10), 2371-2386. https://doi.org/10.1175/JAMC-D-14-0107.1
Stewart, I. D., & Oke, T. R. (2012). Local climate zones for urban temperature studies. Bulletin of the American Meteorological Society. https://doi.org/10.1175/BAMS-D-11-00019.1
Zardo, L., Geneletti, D., Pérez-Soba, M., & van Eupen, M. (2017). Estimating the cooling capacity of green infrastructures to support urban planning. Ecosystem Services, 26, 225-235. https://doi.org/10.1016/j.ecoser.2017.06.016
Investigating GNSS PPP–RTK with external ionospheric constraints
Xiaohong Zhang (ORCID: orcid.org/0000-0002-2763-2548)1,2, Xiaodong Ren2, Jun Chen3, Xiang Zuo4, Dengkui Mei2 & Wanke Liu2
Satellite Navigation volume 3, Article number: 6 (2022)
Real-Time Kinematic Precise Point Positioning (PPP–RTK) is inextricably linked to external ionospheric information. The PPP–RTK performances vary much with the accuracy of the ionospheric information, which is derived from different network scales, given different prior variances, and obtained under different disturbed ionospheric conditions. This study investigates the relationships between the PPP–RTK performances, in terms of precision and convergence time, and the accuracy of the external ionospheric information. The statistical results show that the Time to First Fix (TTFF) for PPP–RTK constrained by a Global Ionosphere Map (PPP–RTK-GIM) is about 8–10 min, improved by 20%–50% as compared with that for PPP Ambiguity Resolution (PPP-AR), whose TTFF is about 13–16 min. Additionally, the TTFF of PPP–RTK is 4.4 min, 5.2 min, and 6.8 min when constrained by the external ionospheric information derived from small-, medium-, and large-scale networks, respectively. To analyze the influence of the prior variances of the external ionospheric delays on the PPP–RTK results, errors of 0.5 Total Electron Content Unit (TECU), 1 TECU, 3 TECU, and 5 TECU are added to the initial ionospheric delays, respectively. The corresponding convergence time of PPP–RTK is less than 1 min, and about 3 min, 5 min, and 6 min, respectively. After adding the errors, ionospheric information with a small variance leads to a long convergence time, while that with a larger variance leads to the same convergence time as that of PPP-AR. Only when an optimal prior variance is determined for the ionospheric delay in the PPP–RTK model can the convergence time be shortened greatly. The impact of Travelling Ionospheric Disturbances (TIDs) on the PPP–RTK performances is further studied with simulation. It is found that TIDs increase the errors of the ionospheric corrections, thus affecting the convergence time, positioning accuracy, and reliability of PPP–RTK.
Precise Point Positioning (PPP) can achieve a positioning accuracy of better than 10 cm within 30 min by applying precise satellite orbit and clock products. Much work has been done to reduce the convergence time of PPP, such as using multi-constellation Global Navigation Satellite System (GNSS) data (Guo et al., 2017; Li et al., 2019a, 2019b; Li et al., 2015a; Li et al., 2019c; Lou et al., 2016), performing integer ambiguity resolution (Collins et al., 2008; Ge et al., 2008; Hu et al., 2019; Laurichesse et al., 2009; Li & Zhang, 2012; Li et al., 2016; Liu et al., 2017), and adding a priori atmospheric information (de Oliveira et al., 2017). However, it still needs more than 15 min to achieve solutions with centimeter-level accuracy due to the ambiguity of carrier observations. For the Real-Time Kinematic (RTK) technique, though the ambiguity can be quickly fixed, it cannot be applied by users far away from the reference stations. PPP–RTK is integer-ambiguity-resolution-enabled PPP augmented with a priori atmospheric information from a local reference network, which has a short convergence time (Wübbena et al., 2005; Li et al., 2011; de Oliveira et al., 2017; Zhang et al., 2018).
It extends the PPP concept by providing different kinds of corrections to users, such as satellite orbit, clock, and atmospheric delay corrections as well as satellite phase and code biases. These corrections, when accurately provided, enable regional or even global PPP–RTK users to recover the integer nature of ambiguities, thus improving the positioning accuracy and convergence behavior (Geng et al., 2010; Geng & Shi, 2017; Liu et al., 2017; Li et al., 2019a, 2019b, 2019c; Zha et al., 2021). Among all kinds of corrections, the precise ionospheric delay is the main bottleneck limiting fast and successful ambiguity resolution for PPP–RTK (Hernández-Pajares et al., 2011; Jakowski et al., 2008). Consequently, the precise determination of the ionospheric delay along a specific Line-of-Sight (LoS) and its correction are of great significance for reducing the convergence time of PPP–RTK. Ionospheric delays used for augmenting PPP with Ambiguity Resolution (PPP-AR) are mainly derived in the following two ways. The first is a two-dimensional vertical thin-shell TEC (Total Electron Content) model established at a specific altitude by using mathematical algorithms, such as spherical harmonic functions (Schaer, 1999), spherical cap harmonic functions (Haines, 1988; Li et al., 2015b), and B-splines and trigonometric B-splines (Mautz et al., 2005; Schmidt et al., 2008). Existing studies indicate that, benefiting from vertical ionospheric corrections, the convergence time of PPP–RTK can be reduced (Banville et al., 2014; Psychas et al., 2018). One main problem with the estimated TEC models is the mapping function error. To improve the accuracy of the vertical ionospheric TEC model, the ionosphere is divided into many three-dimensional voxels of the same size. Hernández-Pajares et al. (1999) first presented GNSS-based data-driven tomographic models. Since then, many scholars have further improved ionosphere tomographic models with different methods (Wen et al., 2015; Wen et al., 2007, 2008; Zheng et al., 2017, 2018, 2020, 2021). Olivares-Pulido et al. (2019) presented a 4D tomographic ionospheric model to support PPP–RTK, achieving an accuracy better than 10 cm in the horizontal direction within about 10 min. The second is to interpolate high-accuracy ionospheric delays derived from networks. It has been demonstrated that PPP–RTK can achieve mm-level positioning accuracy in the horizontal direction by using ionospheric corrections derived from a small-scale network (about dozens of kilometers) (Teunissen et al., 2010; Zhang et al., 2011). Also, the convergence behavior can be significantly improved with TEC in the slant and vertical directions when leveraging a regional network and Global Ionospheric Maps (GIMs) (Xiang et al., 2020). In addition, Wang et al. (2017) found that only ten seconds were required to make most of the horizontal positioning errors smaller than 10 cm by using 1 Hz data when the network corrections, such as the satellite clocks, the satellite phase biases, and the ionospheric delays, are provided. Previous studies showed that the convergence time of PPP–RTK can be reduced when constrained by external ionospheric information. However, the performances of PPP–RTK constrained by ionospheric information of different accuracy levels, derived from networks of different scales, given different prior variances, and obtained under different ionospheric conditions, have not been fully studied.
This study addresses the aforementioned problems. The performance of PPP–RTK constrained by ionospheric information of different accuracies in different situations is discussed and analyzed. Finally, a summary and conclusions are given. In this section, the algorithms of the PPP–RTK server model, the estimation of the ionospheric delay based on carrier phase observations, and the PPP–RTK user model are introduced. PPP–RTK server model The differences between the observed and computed un-differenced and un-combined code and phase observations at the k-th epoch can be described as: $$\left\{ \begin{aligned} {\text{E}}(\Delta P_{r,i}^{s} (k)) &= {\mathbf{c}}_{r}^{s} (k) \cdot {\Delta }{\mathbf{x}}_{r} (k) + c \cdot [{\text{d}}t_{r} (k) - {\text{d}}t^{s} (k)] + g_{r}^{s} \cdot T_{r}^{s} (k) \\ & \quad + \mu_{i} \cdot I_{r,i}^{s} (k) + B_{r,i} - B_{i}^{s} \\ {\text{E}}(\Delta \Phi_{r,i}^{s} (k))& = {\mathbf{c}}_{r}^{s} (k) \cdot {\Delta }{\mathbf{x}}_{r} (k) + c \cdot [{\text{d}}t_{r} (k) - {\text{d}}t^{s} (k)] + g_{r}^{s} \cdot T_{r}^{s} (k) \\ & \quad - \mu_{i} \cdot I_{r,i}^{s} (k) + b_{r,i} - b_{i}^{s} + \lambda_{i} \cdot N_{r,i}^{s} \\ \end{aligned} \right.$$ where \({\text{E}}( \cdot )\) denotes the expectation operator; \(P_{r,i}^{s} (k)\) and \(\Phi_{r,i}^{s} (k)\) are the code and phase observations, respectively, from receiver r to satellite s at epoch k for frequency i (i = 1, 2); the 3 × 1 vector \(\Delta {\mathbf{x}}_{r} (k)\) denotes the receiver's position increment at epoch k; the 3 × 1 vector \({\mathbf{c}}_{r}^{s} (k)\) indicates the unit vector from receiver to satellite; c denotes the speed of light in vacuum; \({\text{d}}t_{r}\) and \({\text{d}}t^{s}\) denote the receiver and satellite clock errors, respectively; \(T_{r}^{s} (k)\) denotes the zenith non-hydrostatic tropospheric delay at epoch k, while the hydrostatic tropospheric delay can be corrected by an empirical model; \(g_{r}^{s}\) denotes the non-hydrostatic tropospheric mapping function; \(B_{r,i}\) and \(B_{i}^{s}\) are the receiver and satellite code hardware biases, respectively; \(b_{r,i}\) and \(b_{i}^{s}\) are the receiver and satellite carrier phase hardware biases, respectively; \(\mu_{i} = f_{1}^{2} /f_{i}^{2}\) is the conversion coefficient of the ionospheric delay for frequency i; \(I_{r,i}^{s} (k)\) is the first-order ionospheric delay for frequency i along a line-of-sight at epoch k; \(\lambda_{i} = c/f_{i}\) indicates the phase wavelength for frequency i; and \(N_{r,i}^{s}\) is the integer phase ambiguity for frequency i. Equation (1) is a rank-deficient system due to the linear dependency of some columns of the design matrix. To eliminate the rank deficiency, Odijk et al. (2016) analyzed the null space of the design matrix and chose a minimum constraint set, i.e., the S-basis constraint. Based on the S-basis, the types of rank deficiency and their S-basis constraints can be found in Table 1. It should be noted that this choice of S-basis holds for Code Division Multiple Access (CDMA) signals, and a different choice of S-basis should be applied for Frequency Division Multiple Access (FDMA) signals (Zhang et al., 2021). Table 1 The rank deficiency information, including the involved parameters, sizes, and S-basis for the PPP–RTK network In Table 1, p and q denote the reference receiver and satellite, respectively.
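As an illustrative aside, the frequency-dependent terms in Eq. (1) can be computed directly; the short Python sketch below evaluates the ionospheric conversion coefficients \(\mu_{i} = f_{1}^{2}/f_{i}^{2}\) and the wavelengths \(\lambda_{i} = c/f_{i}\). The GPS L1/L2 frequencies are assumed here purely for illustration, since the text does not state which signals were processed.

# Illustrative sketch (not the authors' code): frequency-dependent terms of Eq. (1).
# Assumption: GPS L1/L2 carrier frequencies; the paper does not specify the signals.
C = 299_792_458.0                       # speed of light in vacuum (m/s)
FREQS = {1: 1575.42e6, 2: 1227.60e6}    # assumed GPS L1 and L2 frequencies (Hz)

def mu(i):
    """Ionospheric conversion coefficient mu_i = f1**2 / f_i**2."""
    return FREQS[1] ** 2 / FREQS[i] ** 2

def wavelength(i):
    """Carrier wavelength lambda_i = c / f_i in meters."""
    return C / FREQS[i]

for i in (1, 2):
    print(f"f{i}: mu = {mu(i):.4f}, lambda = {100 * wavelength(i):.2f} cm")
# Expected output: mu_1 = 1.0000 (19.03 cm) and mu_2 = 1.6469 (24.42 cm).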
In addition, \(B_{\text{IF}}^{s}\), \(B_{\text{GF}}^{s}\), \(B_{r,\text{IF}}\) and \(B_{r,\text{GF}}\) can be described as: $$\left\{ \begin{aligned} B_{\text{IF}}^{s} &= \frac{\mu_{2} B_{1}^{s} - \mu_{1} B_{2}^{s}}{\mu_{2} - \mu_{1}} \\ B_{\text{GF}}^{s} &= \frac{B_{2}^{s} - B_{1}^{s}}{\mu_{2} - \mu_{1}} \\ D_{\text{DCB}}^{s} &= B_{2}^{s} - B_{1}^{s} \\ B_{r,\text{IF}} &= \frac{\mu_{2} B_{r,1} - \mu_{1} B_{r,2}}{\mu_{2} - \mu_{1}} \\ B_{r,\text{GF}} &= \frac{B_{r,2} - B_{r,1}}{\mu_{2} - \mu_{1}} \\ D_{\text{DCB}}^{r} &= B_{r,2} - B_{r,1} \\ \end{aligned} \right.$$ When the rank deficiencies in Eq. (1) are eliminated based on the S-basis listed in Table 1, the full-rank un-differenced and un-combined PPP–RTK network code and phase observations at epoch k can be expressed as: $$\left\{ \begin{aligned} {\text{E}}(\Delta P_{r,i}^{s} (k))& = c \cdot [\overline{{{\text{d}}t}}_{r \ne p} (k) - \overline{{{\text{d}}t}}^{s} (k)] + g_{r}^{s} \cdot \overline{T}_{r \ne p}^{s} (k) \\ & \quad + \mu_{i} \cdot \overline{I}_{r,i}^{s} (k) + \overline{B}_{r \ne p,i > 2} - \overline{B}_{i>2}^{s} \\ {\text{E}}(\Delta \Phi_{r,i}^{s} (k)) & = c \cdot [\overline{{{\text{d}}t}}_{r \ne p} (k) - \overline{{{\text{d}}t}}^{s} (k)] + g_{r}^{s} \cdot \overline{T}_{r \ne p}^{s} (k) \\ & \quad- \mu_{i} \cdot \overline{I}_{r,i}^{s} (k) + \overline{b}_{r \ne p,i > 2} - \overline{b}_{i}^{s} + \lambda_{i} \cdot N_{r \ne p,i}^{s \ne q} \\ \end{aligned} \right.$$ The formulations of the estimable parameters in Eq. (3) can be found in Table 2. Table 2 The estimable parameters of un-differenced and un-combined PPP–RTK and corresponding formulations As shown in Table 2, the ionospheric parameters are biased by Geometry-Free (GF) receiver and satellite code biases. These biases are also contained in the interpolated ionospheric delays that are provided to users. As a result, the interpolated user ionospheric delays for different satellites will contain different combinations of receiver code biases when different receivers of a network observe different satellites, resulting in system biases for the PPP–RTK users of the network. It is worth noting that these system biases can be eliminated by single differencing between satellites when the ionospheric delays are used as constraints in PPP–RTK. It should be mentioned that Eq. (3) ignores the spatial correlation of ionospheric delays in the network. The slant ionospheric delays derived from the network receivers for the same satellite are approximately equal if the distances between the receivers are a few hundred kilometers (Odijk, 2002). Accordingly, the redundant observation equation, i.e., the single-differenced ionospheric code observations described in Eq. (4), can be added to Eq. (1). $$I_{p}^{s} (k) - I_{r \ne p}^{s} (k) = 0, \quad W = S^{-1}$$ where W and S denote the weight and variance–covariance matrix of the single-differenced ionospheric code observations between receivers. Once the single-differenced ionospheric code observations between receivers are added to Eq. (1), the system is no longer rank-deficient.
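To make these relations concrete, the sketch below forms the ionosphere-free, geometry-free, and DCB combinations of Eq. (2) and assembles the zero-valued between-receiver ionospheric constraints of Eq. (4) with a diagonal weight matrix. The numeric bias values, the network size, and the constraint variance are placeholders chosen for illustration only.

# Illustrative sketch (assumed values, not the authors' implementation).
import numpy as np

f1, f2 = 1575.42e6, 1227.60e6        # assumed GPS L1/L2 frequencies (Hz)
mu1, mu2 = 1.0, f1**2 / f2**2        # conversion coefficients from Eq. (1)

def bias_combinations(b1, b2):
    """Eq. (2): ionosphere-free (IF), geometry-free (GF) and DCB combinations
    of the raw biases b1, b2 on the two frequencies."""
    b_if = (mu2 * b1 - mu1 * b2) / (mu2 - mu1)
    b_gf = (b2 - b1) / (mu2 - mu1)
    dcb = b2 - b1
    return b_if, b_gf, dcb

# Eq. (4): I_p^s - I_r^s = 0 for every receiver r != p, weighted by W = S^-1.
n_receivers = 4                                   # placeholder network size
var_sd = 0.1                                      # assumed constraint variance (m**2)
pseudo_obs = np.zeros(n_receivers - 1)            # zero-valued constraints
W = np.eye(n_receivers - 1) / var_sd              # weight matrix W = S^-1

print(bias_combinations(2.0, 3.5))                # example satellite code biases (m)
print(pseudo_obs, W.diagonal())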
After eliminating the rank deficiency, the full-rank un-differenced and un-combined PPP–RTK can be formulated as: $$\left\{ \begin{aligned} {\text{E}}(\Delta P_{r,i}^{s} (k))& = c \cdot [\overline{{{\text{d}}t}}_{r \ne p} (k) - \overline{{{\text{d}}t}}^{s} (k)] + g_{r}^{s} \cdot \overline{T}_{r \ne p}^{s} (k) \\ & \quad + \mu_{i} \cdot \overline{I}_{r,i}^{s} (k) + \overline{B}_{r \ne p,i> 2} - \overline{B}_{i>2}^{s} + \frac{\mu_{i}}{\mu_{2} - \mu_{1}}\overline{B}_{r \ne p,{\text{DCB}}} \\ {\text{E}}(\Delta \Phi_{r,i}^{s} (k)) & = c \cdot [\overline{{{\text{d}}t}}_{r \ne p} (k) - \overline{{{\text{d}}t}}^{s} (k)] + g_{r}^{s} \cdot \overline{T}_{r \ne p}^{s} (k) \\ & \quad- \mu_{i} \cdot \overline{I}_{r,i}^{s} (k) + \overline{b}_{r \ne p,i> 2} - \overline{b}_{i}^{s} + \lambda_{i} \cdot N_{r \ne p,i}^{s \ne q} \\ \overline{I}_{p}^{s} - \overline{I}_{r \ne p}^{s} &= 0 \\ \end{aligned} \right.$$ where the estimable forms of the biased receiver code biases, phase biases, and ionospheric delays are listed in Table 3. Table 3 The estimable formulation for the un-differenced and un-combined PPP–RTK server model It can be seen from Table 3 that the estimable ionospheric parameters of the network reference receiver include GF receiver code biases. Therefore, the interpolated ionospheric delays provided to users contain the same receiver Differential Code Bias (DCB), which is eventually absorbed by the estimated receiver code and phase biases of the users. Precise ionospheric delay estimation As mentioned above, the convergence time can be reduced by applying the constraint of ionospheric information, which can be derived from the GF combination. In recent decades, three methods have been used for the extraction of ionospheric observables: the Carrier-to-Code Levelling (CCL) method (Ciraolo et al., 2007), the un-differenced and un-combined PPP (UD-PPP) method (Zhang et al., 2012), and the zero-difference integer ambiguity (PPP-Fixed) method (Ren et al., 2020). The CCL method only considers the noise of a specific satellite in a continuous arc, leading to large errors between different satellites when the reference of each satellite is different. The UD-PPP method uses external constraints and performs the adjustment of the selected network by the least-squares method. The estimated ambiguities can be optimal for the satellites tracked by a certain receiver, and the differences between different satellites can be effectively reduced. In addition to the advantages of the UD-PPP, the PPP-Fixed method fully utilizes the constraints of the receivers for the whole network. Furthermore, ionospheric observables extracted from carrier phase observations by using the PPP-Fixed method have higher accuracy than those extracted with CCL and UD-PPP. In this study, the PPP-Fixed method is therefore applied for ionospheric delay extraction. The processing flow chart of this method is plotted in Fig. 1 and its detailed processing steps can be found in Ren et al. (2020). Flow chart of ionospheric delay estimation using the PPP-Fixed method PPP–RTK user model The receiver of a user can be theoretically regarded as a part of the network. Therefore, the rank deficiencies and the corresponding S-basis for users are nearly the same as those of the network receivers. Unlike the network case, however, the positions of the users need to be estimated, and \(\overline{{{\text{d}}t}}^{s}\), \(\overline{B}_{i > 2}^{s}\), and \(\overline{b}_{i}^{s}\) need to be corrected as well.
Therefore, the PPP–RTK user model is given as follows: $$\left\{\begin{aligned} &{\text{E}}(\Delta P_{u,i}^{s}(k)+c\cdot \overline{{{\text{d}}t}}^{s}(k)+\overline{B}_{i>2}^{s})\\ &\quad ={\mathbf{c}}_{u}^{s}(k)\cdot \Delta {\mathbf{x}}_{u}(k)+c\cdot \widehat{{{\text{d}}t}}_{u}(k)+g_{u}^{s}\cdot \widehat{T}_{u}^{s}(k)+\mu_{i}\cdot \widehat{I}_{u}^{s}(k)+\overline{B}_{u,i>2}+\mu_{i}\cdot \widehat{B}_{u,{\text{GF}}}\\ &{\text{E}}(\Delta \Phi_{u,i}^{s}(k)+c\cdot \overline{{{\text{d}}t}}^{s}(k)+\overline{b}_{i}^{s})\\ &\quad ={\mathbf{c}}_{u}^{s}(k)\cdot \Delta {\mathbf{x}}_{u}(k)+c\cdot \widehat{{{\text{d}}t}}_{u}(k)+g_{u}^{s}\cdot \widehat{T}_{u}^{s}(k)-\mu_{i}\cdot \widehat{I}_{u}^{s}(k)+\widehat{b}_{u,i>2}+\lambda_{i}\cdot N_{u,i}^{s\ne q}\\ &\overline{I}_{{\text{Interpolate}}}^{s}=\overline{I}_{r\ne p}^{s}\end{aligned}\right.$$ where the formulations of the estimable parameters are listed in Table 4. When the ionospheric delays provided to users are interpolated from the network ionospheric delays, Eq. (6) represents the PPP–RTK user model, and \(\widehat{B}_{u,{\text{GF}}}\) needs to be estimated. It should be noted that the ionospheric delay corrections contribute significantly to rapid convergence at the beginning, when a new satellite is observed, or when outages occur. Table 4 The estimable parameters for a PPP–RTK user In this section we first describe the performances of the PPP–RTK constrained by the external ionospheric information derived from different types of networks. Then we analyze the dependencies of the convergence time on the prior variances of the ionospheric delays. Finally, a Travelling Ionospheric Disturbance (TID) is simulated and its impacts on the PPP–RTK performances are presented. For the PPP–RTK test, data collected by about 190 GNSS stations from April 1 to April 8, 2020, are used. The distribution of the GNSS stations, which are mainly located in the Chinese mainland, is plotted in Fig. 2. Distribution of GNSS stations The parameter estimation is performed by the least-squares method for the network, while the Kalman filter and kinematic mode are applied for the users. The satellite with the maximum elevation angle for a specific epoch is selected as the reference. The satellite positions and satellite clock errors are derived from the precise satellite orbit and clock products provided by the International GNSS Service (IGS). The receiver positions of the network are fixed to their known values. The code and phase biases of receivers and satellites are treated as time-invariant parameters. Other error sources in positioning, such as the solid earth tide, the ocean tide, the relativistic effects, as well as the satellite and receiver Phase Center Offsets (PCO) and Phase Center Variations (PCV), are corrected by corresponding models. The ambiguity term of the reference station is fixed by estimating the Un-calibrated Phase Delays (UPD). In this study, the UPDs are estimated by using the method introduced by Li & Zhang (2012). The ambiguities of users are resolved by using the Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method (Teunissen, 1995). Other main processing strategies are listed in Table 5. Table 5 The detailed processing setups for PPP–RTK PPP–RTK test for different network scales In PPP–RTK processing, the Kalman filter is reinitialized every hour. Figure 3 presents the distribution of the GNSS stations and validation stations, which are applied for the regional network test of the PPP–RTK constrained by interpolated ionospheric delays.
Distribution of GNSS stations and validation stations for a regional network Figure 4 illustrates the performance comparison of PPP-AR and the PPP–RTK constrained by the Global Ionosphere Map (PPP–RTK-GIM). As we can see, the Time to First Fix (TTFF) of PPP–RTK-GIM for different IGS stations is improved by 20%–50% as compared with that of PPP-AR. Statistical results of PPP-AR and PPP–RTK constrained by GIM To better understand how the ionospheric delays derived from different network scales affect the performance of a PPP–RTK user, small-scale (about 300 km), medium-scale (about 500 km), and large-scale (about 800 km) networks are designed, as shown in Fig. 5. The number of reference stations is about 80, 40, and 30 for the small-scale, medium-scale, and large-scale networks, respectively. Both the small-scale and medium-scale networks consider the spatial variations of the ionosphere and the distribution of GNSS stations, while the distances between the reference stations of the small-scale network are smaller than those of the medium-scale network. The large-scale network contains fewer reference stations, which nearly cover China, leading to the longest distances between reference stations. Hence, the large-scale network considers neither the spatial variations of the ionosphere nor the distribution of available reference stations. The designed networks with point spacing of a 300, b 500, and c 800 km Figure 6 compares the statistical results of TTFF and fixing rate for the PPP–RTK constrained by the ionospheric delays interpolated from the small-scale, medium-scale, and large-scale networks, denoted as PPP–RTK-300, PPP–RTK-500, and PPP–RTK-800, respectively. From Fig. 6 we can see that PPP–RTK-300 performs best, followed by PPP–RTK-500 and then PPP–RTK-800. This is due to the fact that the spatial variation of the ionosphere depends strongly on the distance between stations. The TTFF is about 4.4, 5.2, and 6.8 min for PPP–RTK-300, PPP–RTK-500, and PPP–RTK-800, respectively, and the corresponding fixing rates are 97%, 96%, and 93%. Statistical results of the PPP–RTK constrained by the ionospheric delays interpolated from different scales of network The relationship between convergence time and different prior variances The above results indicate that the TTFF can be reduced for the PPP–RTK constrained by the ionospheric delay. However, due to the large number of stations and the long data span in the test, we could not determine the optimum variance by testing candidate values one by one. To figure out the relationship between the variance of the given ionospheric delay corrections and the convergence time of PPP–RTK, we added random errors to the provided ionospheric delay corrections as follows: $$\tilde{I}_{r,{\text{bias}}}^{s} = \tilde{I}_{r}^{s} + \sigma$$ where \(\sigma\) denotes the random error. To prevent the added random error from being close to zero, it should meet the following condition: $$0.5 \times S_{{\text{Max}}}^{{\text{Error}}} \le \left| \sigma \right| \le S_{{\text{Max}}}^{{\text{Error}}}$$ where \(S_{{\text{Max}}}^{{\text{Error}}}\) expresses the threshold value of the added error.
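As an illustration of this error-injection scheme, the sketch below draws a random error σ satisfying Eq. (8) and adds it to the ionospheric delay corrections as in Eq. (7). A uniform magnitude distribution with a random sign is assumed here, since the text does not state how σ was generated; the delay values are placeholders.

# Illustrative sketch of Eqs. (7)-(8); the uniform draw and random sign are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def perturb_delays(delays_tecu, s_max_tecu):
    """Return I_tilde = I + sigma with 0.5*S_max <= |sigma| <= S_max (all in TECU)."""
    magnitude = rng.uniform(0.5 * s_max_tecu, s_max_tecu, size=delays_tecu.shape)
    sign = rng.choice([-1.0, 1.0], size=delays_tecu.shape)
    return delays_tecu + sign * magnitude

delays = np.array([12.3, 8.7, 15.1])        # placeholder slant delays (TECU)
for s_max in (0.5, 1.0, 3.0, 5.0):          # error levels tested in this study
    print(s_max, perturb_delays(delays, s_max))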
Figure 7 presents the convergence time for different errors of less than 0.5 TECU, 1 TECU, 3 TECU, and 5 TECU added to the network-derived ionospheric delays with optimum variances. As shown in Fig. 7, positioning accuracy better than 10 cm is achieved within 7 min in all cases. Although the variance can properly describe the accuracy of the ionospheric delay, the convergence time increases with the increase of the added ionospheric errors. The convergence time is about 3, 5, and 6 min when the corresponding added error is about 1 TECU, 3 TECU, and 5 TECU with the optimum variance, respectively. Meanwhile, the convergence time is less than 1 min when the added error of the ionospheric delay is smaller than 0.5 TECU with the optimum variance. The convergence time for ionospheric delay corrections with an error of a smaller than 0.5 TECU, b 1 TECU, c 3 TECU, and d 5 TECU, each with the optimum variance Figure 8 illustrates the convergence time of PPP-AR and that of the PPP–RTK constrained by ionospheric delay corrections with an error of 3 TECU for different variances. It shows that the variances of the ionospheric delay corrections affect the convergence time significantly. On the one hand, a very small variance for an ionospheric delay with an error of 3 TECU leads to a lower fixing rate and a longer convergence time compared with those of PPP-AR. Meanwhile, the stability and reliability of the positioning errors during the pre-convergence period are not ideal. On the other hand, when the variance of the ionospheric delay correction is too large, the convergence time of the PPP–RTK constrained by ionospheric delay corrections is nearly the same as that of PPP-AR, but the positioning accuracy during the pre-convergence period is better than that of PPP-AR. When the variance matches the accuracy of the ionospheric delay corrections, the convergence time of the PPP–RTK constrained by ionospheric delay corrections is reduced significantly compared with that of PPP-AR. The convergence time of a PPP-AR, and the ionospheric delay corrections with an error of 3 TECU while b the variance is very small (< 1 TECU), c the variance is the optimum (= 3 TECU), and d the variance is large (> 5 TECU) Figure 9 describes the convergence time for the different errors of ionospheric delay corrections with different variances. As shown in Fig. 9, the convergence time varies similarly for the different errors of ionospheric delay corrections. Generally, the convergence time is long when the variance is very small, while it is almost the same as that of PPP-AR when the variance is large. Compared with PPP-AR, the convergence time is reduced significantly if the variance best matches the accuracy of the ionospheric delay corrections. In terms of ionospheric delay corrections without errors, the convergence time will be the same as that of PPP-AR if the variance is large enough, and very short if the variance is very small. The convergence time of the PPP–RTK constrained by external ionospheric delay information with an error of a 0, b 1 TECU, c 3 TECU, and d 5 TECU for different variances
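In the discussion above, the ionospheric errors are expressed in TECU, while the prior variances quoted below for Fig. 11 are given in meters. A first-order conversion between the two, via the standard relation I = 40.3·TEC/f², is sketched below; the GPS L1 frequency is assumed for illustration.

# Back-of-the-envelope conversion between TECU and range delay (GPS L1 assumed).
f1 = 1575.42e6                           # Hz

def tecu_to_meters(tecu, f=f1):
    """First-order ionospheric range delay: 40.3 * TEC / f**2, 1 TECU = 1e16 el/m**2."""
    return 40.3 * 1e16 * tecu / f**2

for tecu in (0.5, 1.0, 3.0, 5.0):
    print(f"{tecu} TECU -> {tecu_to_meters(tecu):.3f} m")
# 1 TECU corresponds to roughly 0.16 m of delay on L1.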
The performance of PPP–RTK during the periods of ionospheric disturbances Travelling Ionospheric Disturbance (TID) is an ionospheric density fluctuation that propagates as a wave through the ionosphere at a wide range of velocities and frequencies (Belehaki et al., 2020; Saito et al., 1998; Tsugawa et al., 2004, 2007). The high occurrence rate of TIDs and the complicated variety of their characteristics regarding velocity, propagation direction, and amplitude impact the operation of ground-based infrastructures, especially real-time kinematic services and radio communication (Hernández-Pajares et al., 2006, 2012). In this study, a TID is simulated and its impacts on the convergence time of the PPP–RTK constrained by external ionospheric delay information are analyzed. For this purpose, the simulated TID can be expressed as follows: $$S = A_{0} + A_{1} \cdot \cos (2 \pi f_{0} t + \varphi_{0})$$ where \(S\) indicates the simulated result; \(A_{0}\) expresses the direct component in TECU; \(A_{1}\) is the amplitude of the TID in TECU; \(f_{0}\) is the frequency of the TID in Hz; t is the epoch number; and \(\varphi_{0}\) denotes the initial phase of the TID in radians. Supposing that \(A_{0} = 0\), \(A_{1} = 1\), and \(f_{0} = 0.2\;{\text{mHz}}\), the simulated TIDs with different initial phases are shown in Fig. 10. The simulation results of TIDs with the same direct component, amplitude, and frequency but different initial phases To study the impacts of TID on PPP–RTK, a TID is first simulated by assuming that \(A_{0} = 0\), \(A_{1} = 1\), \(f_{0} = 0.2\;{\text{mHz}}\), and \(\varphi_{0} = 135^{\circ}\). Then the TID errors are added to the ionospheric delay corrections. The convergence time of the PPP–RTK constrained by ionospheric delay corrections affected by these TID errors, for different prior variances, can be found in Fig. 11. As can be seen from Fig. 11, the positioning results of PPP–RTK are poor due to the influence of the TID. Moreover, the convergence time and reliability are affected as well, which is mainly related to the prior variance of the ionospheric delay. In particular, if the same variance as used during the periods without TID errors is adopted after adding the TID errors, the positioning errors will be very large and vary greatly. The positioning error is always larger than 10 cm, and in some situations the ambiguities are wrongly fixed. In addition, a much longer convergence time is needed during the TID period if the prior variance is larger. The positioning errors will be smaller than 10 cm after convergence, but still worse than those without TID errors. The convergence time of PPP–RTK constrained by ionospheric delay corrections with a no TID, and with a TID of an amplitude of 1 TECU and a variance of b 0.088 m, and c 0.38 m Furthermore, TIDs are simulated with \(A_{0} = 0\), \(f_{0} = 0.2\;{\text{mHz}}\), and \(\varphi_{0} = 135^{\circ}\), but with different amplitudes. The convergence time with the same variances for PPP–RTK during the TID periods is investigated and the results are plotted in Fig. 12. As we can see, the impacts of TIDs increase as the amplitudes of the TIDs become larger. Note that the impact of TIDs is more significant at the beginning of convergence for each reinitialized period. When the ambiguity parameter is fixed, the positioning errors are nearly the same as those of the case without TIDs. Meanwhile, the convergence time will be longer for TIDs with larger amplitudes. The convergence time of the PPP–RTK constrained by external ionospheric information with a no TID, and with a TID of amplitude b 0.5 TECU, c 1 TECU, and d 2 TECU and a variance of 0.48 m
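As a minimal illustration of the TID model in Eq. (9), the sketch below generates the simulated disturbance for the parameters used above (A0 = 0, A1 = 1 TECU, f0 = 0.2 mHz, φ0 = 135°). The 30-s epoch interval is an assumption for illustration, as the text defines t only as the epoch number.

# Minimal sketch of Eq. (9): S = A0 + A1*cos(2*pi*f0*t + phi0); 30 s epochs assumed.
import numpy as np

A0, A1 = 0.0, 1.0                  # direct component and amplitude (TECU)
f0 = 0.2e-3                        # TID frequency (Hz)
phi0 = np.deg2rad(135.0)           # initial phase (rad)

t = np.arange(0.0, 3600.0, 30.0)   # one hour of epochs (s)
tid = A0 + A1 * np.cos(2.0 * np.pi * f0 * t + phi0)
# The series would then be added to the ionospheric corrections before they
# are used to constrain the PPP-RTK user solution.
print(tid[:5])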
In this study, the relationship between the accuracy of ionospheric delay corrections and the convergence time is investigated. The PPP–RTK network model, the PPP–RTK user model, and the method to extract the ionospheric delay are first described. Then, the positioning performances of the PPP–RTK constrained by ionospheric delays with different errors and under different TID situations are presented. The results of this study are summarized as follows. In terms of the TTFF of PPP-AR and PPP–RTK-GIM, the performance of the PPP–RTK constrained by global ionosphere maps is improved compared with that of PPP-AR. The TTFF is about 13–16 min for PPP-AR and about 8–10 min for PPP–RTK-GIM, an improvement of about 20%–50%. Errors of 0.5 TECU, 1 TECU, 3 TECU, and 5 TECU are added to the network-derived ionospheric delays, which are then used to constrain PPP–RTK with optimum variances. The results show that positioning errors smaller than 10 cm are achieved within 7 min in all cases, and that the convergence time depends on the accuracy of the ionospheric delay corrections. The convergence time is about 3, 5, and 6 min when the added errors are about 1, 3, and 5 TECU with the optimum variance, respectively, while it is less than 1 min for errors below 0.5 TECU. In addition, the convergence time of the PPP–RTK constrained by an ionospheric delay with an error of 3 TECU for different variances is investigated. The results indicate that the variance of the ionospheric delay affects the convergence time significantly. The convergence times for ionospheric delays with different errors show similar variations: the convergence time is very long when the variance is very small, while it is the same as that of PPP-AR when the variance is very large. Finally, the impacts of TID on the convergence time of PPP–RTK are checked by a simulation. The TID can affect the convergence time, positioning accuracy, and reliability of PPP–RTK, and these effects are related to the given variances. In addition, a larger variance leads to a longer convergence time, and the impact of the TID is enhanced with the increase of the TID amplitude. Banville, S., Collins, P., Zhang, W., & Langley, R. B. (2014). Global and regional ionospheric corrections for faster PPP convergence. Navigation, 61, 115–124. https://doi.org/10.1002/navi.57 Belehaki, A., Tsagouri, I., Altadill, D., Blanch, E., Borries, C., Buresova, D., Chum, J., Galkin, I., Juan, J. M., Segarra, A., Timoté, C. C., Tziotziou, K., Verhulst, T. G. W., & Watermann, J. (2020). An overview of methodologies for real-time detection, characterisation and tracking of traveling ionospheric disturbances developed in the TechTIDE project. Journal of Space Weather and Space Climate, 10, 42. Boehm, J., Niell, A., Tregoning, P., & Schuh, H. (2006). Global mapping function (GMF): A new empirical mapping function based on numerical weather model data. Geophysical Research Letters. https://doi.org/10.1029/2005GL025546 Ciraolo, L., Azpilicueta, F., Brunini, C., Meza, A., & Radicella, S. M. (2007). Calibration errors on experimental slant total electron content (TEC) determined with GPS. Journal of Geodesy, 81, 111–120. https://doi.org/10.1007/s00190-006-0093-1 Collins, P., Lahaye, F., Héroux, P., & Bisnath, S. (2008). Precise point positioning with ambiguity resolution using the decoupled clock model. 21st International Technical Meeting of the Satellite Division of the Institute of Navigation, ION GNSS 2008, 3, 1549–1556. De Oliveira, P. S., Morel, L., Fund, F., Legros, R., Monico, J. F. G., Durand, S., & Durand, F. (2017). Modeling tropospheric wet delays with dense and sparse network configurations for PPP–RTK. GPS Solutions, 21, 237–250. https://doi.org/10.1007/s10291-016-0518-0 Wen, D., Zhang, X., Tong, Y., Zhang, G., Zhang, M., & Leng, R. (2015).
GPS-based ionospheric tomography with a constrained adaptive simultaneous algebraic reconstruction technique. Journal of Earth System Science, 124(2), 283–289. https://doi.org/10.1007/s12040-015-0542-4 Ge, M., Gendt, G., Rothacher, M., Shi, C., & Liu, J. (2008). Resolution of GPS carrier-phase ambiguities in Precise Point Positioning (PPP) with daily observations. Journal of Geodesy, 82, 389–399. https://doi.org/10.1007/s00190-007-0187-4 Geng, J., & Shi, C. (2017). Rapid initialization of real-time PPP by resolving undifferenced GPS and GLONASS ambiguities simultaneously. Journal of Geodesy, 91, 361–374. https://doi.org/10.1007/s00190-016-0969-7 Geng, J., Li, X., Zhao, Q., et al. (2019). Inter-system PPP ambiguity resolution between GPS and BeiDou for rapid initialization. Journal of Geodesy, 93(3), 383–398. Geng, J., Meng, X., Dodson, A., et al. (2010). Rapid re-convergences to ambiguity-fixed solutions in precise point positioning. Journal of Geodesy, 84, 705–714. https://doi.org/10.1007/s00190-010-0404-4 Guo, F., Li, X., Zhang, X., & Wang, J. (2017). The contribution of Multi-GNSS Experiment (MGEX) to precise point positioning. Advances in Space Research, 59, 2714–2725. https://doi.org/10.1016/j.asr.2016.05.018 Haines, G. V. (1988). Computer programs for spherical cap harmonic analysis of potential and general fields. Computers & Geosciences, 14, 413–447. https://doi.org/10.1016/0098-3004(88)90027-1 Hernández-Pajares, M., Juan, J. M., & Sanz, J. (1999). New approaches in global ionospheric determination using ground GPS data. Journal of Atmospheric and Solar-Terrestrial Physics, 61, 1237–1247. https://doi.org/10.1016/S1364-6826(99)00054-1 Hernández-Pajares, M., Juan, J. M., & Sanz, J. (2006). Medium-scale traveling ionospheric disturbances affecting GPS measurements: Spatial and temporal analysis. Journal of Geophysical Research: Space Physics. https://doi.org/10.1029/2005JA011474 Hernández-Pajares, M., Juan, J. M., Sanz, J., & Aragón-Àngel, A. (2012). Propagation of medium scale traveling ionospheric disturbances at different latitudes and solar cycle conditions. Radio Science, 47, 1–22. https://doi.org/10.1029/2011RS004951 Hernández-Pajares, M., Juan, J. M., Sanz, J., Aragón-Àngel, À., García-Rigo, A., Salazar, D., & Escudero, M. (2011). The ionosphere: Effects, GPS modeling and the benefits for space geodetic techniques. Journal of Geodesy, 85, 887–907. https://doi.org/10.1007/s00190-011-0508-5 Hu, J., Zhang, X., Li, P., Ma, F., & Pan, L. (2019). Multi-GNSS fractional cycle bias products generation for GNSS ambiguity-fixed PPP at Wuhan University. GPS Solutions, 24, 15. https://doi.org/10.1007/s10291-019-0929-9 Jakowski, N., Mayer, C., Wilken, V., & Hoque, M. M. (2008). Ionospheric impact on GNSS signals. Física de la Tierra, 20, 11. Laurichesse, D., Mercier, F., Berthias, J. P., Broca, P., & Cerri, L. (2009). Integer ambiguity resolution on undifferenced GPS phase measurements and its application to PPP and satellite precise orbit determination. Navigation, 56, 135–149. Li, P., Zhang, X., Ren, X., Zuo, X., & Pan, Y. (2016). Generating GPS satellite fractional cycle bias for ambiguity-fixed precise point positioning. GPS Solutions, 20, 771–782. https://doi.org/10.1007/s10291-015-0483-z Li, X., Ge, M., Dai, X., Ren, X., Fritsche, M., Wickert, J., & Schuh, H. (2015a). Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo. Journal of Geodesy, 89, 607–635.
https://doi.org/10.1007/s00190-015-0802-8 Li, X., Li, X., Liu, G., Feng, G., Yuan, Y., Zhang, K., & Ren, X. (2019a). Triple-frequency PPP ambiguity resolution with multi-constellation GNSS: BDS and Galileo. Journal of Geodesy, 93, 1105–1122. https://doi.org/10.1007/s00190-019-01229-x Li, X., Li, X., Ma, F., Yuan, Y., Zhang, K., Zhou, F., & Zhang, X. (2019b). Improved PPP ambiguity resolution with the assistance of multiple LEO constellations and signals. Remote Sensing, 11, 408. Li, X., Ma, F., Li, X., Lv, H., Bian, L., Jiang, Z., & Zhang, X. (2019c). LEO constellation-augmented multi-GNSS for rapid PPP convergence. Journal of Geodesy, 93, 749–764. https://doi.org/10.1007/s00190-018-1195-2 Li, X., & Zhang, X. (2012). Improving the estimation of uncalibrated fractional phase offsets for PPP ambiguity resolution. Journal of Navigation, 65, 513–529. https://doi.org/10.1017/S0373463312000112 Li, X., Zhang, X., & Ge, M. (2011). Regional reference network augmented precise point positioning for instantaneous ambiguity resolution. Journal of Geodesy, 85, 151–158. https://doi.org/10.1007/s00190-010-0424-0 Li, Z., Yuan, Y., Wang, N., Hernandez-Pajares, M., & Huo, X. (2015b). SHPTS: Towards a new method for generating precise global ionospheric TEC map based on spherical harmonic and generalized trigonometric series functions. Journal of Geodesy, 89, 331–345. https://doi.org/10.1007/s00190-014-0778-9 Liu, Y., Lou, Y., Ye, S., Zhang, R., Song, W., Zhang, X., & Li, Q. (2017). Assessment of PPP integer ambiguity resolution using GPS, GLONASS and BeiDou (IGSO, MEO) constellations. GPS Solutions, 21, 1647–1659. https://doi.org/10.1007/s10291-017-0641-6 Lou, Y., Zheng, F., Gu, S., Wang, C., Guo, H., & Feng, Y. (2016). Multi-GNSS precise point positioning with raw single-frequency and dual-frequency measurement models. GPS Solutions, 20, 849–862. https://doi.org/10.1007/s10291-015-0495-8 Mautz, R., Ping, J., Heki, K., Schaffrin, B., Shum, C., & Potts, L. (2005). Efficient spatial and temporal representations of global ionosphere maps over Japan using B-spline wavelets. Journal of Geodesy, 78, 662–667. https://doi.org/10.1007/s00190-004-0432-z Odijk, D. (2002). Fast precise GPS positioning in the presence of ionospheric delays. Delft University of Technology. Odijk, D., Zhang, B., Khodabandeh, A., Odolinski, R., & Teunissen, P. J. G. (2016). On the estimability of parameters in undifferenced, uncombined GNSS network and PPP–RTK user models by means of S-system theory. Journal of Geodesy, 90, 15–44. https://doi.org/10.1007/s00190-015-0854-9 Olivares-Pulido, G., Terkildsen, M., Arsov, K., Teunissen, P. J. G., Khodabandeh, A., & Janssen, V. (2019). A 4D tomographic ionospheric model to support PPP–RTK. Journal of Geodesy, 93, 1673–1683. https://doi.org/10.1007/s00190-019-01276-4 Psychas, D., Verhagen, S., Liu, X., Memarzadeh, Y., & Visser, H. (2018). Assessment of ionospheric corrections for PPP–RTK using regional ionosphere modelling. Measurement Science and Technology, 30, 014001. https://doi.org/10.1088/1361-6501/aaefe5 Ren, X., Chen, J., Li, X., & Zhang, X. (2020). Ionospheric total electron content estimation using GNSS carrier phase observations based on zero-difference integer ambiguity: Methodology and assessment. IEEE Transactions on Geoscience and Remote Sensing. https://doi.org/10.1109/TGRS.2020.2989131 Saastamoinen, J. (1972). Contributions to the theory of atmospheric refraction. Bulletin Géodésique, 105, 279–298.
https://doi.org/10.1007/bf02521844 Saito, A., Fukao, S., & Miyazaki, S. (1998). High resolution mapping of TEC perturbations with the GSI GPS Network over Japan. Geophysical Research Letters, 25, 3079–3082. https://doi.org/10.1029/98GL52361 Schaer, S. (1999). Mapping and predicting the Earth's ionosphere using the Global Positioning System. Unpublished Ph.D. thesis, University of Bern, Bern. Schmidt, M., Bilitza, D., Shum, C. K., & Zeilhofer, C. (2008). Regional 4-D modeling of the ionospheric electron density. Advances in Space Research, 42, 782–790. https://doi.org/10.1016/j.asr.2007.02.050 Teunissen, P., Odijk, D., & Zhang, B. (2010). PPP–RTK: Results of CORS network-based PPP with integer ambiguity resolution. Journal of Aeronautics, Astronautics and Aviation, Series A, 42, 223–229. Teunissen, P. J. G. (1995). The least-squares ambiguity decorrelation adjustment: A method for fast GPS integer ambiguity estimation. Journal of Geodesy, 70, 65–82. https://doi.org/10.1007/BF00863419 Tsugawa, T., Kotake, N., Otsuka, Y., & Saito, A. (2007). Medium-scale traveling ionospheric disturbances observed by GPS receiver network in Japan: A short review. GPS Solutions, 11, 139–144. https://doi.org/10.1007/s10291-006-0045-5 Tsugawa, T., Saito, A., & Otsuka, Y. (2004). A statistical study of large-scale traveling ionospheric disturbances using the GPS network in Japan. Journal of Geophysical Research: Space Physics. https://doi.org/10.1029/2003ja010302 Wang, K., Khodabandeh, A., & Teunissen, P. (2017). A study on predicting network corrections in PPP–RTK processing. Advances in Space Research, 60, 1463–1477. https://doi.org/10.1016/j.asr.2017.06.043 Wen, D., Yuan, Y., Ou, J., Huo, X., & Zhang, K. (2007). Three-dimensional ionospheric tomography by an improved algebraic reconstruction technique. GPS Solutions, 11, 251–258. https://doi.org/10.1007/s10291-007-0055-y Wen, D., Yuan, Y., Ou, J., Zhang, K., & Liu, K. (2008). A hybrid reconstruction algorithm for 3-D ionospheric tomography. IEEE Transactions on Geoscience and Remote Sensing, 46, 1733–1739. https://doi.org/10.1109/TGRS.2008.916466 Wübbena, G., Schmitz, M., & Bagge, A. (2005). PPP–RTK: Precise point positioning using state-space representation in RTK networks. In Proceedings of ION GNSS, pp. 13–16. Xiang, Y., Gao, Y., & Li, Y. (2020). Reducing convergence time of precise point positioning with ionospheric constraints and receiver differential code bias modeling. Journal of Geodesy. https://doi.org/10.1007/s00190-019-01334-x Zha, J., Zhang, B., Liu, T., & Hou, P. (2021). Ionosphere-weighted undifferenced and uncombined PPP–RTK: Theoretical models and experimental results. GPS Solutions. https://doi.org/10.1007/s10291-021-01169-0 Zhang, B., Ou, J., Yuan, Y., & Li, Z. (2012). Extraction of line-of-sight ionospheric observables from GPS data using precise point positioning. Science China Earth Sciences, 55, 1919–1928. https://doi.org/10.1007/s11430-012-4454-8 Zhang, B., Teunissen, P. J. G., & Odijk, D. (2011). A novel un-differenced PPP–RTK concept. Journal of Navigation, 64, S180–S191. https://doi.org/10.1017/s0373463311000361 Zhang, B., Chen, Y., & Yuan, Y. (2018). PPP–RTK based on undifferenced and uncombined observations: Theoretical and practical aspects. Journal of Geodesy, 93(7), 1–14. Zhang, B., Hou, P., Zha, J., & Liu, T. (2021). Integer-estimable FDMA model as an enabler of GLONASS PPP–RTK. Journal of Geodesy, 95, 91. https://doi.org/10.1007/s00190-021-01546-0 Zheng, D., Yao, Y., Nie, W., Chu, N., Lin, D., & Ao, M. (2020).
A new three-dimensional computerized ionospheric tomography model based on a neural network. GPS Solutions, 25, 10. https://doi.org/10.1007/s10291-020-01047-1 Zheng, D., Yao, Y., Nie, W., Liao, M., Liang, J., & Ao, M. (2021). Ordered subsets-constrained ART algorithm for ionospheric tomography by combining VTEC data. IEEE Transactions on Geoscience and Remote Sensing, 59, 7051–7061. https://doi.org/10.1109/TGRS.2020.3029819 Zheng, D., Zheng, H., Wang, Y., Nie, W., Li, C., Ao, M., Hu, W., & Zhou, W. (2017). Variable pixel size ionospheric tomography. Advances in Space Research, 59, 2969–2986. https://doi.org/10.1016/j.asr.2017.03.031 Zheng, D. Y., Yao, Y. B., Nie, W. F., Yang, W. T., Hu, W. S., Ao, M. S., & Zheng, H. W. (2018). An improved iterative algorithm for ionospheric tomography reconstruction by using the automatic search technology of relaxation factor. Radio Science, 53, 1051–1066. https://doi.org/10.1029/2018rs006588 The numerical calculations have been done on the supercomputing system in the Supercomputing Center of Wuhan University. We also gratefully acknowledge the use of the Generic Mapping Tool (GMT) software. This work was funded by the National Science Fund for Distinguished Young Scholars (No. 41825009), the Changjiang Scholars Program, the National Natural Science Foundation of China (Nos. 42174031 and 41904026), the Technology Innovation Special Project (Major Program) of Hubei Province of China (No. 2019AAA043), and the initial scientific research fund of talents in Minjiang University (No. MJY21039). Chinese Antarctic Center of Surveying and Mapping, Wuhan University, Wuhan, 430079, China: Xiaohong Zhang. School of Geodesy and Geomatics, Wuhan University, Wuhan, 430079, China: Xiaohong Zhang, Xiaodong Ren, Dengkui Mei & Wanke Liu. Department of Surveying and Mapping Engineering, Minjiang University, Fuzhou, 350118, China: Jun Chen. Department of Geodesy, GeoForschungsZentrum (GFZ), Telegrafenberg, 14473, Potsdam, Germany: Xiang Zuo. Xiaohong Zhang and Xiaodong Ren proposed the idea and drafted the article; Jun Chen and Dengkui Mei carried out the simulation and the evaluation in data analysis; Xiang Zuo and Wanke Liu assisted in article revision. All authors read and approved the final manuscript. Correspondence to Xiaohong Zhang. Zhang, X., Ren, X., Chen, J. et al. Investigating GNSS PPP–RTK with external ionospheric constraints. Satell Navig 3, 6 (2022). https://doi.org/10.1186/s43020-022-00067-1 Keywords: PPP–RTK; Ionospheric delay information; Ionospheric variance; Convergence time; GNSS PPP and PPP-RTK
The influence of low-intensity physiotherapeutic ultrasound on the initial stage of bone healing in rats: an experimental and simulation study Aldo José Fontes-Pereira1, Marcio Amorim2, Fernanda Catelani1, Daniel Patterson Matusin3, Paulo Rosa1, Douglas Magno Guimarães4, Marco Antônio von Krüger1 & Wagner Coelho de Albuquerque Pereira1 Low-intensity physiotherapeutic ultrasound has been used in physical therapy clinics; however, there remain some scientific issues regarding the bone-healing process. The objective of this study was to investigate the influence of low-intensity physiotherapeutic ultrasound on the initial stage of bone healing in rats. Twenty-two male adult rats were assessed quantitatively and qualitatively using radiographic, biochemical, and histological analyses. Numerical simulations were also performed. Fractures in animals in the ultrasound group (n = 11) were treated with low-intensity ultrasound (pulsed mode, duty cycle 20 %) for 10 min daily at an intensity of 40 mW/cm2 SATA (1.0 MHz) for 10 days. Fractures in animals in the control group (n = 11) were not treated. Alkaline phosphatase levels were non-significantly higher in the ultrasound group than in the control group in the time intervals considered (t(13) = 0.440; 95 % confidence interval (CI) −13.79 to 20.82; p = 0.67). Between-group serum calcium levels were also not significantly different (t(13) = −0.842; 95 % CI −0.48 to 0.21; p = 0.42). Finally, there were no significant differences in radiological scores between the two groups (U = 118; 95 % CI −1.99 to 1.99; p = 0.72). However, the diameter of the newly formed bone tissue was greater and more evident in the ultrasound group. Thirteen days after fracture, there was no significant between-group differences in bone-healing processes, although the increased alkaline phosphatase levels and diameter of new bone tissue need to be further investigated. Even though it is one of the most rigid and resilient substances in the human body, bone tissue is constantly exposed to conditions, such as injury or fracture, that may affect its structural integrity [1, 2]. The occurrence of a fracture triggers a complex process of healing to restore bone mechanics and functional integrity [1, 3]. This process is dynamic and features well-defined stages of repair regulated by a variety of cellular elements and stimulant agents [4]. Sometimes, there can be complications in this process, resulting in retardation of fracture union with the risk of pseudoarthrosis [5] and other consequences, such as long and painful treatment, missed work, reduction in patients' quality of life and general well-being, and increased public health-care expenditures [6, 7]. Thus, efforts to determine treatments that accelerate the bone consolidation process are justified [2, 8]. Physiotherapy offers several options for treating fractures, including therapeutic ultrasound (TUS), which is normally used in physical therapy clinics [9–11]. According to Wolff's law, ultrasonic stimulation generates micro-mechanical forces and tension on the fracture site, resulting in accelerated bone formation. It has also been mentioned that the use of low-intensity pulsed ultrasound stimulation (LIPUS) increases bone metabolism [8, 12, 13], resulting in accelerated bone healing by abbreviating inflammation and soft and hard callus formation [14]. Well-documented [8, 12, 14] studies with low-intensity ultrasonic waves have shown evidence of their effects on bone healing. 
Commercial equipment especially designed to provide low-intensity ultrasound for the purpose of bone healing is normally set at a fixed intensity of 30 mW/cm2. However, this equipment costs about 10 times more than does TUS equipment commonly used for general purposes in physical therapy clinics. An investigation of this potential use should include careful steps to enable standardization of the duration and intensity of irradiation required for effective treatment. To date, no conclusive investigation of the cellular and biochemical mechanisms triggered by TUS [2, 9] has been reported. The present study analyzed the radiographic, biochemical, and histological effects of TUS at an intensity of 40 mW/cm2 in induced fractures of rat tibias, with the objective of evaluating the effect of the ultrasound intensity provided by common TUS equipment on the initial stage of fracture healing. The study was approved by the Ethics Committee in Research of the Evandro Chagas Institute, Pará, Brazil (protocol n.009/2012) according to the guidelines for the care and use of laboratory animals [15], National Legislation of Animal Vivisection in Force (Federal Law 11,794 of October 8, 2008), and international and national ethical instructions (Laws 6638/79, 9605/98, Decree 24665/34). The sample consisted of 22 male rats (Rattus norvegicus, McCoy strain) at least 90 days of age and weighing 325 ± 25 g. Each rat was maintained at a controlled temperature (23 ± 2 °C) in cages measuring 45 × 15 × 30 cm and lined with autoclavable rice straw that was exchanged on alternate days. The animals received water and food ad libitum. The rats were randomly divided into two experimental groups: a control group (CG), consisting of 11 rats that underwent induced fractures in the middle third of the right tibia without receiving any treatment, and an ultrasound group (USG), consisting of 11 animals that underwent the same fracture procedure and received low-intensity TUS. Fracture induction Prior to fracture induction, the rats were anesthetized with intraperitoneal ketamine (80 mg/kg) and xylazine (15 mg/kg) solution [2, 8] at a dose of 0.6 mL per 100 g body weight. After sedation, the rats were placed in the lateral decubitus position and then subjected to fracture of the middle third of the diaphysis of the right tibia of the hind limb, using the equipment previously described [2]. Afterwards, the rats were placed in cages, with a maximum of four rats per cage, and subjected to analgesic therapy during the entire experimental period (200 mg/kg paracetamol dissolved in water). The animals were not immobilized after the fracture [2, 16, 17]. After 24 h, TUS treatment was applied to the fracture gap while the animal was in the lateral decubitus position. The rats were not sedated during treatment. Stationary ultrasound equipment (BIOSET® model SONACEL PLUS, Bioset Industry Electronic Technology Ltda., Rio Claro, SP, Brazil) was used on the fracture site, with a frequency of 1 MHz, intensity of 40 mW/cm2 (SATA), pulsed mode, duty cycle 20 %, pulse repetition frequency of 100 Hz, pulse width of 2 ms, and an effective radiating area of 0.79 cm2. The equipment was found to meet the IEC 60601-1-2:2010 regulation, according to the calibration made by the Electrical Engineering Department of the Federal University of Pará (Brazil). The coupling material was a commercial water-soluble gel. Treatment was performed for 10 min once per day for five consecutive days, followed by a 2-day period without irradiation. This sequence was repeated for a total of 10 sessions.
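Because the dose is specified as a temporal-average intensity, it is worth relating the pulse settings to the average output. The sketch below derives the duty cycle from the stated pulse width and repetition frequency and the implied temporal-peak intensity, assuming the usual relation I_SATA = I_SATP × duty cycle; this is an illustrative check, not the device's documented behavior.

# Illustrative check, assuming I_SATA = I_SATP * duty_cycle (not vendor documentation).
pulse_width = 2e-3          # s (pulse width of 2 ms, as stated)
prf = 100.0                 # Hz (pulse repetition frequency, as stated)
i_sata = 40.0               # mW/cm^2, spatial-average temporal-average intensity

duty_cycle = pulse_width * prf        # 2 ms * 100 Hz = 0.2, i.e., the stated 20 %
i_satp = i_sata / duty_cycle          # implied spatial-average temporal-peak intensity
print(f"duty cycle = {duty_cycle:.0%}, I_SATP = {i_satp:.0f} mW/cm^2")
# A 20 % duty cycle at 40 mW/cm^2 SATA implies about 200 mW/cm^2 during the pulse.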
Post-treatment procedures Once the treatment protocol was complete, the animals fasted for 12 h. Then, they were anesthetized intraperitoneally with hydrochloride ketamine (80 mg/kg) and xylazine (15 mg/kg) at a dose of 0.6 mL per 100 g body weight. While completely sedated, the rats underwent exsanguination by cardiac puncture (~5 mL of blood was collected for biochemical analysis) [2] and were then euthanized by decapitation. Analysis of the bone matrix synthesis biochemical markers was performed using a Labtest kit (Vital Scientific NV, Holliston, MA, USA) with absorbance at 590 nm for measurement of serum alkaline phosphatase level and a laboratory kit (Vital Scientific) with absorbance at 570 nm to determine the concentration of serum calcium. Both analyses were performed using the Vitalab Selectra and Chemistry Analyzer automated system (Vital Scientific). For radiological evaluation, the rats' right hind limbs were disarticulated from the hip, fixed in buffered 10 % paraformaldehyde, and subsequently submitted to analysis. Radiography was obtained in the lateral view with the same radiographic technique (40 kV × 2 mA), using an exposure time of 0.6 s, and always at the same distance from the X-ray tube (30 cm). Radiographic analysis was performed by two independent observers blinded to the treatment group who examined the callus formation, the quality of bone union, and bone remodeling, according to the radiographic scoring system for osseous healing [6, 18, 19]. This radiographic scoring system has three categories (periosteal reaction, quality of bone union, and remodeling) regarding fracture healing. The first two categories are scored from 0 to 3 points and the third category from 0 to 2, so the maximum expected score is 8 (the sum of the maximum score for each category) for complete bone fracture repair (Table 1). Table 1 Radiographic scoring system for fracture healing [6, 18, 19] Finally, the right hind limb was dissected until the tibia was totally exposed and immersed for 36 h in a 5 % ethylenediaminetetraacetic acid (EDTA) decalcifying solution. Then, histological slides were prepared with a standard histotechnical procedure and a microtome (Leica Microsystems, Wetzlar, Germany). Tissue sections 5 μm thick were obtained from the region of the bone callus and stained with hematoxylin-eosin [20]. Qualitative analysis of the slides, evaluating bone formation by estimating the thickness of the newly formed tissue, was performed with an optical microscope (Carl Zeiss Microscopy LLC, Thornwood, NY, USA) coupled to an AxioCam HRC video camera (Carl Zeiss Microscopy LLC, Thornwood, NY, USA). Simulation configuration Numerical, two-dimensional simulations of wave propagation were performed with SimSonic software developed at the Laboratoire d'Imagerie Paramétrique (CNRS, University Paris 6, Paris, France), employing the Finite-Difference Time-Domain (FDTD) method with elastodynamic equations (Fig. 1) [21]. Diagram of the numerical model configuration A numerical model with a spatial resolution of 0.01 mm represented the region of the fracture, and it consisted of two cortical plates with bone marrow inside, surrounded above and below by muscle, fat, and skin. Interruption of the cortical bone represented the fracture gap. The thickness of the cortical layers and bone marrow was 0.04 and 0.1 mm, respectively, and the fracture gap was 4 mm.
The thickness of the muscle, fat, and skin was 1.73, 0.58, and 0.55 mm, respectively. These values are averages of measurements obtained from the radiography of the animals used in the experiments. In the two-dimensional numerical model, a line source measuring 10 mm placed on the skin layer's upper surface generated a longitudinal pulsed wave of 1 MHz with a duration of 3 μs. Fifteen receivers (R1–R15) positioned along the propagation axis recorded pressure for 20 μs, starting just before pulse generation. The receivers R2, R4, R6, R8, R10, R12, and R14 were located in the middle of each layer, while the receivers R3, R5, R7, R9, R11, R13, and R15 were located over the interfaces between layers (1 pixel apart). Perfectly Matched Layers (PML) were used as boundary conditions, and absorption within the tissues was disregarded. The elastic constants (C11, C22, C33, and C12) and densities (Table 2) obtained from the literature [21] were used to model the isotropic mechanical responses of each material. The longitudinal velocity of the isotropic materials and the calculated acoustic impedance were within the range of values reported in the literature [22]. Table 2 Mechanical properties of tissues [21] Parameters evaluated The parameters used in the analysis were time-of-flight of the first arriving signal (TOFFAS), sound pressure level (SPL), and amplitude root mean square (RMS) [23]. The TOFFAS measures the propagation time of the ultrasound wave from the emitter transducer to the corresponding receiving transducer, and responds to impedance differences in the propagation medium. The TOFFAS was obtained, for each receiver, by fitting a parabola to five amplitude points around the first crossing of a given threshold. The SPL and RMS were used to evaluate ultrasound wave amplitudes, providing, respectively, the attenuation and the energy of the signal of each receiver. While the SPL was calculated based on the peak of the wave (Eq. 1), the RMS (Eq. 2) used a temporal window of 10.9 μs from the first arriving signal (FAS), as follows: $$\mathrm{SPL}=20\cdot \log_{10}\left(\frac{A_{\mathrm{R}k}}{A_{\mathrm{R}1}}\right)$$ where \(A_{\mathrm{R}k}\) is the peak amplitude of the signal of receivers R2 to R15 (k) and \(A_{\mathrm{R}1}\) corresponds to the amplitude of the reference signal (receiver R1). $$\mathrm{RMS}=\sqrt{\frac{1}{N}\sum_{n=1}^{N}A_{\mathrm{R}k}^{2}(n)}$$ where \(A_{\mathrm{R}k}(n)\) corresponds to the n-th sample of the signal of receiver Rk within the temporal window and N is the total number of samples in the window. Power tables for Cohen's d effect size were used to calculate the sample size: Cohen's d = 1.2 two-tailed, α = 0.05, and power of 0.8 [24]. Data normality was examined using the Kolmogorov-Smirnov test. Student's t test (t) was used to evaluate differences in biochemical markers, and the Mann-Whitney U test (U) was used to evaluate differences in radiographic scores between the groups. The inter-observer agreement was calculated using the kappa coefficient (K). Statistical analysis was performed using SPSS (version 20.0, IBM Corporation, Armonk, NY, USA). The level of significance was α = 0.05, with a confidence interval of 95 %.
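Before turning to the results, the three simulation parameters can be made concrete with a short sketch: TOFFAS is estimated from the first threshold crossing refined by a parabolic fit over five points, as described above, while SPL and RMS follow Eqs. (1) and (2). The sampling rate, the threshold, and the test signal are placeholders, since these details of the SimSonic output are not given in the text.

# Illustrative sketch of TOFFAS, SPL (Eq. 1) and RMS (Eq. 2); FS is an assumed rate.
import numpy as np

FS = 100e6                    # assumed sampling rate of the simulated signals (Hz)
WINDOW = 10.9e-6              # temporal window for RMS (s), as stated in the text

def toffas(signal, threshold):
    """First threshold crossing refined by a parabolic fit over five points."""
    a = np.abs(signal)
    idx = int(np.argmax(a >= threshold))            # first sample above threshold
    peak = idx + int(np.argmax(a[idx:idx + 25]))    # nearby local amplitude maximum
    lo = max(peak - 2, 0)
    seg = a[lo:lo + 5]                              # five amplitude points
    p = np.polyfit(np.arange(len(seg)), seg, 2)     # parabolic adjustment
    vertex = -p[1] / (2.0 * p[0])                   # sub-sample peak position
    return (lo + vertex) / FS

def spl_db(signal, reference):
    """Eq. (1): 20*log10 of the peak-amplitude ratio to the reference receiver R1."""
    return 20.0 * np.log10(np.max(np.abs(signal)) / np.max(np.abs(reference)))

def rms(signal, fas_time):
    """Eq. (2): RMS amplitude in the 10.9-us window starting at the FAS."""
    start = int(fas_time * FS)
    seg = signal[start:start + int(WINDOW * FS)]
    return np.sqrt(np.mean(seg ** 2))

t = np.arange(0, 20e-6, 1 / FS)                     # toy 1 MHz burst arriving at 5 us
sig = np.sin(2 * np.pi * 1e6 * (t - 5e-6)) * (t > 5e-6)
fas = toffas(sig, 0.5)
print(fas, rms(sig, fas), spl_db(sig, sig))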
The USG presented higher levels of alkaline phosphatase (86.38 ± 18.94 U/L) than did the CG (82.86 ± 10.03 U/L) for the time interval considered, but with no statistical significance (t(13) = 0.440; 95 % CI −13.79 to 20.82; p = 0.67). Serum calcium levels also were not significantly different (t(13) = −0.842; 95 % CI −0.48 to 0.21; p = 0.42); however, the CG had higher levels of serum calcium (10.04 ± 0.26 mg/dL) than did the USG (9.90 ± 0.35 mg/dL). Seven samples were discarded because of coagulation. The qualitative histological analysis revealed the formation of immature bone in both groups. However, the diameter of the newly formed bone tissue was greater and more evident in the USG (Fig. 2). Formation of an immature bone in the control group (CG) (a hematoxylin and eosin (H&E) ×20 and b H&E ×40) and the ultrasound group (USG) (c H&E ×20 and d H&E ×40) was similar. The diameter of the newly formed bone tissue (asterisk) was greater and more evident in the USG The inter-observer reliability for the total radiographic score was K = 0.64 (p < 0.001), and for the three categories (periosteal reaction, quality of bone union, and remodeling) it was K = 0.63 (p < 0.001), K = 0.72 (p < 0.001), and K = 1.00 (p < 0.001), respectively. The scoring system for radiographic fracture healing showed no significant difference between the groups (U = 118; 95 % CI −1.99 to 1.99; p = 0.72) (Fig. 3). Radiographic scoring system for fracture healing and categories (periosteal reaction, quality of bone union, and remodeling) Figure 4a shows wave propagation along the tissue, Fig. 4b shows an example of a typical signal detected by the receivers, and Fig. 4c presents the TOFFAS, which increased with the depth of the receivers and the thickness of tissue. It should be noted that between the signal of receiver R1 (positioned at the top of the model) and R2 (0.28 mm from the top of the model), there is a computational adjustment mechanism for TOFFAS prediction, so this parameter is considered only from receiver R2. Figure 4d, e shows the attenuation and signal energy of each receiver, respectively. a Wave propagation along the tissue. b Signal of receiver R8 positioned in the center of the fracture gap showing the time-of-flight of the first arriving signal (TOFFAS). c TOFFAS of receivers R1 to R15. d Sound pressure level (SPL) of receivers R1 to R15. e Root mean square (RMS) of receivers R1 to R15 The use of conventional TUS for fracture treatment could mean a reduction in treatment costs for bone fractures, thereby making this therapeutic modality more accessible to the general population. Thus, this work was intended to elucidate the effects of TUS on the bone-healing process. The animal model R. norvegicus (McCoy strain) was used, as it has pathophysiological and biomechanical properties similar to those of the human bone [11]. A closed-fracture model was chosen to reduce the risk of infection, which would alter the consolidation process [2]. Several studies [2, 17] did not have success using fracture stabilization methods in rats, whether by invasive methods, such as Kirschner wires, or noninvasive methods, such as a plaster splint. Several complications arose, namely bone fractures in other regions of the limb, compartment syndrome, and infection at the site of the invasive devices. Thus, in the present study, none of the fractures was immobilized. The method of treatment adopted for this study follows the usual human physical therapy treatment protocol. According to Einhorn [25], between 10 and 16 days, it is possible to identify four healing stages.
Our protocol lasted 13 days, so we can assume that the healing process was under way and that our results indicate that the ultrasonic dose we used was not able to accelerate this process. Biochemical analysis was performed using the indices of alkaline phosphatase and serum calcium. These markers were used to evaluate the process of bone formation [13, 16, 26], since calcium is a component of the bone matrix and alkaline phosphatase activity is associated with osteoblastic formation [26, 27]. The results of this biochemical analysis were not statistically significant, although ultrasound did promote increased alkaline phosphatase activity in the treated versus the untreated rats. These results were similar to those of a previous study [2] that tested pulsed TUS (0.2 W/cm2) in rats with bone fractures after 5 weeks of treatment. Leung et al. [28], using LIPUS equipment designed for bone healing, showed that treatment for 20 min per day at 30 mW/cm2 increased levels of alkaline phosphatase. Guerino et al. [29] suggested that the increased level of alkaline phosphatase seen with TUS is possibly associated with increased cell proliferation and mineralization. Histological analysis showed that the animals treated with TUS had the thickest bone formation, suggesting that TUS influenced the consolidation of bone tissue. A similar result was found by Oliveira et al. [30] when LIPUS equipment was used at an intensity of 30 mW/cm2. However, our findings are not yet conclusive regarding bone-healing acceleration. Perhaps TUS can influence bone formation, but at least for the dose that we applied, no statistically significant difference was found. The qualitative radiological analysis showed a greater volume of bone callus in the animals in the USG. Similarly, in the study of Kumagai et al. [8], radiological evaluation showed that the area of hard callus was significantly higher in the LIPUS-treated animals than in the animals in the CG. However, the scoring system for radiographic fracture healing showed no significant difference between the groups. These findings support the use of quantitative measures (e.g., quantitative ultrasound, bone densitometry, quantitative computed tomography), which are often overlooked in traumatology studies. Such tools can be used to minimize the subjectivity of evaluators and to reveal real statistical differences between therapies. Simulation analysis showed that the TOFFAS values were consistent with the localization of the receivers in the numerical model, so that the highest values were observed at larger distances between receivers or when they were farther from the emitter. The arrangement of the receivers, chosen with respect to the thickness of each tissue, showed that the interior of the fracture exhibited the smallest change in TOFFAS (receivers R7–R9), i.e., the wave propagates faster inside the fracture. This fact should be taken into account if therapy depends on the propagation time of the ultrasound inside a given region. Catelani et al. [23], using SimSonic software (CNRS, University Paris 6, Paris, France), found that the interior regions of fractures near the cortical bone-bone marrow interface showed a greater reduction of TOFFAS values with respect to receivers located in the center of the fracture (due to the formation of lateral waves with a velocity compatible with the cortical bone). This could account in part for the mechanisms involved in fracture healing stimulated by LIPUS.
Additionally, in this study it was possible to note the reduction of TOFFAS through the interior of the fracture. The results of the SPL and RMS analyses showed similar behavior across the different receivers. The highest concentration of energy was observed in the receivers near the skin-fat, fat-muscle, and bone marrow-muscle interfaces. The receivers R4, R7, and R12 showed the highest energy concentration, followed by the adjacent receivers R3, R8, and R11. The impedance mismatch led to reflection of the ultrasound waves at the interfaces, which may explain the higher concentration of energy in the soft tissues and the attenuation of the ultrasound wave in the center of the model and in deeper regions, where the greatest attenuation would be expected. The intensity of LIPUS proposed in the literature (30 mW/cm2) seems to provide a good stimulus for accelerating the bone-healing process. This intensity would not be directly proportional to a higher concentration of energy at the fracture site, however, given that the intensity of the ultrasound used in this study was approximately 33% higher (40 mW/cm2) and it did not change the duration of bone repair. As power concentration and local induction heating are considered harmful to bone healing, factors such as the rapid passage of ultrasound through the fracture [23], which is associated with low-intensity TUS, are closely related to the acceleration of consolidation. Conversely, higher intensities would increase the risk of local heating, hindering this process. The ultrasound equipment commonly found in physical therapy clinics may influence the bone-healing process according to anecdotal evidence from books, blogs, and practitioners, but this claim has little scientific basis [9]. In this study, we could not find evidence that TUS influences the bone-healing process. On the other hand, some aspects of the procedure still need to be clarified (e.g., ultrasonic intensity, duration of treatment) with respect to the changes in levels of alkaline phosphatase and the diameter of new bone formation observed in this study. We propose that there is an optimal range for accelerating bone healing, around 30 mW/cm2 of ultrasonic intensity, so to use TUS it would be necessary to use attenuators. Thus, additional studies of different parameters at different stages of bone healing are needed to clarify the interaction between TUS and biological tissue. The present results suggest that TUS at the dose we used is not recommended for clinical use.
FAS: First arriving signal
FDTD: Finite-difference time domain
LIPUS: Low-intensity pulsed ultrasound stimulation
RMS: Root mean square
SPL: Sound pressure level
TOFFAS: Time-of-flight of the first arriving signal
TUS: Therapeutic ultrasound
Rutten S, Nolte PA, Korstjens CM, Klein-Nulend J. Low-intensity pulsed ultrasound affects RUNX2 immunopositive osteogenic cells in delayed clinical fracture healing. Bone. 2009;45:862–9. Fontes-Pereira AJ, Teixeira Rda C, de AJB O, Pontes RWF, de RSM B, Negrão JNC. The effect of low-intensity therapeutic ultrasound in induced fracture of rat tibiae. Acta Ortopédica Bras. 2013;21:18–22. Bilezikian JP, Raisz LG, Martin TJ. Principles of bone biology: two-volume set. San Diego: Academic Press Inc; 2008. Einhorn TA. The cell and molecular biology of fracture healing. Clin Orthop. 1998;355:S7–21. Jackson LC, Pacchiana PD. Common complications of fracture repair. Clin Tech Small Anim Pract. 2004;19:168–79. Sarban S, Senkoylu A, Isikan UE, Korkusuz P, Korkusuz F.
Can rhBMP-2 containing collagen sponges enhance bone repair in ovariectomized rats?: a preliminary study. Clin Orthop Relat Res. 2009;467:3113–20. Rose FRAJ, Oreffo ROC. Bone tissue engineering: hope vs hype. Biochem Biophys Res Commun. 2002;292:1–7. Kumagai K, Takeuchi R, Ishikawa H, Yamaguchi Y, Fujisawa T, Kuniya T, et al. Low-intensity pulsed ultrasound accelerates fracture healing by stimulation of recruitment of both local and circulating osteogenic progenitors. J Orthop Res. 2012;30:1516–21. Maggi LE, Omena TP, von Krüger MA, Pereira WCA. Didactic software for modeling heating patterns in tissues irradiated by therapeutic ultrasound. Braz J Phys Ther. 2008;12:204–14. Matheus JPC, Oliveira FB, Gomide LB, Milani J, Volpon JB, Shimano AC. Effects of therapeutic ultrasound on the mechanical properties of skeletal muscles after contusion. Braz J Phys Ther. 2008;12:241–7. Blouin S, Baslé MF, Chappard D. Rat models of bone metastases. Clin Exp Metastasis. 2005;22:605–14. Nolte PA, Klein-Nulend J, Albers GHR, Marti RK, Semeins CM, Goei SW, et al. Low-intensity ultrasound stimulates endochondral ossification in vitro. J Orthop Res. 2001;19:301–7. Alvarenga ÉC, Rodrigues R, Caricati-Neto A, Silva-Filho FC, Paredes-Gamero EJ, Ferreira AT. Low-intensity pulsed ultrasound-dependent osteoblast proliferation occurs by via activation of the P2Y receptor: role of the P2Y1 receptor. Bone. 2010;46:355–62. Pounder NM, Harrison AJ. Low intensity pulsed ultrasound for fracture healing: a review of the clinical evidence and the associated biological mechanism of action. Ultrasonics. 2008;48:330–8. Garber JC, Barbee RW, Bielitzki JT, Clayton LA, Donovan JC, Hendriksen CFM, et al. Guide for the care and use of laboratory animals. Natl Acad Press Wash DC. 2011;8:220. Giordano V, Knackfuss IG, Gomes Rdas C, Giordano M, Mendonça RG, Coutynho F. Influência do laser de baixa energia no processo de consolidação de fratura de tíbia: estudo experimental em ratos. Rev Bras Ortop. 2001;36:174–8. Pelker RR, Friedlaender GE. The Nicolas Andry Award-1995. Fracture healing. Radiation induced alterations. Clin Orthop. 1997;341:267–82. Johnson KD, Frierson KE, Keller TS, Cook C, Scheinberg R, Zerwekh J, et al. Porous ceramics as bone graft substitutes in long bone defects: a biomechanical, histological, and radiographic analysis. J Orthop Res. 1996;14:351–69. Yang C, Simmons DJ, Lozano R. The healing of grafts combining freeze-dried and demineralized allogeneic bone in rabbits. Clin Orthop. 1994;298:286–95. Angle SR, Sena K, Sumner DR, Virkus WW, Virdi AS. Combined use of low-intensity pulsed ultrasound and rhBMP-2 to enhance bone formation in a rat model of critical size defect. J Orthop Trauma. 2014;28:605–11. Bossy E, Talmant M, Laugier P. Three-dimensional simulations of ultrasonic axial transmission velocity measurement on cortical bone models. J Acoust Soc Am. 2004;115:2314–24. Protopappas VC, Fotiadis DI, Malizos KN. Guided ultrasound wave propagation in intact and healing long bones. Ultrasound Med Biol. 2006;32:693–708. Catelani F, Ribeiro APM, Melo CAV, Pereira WC, Machado CB. Ultrasound propagation through bone fractures with reamed intramedullary nailing: results from numerical simulations. Proc Meet Acoust. 2013;19:075093. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Routledge; 1988. Einhorn TA, Gerstenfeld LC. Fracture healing: mechanisms and interventions. Nat Rev Rheumatol. 2015;11:45–54. Mayr-Wohlfart U, Fiedler J, Günther K-P, Puhl W, Kessler S. 
Proliferation and differentiation rates of a human osteoblast-like cell line (SaOS-2) in contact with different bone substitute materials. J Biomed Mater Res. 2001;57:132–9. Notelovitz M. Androgen effects on bone and muscle. Fertil Steril. 2002;77(Supplement 4):34–41. Leung K-S, Lee W-S, Tsui H-F, Liu PP-L, Cheung W-H. Complex tibial fracture outcomes following treatment with low-intensity pulsed ultrasound. Ultrasound Med Biol. 2004;30:389–95. Guerino MR, Santi FP, Silveira RF, Luciano E. Influence of ultrasound and physical activity on bone healing. Ultrasound Med Biol. 2008;34:1408–13. de Oliveira P, Fernandes KR, Sperandio EF, Pastor FAC, Nonaka KO, Parizotto NA, et al. Comparative study of the effects of low-level laser and low-intensity ultrasound associated with Biosilicate® on the process of bone repair in the rat tibia. Rev Bras Ortop. 2012;47:102–7.
The authors want to thank Dr. Fábio Di Paulo and Dr. Ewerton Andrez for applying the radiographic scoring system for osseous healing to this work. Funding was provided by the following Brazilian agencies: National Council for Scientific and Technological Development (CNPq), ref. 308.627/2013-0; Coordination for the Improvement of Higher Education Personnel (CAPES), ref. 3485/2014; and Carlos Chagas Filho Foundation for Research Support in the State of Rio de Janeiro (FAPERJ), ref. E-26/203.041/2015. Please contact the author for data requests. AJFP, FC, DMG, DPM, PR, MAVK, and WCAP provided the concept/research design. AJFP, DMG, FC, DPM, PR, MAVK, and WCAP provided the data analysis and writing. AJFP, MA, FC, and DMG provided the data collection. AJFP, DMG, MA, and WCAP provided the facilities/equipment and the samples. All authors read and approved the final manuscript. The study was approved by the Ethics Committee in Research of the Evandro Chagas Institute, Pará, Brazil, protocol number 009/2012.
Author affiliations: Ultrasound Laboratory, Biomedical Engineering Program/COPPE/Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, Brazil (Aldo José Fontes-Pereira, Fernanda Catelani, Paulo Rosa, Marco Antônio von Krüger, and Wagner Coelho de Albuquerque Pereira); Laboratory of Morpho-physiopathology, State University of Pará, Belém, Pará, Brazil (Marcio Amorim); Military Police Central Hospital of Rio de Janeiro, Rio de Janeiro, Brazil (Daniel Patterson Matusin); Laboratory of Epithelial Biology, Department of Periodontics and Oral Medicine, University of Michigan School of Dentistry, Ann Arbor, MI, USA (Douglas Magno Guimarães). Correspondence to Aldo José Fontes-Pereira.
Fontes-Pereira, A.J., Amorim, M., Catelani, F. et al. The influence of low-intensity physiotherapeutic ultrasound on the initial stage of bone healing in rats: an experimental and simulation study. J Ther Ultrasound 4, 24 (2016). https://doi.org/10.1186/s40349-016-0068-5
Keywords: Fracture healing, Ultrasonic therapy
Adapted mean first passage time (MFPT)
Measurement of the proximity of node $i$ to node $j$:
$$S_{j}=\dfrac{\dfrac{\sum\limits_{i\notin C}\langle T_{ij}\rangle}{\left|C'\right|}-\dfrac{\sum\limits_{i\in C}\langle T_{ij}\rangle}{\left|C\right|}}{\dfrac{\sum\limits_{i}\langle T_{ij}\rangle}{\left|C\right|+\left|C'\right|}}$$
where $\langle T_{ij}\rangle$ refers to the average number of steps a random walker takes to reach node $j$ beginning at node $i$.
Adapted betweenness centrality (BC)
Measurement of the fraction of shortest paths containing a node of interest $j$:
$$S_{j}=\dfrac{\dfrac{\sum\limits_{s\in C}\frac{\sigma(s,t\mid j)}{\sigma(s,t)}}{\left|C\right|-\mathbb{1}_{C}(j)}-\dfrac{\sum\limits_{s\notin C}\frac{\sigma(s,t\mid j)}{\sigma(s,t)}}{\left|C'\right|-\mathbb{1}_{C'}(j)}}{\dfrac{\sum\limits_{s}\frac{\sigma(s,t\mid j)}{\sigma(s,t)}}{\left|C\right|+\left|C'\right|-1}}$$
where $\sigma(s,t)$ is the number of shortest paths (smallest number of edges) between $s$ and $t$, $\sigma(s,t\mid j)$ is the number of shortest paths between $s$ and $t$ passing through $j$, and $\mathbb{1}_{C}(j)$ (respectively $\mathbb{1}_{C'}(j)$) equals 1 if $j\in C$ (respectively $j\in C'$) and 0 otherwise.
Adapted shared neighbors (SN)
Measurement of the fraction of shared adjacent nodes (Tanimoto coefficient, $T_c$) between two nodes of interest:
$$S_{j}=\dfrac{\dfrac{\sum\limits_{i\in C}T_c(i,j)}{\left|C\right|}-\dfrac{\sum\limits_{i\notin C}T_c(i,j)}{\left|C'\right|}}{\dfrac{\sum\limits_{i}T_c(i,j)}{\left|C\right|+\left|C'\right|}},\qquad T_c(i,j)=\dfrac{\left|n_i\cap n_j\right|}{\left|n_i\cup n_j\right|}$$
where $n_i = \{\text{neighbors of } i\}$ and $n_j = \{\text{neighbors of } j\}$.
Adapted inverse shortest path (ISP)
Measurement based on the length of the shortest path connecting two nodes of interest:
$$S_{j}=\dfrac{\dfrac{\sum\limits_{i\in C}\frac{1}{\ell_s(i,j)}}{\left|C\right|}-\dfrac{\sum\limits_{i\notin C}\frac{1}{\ell_s(i,j)}}{\left|C'\right|}}{\dfrac{\sum\limits_{i}\frac{1}{\ell_s(i,j)}}{\left|C\right|+\left|C'\right|}}$$
where $\ell_s(i,j)$ is the smallest number of edges between $i$ and $j$.
Tatonetti Lab at Columbia University Medical Center
MADSS: Modular Assembly of Drug Safety Subnetworks
MADSS (Modular Assembly of Drug Safety Subnetworks) is a network analysis framework that predicts drug safety by scoring every protein in a human protein-protein interaction network (interactome) on its connectivity to a "seed" set of proteins with a genetic link to a phenotype of interest (e.g., disease, adverse drug reaction). Proteins receiving high connectivity scores constitute a neighborhood within the interactome; drugs targeting proteins within this neighborhood are predicted to be involved in mediating the phenotype of interest.
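As a concrete illustration of two of the pairwise quantities defined above, here is a minimal sketch computing the Tanimoto coefficient $T_c(i,j)$ and the inverse shortest path $1/\ell_s(i,j)$. It assumes the networkx library; the toy graph and protein names are hypothetical stand-ins for the real interactome.

```python
import networkx as nx

def tanimoto(g: nx.Graph, i, j) -> float:
    """T_c(i, j) = |n_i ∩ n_j| / |n_i ∪ n_j| over neighbor sets."""
    ni, nj = set(g.neighbors(i)), set(g.neighbors(j))
    union = ni | nj
    return len(ni & nj) / len(union) if union else 0.0

def inverse_shortest_path(g: nx.Graph, i, j) -> float:
    """1 / l_s(i, j), taken as 0 when no path exists."""
    try:
        return 1.0 / nx.shortest_path_length(g, i, j)
    except nx.NetworkXNoPath:
        return 0.0

# Toy graph standing in for the protein-protein interaction network:
g = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P1", "P3"), ("P3", "P4")])
print(tanimoto(g, "P1", "P2"))              # 0.333..., shared neighbor P3
print(inverse_shortest_path(g, "P1", "P4")) # 0.5, path of length 2
```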
To build these neighborhoods, MADSS uses the four adapted pairwise connectivity functions defined above to score every protein in the interactome on its connectivity to the seed set: mean first passage time (MFPT), betweenness centrality (BC), shared neighbors (SN), and inverse shortest path (ISP). Each of these functions is adapted to the following formulation:
$$S_{j}=\dfrac{\dfrac{\sum\limits_{i\in C}M_{ij}}{\left|C\right|}-\dfrac{\sum\limits_{i\notin C}M_{ij}}{\left|C'\right|}}{\dfrac{\sum\limits_{i}M_{ij}}{\left|C\right|+\left|C'\right|}}$$
where $C$ refers to the seed set, $C'$ refers to the complementary set of proteins, and $S_j$ is the metric connectivity score. $M_{ij}$ represents the pairwise connectivity between protein $j$ and protein $i$ (for MFPT, where smaller values indicate greater proximity, the seed and non-seed terms are swapped, as in the formula above). If $M_{ij}$ is higher for seeds than for non-seeds, then $S_j$ will be positive, and the protein is more connected to the seeds than to the rest of the network. Proteins receiving high connectivity scores thus constitute a subnetwork of the global interactome, which we call a neighborhood. We then assign drugs the connectivity score of their most highly connected target. We created and validated MADSS in Lorberbaum et al., Systems Pharmacology Augments Drug Safety Surveillance, Clinical Pharmacology & Therapeutics (2015), using four clinically relevant adverse events (AEs). We created a model for each of these AEs by training a random forest classifier, using the drug connectivity scores as features and a gold standard from a medication-wide association study as training labels. Adverse event neighborhood visualizations are available for acute myocardial infarction, upper gastrointestinal bleeding, acute liver failure, and acute kidney failure.
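A minimal sketch of the aggregation step, again assuming networkx and using the inverse shortest path as the pairwise connectivity $M_{ij}$; the seed set and the karate-club stand-in graph are hypothetical choices for the demo, not MADSS's actual data.

```python
import networkx as nx

def seed_connectivity_score(g: nx.Graph, j, seeds: set) -> float:
    """S_j: normalized difference between the mean connectivity of node j
    to the seed set C and to its complement C'."""
    def isp(i):
        # Pairwise connectivity M_ij, here 1 / shortest path length.
        try:
            return 1.0 / nx.shortest_path_length(g, i, j)
        except nx.NetworkXNoPath:
            return 0.0

    others = [n for n in g.nodes if n != j]
    c = [isp(i) for i in others if i in seeds]            # i in C
    c_prime = [isp(i) for i in others if i not in seeds]  # i in C'
    overall = sum(c + c_prime) / len(others)
    return (sum(c) / len(c) - sum(c_prime) / len(c_prime)) / overall

g = nx.karate_club_graph()      # stand-in for the interactome
seeds = {0, 1, 2, 3}            # hypothetical seed proteins
scores = {j: seed_connectivity_score(g, j, seeds)
          for j in g.nodes if j not in seeds}
top = sorted(scores, key=scores.get, reverse=True)[:5]
print("Most seed-connected nodes:", top)   # candidate neighborhood members
```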
Asymptotics of wave models for non star-shaped geometries
Farah Abou Shakra, Université Paris 13, Sorbonne Paris Cité, LAGA, CNRS, UMR 7539, F-93430, Villetaneuse, France
Discrete & Continuous Dynamical Systems - S, April 2014, 7(2): 347-362. doi: 10.3934/dcdss.2014.7.347
Received April 2013; Revised May 2013; Published September 2013
In this paper, we provide a detailed study and interpretation of various non star-shaped geometries, linking them to recent results for the 3D critical wave equation and the 2D Schrödinger equation. These geometries date back to the 1960s and 1970s, and they were previously studied only in the setting of the linear wave equation.
Keywords: scattering, illuminated from interior, illuminated from exterior, wave and Schrödinger equations, almost star-shaped.
Mathematics Subject Classification: 35L70, 35L20, 35B33, 35Q55.
Citation: Farah Abou Shakra. Asymptotics of wave models for non star-shaped geometries. Discrete & Continuous Dynamical Systems - S, 2014, 7(2): 347-362. doi: 10.3934/dcdss.2014.7.347
Patrickjmt Bayes Theorem
The likelihood, the prior, and Bayes' theorem: most real Bayes problems are solved numerically. If you aren't familiar with Bayes' theorem, go ahead and check my introductory post. Become a patron of PatrickJMT today: read 44 posts by PatrickJMT and get access to exclusive content and experiences on the world's largest membership platform for artists and creators. At the moment he has made playlists for a) trigonometry, b) algebra, c) functions and equations, and d) calculus; you can browse the videos by course and topic on this site.
Examples of Bayes' theorem in practice start from setups like these: use a tree diagram to work out the probabilities. Company B supplies 30% of the computers sold and is late 3% of the time. An urn contains 5 red balls and 2 green balls, and a ball is drawn. One day they play a game together.
A few side facts are mixed into these notes. Pythagoras' theorem: in a right-angle triangle, the length of the hypotenuse squared is equal to the sum of the squares of the lengths of the other sides, $|adj|^2 + |opp|^2 = |hyp|^2$; if we divide both sides of this equation by $|hyp|^2$, we obtain $(|adj|/|hyp|)^2 + (|opp|/|hyp|)^2 = 1$, which can be rewritten as $\cos^2\theta + \sin^2\theta = 1$. The numerator and denominator of a rational number can either be multiplied or divided by the same nonzero number without changing the value of the rational number. For the logarithm $f(x) = \log_a(x)$, properties depend on the value of a, where a is any value greater than 0, except 1. In this lesson, you will learn the differences between mutually exclusive and non-mutually exclusive events and how to find the probabilities of each using the Addition Rule of Probability. The study of chaotic behavior has received substantial attention in many disciplines.
Randomness itself has a long history. Politics: Athenian democracy was based on the concept of isonomia (equality of political rights) and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. In most of its mathematical, political, social, and religious uses, randomness is used for its innate "fairness" and lack of bias.
One unrelated but complete how-to also landed here: if you run out of space on a server, you may need to move the data folder for PostgreSQL to a new drive. On Windows, the easiest way to do this is to stop the service, copy the data folder, then unregister the service and re-register it pointing to the new data folder location.
Bayes' Theorem with Examples. Thomas Bayes (c. 1701 - 7 April 1761) was an English minister, statistician, and philosopher, and he became famous after his death when a colleague published his solution to the "inverse probability" problem; he is known for formulating the specific case of the theorem that bears his name. In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) is a theorem with two distinct interpretations. The math and the philosophy (Bayes' theorem and Bayesianism): Bayes' theorem is a consequence of the definition of conditional probability, and its simplicity might give the false impression that actually applying it to real-world problems is always straightforward. Subjectivists maintain that rational belief is governed by the laws of probability. The central limit theorem states that the distribution of sample means approximates a normal distribution as the sample size gets larger (assuming that all samples are identical in size). A typical conditional-probability setup: in a certain day care class, 30% of the children have grey eyes, 50% of them have blue, and the other 20%'s eyes are in other colors. According to Dr. Gaensslen, two of the fallacies committed by lawyers, namely the prosecutor's fallacy and the defense attorney's fallacy, "are misinterpretations of conditional probabilities". Khan Academy is a nonprofit with the mission of providing a free, world-class education for anyone, anywhere.
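For reference, the theorem in its usual notation:
$$ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A), $$
where $P(A)$ is the prior, $P(B \mid A)$ the likelihood, and $P(A \mid B)$ the posterior.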
(According to some data we found online (not sure how accurate it is), mammograms are actually less reliable than the numbers we used!. ' Nothing can make one so happily exhilarated or so frightened: it's a solid lesson in the limitations of self to realize that your heart is running around inside someone else's body. Statistical Topics This topics list provides access to definitions, explanations, and examples for each of the major concepts covered in Statistics 101-103. Bayes' Theorem is a way of calculating conditional probabilities. From the Calculator page: First, press the MENU key, #5 Probability and choose option #2 Permutations. Bayes's theorem, in probability theory, a means for revising predictions in light of relevant evidence, also known as conditional probability or inverse probability. The example I gave last article was how to figure out if someone is bluffing frequently enough to justify a call. A straightforward corollary to Menger's Theorem states that if we pick two non-adjacent u,v∈V then the maximum number of internal vertex-disjoint u−v paths is equal to the minimum size of an u−v vertex-cut. com/patrickjmt !!. Bayes' theorem describes the probability of occurrence of an event related to any condition. Bayes' theorem is a mathematical equation used in probability and statistics to calculate conditional probability. The Math and the Philosophy (Bayes' Theorem – Bayesianism) Bayes' theorem is a consequence of the definition of conditional probability. One day they play a game together. Youtube'da PatrickJMT diye biri var matematik üzerine çok güzel videoları var. 19 Canada | Arroyo Municipality Puerto Rico | Sweden Sotenas | Williamson County Tennessee | Reeves County Texas | Fairfield County Connecticut | Keewatin Canada | Marshall County Alabama | Bryan County Oklahoma | Bayfield County Wisconsin | Lorient France | Roosevelt County New. You can browse the video by course and topic on this site. Frederick County | Virginia. Bayes theorem visualisation - Bayes' theorem - Wikipedia, the free encyclopedia Illustration of frequentist interpretation with tree diagrams. by Brian Dunning via skeptoid. Bayes' theorem (or Bayes' Law and sometimes Bayes' Rule) is a direct application of conditional probabilities. The Binomial theorem tells us how to expand expressions of the form (a+b)ⁿ, for example, (x+y)⁷. The next step I think is to know some common things in machine learning as the no free lunch theorem, curse of dimensionality, overfitting, feature selection, how to select the current metric to asses your model and common pitfalls. Taylor Remainder Theorem Proof polynomial remainder (part 1)Taylor polynomial remainder (part 2)Next tutorialMaclaurin series of sin(x), cos(x), and eˣCurrent time:0:00Total duration:11:270 energy pointsCalculus|Series|Taylor & Maclaurin polynomials introTaylor polynomial remainder (part 1)AboutTranscriptThe more terms we have in a Taylor. If you run out of space on a server you may need to move the data folder for PostgreSQL to a new drive. But with the Binomial theorem, the process is relatively fast!. of Bayes' theorem (or Bayes' rule), which we use for revising a probability value based on additional information that is later obtained. Let , and denote the complex conjugate of , then the Fourier transform of the absolute square of is given by. The calculator is free, and it is easy to use. Probability estimates should start with what we already know about the world and then be incrementally updated as new information becomes available. 
Re: Prerequisites for the course? I think it is quite doable given your background. The example I gave last article was how to figure out if someone is bluffing frequently enough to justify a call. The Grunwald-Wang Theorem gives an arithmetic criteria for determining when an algebraic number is a perfect square, cube, 4th power, etc. This theorem connec ts several seemingly disparate concepts: reduced row echelon forms, matrix inverses, row spaces, column spaces, and determinants. The solution to this question can easily be calculated using Bayes's theorem. If we divide both sides of the above equation by 2 adj| ||hyp | 2 2 |hyp| , we obtain 2 || || + opp 2 = 1, hyp which can be rewritten as cos2 θ + sin2 θ = 1. Denote A_n(u,v) the maximum nu. Free Videos, Quizzes, Assessments, Homework Assignments, from the Worlds Largest K-12 Library. Used alongside a good textbook these should allow you to both prepare for lessons and review them afterwards. In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias. larry hazelrigg springfield mo jobs i 80 san pablo dam road storage ephedrine wikipedia frankenstein cuit vapeur philips hd9110 A Gijon Spain secret A Gijon Spain sieve per firenze majelis rasulullah terbaru 2015 roket erbey molinar lubbock tx 11200 westheimer road furniture. Find the probability of A occurring given that B occurs. It says the probability of an event is affected by how probable the event is and the accuracy of the instrument used to measure it. Computer Sc — Discrete Mathematical Structures. (According to some data we found online (not sure how accurate it is), mammograms are actually less reliable than the numbers we used!. Thomas Bayes (/ b eɪ z /; c. Updating with Bayes theorem In this chapter, you used simulation to estimate the posterior probability that a coin that resulted in 11 heads out of 20 is fair. crawford-ap stats syllabus 2016-17 - Free download as Word Doc (. Bayes, who was a reverend who lived from 1702 to 1761 stated that the probability you test positive AND are sick is the product of the likelihood that you test positive GIVEN that you are sick and the "prior" probability that you are sick (the prevalence in the. com , basic-mathematics. Embed this widget ». Re: Prerequisites for the course? I think it is quite doable given your background. A ball is drawn. Bayes' theorem, named after 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probability. It follows simply from the axioms of conditional probability, but can be used to powerfully reason about a wide range of problems involving belief updates. Bachelor of Science (. Probably, you guessed it right. Multiply when going deeper into a tree… Add when combining separate branches…. So far, nothing's controversial; Bayes' Theorem is a rule about the 'language' of probability, that can be used in any analysis describing random variables, i. txt) or read online for free. This could be understood with. Probability estimates should start with what we already know about the world and then be incrementally updated as new information becomes available. Over a million assessments, videos, lesson plans, homework assignments and more. (Bayes 학습)(8) 마르코프 연쇄-(3) 이전에 올린 마르코프 연쇄에 관한 글에서 '정칙 마르코프 연쇄(regular Markov chains)' 에 대해 언급했다. 
Bayes' theorem 1 Bayes' theorem The simple statement of Bayes' theorem In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) is a theorem with two distinct interpretations. Onu da izleyebilirsin. ir بصورت اتوماتیک و بلافاصله پس از پرداخت الکترونیکی امکان پذیر نیست و لینک دانلود تا حداکثر 5 ساعت به ایمیل شما عزیزان ارسال می گردد. Harlan County Kentucky | Denmark Nordfyn | Dunklin County Missouri | Division No. The Likelihood, the prior and Bayes Theorem Most real Bayes prob-lems are solved numerically. The calculation quantifies the validity of the prejudice that characteristic or marker, X, is possessed by an element because that element has been identified as possessing characteristic or marker, A. Actuarial Exam Bayes' Theorem Bernoulli distribution Binomial distribution CAS Exam 1 CAS General Probability Central Limit Theorem Conditional Mean Conditional Probability Conditional Variance Convolution Covariance deductible Exam P Exam P Practice Problems Expected Insurance Payment Expected Value Exponential distribution Gamma distribution. It follows simply from the axioms of conditional probability, but can be used to powerfully reason about a wide range of problems involving belief updates. In this video from PatricJMT we look at a very real life example of Bayes' Theorem in action. Most of the videos above are taken from the two fantastic video collections of PatrickJMT and ExamSolutions. Bayes' theorem converts the results from your test into the real probability of the event. Napa County California. This theorem connec ts several seemingly disparate concepts: reduced row echelon forms, matrix inverses, row spaces, column spaces, and determinants. Multiply when going deeper into a tree… Add when combining separate branches…. For example, the probability of a hypothesis given some observed pieces of evidence and the probability of that evidence given the hypothesis. Nash County North Carolina. Bayes's theorem can involve some complicated mathematics, but at its core lies a very simple premise. (1)에서 도출한 아래의 베이즈 정리(Bayes's Theorem: 이하 Bayes Theorem)는 놀라운 응용성을 갖는다. Chapter 8 & 9 Probability homework and resources. Joe is a randomly chosen member of a large population in which 3% are heroin users. Let \(\vec F\) be a vector field whose components have continuous first order partial derivatives. Primes and prime factorization are especially important in number theory, as are a number of functions such as the divisor function, Riemann zeta function, and totient function. You can browse the video by course and topic on this site. It is a movie registered for one week until '. Library September 1, 2016 - august 31, 2017. 1701 – 7 April 1761) was an English statistician, philosopher and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name: Bayes' theorem. Use a tree diagram to work out the probabilities. For support vector regression(SVR) - I think winston's lecture video on svm + it's mega recitation video would be sufficient. Tampa - United States. First Watch all the videos of Ravindrababu Ravula Sir's Youtube channel Gate Lectures by Ravindrababu Ravula https://www. Proper use of Bayes' Theorem in a courtroom has the potential to both counteract. A ball is drawn. Bayes' theorem converts the results from your test into the real probability of the event. Over a million assessments, videos, lesson plans, homework assignments and more. Welcome to Mathispower4u! 
This site provides more than 6,000 free mini-lessons and example videos with no ads. 8) Mitch Campbell's IB SL videos. Bayes' theorem describes the probability of occurrence of an event related to any condition. At the moment he has made playlists for a) trigonometry, b) algebra c) functions and equations d) calculus. The calculator is free, and it is easy to use. WORKED EXAMPLES 1 TOTAL PROBABILITY AND BAYES' THEOREM EXAMPLE 1. Joe tests positive for heroin in a drug test that correctly identifies users 95% of the time and correctly identifies nonusers 90% of the time. The knowledge gained from the course can be leveraged to put you on the path to a productive life with a lot less stress and lot more happiness and self-contentment. mercer kernels (I still haven't completely understood these things, I'll update once I find any good resource). Bayes' Theorem with LEGO 69 The probability of touching either a blue or a red brick, as you would expect, is 1: PP()blue + ()red = 1 This means that red and blue bricks alone can describe our entire set. Embed this widget ». The calculation quantifies the validity of the prejudice that characteristic or marker, X, is possessed by an element because that element has been identified as possessing characteristic or marker, A. Bayes' theorem 1 Bayes' theorem The simple statement of Bayes' theorem In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) is a theorem with two distinct interpretations. モンテカルロ法による円周率近似の表示をぼんやり眺めるだけのインテリア 玩具ができた。 近似値と試行回数を交互に. by Brian Dunning via skeptoid. For example, if cancer is related to age, then, using Bayes' theorem, a person's age can be used to more accurately assess the. On Windows the easiest way to do this is stop the service, copy the data folder, then unregister the service and re-resister it pointing to the new data folder space. I don't know the $\frac{1-p}{2-p}$ formula and I don't understand what it is supposed to calculate. The Likelihood, the prior and Bayes Theorem Most real Bayes prob-lems are solved numerically. Doc_id Review Left Term Right Sentiment Polarity Rating Contradiction-Based_MOY Contradiction-Based_Ci-0BI9jXyEeWa2g6sjqf03Q: Excellent course material with a lot of emphasis on getting hands-on-experience. 19 Canada | Arroyo Municipality Puerto Rico | Sweden Sotenas | Williamson County Tennessee | Reeves County Texas | Fairfield County Connecticut | Keewatin Canada | Marshall County Alabama | Bryan County Oklahoma | Bayfield County Wisconsin | Lorient France | Roosevelt County New. Bayes' Theorem with Examples Thomas Bayes was an English minister and mathematician, and he became famous after his death when a colleague published his solution to the "inverse probability" problem. Bayes' Rule Calculator. Free Videos, Quizzes, Assessments, Homework Assignments, from the Worlds Largest K-12 Library. Onu da izleyebilirsin. What is the probability that the sum equals 10 given it. I nd it clearest to present the objections to Bayesian statistics in the voice of a hy-pothetical anti-Bayesian statistician. For example: if we have to calculate the probability of taking a blue ball from the second bag out of three different bags of balls, where each bag contains three different color balls viz. changing the value of the rational number. Bayes' theorem connects conditional probabilities to their inverses. Tes Global Ltd is registered in England (Company No 02017289) with its registered office at 26 Red Lion Square London WC1R 4HQ. 
Gaensslen, two of the fallacies committed by lawyers, namely the prosecutor's fallacy and the defense attorney's fallacy, "are misinterpretations of conditional probabilities". (1)에서 도출한 아래의 베이즈 정리(Bayes's Theorem: 이하 Bayes Theorem)는 놀라운 응용성을 갖는다. The study of chaotic behavior has received substantial atten- tion in many disciplines. Some claim that certain common false memories are evidence for alternate realities. a is any value greater than 0, except 1. Politics: Athenian democracy was based on the concept of isonomia (equality of political rights) and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. Amazing visualisation about Bayes theorem used all over different industries and in decision analysis and with fault trees. In this video from PatricJMT we look at a very real life example of Bayes' Theorem in action. There are 490 Million searches per month for keywords like ruckus wireless zoneflex r700, tssttvdgxl-shp or quintic putting mirror and average price for advertising with Adwords P. Ask HN: Good books or articles on UI design? 151 points by nahcub 7 hours ago 47 comments top 36. Proper use of Bayes' Theorem in a courtroom has the potential to both counteract. End of course questionnaire 1 = agree (13 responses / 18 students) 4 = disagree * End of course questionnaire 1 = agree (13 responses / 18 students) 4 = disagree * End of course questionnaire 1 = agree (13 responses / 18 students) 4 = disagree * 142857 - a cyclic number Bayes theorem Chi Square test Euclid Euler polyhedra Newton Mandelbrot. Conditional probability with Bayes' Theorem. A Central Limit Theorem word problem will most likely contain the phrase "assume the variable is normally distributed", or one like it. This course will guide you through the most important and enjoyable ideas in probability to help you cultivate a more quantitative worldview. And it calculates that probability using Bayes' Theorem. This is used in so many areas again-and-again such as investing, portfolio management and so on <3. One key to understanding the essence of Bayes' theorem is to recognize that we are dealing with sequential events, whereby new additional information is obtained for a subsequent event, and that new. In this video we help you learn what a random variable is, and the. Bayes' theorem (also known as Bayes' rule or Bayes' law) is a result in probability theory that relates conditional probabilities. I can mechanically complete an exercise, but the conceptual understanding lags behind. You da real mvps! $1 per month helps!! :) https://www. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. Larch Mountain salamander; Magellanic penguin; Maned wolf; Narwhal; Margay; Montane solitary eagle; Endangered species | Conservation Status. In this article, I am going to use a practical problem to intuitively derive the Bayes' Theorem. com , basic-mathematics. For support vector regression(SVR) - I think winston's lecture video on svm + it's mega recitation video would be sufficient. This website and its content is subject to our Terms and Conditions. If we divide both sides of the above equation by 2 adj| ||hyp | 2 2 |hyp| , we obtain 2 || || + opp 2 = 1, hyp which can be rewritten as cos2 θ + sin2 θ = 1. If you have any questions or suggestions regarding the sub, please send the us (the moderators) a message. Conditional probability and Bayes theorem. doc), PDF File (. 
널리 사용되는 마르코프 연쇄 유형에는 세 가지가 있다. A straightforward corollary to Menger's Theorem states that if we pick two non-adjacent u,v∈V then the maximum number of internal vertex-disjoint u−v paths is equal to the minimum size of an u−v vertex-cut. One day they play a game together. One key to understanding the essence of Bayes' theorem is to recognize that we are dealing with sequential events, whereby new additional information is obtained for a subsequent event, and that new. If you know the real probabilities and the chance of a false positive and false negative, you can correct for measurement errors. The knowledge gained from the course can be leveraged to put you on the path to a productive life with a lot less stress and lot more happiness and self-contentment. Updating with Bayes theorem In this chapter, you used simulation to estimate the posterior probability that a coin that resulted in 11 heads out of 20 is fair. Bayes' theorem converts the results from your test into the real probability of the event. If you run out of space on a server you may need to move the data folder for PostgreSQL to a new drive. 널리 사용되는 마르코프 연쇄 유형에는 세 가지가 있다. The Binomial theorem tells us how to expand expressions of the form (a+b)ⁿ, for example, (x+y)⁷. For KKT optimization conditions, This link would be useful. "To be the father of growing daughters is to understand something of what Yeats evokes with his imperishable phrase 'terrible beauty. Some claim that certain common false memories are evidence for alternate realities. Joe is a randomly chosen member of a large population in which 3% are heroin users. IB HL Videos I've created a number of playlists - which should cover nearly all of the key content that you need for HL Maths. Learn about discrete and continous probability. negation of an event, what is a sample space, the standard layout of 52 poker cards, the Bayes theorem for conditional probability. Use a tree diagram to work out the probabilities. By the end of this course, you'll master the fundamentals of probability, and you'll apply them to a wide array of problems, from games and sports to economics and science. Probability. Learn about discrete and continous probability. If you aren't familiar with Bayes' theorem, go ahead and check my introductory post. com , mathworksheetsland. For example, if cancer is related to age, then, using Bayes' theorem, a person's age can be used to more accurately assess the. Multiply when going deeper into a tree… Add when combining separate branches…. Examples, Tables, and Proof Sketches Example 1: Random Drug Testing. I think the only way to learn this is reading a lot about machine learning and making mistakes by your own. Thanks to all of you who support me on Patreon. Using Bayes Theorem Solution—Step 1: Review the literature (or check with your instrument supplier or manufacturer) to ascertain the sensitivity and specificity of the measure in previous studies. Frederick County | Virginia. An urn contains 5 red balls and 2 green balls. Embed this widget ». Here is a game with slightly more complicated rules. Over a million assessments, videos, lesson plans, homework assignments and more. (1) 베이즈 정리를 보다 일반적으로 사용하기 위해 A를 로, B를 로 바꾸어 아래와 같이 다시 쓰자. By Murray Bourne, 13 Oct 2006. larry hazelrigg springfield mo jobs i 80 san pablo dam road storage ephedrine wikipedia frankenstein cuit vapeur philips hd9110 A Gijon Spain secret A Gijon Spain sieve per firenze majelis rasulullah terbaru 2015 roket erbey molinar lubbock tx 11200 westheimer road furniture. 
STEM Lessons for College Students. either be multiplied or divided by the same nonzero number without. It follows simply from the axioms of conditional probability, but can be used to powerfully reason about a wide range of problems involving belief updates. Or you can tap the button below. 1701 – 7 April 1761) was an English statistician, philosopher and Presbyterian minister who is known for formulating a specific case of the theorem that bears his name: Bayes' theorem. Taylor Remainder Theorem Proof polynomial remainder (part 1)Taylor polynomial remainder (part 2)Next tutorialMaclaurin series of sin(x), cos(x), and eˣCurrent time:0:00Total duration:11:270 energy pointsCalculus|Series|Taylor & Maclaurin polynomials introTaylor polynomial remainder (part 1)AboutTranscriptThe more terms we have in a Taylor. Some claim that certain common false memories are evidence for alternate realities. If we divide both sides of the above equation by 2 adj| ||hyp | 2 2 |hyp| , we obtain 2 || || + opp 2 = 1, hyp which can be rewritten as cos2 θ + sin2 θ = 1. We have 33 resources for learning Statistics & Probability including educational Online Classes, Websites, Books, YouTube & Online Videos, and Worksheets &; Printables, from providers such as Khan Academy, Duke University, Saylor Academy, and OpenStax. Proper use of Bayes' Theorem in a courtroom has the potential to both counteract. Bayes' theorem is a statistical method for calculating conditional probabilities. Bayes' Theorem is a way to calculate conditional probability. Bayes' theorem (also known as Bayes' rule or Bayes' law) is a result in probability theory that relates conditional probabilities. The example I gave last article was how to figure out if someone is bluffing frequently enough to justify a call. com , mathworksheetsland. You can find the calculator in Stat Trek's main menu under the Stat Tools tab. Bayes theorem visualisation - Bayes' theorem - Wikipedia, the free encyclopedia Illustration of frequentist interpretation with tree diagrams. One key to understanding the essence of Bayes' theorem is to recognize that we are dealing with sequential events, whereby new additional information is obtained for a subsequent event, and that new. But with the Binomial theorem, the process is relatively fast!. You can browse the video by course and topic on this site. , Arnold, Jesse C. What is the probability that the sum equals 10 given it. Bayes' theorem is to recognize that we are dealing with sequential events, whereby new additional information is obtained for a subsequent event, and that new information is used to revise the probability of the initial event. Taylor Remainder Theorem Proof polynomial remainder (part 1)Taylor polynomial remainder (part 2)Next tutorialMaclaurin series of sin(x), cos(x), and eˣCurrent time:0:00Total duration:11:270 energy pointsCalculus|Series|Taylor & Maclaurin polynomials introTaylor polynomial remainder (part 1)AboutTranscriptThe more terms we have in a Taylor. Joe is a randomly chosen member of a large population in which 3% are heroin users. In this video from PatricJMT we look at a very real life example of Bayes' Theorem in action. The Bayes theorem describes the probability of an event based on the prior knowledge of the conditions that might be related to the event. Onu da izleyebilirsin. Let \(\vec F\) be a vector field whose components have continuous first order partial derivatives. 
(According to some data we found online (not sure how accurate it is), mammograms are actually less reliable than the numbers we used!) Learn discrete probability distributions like the binomial, geometric and Poisson distributions, and continuous probability distributions like the exponential. The applet below illustrates the two theorems. Let's put in place a bound for the length these paths have. Some claim that certain common false memories are evidence for alternate realities (Dolan via PsyPost). The "Fundamental Theorem of Algebra" is not the start of algebra or anything, but it does say something interesting about polynomials: any polynomial of degree n has n roots, but we may need to use complex numbers. So why all the fuss? Rolle's theorem is clearly a particular case of the MVT, in which f satisfies an additional condition, f(a) = f(b). The function f(x) = log_a(x), where a is any value greater than 0, except 1. In probability theory and applications, Bayes' theorem shows the relation between a conditional probability and its reverse form. Some think that we don't use it enough, but maybe we internalize it too much, because we expect that what has always happened always will happen, and we discount our ability to change our situation: "The thing we forget in Bayes' theorem is that our actions play a role in determining outcomes" (8:25), making us not change our actions. 18.05 class 3: Conditional Probability, Independence and Bayes' Theorem, Spring 2014. Understanding how Bayes' theorem works in poker is critical to making many different types of decisions at the table. In the Bayesian interpretation, it expresses how a subjective degree of belief should rationally change to account for evidence. Moovle is a site that lets you pinpoint moments in YouTube videos by searching their content (subtitles) by keyword. First, watch all the videos of Ravindrababu Ravula's YouTube channel, Gate Lectures by Ravindrababu Ravula (https://www.youtube.com/channel/). In other words, it is used to calculate the probability of an event based on its association with another event. Mercer kernels (I still haven't completely understood these things; I'll update once I find any good resource). By Brian Dunning via skeptoid.com. Joe tests positive for heroin in a drug test that correctly identifies users 95% of the time and correctly identifies nonusers 90% of the time.
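For reference, since the formula itself never appears in the snippets above, the standard statement of Bayes' theorem, with the denominator expanded by the law of total probability, is:

$$ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A). $$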
Bayes' Theorem with Examples: Thomas Bayes was an English minister and mathematician, and he became famous after his death when a colleague published his solution to the "inverse probability" problem. Joe is a randomly chosen member of a large population in which 3% are heroin users. The next step, I think, is to know some common things in machine learning, such as the no-free-lunch theorem, the curse of dimensionality, overfitting, feature selection, how to select the right metric to assess your model, and common pitfalls. Bayes' Theorem Examples: A Beginner's Visual Approach to Bayesian Data Analysis. If you've recently used Google search to find something, Bayes' theorem was used to find your search results, and it calculates that probability using Bayes' theorem. Bayes' theorem (or Bayes' rule) is used for revising a probability value based on additional information that is later obtained. Bayes' theorem is a simple mathematical formula used for calculating conditional probabilities.
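Putting the pieces of the Joe example together (3% prevalence, a test that is 95% sensitive and 90% specific, as quoted above), a minimal worked calculation:

    # P(user | positive test) via Bayes' theorem, using the numbers quoted above.
    p_user = 0.03   # prior: 3% of the population are heroin users
    sens = 0.95     # P(positive | user)
    spec = 0.90     # P(negative | non-user)

    p_positive = p_user * sens + (1 - p_user) * (1 - spec)  # law of total probability
    p_user_given_positive = p_user * sens / p_positive      # Bayes' theorem

    print(round(p_user_given_positive, 3))  # 0.227

So a positive result leaves Joe with only about a 23% chance of actually being a user: the low 3% base rate dominates the seemingly accurate test.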
Transfer of malignant trait to BRCA1 deficient human fibroblasts following exposure to serum of cancer patients
Dana Hamam1,2, Mohamed Abdouh1, Zu-Hua Gao3, Vincenzo Arena4, Manuel Arena5 & Goffredo Orazio Arena1,6
Journal of Experimental & Clinical Cancer Research volume 35, Article number: 80 (2016)

Background
It was reported that metastases might occur via transfer of biologically active blood-circulating molecules from the primary tumor to distant organs, rather than only via migration of cancer cells. We showed in an earlier study that exposure of immortalized human embryonic kidney cells (HEK293) to cancer patient sera induces their transformation into undifferentiated cancers due to a horizontal transfer of malignant traits. In the present work, we tested the hypothesis that other human cells, as long as they are deficient in a single oncosuppressor gene, might undergo malignant transformation when exposed to human cancer serum.

Methods
We used the CRISPR/Cas9 system to establish a stable BRCA1 knockout (KO) in human fibroblasts. The BRCA1-KO fibroblasts were exposed to cancer patients' sera or healthy subjects' sera for 2 weeks. Treated cells were analyzed for cell proliferation and transformation to study their susceptibility to the oncogenic potential of cancer patients' sera and to determine the possible mechanisms underlying their hypothesized transformation.

Results
BRCA1-KO fibroblasts treated with cancer patients' sera displayed higher proliferation and underwent malignant transformation, as opposed to wild type control fibroblasts, which were not affected by exposure to cancer patients' sera. The malignant transformation was not seen when BRCA1-KO fibroblasts were treated with healthy human sera. Histological analysis of tumors generated by BRCA1-KO fibroblasts showed that they were carcinomas with phenotypical characteristics related to the cancers of the blood donor patients. Interestingly, BRCA1-KO fibroblasts were significantly more prone to internalize serum-derived exosomes when compared to wild type fibroblasts. This suggests that oncosuppressor genes might protect the integrity of the cell genome also by blocking the integration of cancer-derived exosomes.

Conclusions
These data support the hypothesis that any human cell carrying a single oncosuppressor mutation is capable of integrating cancer factors carried in the blood and undergoing complete malignant transformation. Oncosuppressor genes might protect the cell genome by impeding the integration of these mutating factors inside the cells.

Background
Metastasis is considered the leading cause of morbidity and mortality related to cancer [1, 2]. It has been well accepted that metastases develop by dissemination of cancer cells, which detach from the primary tumors, travel through the circulatory system and reach the metastatic site, where they start to grow [3]. A review of recent literature brought evidence that the metastatic process might not be due only to primary tumor cells spreading to distant metastatic sites. Several studies reported that cancer cell-derived factors could either prepare a niche to permit the engraftment of malignant cancer cells in distant organs or predispose target cells, located in distant organs, to their malignant transformation [4–9]. These factors (i.e., proteins, nucleic acids, cell-surface receptors and lipids) could be either naked entities floating in the blood or molecules carried as cargo in exosomes [5, 7, 10–16].
Exosomes are small (30–100 nm) extracellular membrane-enclosed vesicles, which originate from the cellular endosomal compartment under both physiological and pathological conditions [17–19]. Exosomes, which express distinctive surface markers, harbour substances that mirror the content of their cell of origin [20–22] and have the capability to induce different types of effects, even at a distance, by ensuring the trafficking of different factors (i.e., survival and mitogenic signalling molecules) into target recipient cells [4, 8]. The pioneering researchers who described malignant trait transfer via blood-circulating factors in immortalized mouse fibroblasts (NIH3T3 cells) called this phenomenon "genometastasis" [23–25]. Recently, after exposing immortalized human embryonic kidney cells (HEK293) to cancer patients' sera, we observed their transformation into malignant cells, confirming for the first time the validity of the genometastatic theory in human cells [26]. In that study, we remarked that only HEK293 cells were prone to undergo malignant transformation, as opposed to different types of normal cells (fibroblasts, mesenchymal stem cells and embryonic stem cells), which failed to acquire the malignant traits. Our findings supported the hypothesis that the different stages of carcinogenesis, such as initiation, promotion and progression, might not represent events limited to the cells forming the primary tumor, but may actually be a process reproducible in primed cells, located in target organs, through the incorporation of key factors released by the primary tumor. To strengthen our hypothesis that any cell with a single oncosuppressor mutation might be susceptible to integrating mutating factors at metastatic sites, we generated a human fibroblast cell line deficient for the oncosuppressor BRCA1 (breast cancer susceptibility gene 1) using CRISPR technology, and we exposed it to different types of cancer patients' sera and healthy subjects' sera. BRCA1 is a tumor suppressor gene that plays a significant role in DNA repair pathways [27]. Specific inherited mutations in BRCA1 increase the risk of breast and ovarian cancers, and they have been associated with increased risks of several additional types of cancer [28–30]. The aims of our investigations were to determine the oncogenic potential of cancer patients' sera on BRCA1-KO human fibroblasts, to characterize their differentiation following serum treatments and evaluate their phenotypes, and to determine their receptiveness to integrating serum-carried factors, such as exosomes. BRCA1-KO fibroblasts treated with cancer patients' sera displayed higher proliferation and underwent malignant transformation, as opposed to wild type fibroblasts, which were not affected by exposure to cancer patients' sera. The malignant transformation was not seen when BRCA1-KO fibroblasts were treated with healthy human sera. Histological analysis of tumors generated by BRCA1-KO fibroblasts showed that they were carcinomas with phenotypical characteristics related to the cancers of the blood donor patients. Uptake of exosomes was significantly higher in the oncosuppressor-mutated cells.

Methods

Patients' recruitment and characteristics of cancers
Patients for the current study were recruited from the Department of General Surgery at the Royal Victoria Hospital and St-Mary's Hospital (Montreal, Canada) and provided written consent for blood collection in accordance with a protocol approved by the Ethics Committee of our institution (SDR-10-057).
Blood samples were collected from both healthy individuals and patients who underwent resection of primary cancer and who were readmitted for metastatic disease treatment (Table 1). Healthy subjects were recruited based on three criteria: (i) age (35–45 years old), (ii) absence of any signs and symptoms or personal history of cancer, and (iii) negative family history for malignancy.
Table 1 Clinical features of cancer patients recruited in the present study

Blood collection and serum preparation from cancer patients and healthy subjects
Blood samples (20 ml) were collected from a peripheral vein in vacutainer tubes (Becton Dickinson) containing clot-activation additive and a barrier gel to isolate serum. Blood samples were incubated for 60 min at room temperature to allow clotting and subsequently were centrifuged at 1500 × g for 15 min. Serum was collected and a second centrifugation was performed on the serum at 2000 × g for 10 min to clear it of any contaminating cells. Serum samples were aliquoted and stored at −80 °C until use.

Cell line and culture conditions
Human fibroblasts and human embryonic kidney cells (HEK-293) (ATCC, VA, USA) were maintained as per the supplier's recommendations. When cells reached 30 % confluence, they were treated with DMEM-F12 medium (Wisent, Saint-Bruno, Canada) supplemented with antibiotics and 10 % cancer patient sera or control sera, which had been filtered through 0.2 μm filters. Cells were maintained in these conditions at 37 °C in a humidified atmosphere containing 95 % air and 5 % CO2, with medium change every second day, for 2 weeks. When cells reached 80–90 % confluence, they were passaged 1 in 6 using 0.05 % Trypsin-EDTA (Wisent, Saint-Bruno, Canada). To confirm that there was no contamination or carry-over of cells from human serum, aliquots of the culture medium were placed in a culture plate and incubated at 37 °C, 5 % CO2 for 4 weeks.

CRISPR/Cas9-mediated BRCA1 knockout in fibroblasts and cell sorting
We used the CRISPR/Cas9 system to establish a stable BRCA1 knockout in human fibroblasts, as previously described [31]. The pSpCas9(BB)-2A-GFP plasmid (PX458; Addgene, MA, USA) was used as the cloning backbone for sgBRCA1 (single-guide RNA targeting BRCA1). For this study, we designed two sequences targeting the BRCA1 locus (Table 2). Human fibroblasts were transfected with the empty plasmid (PX458) or the plasmid containing the guide (PX458-sgBRCA1) using Lipofectamine 3000 as per the manufacturer's protocol (Invitrogen, Burlington, Canada). Transfected fibroblasts were then sorted based on the expression of the reporter GFP (green fluorescent protein) gene using a FACSAria cytometer (BD Biosciences, Mississauga, Canada) (Additional file 1: Figure S1). Sorted GFP-positive cells were cultured, and aliquots were subjected to Surveyor assay and Western blot analyses (Additional file 2: Figure S2). To minimize off-target effects, cells were transfected with a minimal amount of plasmid (500 ng). Also, the guide sequences were designed using a web-based prediction algorithm tool [31]. We chose the highly ranked guide sequence with the fewest exonic off-target sites (Additional file 3: Tables S2 and S3).
Table 2 Primer sequences used in sgBRCA1 cloning and knockout validation

SURVEYOR nuclease assay
DNA was isolated from sorted fibroblasts using the GenElute Mammalian Genomic DNA Miniprep kit according to manufacturer specifications (Sigma, Oakville, Canada).
Extracted DNA was amplified by PCR using the Phusion High-Fidelity PCR kit (NEB, MA, USA) and a set of primers for the BRCA1 gene (Table 2). The PCR reaction was performed in a thermal cycler (Bio-Rad Laboratories, Inc., Hercules, CA, USA). Amplicons were loaded on a 1 % agarose gel, and the bands with the expected sizes were excised and purified using the QIAquick Gel Extraction Kit (QIAGEN, Redwood City, CA, USA). Purified DNA samples were subjected to the Indel (insertion/deletion) assay. Briefly, DNA was denatured at 95 °C for 10 min and allowed to reanneal at decreasing temperatures (95 °C to 20 °C) for 30 min. Reannealed DNA was subjected to endonuclease digestion using the IDT Surveyor Mutation Detection kit (IDT, Iowa, USA). The digestion products were run on a 1 % agarose gel to quantify the Indel efficiency (Additional file 2: Figure S2) [31]. Briefly, the gel was imaged and the intensity of the bands in each lane was measured using ImageJ software. For each lane, we calculated the fraction of the PCR product cleaved using the following formula: $f_{\mathrm{cut}} = (B + C)/(A + B + C)$, where A is the intensity of the undigested PCR product, while B and C are the intensities of each cleaved band. The Indel percentage was estimated by applying the following formula:
$$ \mathrm{Indel}\,(\%) = 100 \times \left(1 - \sqrt{1 - f_{\mathrm{cut}}}\right) $$

Population doubling level (PDL) calculation
Cells were considered to be at population doubling zero the first time they were exposed to patient serum-containing culture medium. At every passage, the cell number was determined and the population doubling was calculated using the following formula: $\mathrm{PDL} = \log(N_h/N_i)/\log 2$, where $N_h$ is the number of cells harvested at the end of the incubation time and $N_i$ is the number of cells inoculated at the beginning of the incubation time. Cumulative PDL was calculated by adding each newly calculated PDL to the previous total.

Immunoblotting
Cells were lysed in RIPA buffer containing protease inhibitors (Sigma, Oakville, Canada). Equal amounts of proteins were resolved on 10 % SDS-PAGE and transferred to a nitrocellulose membrane (BioRad, CA, USA). Membranes were blocked in TBS containing 5 % non-fat dry milk and exposed overnight at 4 °C to rabbit anti-BRCA1 (ab191042 and ab131360, Abcam, MA, USA) or mouse anti-β-Actin (A5316, Sigma, Oakville, Canada). Membranes were washed in TBST (TBS-0.05 % Tween-20) and incubated with either anti-rabbit or anti-mouse peroxidase-conjugated secondary antibody for 1 h at room temperature. After several washes in TBST, the blots were developed using Immobilon Western HRP Substrate (Millipore, Etobicoke, Canada).

Exosome isolation and labeling
Exosomes were isolated from serum using the Total Exosome Isolation kit according to the manufacturer's protocol (Invitrogen, Burlington, Canada). Exosomes were labeled using the PKH26 dye following the manufacturer's recommendations (Sigma, Oakville, Canada). Labeled exosomes were diluted in labeling stop solution (PBS/FBS) and pelleted by ultracentrifugation for 80 min at 100,000 × g at 4 °C. The pellet was washed in Hank's Balanced Salt Solution (HBSS) with an ultracentrifugation using the same parameters. The pelleted exosomes were re-suspended in HBSS and stored at −80 °C. 10 μg of labeled exosomes was added to ~5 × 10³ BRCA1-KO fibroblasts, control PX458-transfected fibroblasts and HEK-293 cells cultured in 8-well chamber slides (VWR, Mont-Royal, Canada). Cells were washed and fixed for 10 min with 4 % paraformaldehyde.
Slides were mounted with coverslips in VECTASHIELD Mounting Medium with DAPI (Vector Laboratories, Burlington, Canada). Stained cells were visualized using an LSM780 confocal microscope (Zeiss, Toronto, Canada). Exosome internalization was quantified using ImageJ software.

Exosome characterization
Morphological examination of isolated exosomes was done using a transmission electron microscope (JEM-2010, Jeol Ltd., Tokyo, Japan). Briefly, 20 μl of exosomes were loaded on a copper grid and stained with 2 % phosphotungstic acid. Samples were dried by incubating them for 10 min under an electric incandescent lamp. Samples were examined under the electron microscope and imaged using a Hitachi H-600 TEM operating at 60 kV. In parallel, an aliquot of exosome samples was run on a Nanosight NS500 system (Nanosight Ltd., Amesbury, UK), and the size distribution was analyzed using the NTA 1.3 software.

In vivo tumor growth
Five-week-old female NOD-SCID mice (Jackson Laboratory) were used in compliance with the McGill University Health Centre Animal Compliance Office (Protocol 2012–7280). Cells growing in log phase were harvested by trypsinization and washed twice with HBSS. Mice were injected subcutaneously with 2 million cells in 200 μl HBSS/Matrigel. Mice were euthanized one month post-injection. The resulting xenotransplants were photographed and processed as indicated below.

Immunohistochemistry labelling procedures and histological analyses
Mouse xenotransplants were collected, fixed in 10 % buffered formalin, embedded in paraffin, and stained with H&E (hematoxylin and eosin) according to standard protocols, or processed for immunohistochemistry. Briefly, 5 μm tissue sections were dewaxed in xylene and rehydrated with distilled water. After antigen unmasking and blocking of endogenous peroxidase (3 % hydrogen peroxide), the slides were incubated with primary antibodies (Additional file 3: Table S1). Labeling was performed using the iView DAB Detection Kit (Ventana) on the Ventana automated immunostainer. Sections were counterstained lightly with hematoxylin before mounting. Histological analyses were performed by a certified pathologist who was blinded to the type of cells from which the cancerous masses that formed in mice had been derived.

Statistical analysis
Statistical differences were analyzed using Student's t test for unpaired samples. An ANOVA followed by the Dunnett test was used for multiple comparisons with one control group. The criterion for significance (p value) was set as mentioned in the figures.

Results

BRCA1 knockout in human fibroblasts
A human fibroblast cell line deficient for the oncosuppressor BRCA1 was developed to study its susceptibility to the oncogenic potential of cancer patient serum (Additional file 1: Figure S1A). For this purpose, we used the CRISPR-Cas9 technology to knock out BRCA1 [31]. BRCA1-KO fibroblasts (i.e., fibroblasts transfected with a sgBRCA1-expressing vector) and control fibroblasts (empty vector-transfected cells) were sorted based on the expression of the GFP reporter gene (Additional file 1: Figure S1B). BRCA1 knockout was validated using the SURVEYOR nuclease assay (Additional file 2: Figure S2A). One of the two guides used was efficient in knocking out BRCA1 in fibroblasts. As described in Materials and Methods, the Indel percentage was estimated at 33 %, which is in the range of the values obtained with this assay; a worked sketch of this calculation, together with the PDL formula, is given below. Moreover, protein extracts were analyzed for BRCA1 expression.
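A minimal sketch of the two Methods formulas just referenced (not part of the original paper; the band intensities and passage counts below are hypothetical, chosen only to give numbers of the same order as those reported):

    import math

    def indel_percent(a, b, c):
        # Indel (%) from Surveyor gel band intensities:
        # a = undigested PCR product; b, c = the two cleavage products.
        f_cut = (b + c) / (a + b + c)
        return 100 * (1 - math.sqrt(1 - f_cut))

    def pdl(n_harvested, n_inoculated):
        # Population doubling level for one passage: log(Nh/Ni)/log 2.
        return math.log(n_harvested / n_inoculated) / math.log(2)

    # Hypothetical intensities giving roughly the 33 % Indel reported above.
    print(round(indel_percent(a=45, b=30, c=25), 1))          # 32.9, i.e. ~33 %

    # Cumulative PDL is the running sum of per-passage PDLs (hypothetical counts).
    passages = [(1e5, 4e5), (1e5, 5e5)]                        # (Ni inoculated, Nh harvested)
    print(round(sum(pdl(nh, ni) for ni, nh in passages), 2))   # 4.32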
On the Western blots, BRCA1 was not detected in BRCA1-KO fibroblasts, in contrast to empty vector-transfected cells (Additional file 2: Figure S2B).

Cancer patient sera increased the proliferation of BRCA1-KO fibroblasts
To analyze the effect of cancer patient sera on the growth of BRCA1-deficient fibroblasts, equal numbers of cells were plated and cultured in DMEM media supplemented with 10 % of cancer patients' serum for 2 weeks. We used serum from two healthy control donors and 10 metastatic cancer patients (Table 1). At every passage, cell numbers were determined to estimate the population doubling levels (PDL) in each condition (Fig. 1a). Independently of the cancer serum used, the cumulative PDL was increased when compared to that of cells cultured in control human serum (range: 1.3 to 6.1 increase in population doubling; mean ± SD: 3.2 ± 1.4; P = 0.027) (Fig. 1b). These data suggest that cancer patient sera significantly enhanced the proliferation of BRCA1-deficient fibroblasts in vitro.
Fig. 1: Cancer patient sera increased BRCA1-KO fibroblast growth. Cells were cultured for 2 weeks in control human serum or cancer patient sera. Cell growth was then analyzed by counting the number of viable cells at every passage (5 days' duration for every passage). (a) The line graph shows the population doubling capability, and (b) the column graph represents the cumulative population doublings at the end of the treatment periods. Data are mean ± SD of 2 control sera vs. 6 cancer patient sera (P = 0.027).

Cancer patients' sera transfer malignant traits to BRCA1-KO fibroblasts
To determine whether cancer sera promote tumor formation in vivo, NOD/SCID mice were injected subcutaneously with BRCA1-KO fibroblasts exposed to healthy control or cancer patients' sera. Cells were injected following 2 weeks of treatment. Mice were followed up for tumor growth, and they were euthanized 3 to 4 weeks after cell inoculation due to the massive growth of cancerous masses, which had compromised the mice's ability to move. Independently of the cancer serum used, all mice injected with cancer sera-treated BRCA1-KO fibroblasts developed visible tumors as early as one week following inoculation (Fig. 2a and Additional file 4: Figure S3). In contrast, none of the mice injected with control fibroblasts, treated with healthy control or cancer patient sera, developed tumors during the course of the experiments (4 weeks latency) (Fig. 2a).
Fig. 2: Cancer patient sera induced the transformation of BRCA1-KO fibroblasts. (a) NOD/SCID mice were injected with either control or BRCA1-KO fibroblasts that had been treated for 2 weeks with healthy individual (control) or cancer patient sera. Four weeks after cell transplantation, mice were euthanized and the tumors photographed. Representative pictures are shown. Tumor masses were only observed when mice were injected with BRCA1-KO fibroblasts treated with cancer patient sera (n = 3 mice per group). (b and c) Formalin-fixed, paraffin-embedded tumors generated following injection of BRCA1-KO fibroblasts treated with CRC-LM patient serum (b: Case E) or pancreatic cancer patient serum (c: Case J) were processed for H&E staining or immunolabeled with antibodies against tumor-specific markers.
Histopathological analyses of excised tumors showed solid growth of sheets of tumor cells with a high proliferation index (over 85–90 % Ki67 positivity) (Fig. 2b, c and Additional file 5: Figure S4). We further characterized these tumors for differentiation patterns based on the primary tumor of the blood donors.
The histology of tumors obtained with cells treated with sera derived from five different patients (Cases A, B, C, D and F) confirmed that they were poorly differentiated carcinomas (Fig. 3 and Additional file 4: Figure S3). Notably, vimentin staining of all the cancerous masses was negative, indicating that cancer factors carried in the sera of all tested patients had integrated in the genome of the BRCA1-KO fibroblasts, changing their fate permanently. Even more striking was the discovery that BRCA1-KO fibroblasts treated with sera taken from four patients with colorectal cancer liver metastases (Cases E, G, H and I) and one patient with pancreatic cancer (Case J) displayed epithelial differentiating features typical of colorectal adenocarcinomas and pancreatic ductal adenocarcinoma, respectively (Table 3, Fig. 2b, c and Additional file 5: Figure S4). The tumors generated with cells treated with cases E, G, H and I (colorectal cancer) were negative for CK7, but they all stained positive for CEA, CK20, CDX-2 and AE1/AE3, which are universal markers of colorectal cancer. The tumors generated with BRCA1-KO fibroblasts treated with case J (pancreatic cancer) stained strongly positive for CK7/CK19, which is a typical feature of pancreatic adenocarcinoma differentiation. Interestingly, CK7 expression was not uniform in the cancer specimen but was strongly expressed in the areas with pancreatic tumor morphology, indicating an evolving differentiation from undifferentiated morphology to a pancreatic cancer phenotype. Attempts to characterize the other tumors immunohistochemically failed to show any differentiating features resembling the tumors of the blood donors, as they all stained negatively for tumor-specific markers (Table 3 and Fig. 3).
Fig. 3: Tumors generated with cancer patient sera-treated BRCA1-KO fibroblasts did not all display differentiation characteristics. BRCA1-KO fibroblasts were treated with cancer patient sera for 2 weeks (Cases A, B, C, D, and F). Treated cells were injected into NOD/SCID mice that were followed for 4 weeks for tumor growth. Developing tumors were excised and photographed (see Additional file 4: Figure S3). Formalin-fixed, paraffin-embedded xenotransplant samples were processed for H&E staining or immunolabeled with antibodies against tumor-specific markers.
Table 3 Summary of the immunohistochemistry analyses of the xenografts obtained with cancer patients' serum-treated BRCA1-KO fibroblasts

BRCA1-KO fibroblasts treated with healthy control sera failed to acquire any malignant transformation
To rule out the possibility that BRCA1-KO fibroblasts might have an innate tendency to turn malignant due to the impaired function of the BRCA1 gene, BRCA1-KO fibroblasts were cultured with healthy subjects' serum for 2 weeks. Inoculation of these cells into NOD/SCID mice failed to form any mass, even after a longer latency period (12 weeks post-transplantation), confirming the known notion that a single oncosuppressor mutation is not enough to trigger carcinogenesis and activate the cascade of events that eventually lead to cancer transformation (Fig. 2a). Furthermore, the results of this test also confirmed that the putative factors responsible for the malignant transformation of the BRCA1-KO fibroblasts were exclusively present in the serum of patients with metastatic cancer and were absent in the serum of healthy subjects.
Malignant transformation of the BRCA1-KO fibroblasts is permanent and not transient
Further, we wanted to test whether the malignant transformation of the BRCA1-KO fibroblasts was permanent and not secondary to a short-term, transient effect. Therefore, BRCA1-KO fibroblasts were treated with cancer serum for 2 weeks. Subsequently, they were exposed to DMEM-F12 media supplemented with 10 % FBS and 1 % Pen-Strep for two more weeks to allow them to recover. Interestingly, we found that the cells had kept their tumorigenic potential and still formed tumors when they were injected into NOD/SCID mice, indicating that the transformation was indeed permanent and not transient (Fig. 4).
Fig. 4: Cancer patient sera permanently transformed BRCA1-KO fibroblasts. (a) BRCA1-KO fibroblasts were treated with cancer patient serum for 2 weeks. Cells were then transferred to regular culture medium (10 % FBS-supplemented DMEM-F12 medium with 1 % antibiotics) for 2 weeks. Cells were injected subcutaneously into NOD/SCID mice. Four weeks after cell transplantation, mice were euthanized and the tumors photographed. Representative pictures are shown. (b) Formalin-fixed, paraffin-embedded xenotransplant samples were processed for H&E staining.

BRCA1-KO fibroblasts demonstrate a significantly increased uptake of cancer-derived exosomes
A number of recent lines of evidence indicate that exosomes are involved in cancer invasion and metastasis [4, 8, 18, 32]. We had noticed in previous unpublished experiments that cancer exosomes had a tendency to accumulate in the cytoplasm of HEK293 cells in larger numbers than what was observed in normal cells. Following this observation, we hypothesized that perhaps one of the methods implemented by the oncosuppressor genes to protect the genome of the cells was to prevent the uptake of genetic material potentially harmful to the cell. As a consequence, any cell with a single oncosuppressor mutation would show an increased uptake of cancer exosomes. In order to test our hypothesis, exosomes were isolated from cancer patients' sera. The size of the isolated particles was between 50 and 120 nm, as visualized by electron microscopy (Fig. 5a) and Nanosight analyses (size = 103 ± 7 nm; Fig. 5b). This is in the range of the expected size for exosomes. The isolated exosomes were labeled with PKH26 and added to BRCA1-KO fibroblast, wild type fibroblast, and HEK293 cell cultures to assess their internalization. Exosome uptake was assessed by measuring the number of positive intracellular spots (Fig. 5c and d). As hypothesized, BRCA1-KO fibroblasts and HEK293 cells showed an uptake of cancer exosomes 3 to 13 times higher (6.6 ± 0.6 times for BRCA1-KO fibroblasts and 7.7 ± 2.0 for HEK293 cells) than that measured in control wild type fibroblasts. This suggests that one of the ways oncosuppressor genes might use to protect the integrity of the genome would be to act as gatekeepers at the membrane level and block the integration of dangerous genetic material.
Fig. 5: BRCA1-KO fibroblasts internalized exosomes more efficiently than control cells. (a) Exosomes were isolated as described under Methods. Representative micrographs of transmission electron microscopy on cancer patient sera exosome preparations are shown. The image showed small vesicles of approximately 50–120 nm in diameter. Scale bars, 100 nm. (b) NanoSight analysis of samples prepared as in (a). The size was centered around 94 nm in diameter.
(c) Confocal microscopy monitoring of PKH26-labeled exosome uptake in vitro into BRCA1-KO fibroblasts, control fibroblasts, and HEK293 cells. Note that exosomes were internalized more efficiently in BRCA1-KO cells. They were dispersed in the cytoplasm and tended to form aggregates in the perinuclear regions. (d) The number of PKH26-labeled exosomes (dots) was counted. Data are expressed as the relative number (Rel. Num.) of exosomes per cell compared to that in control fibroblasts. In the insert, data are expressed as mean ± SD (n = 6 exosome preparations; P = 0.032 for HEK293 cells compared to control fibroblasts, P = 0.028 for BRCA1-KO fibroblasts compared to control fibroblasts, and P = 0.35 for HEK293 cells compared to BRCA1-KO fibroblasts).

Discussion
In the present study, we confirmed that the sera of patients with metastatic cancer contain tumorigenesis-signaling factors that, once delivered to recipient target cells, are capable of completing the cascade of events that eventually leads the cells to acquire malignant traits. In our previous research, we had used HEK293 cells as a model of "initiated" cells to demonstrate that the horizontal transfer of blood-circulating cancer factors is also applicable to human cells [26], thus confirming the validity of the "genometastatic" theory in humans [23–25]. According to this theory, carcinogenetic steps such as initiation, promotion and progression might not represent events limited to the cells forming the primary tumor, but may actually be a process reproducible through cancer factors, shed by primary tumors and carried through the blood, to susceptible cells located at metastatic sites [5, 7, 10–16]. In order to strengthen the validity of this alternative metastatic pathway in humans, we used fibroblasts, which are among the most represented cells in the human body and central players at metastatic sites [33, 34]. In addition, these cells display a high level of plasticity [35]. We induced a BRCA1 oncosuppressor mutation to reproduce an in vitro model that would be as close as possible to what is encountered in real clinical scenarios. We hypothesized that not only immortal cells, as previously demonstrated [23–26], but also any human cell carrying a single oncosuppressor mutation might represent a target cell susceptible to malignant transformation if exposed to blood-circulating cancer factors. The observation that BRCA1-KO fibroblasts cultured in cancer patients' sera displayed oncogenic properties, such as increased proliferation potential and the ability to form tumors following subcutaneous injection into NOD/SCID mice, supports the belief that a horizontal transfer of cancer material might truly have been overlooked when trying to understand metastatic disease. Moreover, the discovery that BRCA1-KO fibroblasts change their fate completely and turn into colon cancer cells and pancreatic cancer cells when exposed to serum derived from patients affected by metastatic colon and pancreatic cancer strengthens the authors' belief that similarity doesn't imply sameness, and that therefore metastatic cells might not necessarily be only cells detaching from primary tumors. In line with this concept, the molecular profiles of primary and metastatic lesions are not usually identical and therefore, at least theoretically, the possibility that metastases might not derive from the same cells is still open [36–39].
The data gathered in our study prove that any cell, as long as it is "initiated", has the potential to incorporate "signaling" factors in its genome, change its fate and acquire aberrant phenotypical traits identical or similar to those of the source of the signaling factors. To our knowledge, this is the first study to demonstrate the transformation of human fibroblasts carrying a single oncosuppressor mutation (i.e., BRCA1) into colon cancer or pancreatic cancer cells after exposure to serum of patients with metastatic colon or metastatic pancreatic cancer. This result becomes even more striking and fascinating when considering that the recovery test indicated that the phenotypical modifications were indeed permanent and not related to a transient alteration of the genetic asset of the cells. In other words, a short exposure to cancer factors present in the serum seems to have been strong enough to overcome the repair mechanisms of the treated cells. The main implication of this stable modification of the genome of human cells is that, at least in vitro, metastatic transformation through horizontal transfer is not a theory anymore but a fascinating reality. The scientific soundness of these data is corroborated by the evidence, brought about in our experiments, that the malignant transformation is unlikely to be secondary to an innate instability of the fibroblasts caused by the BRCA1 mutation. In fact, the exposure of the BRCA1-KO cells to healthy serum never made them susceptible to malignant transformation, confirming the known notion that a single mutation is not able to trigger the cascade of events that leads to tumorigenesis [40]. In our study, we documented, for the first time, a significantly increased uptake of cancer-derived exosomes by BRCA1-KO fibroblasts and HEK293 cells when compared to wild type fibroblasts. This finding, noted only in cells carrying a single oncosuppressor mutation, suggests a potential role that oncosuppressor genes might have in exosome trafficking. Perhaps they might protect the integrity of the cellular genome not only by repairing DNA damage and controlling cell cycle checkpoints [27, 41], but also by blocking the uptake of extracellular mutating material and thus preventing its integration in the genome, with subsequent DNA alterations. Although we reported an increased uptake of cancer-derived exosomes in oncosuppressor-mutated cells, their putative role in the documented malignant transformation of the cells has yet to be determined, as does the true nature of all the factors involved in the transfer of malignant traits. While cancer cell-derived circulating factors (i.e., DNA, mRNA, miRNA, proteins) have been detected in the blood of cancer patients, and their regulatory role in cancer progression and development has been reported in many studies [16, 22, 32, 42, 43], their respective roles still have to be fully defined. If, on one hand, it has been reported that exosomes originating from the primary tumor pave the way for organ-specific metastasis by preparing a niche for the engraftment of circulating cancer cells [5–7, 9], on the other hand, the results of our study might offer a different perspective, which does not exclude, but might complement, the conventional view on metastasis. The inability of the BRCA1-KO fibroblasts to fully differentiate into all cancer phenotypes is a limitation of our study.
In our view, this failure is not necessarily perceived as a weakness of the theory, but rather as an acknowledgment that the genometastatic process is not understood in its entirety. When we exposed HEK293 cells to cancer patients' sera, the cells turned only into poorly differentiated carcinomas, regardless of the type of cancer studied [26]. The exposure of BRCA1-KO fibroblasts has added value to the genometastatic theory, since we have observed a full differentiation into at least two cancer cell lineages (colon and pancreas). We are planning to repeat the experiments with different cells and different single oncosuppressor mutations to see if the horizontal transformation targets different types of cells in different organs, and at different stages of their physiological differentiation, according to the type of cancer.

Conclusions
The data presented in this study support the hypothesis that any human cell carrying a single oncosuppressor mutation is capable of integrating cancer factors carried in the blood of cancer patients and undergoing complete malignant transformation. The evidence shown in our experiments that the uptake of cancer-derived exosomes is significantly increased in these cells suggests a possible role that oncosuppressor genes might have in exosome trafficking. The reported findings support the possible role of a non-classical pathway in the exchange of cancer traits between malignant and non-malignant cells, which may have implications during cancer progression and metastasis. Based on these results, the hypothesis that dissemination and migration of cancer cells from primary tumors might not be the only mechanism to explain metastases seems rational and merits further study.

Abbreviations
BRCA1: breast cancer susceptibility gene 1; BSA: bovine serum albumin; DAB: diaminobenzidine; FBS: fetal bovine serum; H&E: hematoxylin and eosin; HEK293: human embryonic kidney cells; KO: knockout; PDL: population doubling level

References
1. Eccles SA, Welch DR. Metastasis: recent discoveries and novel treatment strategies. Lancet. 2007;369(9574):1742–57.
2. Jemal A, Bray F, Center MM, Ferlay J, Ward E, Forman D. Global cancer statistics. CA Cancer J Clin. 2011;61:69–90.
3. Nguyen DX, Bos PD, Massagué J. Metastasis: from dissemination to organ-specific colonization. Nat Rev Cancer. 2009;9:274–84.
4. Skog J, Wurdinger T, van Rijn S. Glioblastoma microvesicles transport RNA and proteins that promote tumour growth and provide diagnostic biomarkers. Nat Cell Biol. 2008;10:1470–81.
5. Hood JL, San RS, Wickline SA. Exosomes released by melanoma cells prepare sentinel lymph nodes for tumor metastasis. Cancer Res. 2011;71:3792–801.
6. Grange C, Tapparo M, Collino F, Vitillo L, Damasco C, Deregibus MC, Tetta C, Bussolati B, Camussi G. Microvesicles released from human renal cancer stem cells stimulate angiogenesis and formation of lung premetastatic niche. Cancer Res. 2011;71(15):5346–56.
7. Peinado H, Alečković M, Lavotshkin S, Matei I, Costa-Silva B, Moreno-Bueno G, Hergueta-Redondo M, Williams C, García-Santos G, Ghajar C, Nitadori-Hoshino A, Hoffman C, Badal K, Garcia BA, Callahan MK, Yuan J, Martins VR, Skog J, Kaplan RN, Brady MS, Wolchok JD, Chapman PB, Kang Y, Bromberg J, Lyden D. Melanoma exosomes educate bone marrow progenitor cells toward a prometastatic phenotype through MET. Nat Med. 2012;18:883–91.
8. Abdel-Mageed ZY, Yang Y, Thomas R, Ranjan M, Mondal D, Moraz K, Fang Z, Rezk BM, Moparty K, Sikka SC, Sartor O, Abdel-Mageed AB. Neoplastic reprogramming of patient-derived adipose stem cells by prostate cancer cell-associated exosomes. Stem Cells. 2014;32:983–97.
9. Fujita Y, Yoshioka Y, Ochiya T.
Extracellular vesicle transfer of cancer pathogenic components. Cancer Sci. 2016. [DOI: 10.1111; Epub ahead of print].
10. Valadi H, Ekström K, Bossios A, Sjöstrand M, Lee JJ, Lötvall JO. Exosome-mediated transfer of mRNAs and microRNAs is a novel mechanism of genetic exchange between cells. Nat Cell Biol. 2007;9:654–9.
11. Runz S, Keller S, Rupp C, Stoeck A, Issa Y, Koensgen D. Malignant ascites-derived exosomes of ovarian carcinoma patients contain CD24 and EpCAM. Gynecol Oncol. 2007;107:563–71.
12. Pisetsky DS, Gauley J, Ullal AJ. Microparticles as a source of extracellular DNA. Immunol Res. 2010;49:227–34.
13. Subra C, Grand D, Laulagnier K, Stella A, Lambeau G, Paillasse M, De Medina P, Monsarrat B, Perret B, Silvente-Poirot S, Poirot M, Record M. Exosomes account for vesicle-mediated transcellular transport of activatable phospholipases and prostaglandins. J Lipid Res. 2010;51:2105–20.
14. Gaiffe E, Pretet JL, Launay S, Jacquin L, Saunier M, Hetzel G. Apoptotic HPV positive cancer cells exhibit transforming properties. PLoS One. 2012;7:e36766.
15. Balaj L, Lessard R, Dai L, Cho YJ, Pomeroy SL, Breakefield XO. Tumour microvesicles contain retrotransposon elements and amplified oncogene sequences. Nat Commun. 2011;2:180.
16. Fleischhacker M, Schmidt B. Circulating nucleic acids (CNAs) and cancer: a survey. Biochim Biophys Acta. 2007;1775(1):181–232.
17. Thery C, Zitvogel L, Amigorena S. Exosomes: composition, biogenesis and function. Nat Rev Immunol. 2002;2:569–79.
18. Ogorevc E, Kralj-Iglic V, Veranic P. The role of extracellular vesicles in phenotypic cancer transformation. Radiol Oncol. 2013;47(3):197–205.
19. Candelario KM, Steindler DA. The role of extracellular vesicles in the progression of neurodegenerative disease and cancer. Trends Mol Med. 2014;20(7):368–74.
20. Pant S, Hilton H, Burczynski ME. The multifaceted exosome: biogenesis, role in normal and aberrant cellular function, and frontiers for pharmacological and biomarker opportunities. Biochem Pharmacol. 2012;83(11):1484–94.
21. Colombo M, Raposo G, Théry C. Biogenesis, secretion, and intercellular interactions of exosomes and other extracellular vesicles. Annu Rev Cell Dev Biol. 2014;30:255–89.
22. Falcone G, Felsani A, D'Agnano I. Signaling by exosomal microRNAs in cancer. J Exp Clin Cancer Res. 2015;34:32.
23. García-Olmo D, García-Olmo DC, Ontañón J, Martinez E, Vallejo M. Tumor DNA circulating in the plasma might play a role in metastasis. The hypothesis of the genometastasis. Histol Histopathol. 1999;14:1159–64.
24. García-Olmo DC, Domínguez C, García-Arranz M, Anker P, Stroun M, García-Verdugo JM, García-Olmo D. Cell-free nucleic acids circulating in the plasma of colorectal cancer patients induce the oncogenic transformation of susceptible cultured cells. Cancer Res. 2010;70:560–7.
25. Trejo-Becerril C, Perez-Cardenas E, Taja-Chayeb L, Anker P, Herrera-Goepfert R, Medina-Velazquez LA, Hidalgo-Miranda A, Perez-Montiel D, Chavez-Blanco A, Cruz-Velazquez J, Díaz-Chávez J, Gaxiola M, Dueñas-González A. Cancer progression mediated by horizontal gene transfer in an in vivo model. PLoS One. 2012;7(12):e52754.
26. Abdouh M, Zhou S, Arena V, Arena M, Lazaris A, Onerheim R, Metrakos P, Arena G. Transfer of malignant trait to immortalized human cells following exposure to human cancer serum. J Exp Clin Cancer Res. 2014;33(1):86.
27. Gudmundsdottir K, Ashworth A. The roles of BRCA1 and BRCA2 and associated proteins in the maintenance of genomic stability. Oncogene. 2006;25(43):5864–74.
28. Welcsh PL, King MC. BRCA1 and BRCA2 and the genetics of breast and ovarian cancer. Hum Mol Genet. 2001;10(7):31–42.
29. Campeau PM, Foulkes WD, Tischkowitz MD. Hereditary breast cancer: new genetic developments, new therapeutic avenues. Hum Genet. 2008;124(1):31–42.
30. Pal T, Permuth-Wey J, Betts JA. BRCA1 and BRCA2 mutations account for a large proportion of ovarian carcinoma cases. Cancer. 2005;104(12):2807–16.
31. Ran FA, Hsu PD, Wright J, Agarwala V, Scott DA, Zhang F. Genome engineering using the CRISPR-Cas9 system. Nat Protoc. 2013;8(11):2281–308.
32. Hoshino A, Costa-Silva B, Shen TL, Rodrigues G, Hashimoto A, Tesic MM. Tumour exosome integrins determine organotropic metastasis. Nature. 2015;527(7578):329–35.
33. Polanska UM, Orimo A. Carcinoma-associated fibroblasts: non-neoplastic tumour-promoting mesenchymal cells. J Cell Physiol. 2013;228(8):1651–7.
34. Paulsson J, Micke P. Prognostic relevance of cancer-associated fibroblasts in human cancer. Semin Cancer Biol. 2014;25:61–8.
35. Takahashi K, Tanabe K, Ohnuki M, Narita M, Ichisaka T, Tomoda K, Yamanaka S. Induction of pluripotent stem cells from adult human fibroblasts by defined factors. Cell. 2007;131(5):861–72.
36. Suzuki M, Tarin D. Gene expression profiling of human lymph node metastases and matched primary breast carcinomas: clinical implications. Mol Oncol. 2007;1(2):172–80.
37. Huang S, Chen Y, Podsypanina K, Li Y. Comparison of expression profiles of metastatic versus primary mammary tumors in MMTV-Wnt-1 and MMTV-Neu transgenic mice. Neoplasia. 2008;10(2):118–24.
38. Liu X, Zhang M, Go VLW, Hu S. Membrane proteomic analysis of pancreatic cancer cells. J Biomed Sci. 2010;17(1):74.
39. Yoshida A, Okamoto N, Tozawa-Ono A, Koizumi H, Kiguchi K, Ishizuka B, Kumai T, Suzuki N. Proteomic analysis of differential protein expression by brain metastases of gynecological malignancies. Hum Cell. 2013;26(2):56–66.
40. Boehm JS, Hession MT, Bulmer SE, Hahn WC. Transformation of human and murine fibroblasts without viral oncoproteins. Mol Cell Biol. 2005;25:6464–74.
41. Jiang Q, Greenberg RA. Deciphering the BRCA1 tumor suppressor network. J Biol Chem. 2015;290(29):17724–32.
42. Diehl F, Li M, Dressman D, He Y, Shen D, Szabo S, Diaz Jr LA, Goodman SN, David KA, Juhl H, Kinzler KW, Vogelstein B. Detection and quantification of mutations in the plasma of patients with colorectal tumors. Proc Natl Acad Sci U S A. 2005;102(45):16368–73.
43. Kahlert C, Melo SA, Protopopov A, Tang J, Seth S, Koch M, Zhang J, Weitz J, Chin L, Futreal A, Kalluri R. Identification of double-stranded genomic DNA spanning all chromosomes with mutated KRAS and p53 DNA in the serum exosomes of patients with pancreatic cancer. J Biol Chem. 2014;289(7):3869–75.

Acknowledgements
We are grateful to Ayat Salman for her assistance with the Ethical Committee approvals, Laura Montermini for her assistance with Nanosight data acquisition, and Diane Gingras for her assistance with electron microscopy data acquisition. This work was financially supported by Giuseppe Monticciolo. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Author information
Cancer Research Program, McGill University Health Centre-Research Institute, 1001 Decarie Boulevard, Montreal, H4A 3J1, QC, Canada (Dana Hamam, Mohamed Abdouh & Goffredo Orazio Arena)
Department of Experimental Surgery, Faculty of Medicine, McGill University, 845 Rue Sherbrooke O, Montreal, H3A 0G4, QC, Canada
Department of Pathology, McGill University Health Centre-Research Institute, 1001 Decarie Boulevard, H4A 3J1, Montreal, QC, Canada (Zu-Hua Gao)
Department of Obstetrics and Gynecology, Santo Bambino Hospital, via Torre del Vescovo 4, Catania, Italy (Vincenzo Arena)
Department of Surgical Sciences, Organ Transplantation and Advances Technologies, University of Catania, via Santa Sofia 84, Catania, Italy (Manuel Arena)
Department of Surgery, McGill University, St. Mary Hospital, 3830 Lacombe Avenue, Montreal, H3T 1M5, QC, Canada (Goffredo Orazio Arena)
Correspondence to Goffredo Orazio Arena.
GA supervised the study. MA, VA, MA and GA conceived and designed the study. MA, DH and GA developed the methodology. MA, DH, ZHG and GA acquired and analyzed data, and managed patients. MA, DH and GA drafted the manuscript. All authors read and approved the final draft of the manuscript.

Additional files
Additional file 1: Figure S1. Methodology design followed to knock out BRCA1 in fibroblasts. The pX458 plasmid was used as a vector for the CRISPR-Cas9 system for BRCA1 knockout. Human fibroblasts were transfected using Lipofectamine 3000 (A). Fibroblasts transfected with sgBRCA1-pX458 or empty pX458 vectors were sorted using a FACSAria cell sorter based on their GFP positivity. Only GFP-positive cells were obtained and expanded in culture (B). (i) Naïve fibroblasts were gated as the negative fraction (GFP-negative fibroblasts). (ii) Fraction of cells sorted as control fibroblasts (empty pX458 vector-transfected cells). (iii) Fraction of cells sorted as sgBRCA1-pX458-transfected fibroblasts (BRCA1-KO). Transfection efficiency was 4–6 %. Fibroblasts were treated with cancer patients' serum or healthy individual serum. Treated cells were analyzed for their in vitro proliferation or injected into NOD/SCID mice to assess tumor growth potential. (PPT 524 kb)
Additional file 2: Figure S2. Validation of BRCA1 knockout in human fibroblasts. (A) The Surveyor nuclease assay was performed as described under Materials and Methods. DNA was extracted from sorted fibroblasts. After amplification, DNA was denatured, reannealed and subjected to endonuclease digestion. Digestion products were run on a 1 % agarose gel. Only with DNA extracted from sgBRCA1-transfected fibroblasts did we detect 3 bands: the full-length PCR product (i) and the digestion products (ii and iii). Note that the cumulative size of bands ii and iii equals the size of band i. (B) Western blot analysis of proteins extracted from control fibroblasts (empty vector-transfected) and BRCA1-KO fibroblasts. Note that the BRCA1 signal is absent in BRCA1-KO fibroblasts. (*) points to the BRCA1 signal. (PPT 265 kb)
Additional file 3: Table S1. List of antibodies used in this study. Table S2. BRCA1 guide sequence as designed by the CRISPR Design Tool. Table S3. Analyses of off-target sequences of the BRCA1 guide. (DOC 58 kb)
Additional file 4: Figure S3. Cancer patient sera induced the transformation of BRCA1-KO fibroblasts. BRCA1-KO fibroblasts were treated with cancer patient sera for 2 weeks (6 cases). Treated cells were injected into NOD/SCID mice that were followed for 4 weeks for tumor growth.
Developing tumors were excised and photographed. (PPT 189 kb)
Additional file 5: Figure S4. Cancer patient sera changed the fate of BRCA1-KO fibroblasts. BRCA1-KO fibroblasts were treated with CRC-LM patient sera for 2 weeks (Cases G, H and I). Treated cells were injected into NOD/SCID mice that were followed for 4 weeks for tumor growth. Generated tumors were processed for H&E staining, or immunolabeled with antibodies against tumor-specific markers. (PPT 36551 kb)

Cite this article: Hamam, D., Abdouh, M., Gao, Z. et al. Transfer of malignant trait to BRCA1 deficient human fibroblasts following exposure to serum of cancer patients. J Exp Clin Cancer Res 35, 80 (2016). doi:10.1186/s13046-016-0360-9
Keywords: Genometastasis; Tumor suppressor genes
What are the most misleading alternate definitions in taught mathematics?
Asked 10 years, 1 month ago
I suppose this question can be interpreted in two ways. It is often the case that two or more equivalent (but not necessarily semantically equivalent) definitions of the same idea/object are used in practice. Are there examples of equivalent definitions where one is more natural or intuitive? (I mean so much more intuitive as to not be subjective.) Alternatively, what common examples are there in standard lecture courses where a particular symbolic definition obscures the concept being conveyed?
Tags: soft-question, big-list, examples, teaching, definitions. Asked by QPeng.
This definition of the product topology is really not so bad when you correct the typos and translate it to words: it says every point of X × Y should have an open neighborhood that's a product of open sets in X and in Y. What's wrong with that? – Jonathan Wise Dec 2 '09 at 16:52
Well, the natural generalization of that definition is the box topology, whereas the natural generalization of Daniel's definition is the (categorical) product topology. – Qiaochu Yuan Dec 2 '09 at 17:07
My second comment: (2) The definition in terms of open sets is spiritually a construction, not a definition. It may be described as "a construction in terms of open sets that works only for finite products". The definition in terms of the coarsest topology is a genuine definition, and is generally accepted as the correct definition, but it doesn't give you a construction. The genuine definition gives you much more intuition about the product, but sometimes you need a construction. Some of my fellow category theorists regard that bit about needing a construction as a heresy. – SixWingedSeraph Dec 2 '09 at 17:24
The definition in terms of a coarsest topology gives you a perfectly valid construction: take the inverse image of every open set. – Qiaochu Yuan Dec 2 '09 at 17:28
This general point about definitions needs to be made: the definition is intended to give a (more or less) minimal technical description of the concept that implies all true theorems about the concept and nothing else. It doesn't matter if the definition emphasizes technical aspects and doesn't mention some big intuitive ideas about it. That's not what definitions are for. A teacher should provide many ways to think about the concept, some of which might constitute definitions. – SixWingedSeraph Dec 5 '09 at 1:35
Many topics in linear algebra suffer from the issue in the question. For example: in linear algebra, one often sees the determinant of a matrix defined by some ungodly formula, often even with special diagrams and mnemonics given for how to compute it in the 3×3 case, say:
det(A) = some horrible mess of a formula
Even relatively sophisticated people will insist that det(A) is the sum over permutations, etc., with a sign for the parity, etc. Students trapped in this way of thinking do not understand the determinant. The right definition is that det(A) is the volume of the image of the unit cube after applying the transformation determined by A. From this alone, everything follows. One sees immediately the importance of det(A) = 0, the reason elementary operations change the determinant the way they do, and why diagonal and triangular matrices have the determinants they do.
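As a quick numerical illustration of the volume definition (an editorial sketch, not part of the original answer; the matrix is an arbitrary example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])   # arbitrary example matrix

    # The unit square is spanned by e1 and e2, so its image under A is the
    # parallelogram spanned by the columns of A.
    c1, c2 = A[:, 0], A[:, 1]
    signed_area = c1[0] * c2[1] - c1[1] * c2[0]   # 2-D cross product of the columns

    print(signed_area)        # 6.0: signed area of the image of the unit square
    print(np.linalg.det(A))   # 6.0: the determinant is exactly this signed volume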
Even matrix multiplication, if defined by the usual formula, seems arbitrary and even crazy, without some background understanding of why the definition is that way. The larger point here is that although the question asked about having a single wrong definition, really the problem is that a limiting perspective can infect one's entire approach to a subject. Theorems, questions, exercises, and examples, as well as definitions, can be coming from an incorrect view of a subject! Too often, (undergraduate) linear algebra is taught as a subject about static objects---matrices sitting there, having complicated formulas associated with them and complex procedures carried out with them, often for no immediately discernible reason. From this perspective, many matrix rules seem completely arbitrary. The right way to teach and to understand linear algebra is as a fully dynamic subject. The purpose is to understand transformations of space. It is exciting! We want to stretch space, skew it, reflect it, rotate it around. How can we represent these transformations? If they are linear, then we are led to consider the action on unit basis vectors, so we are led naturally to matrices. Multiplying matrices should mean composing the transformations, and from this one derives the multiplication rules. All the usual topics in elementary linear algebra have deep connections with essentially geometric concepts connected with the corresponding transformations. $\begingroup$ "Even relatively sophisticated people will insist that det(A) is the sum over permutations..." Yes indeed. How else do you prove that SL(n) is an algebraic group? How you want to think of a determinant depends on the situation. $\endgroup$ – JS Milne Dec 9 '09 at 6:34 $\begingroup$ Of course you are right, and perhaps my post is a bit of a rant! I apologize. (But surely it is implicit in the question that all the equivalent formulations of a definition might find a suitable usage.) My point is that in an undergraduate linear algebra class, a computational approach to the determinant obscures its fundamental geometric meaning as a measure of volume inflation. The permutation sum definition is especially curious in an undergraduate course, because the method is not feasible (exponential time), whereas other methods, such as the LU decomposition, are polynomial time. $\endgroup$ – Joel David Hamkins Dec 10 '09 at 14:09 $\begingroup$ Victor, of course I mean the signed volume. And I don't think it is so difficult. One can even treat it axiomatically: inflating one dimension by a factor multiplies the volume by that factor; swapping two coordinates reverses orientation; skews do not change volume. From these principles one can derive the usual formulas, while also providing a feasible means to compute it. $\endgroup$ – Joel David Hamkins Jun 2 '10 at 21:42 $\begingroup$ Another way to say it "axiomatically" is that the determinant is the induced endomorphism of the top exterior power of the vector space. Of course, it probably shouldn't be defined that way in a first linear algebra course! $\endgroup$ – Mike Shulman Jul 16 '10 at 3:14 $\begingroup$ This reminds me of a story recounted by a friend of mine in graduate school. He spent a lot of time in the department, and one evening was approached by an undergraduate taking a fancy class that had introduced the trace of a linear transformation in the slick coordinate-free manner. This undergraduate had been tasked with computing the trace of a certain $2\times 2$ matrix and had no idea how to proceed.
$\endgroup$ – Ramsey Apr 25 '12 at 4:58 Here's another algebra peeve of mine. The definition of a normal subgroup in terms of conjugation is pretty strange until it's explained that normal subgroups are the ones you can quotient by. Again, in my opinion, normal subgroups should be introduced as kernels of homomorphisms from the get-go. Qiaochu Yuan $\begingroup$ Many textbooks define normal subgroups before even talking about homomorphisms, which is totally bonkers in my opinion. $\endgroup$ – Steven Gubkin Dec 4 '09 at 19:44 $\begingroup$ Right. Unlike normal subgroups, homomorphisms are totally intuitive: they turn true equations into other true equations. That's something students have been doing their whole lives. $\endgroup$ – Qiaochu Yuan Dec 4 '09 at 21:30 $\begingroup$ I totally agree with this and always tell students to think of "kernel of some homomorphism" as the definition and "closed under conjugation by any element of G" as a fact that can be shown to be equivalent to it. $\endgroup$ – gowers Dec 5 '09 at 22:35 $\begingroup$ I agree that as soon as you define "normal subgroup" you should prove that they are exactly the kernels of homomorphisms, but in some situations (e.g., algebraic groups) it's hard to show that normal subgroups are kernels and in other situations (e.g., group schemes) they aren't. $\endgroup$ – JS Milne Dec 9 '09 at 6:21 $\begingroup$ Let us also remember that homomorphisms gained a foothold more than a century after normal subgroups. You need the idea of an abstract group in order for the quotients and homomorphism theorems to make sense (which is also the metamathematical reason behind the difficulties mentioned by JS Milne). $\endgroup$ – Victor Protsak May 28 '10 at 1:27 In my experience, introductory algebra courses never bother to clarify the difference between the direct sum and the direct product. They're the same for a finite collection of abelian groups, which in my opinion gets confusing. Of course, they're quite different for infinite collections. I think students should be taught sooner rather than later that the first is the coproduct and the second is the product in $\text{Ab}$. This clarifies the constructions for non-abelian groups as well, since the direct product remains a product in $\text{Grp}$ but the coproduct is very different! $\begingroup$ I do remember having to explain the difference to many confused people. It is quite confusing to see lecturers use the two interchangeably without mention. $\endgroup$ – Sam Derbyshire Dec 2 '09 at 17:29 $\begingroup$ I'll vouch for this as a guinea pig: I did learn them as separate concepts my freshman year, and learned to be careful even in the case of two spaces. That gave me a good intuition about when to mistrust finite-case defined products (e.g. the box topology, above) because they "could be defined differently" in the infinite case. I didn't need to know the categorical definitions that early on. $\endgroup$ – Elizabeth S. Q. Goodman Dec 4 '09 at 5:30 $\begingroup$ even in the finite case, products and coproducts differ because they are NOT only the objects, but come together with structure morphisms (as with every universal object). $\endgroup$ – Martin Brandenburg May 24 '10 at 21:45 $\begingroup$ Indeed I think one has to grasp the distinction between direct sum and direct product to truly appreciate their isomorphy in most cases. $\endgroup$ – darij grinberg May 28 '10 at 10:05 $\begingroup$ Not to mention that in $Grp$ the direct sum and the coproduct are also two different things.
(For finitely many summands, the direct sum is again the same as the direct product; for infinitely many summands, the direct sum is neither the product nor the coproduct, although it still has a more colimitish flavour.) $\endgroup$ – Toby Bartels Apr 4 '11 at 4:22 I increasingly abhor the introduction of the finite ring $Z_n$ not as $\mathbb{Z}/n\mathbb{Z}$ but as the set $\{0,\ldots,n-1\}$ with "clock arithmetic". (I understand that if you want to introduce modular arithmetic at the high school level or below, this is the way to go. I am talking about undergraduate abstract algebra textbooks that introduce the concept in this way.) Two problems: Using clocks to motivate addition modulo $n$: excellent pedagogy. Be sure to mention military time, which goes from $0$ to $23$ instead of $1$ to $12$ twice. But...using clocks to motivate multiplication modulo $n$: WTF? Time squared?? Mod $24$??? It's the worst kind of pedagogy: something that sounds like it should make sense but actually doesn't. Of course soon enough you stop clowning around and explain that you just want to add/subtract/multiply the numbers and take the remainder mod $n$. This brings me to: Many texts define $Z_n$ as the set $\{0,\ldots,n-1\}$ and endow it with addition and multiplication by taking the remainder mod $n$. Then they say that this gives a ring. Now why is that? For instance, why are addition and multiplication associative operations? If you think about this for a little while, you will find that all explanations must pass through the fact that $\mathbb{Z}$ is a ring under the usual addition and multiplication and the operations on $Z_n$ are induced from those on $\mathbb{Z}$ by passing to the quotient. You don't, of course, have to use these exact words, but I do not see how you can avoid using these concepts. Thus you should be peddling the homomorphism concept from the very beginning. As a corollary, I'm saying: the concept of a finite ring $Z_n$ for some generic $n$ is more logically complex than that of the one infinite ring $\mathbb{Z}$ (that rules them all?). A lot of people seem, implicitly, to think that the opposite is true. Pete L. Clark $\begingroup$ While I agree with the premise here, you can motivate multiplication mod n by performing $a$ tasks which each require $b$ hours...what time will it be at the end? Of course, this is really doing (integer) * (residue) and not (residue)*(residue), but then you have them observe that if you do your task $n$ times, $b$ is irrelevant, and remarkably, all that matters is how many times you perform the task mod n!! $\endgroup$ – Cam McLeman May 27 '10 at 16:52 $\begingroup$ Also, it's probably worth noting that strictly speaking addition on a clock doesn't make much sense either; an actual clock is not the group $Z_n$, but rather the action of that group on itself. $\endgroup$ – Harry Altman May 27 '10 at 22:09 $\begingroup$ @Harry I'm glad someone said it! Times form an affine space. You don't add times. "3 o'clock plus 4 o'clock" means nothing. The thing you add are time intervals. Time intervals are measured by stopwatches. Stopwatches with hands generally don't wrap at 12 or 24 hours. $\endgroup$ – Dan Piponi Jul 15 '10 at 22:34 $\begingroup$ A torsor, in other words, right?
$\endgroup$ – yatima2975 Aug 18 '10 at 15:47 $\begingroup$ I have seen students who must have been exposed to the introduction to $Z_n$ that Pete warns of and who think that they are specifying a $\mathbb{Z}$-module homomorphism $\{0,1,2\}\rightarrow \{0,1,2,3,4,5\}$ by setting $i\mapsto i$. This to me is the ultimate reason to avoid introducing $Z_n$ as the set $\{0,\ldots,n-1\}$. $\endgroup$ – Alex B. Oct 23 '10 at 16:07 A simple example is the two definitions for independence of events:
Definition 1: A and B are independent iff $P(A\cap B) = P(A)P(B)$.
Definition 2: A is independent from B iff $P(A\mid B) = P(A)$.
Some presentations start with Definition 1, which is entirely uninformative: nothing in it explains why on earth we bother discussing this. In contrast, Definition 2 says exactly what "independent" means: knowing that B has occurred does not change the probability that A occurs as well. A reasonable introduction to the subject should start with Definition 2; then observe there is an issue when P(B)=0, and resolve it; then observe independence is symmetric; then derive Definition 1. Christos $\begingroup$ Resolve it how, short of a serious digression into conditional expectation? $\endgroup$ – Jeff Hussmann May 27 '10 at 20:02 $\begingroup$ One cannot "resolve" this. If B has probability zero, any conditional probability is admissible. I think a natural approach would be showing that 2. implies 1. when P(B)>0 and then abstract and generalize to 1. The second definition is simply not workable. $\endgroup$ – Michael Greinecker Jul 15 '10 at 20:33 $\begingroup$ @Mike: That works, but you can only do it while forgetting about sigma-algebras in nice enough spaces. See S. M. Samuels, The Radon-Nikodym Theorem as a Theorem in Probability. jstor.org/pss/2321055 I think "nice enough" is "Borel" but I can't verify that at this computer. $\endgroup$ – Neil Toronto Apr 11 '11 at 15:38 $\begingroup$ @MichaelGreinecker, irrespective of its mathematical correctness, the second definition is the right way of motivating the whole thing. We mathematicians need to stop trying to present everything perfectly the first time around. Give the reader a working definition, discuss its problems, and use that to motivate the "proper" definition. That is how you engage the reader. $\endgroup$ – goblin Jul 12 '14 at 9:01 $\begingroup$ I'm going to have to object to this; the first definition is rather natural too. If X and Y are independent random variables, that means that the random variable $(X,Y)$ behaves in the obvious way; the possible outcomes are given simply by pairing an outcome for $X$ (weighted by the probability that $X=x$) with an outcome for $Y$ (weighted by the probability that $Y=y$). $\endgroup$ – user13113 May 18 '15 at 14:50 One that I particularly dislike is the definition of an action of a group G on a set X as being a function $f:G\times X\rightarrow X$ that satisfies certain properties. I cannot understand why anybody gives this definition when "homomorphism from G to the group of permutations of X" is not only easier to understand but is also how one thinks about group actions later. $\begingroup$ Why? They are really different forms of the same notion, and so might as well be given together. In some situations only the $G\times X\to X$ definition makes sense, for example, for an algebraic group G acting on a variety X (the automorphism group of X isn't an algebraic group).
In other situations it's easier: when $G$ and $X$ have topologies, it's easier to say that $G\times X\to X$ is continuous than to first define a topology on Aut(X). $\endgroup$ – JS Milne Dec 9 '09 at 6:29 $\begingroup$ @gowers: Interesting, I think of the "$f$" version as the more natural of the two. What's an action? You take a $g\in G$ and an $x\in X$, and you get a new $x'\in X$. That's precisely encoded by f. Taking in a $g\in G$ and outputting "a function which sends $x$'s to $x'$'s" seems to me to obfuscate the matter. $\endgroup$ – Cam McLeman May 27 '10 at 16:41 $\begingroup$ @ David: What's the difference between 'you can multiply by scalars in a ring' and 'with a map $R \times M \to M$'? $\endgroup$ – Toby Bartels Apr 4 '11 at 4:33 $\begingroup$ One place that you must think about it this way is in the theory of Poisson actions. There, $X$ is a Poisson manifold, and you could consider the group $Aut(X)$ of ichthyomorphisms of it, but unless $G$ has a trivial Poisson structure the action of $G$ on $X$ is not by ichthyomorphisms. That is, each $g\in G$ does not preserve $X$'s Poisson structure. This is reflected in the fact that $g\in G$ is usually not a Poisson submanifold. All that one has is the map $G\times X \to X$. $\endgroup$ – Allen Knutson Mar 15 '12 at 13:57 $\begingroup$ @AllenKnutson I like "ichthyomorphism"; is it your invention? Google finds only a few occurrences of this word, and the only one that looks mathematical is this MO page. $\endgroup$ – Andreas Blass Apr 21 '15 at 14:55 One of my biggest annoyances is professors or books which fail to adequately distinguish between prime and irreducible elements of a ring, Herstein if I remember correctly being a (ha ha) prime example of this. The fact that these are the same in Z, where people first learn about unique factorization, doesn't help matters. Zev Chonoles $\begingroup$ Or that we say "irreducible" when talking about polynomials. $\endgroup$ – Qiaochu Yuan Dec 2 '09 at 17:34 $\begingroup$ It probably doesn't help that the "usual" definition of a prime number is as an irreducible element of the rig $N$ of natural numbers... $\endgroup$ – Yemon Choi Dec 2 '09 at 17:36 $\begingroup$ What is "prime element of a ring"? An element which generates a prime principal ideal? $\endgroup$ – Victor Protsak May 28 '10 at 1:42 $\begingroup$ Ahhhhhhhhhhhhhhhhhhhhhhhhhhh. I felt the same way ;) $\endgroup$ – David Corwin Jul 15 '10 at 19:11 $\begingroup$ @ Yemon: Yes, the word 'prime' has been mangled beyond all recognition. Imagine telling Fermat that 1 is no longer prime, but 0 is! (I was going to imagine telling Pythagoras, before I remembered that he doesn't even know about 0 in the first place.) $\endgroup$ – Toby Bartels Apr 4 '11 at 4:25 I normally won't bother with a 5 month old community wiki, but someone else bumped it and I couldn't help but notice that the significant majority of the examples are highly algebraic. I wouldn't want the casual reader to go away with the impression that everything is defined correctly all the time in analysis and geometry, so here we go... 1) "A smooth structure on a manifold is an equivalence class of atlases..." Aside from the fact that one hardly ever works directly with an explicit example of an atlas (apart from important counter-examples like stereographic projections on spheres and homogeneous coordinates on projective space), this point of view seems to obscure two important features of a smooth structure. 
First, the real point of a smooth structure is to produce a notion of smooth functions, and the definition should reflect that focus. With the atlas definition, one has to prove that a function which is smooth in one atlas is also smooth in any equivalent atlas (not exactly difficult, but still an irritating and largely irrelevant chore). Second, it should be clear from the definition that smoothness is really a local condition (the fact that there are global obstructions to every point being a "smooth" point is of course interesting, but also not the point). The solution to both problems is to invoke some version of the locally ringed space formalism from the get-go. Yes, it takes some work on the part of the instructor and the students, but I and a number of my peers are living proof that geometry can be taught that way to second-year undergraduates. If you still don't believe there are any benefits, try the following exercise. Sit down and write out a complete proof that the quotient of a manifold by a free and properly discontinuous group action has a canonical smooth structure using (a) the maximal atlas definition and (b) the locally ringed space definition. 2) "A tangent vector on a manifold is a point derivation..." While there are absolutely a lot of advantages to having this point of view around (not the least of which is that it is a better definition in algebraic geometry), I believe that this is misleading as a definition. Indeed, the key property that a good definition should have in my opinion is an emphasis on the close relationship between tangent vectors and smooth curves. Note that such a definition is bound to involve equivalence classes of smooth curves having the same derivative at a given point, and the notion of the derivative of a smooth curve is defined by composing with a smooth function. So for those who really like point derivations, they aren't far behind. There just needs to be some mention of curves, which in many ways are really what give differential geometry its unique flavor. 3) The notion of amenability in geometric group theory particularly lends itself to misleading definitions. I think there are two reasons. The first is that, modulo some mild exaggeration, basically every property shared by all amenable groups is equivalent to the definition. The second is that amenability comes up in so many different contexts that it is probably impossible to say there is one and only one "right" definition. Every definition is useful for some purposes and not useful for others. For example, the definition involving left-invariant means is probably most useful to geometric group theorists while the definition involving the topological properties of the regular representation in the dual is probably more relevant to representation theorists. All that being said, I think I can confidently say that there are "wrong" definitions. For example, I spent about a year of my life thinking that the right definition of amenability for a group is that its reduced group C* algebra and its full group C* algebra are the same. 4) Some functional analysis books have really bad definitions of weak topologies, involving specifying certain bases of open sets.
This point of view can be useful for proving certain lemmas and working with some examples, but given the plethora of weak topologies in analysis, these books should really give an abstract definition of weak topologies relative to any given family of functions and from then on specify the topology by specifying the relevant family of functions. I'm sure I could go on and on, but these four have proven to be particularly difficult and frustrating for me. $\begingroup$ I REALLY want to read a differential geometry course based on locally ringed spaces. Do you have one? $\endgroup$ – darij grinberg May 27 '10 at 16:37 $\begingroup$ Sheaves on Manifolds by Kashiwara is the only book-length treatment of differential geometry from this point of view that I know, but it is far from an introductory text. The course I referred to in my answer was taught by Brian Conrad several years ago, and he still has lots of useful handouts on his web page from that course. Other than that, I can't help you. :( $\endgroup$ – Paul Siegel May 27 '10 at 17:43 $\begingroup$ I think the one geometric group theorist I've talked about this with considered the existence of a Følner sequence to be the right definition... $\endgroup$ – Harry Altman May 27 '10 at 21:48 $\begingroup$ Also, this should really be four separate answers, for the purposes of this kind of community-wiki big list question $\endgroup$ – Yemon Choi Jul 16 '10 at 5:19 $\begingroup$ The definition of tangent spaces via curves has one, very substantial, disadvantage: it is not clear that the so-defined tangent space is a vector space. You can define an addition in charts and show that it is well-defined, but that looks, unfortunately, not very natural. $\endgroup$ – Johannes Ebert Apr 25 '12 at 8:39 My biggest issue is with the coordinate definition of tensor products. A physicist defines a rank $k$ tensor over a vector space $V$ of dimension $n$ to be an array of $n^k$ scalars associated to each basis of $V$ which satisfy certain transformation rules; in particular, if we know the array for a given basis, we can automatically determine it for a different basis. Another way to say this is that the space of tensors is the set of pairs consisting of a basis and an $n^k$ array of scalars, identified by an equivalence relation which gives the coordinate transformation law. For some strange reason, people seem to call this a coordinate-free definition. While it is in a sense coordinate-free (the transformation between coordinates lets you break free of coordinates in a sense), it is very confusing at first sight. People who use this definition will say that certain operations are coordinate-free. What they mean by this, and it took me a long time to figure this out, is that you can do a certain algebraic operation to the coordinates of the tensor, and the formula is the same no matter which basis you work with (e.g., multiplying a covariant rank $1$ tensor with a contravariant rank $1$ tensor to get a scalar, or exterior differentiation of differential forms, or multiplying two vectors to get a rank $2$ tensor). The much nicer definition uses tensor products. This is a coordinate-free construction, as opposed to the coordinate-full description given above. This definition is nice because it connects to multilinear maps (in particular, it has a nice universal property).
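The invariance of contraction is easy to see numerically. Here is a small sketch (Python with numpy; the vectors and the change-of-basis matrix are arbitrary choices of mine) contrasting the pairing of a covector with a vector, which is the same in every basis, against a naive "dot product" of two vectors, which is not:

import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(4)          # components of a vector (contravariant)
w = rng.standard_normal(4)          # components of a covector (covariant)
u = rng.standard_normal(4)          # a second vector
P = rng.standard_normal((4, 4))     # change-of-basis matrix (invertible with probability 1)

# If the new basis vectors are the columns of P, components transform as:
v_new = np.linalg.solve(P, v)       # vectors:   v -> P^{-1} v
u_new = np.linalg.solve(P, u)
w_new = P.T @ w                     # covectors: w -> P^T w

# Contracting a covector against a vector is basis-independent:
print(np.isclose(w_new @ v_new, w @ v))    # True
# "Dotting" two vectors, with no metric in sight, is not:
print(np.isclose(u_new @ v_new, u @ v))    # False for generic P

The second check fails precisely because two contravariant indices cannot be contracted without extra structure (a metric); the tensor-product formalism makes that bookkeeping automatic.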
The tensor-product definition also helped me see why tensors are different from elements of some $n^k$-dimensional vector space over the same field (they are special because we are equipped not just with a vector space but with a multilinear map from $V \times \cdots \times V \to V \otimes \cdots \otimes V$). The covariant/contravariant distinction can be explained in terms of functionals. This allows you to talk about contraction of tensors without having to prove that it is coordinate-invariant! Finally, once you have all that under your belt, you can easily derive the coordinate transformation laws from the multilinearity of $\otimes$. $\begingroup$ When I look at physics texts on tensors (even mathematically literate and careful texts like Frankel) I wonder how ANYONE understands the monstrosity they present tensors as: formulas that transform by indices that raise and lower by certain rules. No wonder only geniuses understood relativity theory before mathematicians began cleaning it up. $\endgroup$ – The Mathemagician Jul 15 '10 at 22:13 $\begingroup$ I'm now satisfied to know that my physics TA also agrees with me about physicists' approach to tensors. $\endgroup$ – David Corwin Dec 19 '10 at 5:43 $\begingroup$ I think this is outdated. Many physicists learn about tensors in a course on General Relativity and one of the standard textbooks is Wald, "General Relativity". It defines tensors in terms of multilinear maps, not as a collection of scalars obeying certain transformation rules. The same is true of Carroll, "Spacetime and Geometry." Most theoretical physicists in this day and age understand this view of tensors. $\endgroup$ – Jeff Harvey Dec 23 '10 at 15:41 $\begingroup$ @Jeff Harvey: When I was doing my bachelor's and master's in physics (in the '00s), I never got the impression that "most theoretical physicists in this day and age understand [the coordinate-free] view of tensors." Maybe it depends on subfield / institution? Certainly I met a lot of people who did understand the coordinate-free view, but I also met a lot of people who appeared not to. This often made life difficult for me, because I have trouble understanding the coordinate-full view, and I had a very hard time getting people to help me translate things into coordinate-free language. $\endgroup$ – Vectornaut Apr 25 '12 at 6:44 $\begingroup$ A justification for the bad definition is that physicists also sometimes deal with arrays of scalars that vary with the basis but according to some other transformation rule. So you can say: this quantity is a tensor, that one is not. An example is Christoffel symbols. Yes, these should be understood as the coordinates of a connection, but it took a while for that perspective to develop. And some people might still be thinking: who knows what transformation rules we might see next; we must remain flexible. $\endgroup$ – Toby Bartels Oct 6 '14 at 6:07 I'd say the standard definition of singular homology is pretty bad. It's a historical relic in some sense -- topologists were so concerned with naturality, whether manifolds have combinatorially distinct triangulations and issues such as that, that they decided those preoccupations were more important than imparting a solid foundational intuition as to what a homology class is. In my experience, people who see Poincare's proof of Poincare duality first
usually have a far better command of what is actually going on than those who first see a singular homology exposition, to the point where they view Poincare duality as something light and natural, while students who see it through the eyes of singular homology more often see it as something distant and intractable. And all that effort is for what? So students can know Poincare duality is true on topological manifolds, when all the examples they've seen are smooth manifolds. edit: my preferred way to describe Poincare's proof is to modernize it a tad. Your set-up is a triangulated manifold $M$, then you construct the dual polyhedral decomposition (a CW-decomposition) so that the (simplicial) $i$-cells of $M$ are in bijective correspondence with the (dual polyhedral) $m-i$-cells of $M$. This is much more straightforward than living in the simplicial world. Then you show that (up to a sign change) the chain complex for the simplicial homology is the chain complex for the cohomology of the dual polyhedral decomposition. The fussiest bit is keeping track of the orientations in the orientable case. $\begingroup$ I don't really understand what you want to change concretely. What definition of homology do you prefer? In my opinion, your way of presenting Poincare duality is indeed more intuitive (so it is surely not wrong to give the students an idea of it), but has at least 3 disadvantages: 1) You first have to prove that smooth manifolds can be triangulated. 2) You have to show that the isomorphism does not depend on the choices (in some sense). 3) These ideas do not generalize well to other situations like more sophisticated dualities or the Thom iso (I think). $\endgroup$ – Lennart Meier May 27 '10 at 15:33 $\begingroup$ Another good intuitive proof of Poincare duality (in the sense of equality of Betti numbers) is via Morse theory: replace $f$ with $-f.$ $\endgroup$ – Victor Protsak May 28 '10 at 1:39 $\begingroup$ @Meier, Re (1) proving that manifolds have triangulations is at least as fundamental as any homology or fundamental group construction with manifolds so this seems totally natural to me. (2) depends on what applications you're interested in. After Poincare duality is set up properly there are many alternative formulations you can give it -- once there is a firm foundation in place. Re (3), the search for generality is essentially the complete opposite point of my post. To a student there's little point generalizing something for which there's little initial grasp. $\endgroup$ – Ryan Budney May 28 '10 at 22:53 $\begingroup$ @Victor, actually using the replace f by −f trick you see that on an oriented manifold the Morse complex is isomorphic to its dual and an orientation is required to construct the map. Depending on whether you want to carefully give the construction of the Morse complex or prove the existence of a triangulation, both methods give a concrete picture of the dual cocycle but require a fair amount of geometric work. $\endgroup$ – Tom Mrowka Apr 4 '11 at 13:24 Another simple example is the definition for equivalence relations:
Definition 1: R(.,.) is an equivalence relation iff R is reflexive, symmetric, and transitive.
Definition 2: R(.,.) is an equivalence relation iff there exists a function f such that R(a,b) iff f(a)=f(b).
Most presentations start with Definition 1, which contains no hint as to why we bother discussing such relations or why we call them "equivalences".
In contrast, Definition 2 (along with a couple of examples) immediately tells you that R captures one particular attribute of the elements of the domain; and, since elements with the same value for this attribute are called "equivalent", R is called an "equivalence". A reasonable introduction should start with Definition 2, then go on to prove Definition 1 is a convenient alternative characterization. $\begingroup$ I've never actually seen the second definition explicitly, although I've used it implicitly often enough. I don't completely see how that's a clearer exposition, though. $\endgroup$ – Cory Knapp Dec 6 '09 at 12:15 $\begingroup$ A function is a great way to capture the intuitive meaning of "some property that we want to be the same." The second definition also doesn't require introducing three new concepts. $\endgroup$ – Qiaochu Yuan Dec 6 '09 at 17:00 $\begingroup$ When equivalence relations are introduced, it is usually shown that giving an equivalence relation on a set is the same as giving a partition of the set. This seems a little more natural than your (2). $\endgroup$ – JS Milne Dec 9 '09 at 6:37 $\begingroup$ I always thought (1) was very nice and intuitive; after all, it says "an equivalence relation is a relation that behaves like =", which for undergrads is a nice introduction to the idea that one might care about other kinds of similarity than equality. $\endgroup$ – Ketil Tveiten Jun 2 '10 at 13:08 $\begingroup$ Definition 2 has the downside that it isn't intrinsic; you have to specify a codomain for the function $f$, and then you have to decide how to define $f$. For example, consider the set of measurable functions on $[0,1]$, with $R(g,h)$ iff $g=h$ almost everywhere. I can't off the top of my head figure out how to define $f$, or even what its codomain should be (other than "the set of equivalence classes", which begs the question). $\endgroup$ – Nate Eldredge Apr 25 '12 at 15:59 A function is a collection of ordered pairs such that ... Gerald Edgar $\begingroup$ Really? I think the alternate definition is much more misleading: a function is a rule... by which most students immediately think "algebraic formula." $\endgroup$ – Qiaochu Yuan Dec 5 '09 at 2:05 $\begingroup$ I would almost prefer not even to say what a function is at all. I'd just say that if f is a function from A to B and x is an element of A then f(x) is an element of B. And that's all you need to know. Of course, I'm exaggerating a bit, and this point of view is not sufficient after a while (e.g. how would you decide whether the set of functions from A to B is countable, how would you define function spaces, etc.?) but in some situations this is the most important fact that you need from the basic definition of functions. Of course, one would also give examples, including artificial ones. $\endgroup$ – gowers Dec 5 '09 at 23:25 $\begingroup$ The nice thing about the subset of $A \times B$ definition is that it's clear what it means for one function to be equal to another. If a function is a rule you have to specify what it means for one rule to be equal to another. Similarly, things like the union and intersection of functions do not immediately make sense. $\endgroup$ – Ryan Budney Jan 11 '10 at 6:54 $\begingroup$ I think that the set-of-pairs definition is a neat formal trick, but not really how anyone intuitively thinks about a function (people use mental images of "rules of correspondence" or "machines that produce an output given an input", etc).
I had a friend who disagreed and claimed that he truly thought of functions as sets of pairs. A few days later I heard him talking about the graph of a function and asked him "by the graph of a function you simply mean the function, right?". After that incident he agreed with me that nobody thinks of functions as sets of pairs. :) $\endgroup$ – Omar Antolín-Camarena May 27 '10 at 20:33 $\begingroup$ Another problem with this definition is that it's wrong -- in modern mathematics (though less so in the informal language of some analysts, IME) a function has a codomain. Under this definition a function has an image, but any superset of the image could be its codomain. As an undergraduate, I was given this definition several times, and it bothered me. A function is a triple $(A, B, R)$ where R is a subset of $A\times B$ such that... $\endgroup$ – Max Apr 4 '11 at 14:01 I know that this comment will be somewhat controversial, but I strongly believe that the standard (algebraic) textbook definition of d of a differential form is unpedagogical. I much prefer the route taken in Arnold's GTM book on classical mechanics: just define d of a form as the thing that makes Stokes' theorem true! Then one derives the algebraic formula for d of a form. Everything is motivated at every step, and the student isn't confronted with a confusing algebraic definition of unknown origin. $\begingroup$ Well, to use that as a definition, you need to show that there is a thing which makes Stokes' theorem true... $\endgroup$ – Mariano Suárez-Álvarez Dec 3 '09 at 3:02 $\begingroup$ Right. That approach is essentially the same as defining functors via universal properties; the construction to prove they exist is less important than the property. $\endgroup$ – Qiaochu Yuan Dec 3 '09 at 15:07 $\begingroup$ The standard definition of the exterior derivative really isn't unpedagogical; it's pretty much the only sensible definition once you have agreed on skew-symmetry (in fact, anyone that accepts the determinant as sensible should think the same of the exterior derivative). $\endgroup$ – Sam Derbyshire Dec 4 '09 at 7:32 $\begingroup$ Personally the algebraic formula for $d$ leaves me cold. I have always found it much easier to define it on functions via $df(X) = Xf$ for a vector field $X$, and then extend it as an odd derivation over the wedge product which obeys $d^2=0$. It is easy to see that this defines it uniquely. I think that this is pedagogical and easy to remember. $\endgroup$ – José Figueroa-O'Farrill Dec 4 '09 at 19:25 $\begingroup$ @José: It took me a while to understand what "standard algebraic definition" Jon meant (presumably, the one given by an explicit formula with partial derivatives, signs, and omitted indices), because all along I was thinking about your definition, which I think is excellent. $\endgroup$ – Victor Protsak May 28 '10 at 2:20 Similar to gowers's answer about group actions, a module over a ring R is an abelian group M together with a function $f:R\times M \to M$ that satisfies certain properties. It may set the beginner's mind at ease to hear, "They're just like vector spaces except over arbitrary rings instead of only fields," which is misleading in itself but is a good mnemonic for remembering the definition. However, I usually find it more intuitive to think of a module over R as a homomorphism from R to the endomorphism ring of an abelian group, and with this definition no mnemonic is necessary. $\begingroup$ I agree.
It took me an incredibly long time to realize that vector spaces are fields acting on abelian groups. $\endgroup$ – Qiaochu Yuan Dec 6 '09 at 4:48 $\begingroup$ But describing modules as "vector spaces over a ring" most directly establishes the motivation of quite a bit of the work done in an introductory course on modules (more or less, try to see how much of the theory of vector spaces goes through). When someone first sees a module, the chances that looking at a morphism from a ring to an endomorphism algebra will sound natural are quite small. The point of view afforded by "a module is a morphism" fits more naturally in the state of mind induced by representation theory (of groups, say), but I imagine very few people become familiar with... $\endgroup$ – Mariano Suárez-Álvarez May 27 '10 at 17:43 $\begingroup$ ...representation theory (of anything) soon enough that that can be used as motivation/context for modules and friends. $\endgroup$ – Mariano Suárez-Álvarez May 27 '10 at 17:44 $\begingroup$ I am a representation theorist and I have serious reservations about introducing modules as "homomorphisms into the endomorphism algebra of ....". For example, direct sum of modules is hardly natural in this setting and, on an even more rudimentary level, addition of morphisms (i.e. the abelian group structure of a module) is hardly intuitive. More generally, geometrical perspective is irrevocably lost as soon as you adopt the "morphism" point of view (try defining a simple module = irreducible representation in the morphism setting). $\endgroup$ – Victor Protsak May 28 '10 at 2:07 $\begingroup$ Also, modules over commutative rings have some special properties which can't be captured easily (or at all) if you think of a module as a representation. Concerning the "vector space over a ring" perspective: it's been a long time since I read van der Waerden's Algebra, but I believe he was descriptively speaking of "groups with operators". $\endgroup$ – Victor Protsak May 28 '10 at 2:40 One often sees the cumulants of a probability distribution defined by saying the cumulant-generating function is the logarithm of the moment-generating function: $$ \sum_{n=1}^\infty \kappa_n \frac{t^n}{n!} = \log \sum_{n=0}^\infty \operatorname{E}(X^n) \frac{t^n}{n!} = \log\operatorname{E}\left( e^{tX} \right). $$ This fails to explain one of the basic motivations behind such a concept as the cumulants of a probability distribution. The variance $\operatorname{var}(X) = \operatorname{E}\left( (X - \operatorname{E}(X))^2 \right)$ is simultaneously $2$nd-degree homogeneous: $\operatorname{var}(cX)=c^2\operatorname{var}(X)$; translation-invariant: $\operatorname{var}(c+X) = \operatorname{var}(X)$; and cumulative: $\operatorname{var}(X_1+\cdots+X_n) = \operatorname{var}(X_1)+\cdots+\operatorname{var}(X_n)$ if $X_1,\ldots,X_n$ are independent. The higher-degree central moments also enjoy the first two properties (with the appropriate degree of homogeneity in each case), but the third property fails for $4$th and higher-degree central moments. (That it works for the $3$rd-degree central moment has been known to surprise people. It's trivial to prove it.) All of the cumulants have the three properties above (with the degree of homogeneity equal to the degree of the cumulant). For example, $$\text{4th cumulant} = \Big(\text{4th central moment}\Big) - 3 \cdot \Big( \text{variance}\Big)^2.$$ This is $4$th-degree homogeneous, translation-invariant, and cumulative.
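All three properties of the $4$th cumulant are easy to test by simulation. Here is a quick Monte Carlo sketch (Python with numpy; the helper k4 and the choice of distributions are my own, and the sample estimates agree only approximately):

import numpy as np

rng = np.random.default_rng(2)

def k4(sample):
    # Fourth cumulant estimated from a sample:
    # (4th central moment) - 3 * (variance)^2
    m = sample - sample.mean()
    return np.mean(m**4) - 3 * np.mean(m**2)**2

n = 10**6
X = rng.exponential(size=n)       # kappa_4 = 6 exactly for a rate-1 exponential
Y = rng.uniform(size=n)           # independent of X

print(k4(X + Y), k4(X) + k4(Y))   # approximately equal: cumulative
print(k4(3 * X), 3**4 * k4(X))    # approximately equal: 4th-degree homogeneous
print(k4(X + 7.0), k4(X))         # approximately equal: translation-invariant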
Each cumulant above the $1$st degree is the unique polynomial in the central moments having those three properties and for which the coefficient of the $n$th-degree central moment in the $n$th cumulant is $1$. Is this not a more intuitive and motivating characterization of the cumulants than the "definition" that speaks of the logarithm of the moment-generating function? $\begingroup$ Wow, that is much clearer. I hadn't really thought about it before. $\endgroup$ – David Manheim Jan 18 '18 at 3:12 Inspired by some of the comments, I would nominate the definition of the infinite product topology in terms of its open sets, found in, e.g., Munkres' otherwise excellent Topology. "The product topology on $X = \prod_{\alpha \in J} X_\alpha$ is the topology generated by the sets of the form $\pi_\alpha^{-1}(U_\alpha)$, where $U_\alpha$ is an open subset of $X_\alpha$." One then proves that one can also use the basis of sets of the form $U = \prod_{\alpha \in J} U_\alpha$ where $U_\alpha$ is open in $X_\alpha$, and $U_\alpha = X_\alpha$ for all but finitely many $\alpha \in J$. This just makes it look like an annoying and unnatural modification of the box topology. Better in my opinion is to view $X = \prod X_\alpha$ explicitly as a function space (not as some sort of tuples, though they are really functions underneath), and to use the terminology of nets. Then it becomes clear that the product topology is just the topology of pointwise convergence, i.e. a net $f_i \to f$ iff the nets $f_i(\alpha) \to f(\alpha)$ for all $\alpha \in J$. Under this definition, Tychonoff's theorem, which previously seemed pretty obscure, has an obvious application when combined with Heine-Borel: given any set $J$ and a pointwise bounded net of functions $f_i : J \to \mathbb{R}$, there is a subnet that converges pointwise. This is maybe the most useful application, especially in functional analysis. (Indeed, I understand this was actually Tychonoff's original theorem, that an arbitrary product of closed intervals is compact.) For instance, it makes Alaoglu's theorem clear, once you see that the weak-* topology is just a topology of pointwise convergence. It's nice then to compare this with the Arzela-Ascoli theorem, which says that if $J$ is a compact Hausdorff space and the functions $f_i$ are not only pointwise bounded but also continuous and equicontinuous, then a subnet (in fact a subsequence) converges not only pointwise but in fact uniformly. $\begingroup$ It is not similar only in spirit! The product topology is the weak topology for the family of natural projection maps on the product. The precise relation with Nate's answer is that if $X$ is a set equipped with the weak topology corresponding to a family of maps $f_\alpha$ then a net $x_i$ converges in $X$ if and only if $f_\alpha(x_i)$ converges for each $\alpha$. I don't claim that the notion of a weak topology belongs in point set topology classes (though even the subspace topology is the weak topology for the inclusion map), but it is a surprisingly convenient organizing principle. $\endgroup$ – Paul Siegel May 27 '10 at 18:20 $\begingroup$ Actually, understanding the product topology as the one forced upon you if you want the categorical product on the particular category $Top$ was one of the first things that really sold me on the usefulness and power of category theory. $\endgroup$ – Todd Trimble♦ Apr 4 '11 at 10:50 $\begingroup$ If you do the "box topology" construction but say that products of closed sets are closed, then you do get the product topology.
$\endgroup$ – John Wiltshire-Gordon Apr 25 '12 at 4:56 $\begingroup$ This is great! "An annoying and unnatural modification of the box topology" is exactly what I thought when I first saw the definition of the product topology in Munkres. On the other hand, I've never been comfortable with defining a topology by specifying its convergent nets; how do I check that what I've defined is actually a topology? $\endgroup$ – Vectornaut Apr 25 '12 at 6:25 $\begingroup$ I'm with @Todd Trimble on this one: I think defining the product of topological spaces categorically makes the definition very easy to use, and gives good motivation for the definition of the product topology (which is used to "implement" the categorical product, proving its existence). $\endgroup$ – Vectornaut Apr 25 '12 at 6:26 Since this is a big list, I might as well comment 5 years later. David Corwin mentioned tensor products, and the top post is about linear algebra, so I thought I would mention that, in my opinion, coordinate definitions in general tend to obscure meaning. Before going on, I'll mention that I'm not saying coordinates are bad! I just think that introducing ideas with coordinates tends to be very unrevealing. A few examples which I find are obscured by coordinates:
1. Derivatives. The easiest example is the differential of a map $f:\mathbb{R}^m \rightarrow \mathbb{R}^n$. This is often given by the Jacobian matrix, and while the Jacobian is very useful in computation, it was not at all clear to me how it generalises the ordinary derivative, until I saw the proof that it satisfies the coordinate-free definition of a derivative at a point. Namely, the map $D_af:\mathbb{R}^m \rightarrow \mathbb{R}^n$ such that $$\lim_{\lvert x \rvert \rightarrow 0}{{\lvert f(a+x)-f(a)-D_af(x) \rvert}\over{\lvert x \rvert}} = 0.$$
2. Tensor products and tensors. David Corwin already covered this.
3. Local coordinates on manifolds. I think a lot of elegant definitions and properties are lost when using local coordinates, for example, the tangent space becomes very unwieldy and unnatural when interpreted in a local coordinate setting (although it does become more intuitive).
4. Matrices and linear maps. I recommend reading the top post. But I'll mention that I am personally most bothered by determinants: they made no sense at all to me until I learned of the definition of the determinant as the map induced by the associated linear map on the top exterior power!
$\begingroup$ I also prefer thinking in terms of exterior powers, but another intuitive way of introducing determinants (at least over $\mathbb{R}$) is that they measure expansion of volume by applying linear maps. A precise related result is that every continuous group homomorphism $GL_n(\mathbb{R}) \to \mathbb{R}^\times$ is essentially a power of the determinant mapping (see e.g. golem.ph.utexas.edu/category/2011/08/mixed_volume.html#c039412). $\endgroup$ – Todd Trimble♦ Apr 21 '15 at 13:59 What about definitions that are elegantly concise to such an extent that they confound intuition? A classic of the genre: A forest is an acyclic graph; a tree is a connected forest. (Presumably most of us would be less surprised to hear a forest defined as a disjoint union of trees.) But perhaps there is something to be said for a shocking definition: I shall never forget this, and probably I will always remember the moment I first saw it. In a similar vein, I once saw a video of John H Conway giving a lecture on ordinals. He began, conventionally enough, by defining the notion of well-ordered set.
But the definition he gave was an unconventional one: A set $S$ equipped with a transitive relation $\mathord{(\leq)}\subseteq S\times S$ such that every non-empty $T\subseteq S$ has a unique least element $m\in T$ such that $m\leq t$ for all $t\in T$. Notice that this implies reflexivity (take $T$ to be a singleton); totality (by existence of the least element of a two-element set); and antisymmetry (by uniqueness of the least element of a two-element set). So it's equivalent to the usual definition. And it is certainly memorable! But I doubt I would have understood it if I wasn't already familiar with the usual definition. Robin Houston $\begingroup$ I'm not immediately seeing what's unconventional about the definition of a well ordered set. Is it the fact that the relation is not required by definition to be an order, only transitive? $\endgroup$ – LSpice Aug 19 '18 at 18:20 $\begingroup$ @LSpice Yes, exactly. The usual definition in my experience is something like the one on wikipedia: "a well-order on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering." The definition Conway used (I don't know whether it's original to him) weakens the total order to a transitive relation, at the expense of requiring uniqueness of the least element. $\endgroup$ – Robin Houston Aug 19 '18 at 22:01 $\begingroup$ Oh, I see! I was confused because the definition emphasised the word "least", which made me think that's where the difference from the usual one lay. $\endgroup$ – LSpice Aug 19 '18 at 22:14 I see the problem crop up: a certain mathematical object has many characterizations, any one of which can be taken as the definition. Which do you use when you are introducing the subject? The first one that comes to mind is the basis of a vector space. Perhaps this is not the best example for the title question of this thread of discussion, but I know that this confuses some students. When I last taught linear algebra, we taught them at least four characterizations. It's not really that any of the characterizations is obscuring or misleading. Rather, each one highlights some important property(-ies). Of course, the better students enjoy seeing all of the characterizations, and they appreciate every one. The less facile students get flustered because they want there to be just One Right Way of thinking about them. A similar issue arises with the characterizations of an invertible matrix or linear transformation, though at least with a matrix it seems most reasonable to define an invertible matrix as one that has an inverse, namely another matrix that you can multiply it by to get the identity matrix. The issue comes up in spades when introducing matroids. Kurt Luoto $\begingroup$ I usually tend towards the historical definition. Usually that's the one that is best-motivated for people with the least background, since it's what motivated the creator. For example, if you look at Hassler Whitney's original papers on characteristic classes, they're extremely raw, explicit and beautiful. A very charming introduction, IMO. $\endgroup$ – Ryan Budney Dec 4 '09 at 5:50 $\begingroup$ I've never taught linear algebra, so I don't know what I'm talking about here. But perhaps the claim that having an inverse is the most natural characterization of "invertible" is just an artifact of the language? If we used the word "nonsingular" or "nondegenerate" for this property, other characterizations might seem more natural. 
$\endgroup$ – Michael Lugo Dec 4 '09 at 15:12 $\begingroup$ @Michael Lugo: I've never taught linear algebra either, but it seems to me that the essential property of a bijective function is that it has an inverse. One reason to believe this definition is the "right" one is that its generalization, the idea of an isomorphism, is far more important than the idea of "a morphism which is both monic and epic." $\endgroup$ – Vectornaut Apr 25 '12 at 6:14 Convolution: whether it is convolution of functions, measures or sequences, it is often defined by giving an explicit formula for the resulting function (or measure, etc.). While this definition makes calculations with convolutions relatively easy, it gives little intuition into what convolution really is and often seems largely unmotivated. In my opinion, the right way to define convolution (say, of two finite complex Radon measures on an LCA group $G$, which is a relatively general case) is as the unique bilinear, weak-* continuous extension of the group product to $M(G)$ (the space of measures as above), where $G$ is naturally identified with point masses. Then one can restrict the definition to $L^1 (G)$ and get the well known explicit formula for convolution of functions. Of course, a probabilist will probably prefer to think of convolution as the probability density function associated to the sum of two independent absolutely continuous random variables. And there are other possible alternative definitions (see this Mathoverflow discussion). But the formula definition is really the hardest one to get intuition for, in my opinion. $\begingroup$ I agree that at first glance the explicit formula doesn't say much, but I guess you don't expect the "unique bilinear, weak-* continuous extension of the group product to M(G)" definition to appear e.g. in an undergraduate real analysis course... $\endgroup$ – Michal Kotowski Apr 11 '11 at 13:33 $\begingroup$ True, though there are some instances where such a definition becomes easier (e.g. in a course on the representation theory of finite groups, where measures become functions and all continuity issues disappear). $\endgroup$ – Mark Apr 12 '11 at 9:41 $\begingroup$ Convolution is like multiplication of polynomials (except the exponents come from some group G and the coefficients can be density functions on G instead of finite sums). $\endgroup$ – John Wiltshire-Gordon Apr 25 '12 at 4:52 I remember being confused by the multivariable calculus approach to vector fields: sometimes one thinks of the functions just as functions and sometimes as fields. It should be possible to convey the idea of having a space of directional derivatives attached to each point without having to talk about vector bundles. In general I can accept that there is more going on behind the scenes than there is time for in a course, but simply knowing that there is a more general and "right" way of doing things is very helpful to me. This also tends to make the course much more interesting. K.J. Moi $\begingroup$ I want to like this, but I don't understand what definition you would prefer instead. $\endgroup$ – Toby Bartels Feb 12 '14 at 23:13 Induced representations can be defined in terms of tensor products of $G$-modules or in terms of vector-valued functions on $G$. It would be nice if more textbooks in representation theory stressed this more heavily. Both definitions have their advantages and disadvantages, I guess, but I personally feel more comfortable with the interpretation in terms of functions.
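The function model is also concrete enough to compute with directly in small cases. Here is a toy sketch (Python with numpy; the example, inducing the sign character of the order-2 subgroup generated by the transposition (0 1) in $S_3$, is my own choice and purely illustrative):

import itertools
import numpy as np

# The function-space model: V = { f : G -> C with f(h g) = chi(h) f(g) },
# on which G acts by (pi(g) f)(x) = f(x g).

def compose(p, q):
    # Group multiplication in S_3: (p * q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(3))

G = list(itertools.permutations(range(3)))
e, t = (0, 1, 2), (1, 0, 2)
H = [e, t]                                # the subgroup generated by (0 1)
chi = {e: 1, t: -1}                       # its sign character

reps = []                                 # representatives of the right cosets H\G
for g in G:
    if not any(compose(h, r) == g for h in H for r in reps):
        reps.append(g)

def decompose(g):
    # Write g = h * r with h in H and r in reps (the decomposition is unique).
    return next((h, r) for h in H for r in reps if compose(h, r) == g)

def pi(g):
    # Matrix of pi(g) in the basis {f_r}, where f_r is supported on the coset H r.
    M = np.zeros((len(reps), len(reps)))
    for i, r in enumerate(reps):
        h, r2 = decompose(compose(r, g))  # r * g = h * r2
        M[i, reps.index(r2)] = chi[h]
    return M

# Sanity check: pi is a homomorphism.
for a in G:
    for b in G:
        assert np.allclose(pi(compose(a, b)), pi(a) @ pi(b))

print([int(np.trace(pi(g))) for g in G])  # [3, -1, -1, 0, 0, -1]

The asserts confirm that $\pi$ is a homomorphism, and the printed character $(3, -1, -1, 0, 0, -1)$ is the sum of the sign character and the character of the standard two-dimensional representation, exactly what Frobenius reciprocity predicts for this induction.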
$\begingroup$ I don't think any of these definitions is misleading, but the fact that many books give only one of them (and some proceed to then use the other...) certainly is! $\endgroup$ – darij grinberg Apr 11 '11 at 15:03 $\begingroup$ This. I'm working on semigroup representations and have been having trouble getting a clear mental image of induced representations. I've been looking at the group-oriented literature but am wary of getting my intuition wrong for semigroups. $\endgroup$ – kastberg Apr 11 '11 at 17:22 The entire branch of point-set topology as taught in most textbooks has completely unintuitive definitions that obscure the entire subject. For instance, the definition of a topological space in terms of open sets tells you nothing about the meaning of point-set topology. It would be much clearer if topological spaces were instead defined in terms of topological closure operators, since intuitively we have $x\in\overline{A}$ if the set $A$ touches the point $x$ in some way. Other unnecessarily obscured concepts in point-set topology include the definitions of product topology, subspace topology, Hausdorff spaces, regular spaces, compact spaces, and continuous functions. Furthermore, some definitions are obscured when the spaces are not required to be Hausdorff. For instance, the notions of compactness, paracompactness, regularity, and normality do not have much meaning without the Hausdorff separation axiom. If one has a non-Hausdorff space where every open cover has a finite subcover, then one should call that space quasi-compact and not compact. It is a shame that general topology is taught in such a meaningless fashion. Joseph Van Name $\begingroup$ What's a better definition of compactness for Hausdorff spaces? $\endgroup$ – Vectornaut Apr 25 '12 at 5:59 $\begingroup$ And a Hausdorff space is a space where every net (or filter) converges to at most one point. When we put this all together, a compact Hausdorff space is a space where every net (filter) accumulates at some point and converges to at most one point. $\endgroup$ – Joseph Van Name Dec 18 '12 at 2:45 $\begingroup$ And how to define paracompactness then? If you define it in terms of decompositions of unity you lose the connection with compactness. $\endgroup$ – Ostap Chervak Jan 17 '13 at 18:59 $\begingroup$ There are many non-trivial ways of defining paracompactness. A good topology textbook should therefore prove some of these characterizations of paracompactness. Perhaps the most intuitive one would be that a paracompact space is a $T_{1}$-space where each open cover has an open barycentric refinement. This characterization therefore says that paracompact spaces are precisely the spaces where the collection of all open covers generates a uniformity. Moreover, this uniformity is supercomplete. In fact, a space is paracompact iff it has a compatible supercomplete uniformity. $\endgroup$ – Joseph Van Name Jan 17 '13 at 22:28 $\begingroup$ Your proposed alternative definition of compact is equivalent to the open-cover one even for non-Hausdorff spaces. So why is this meaningless again? $\endgroup$ – Toby Bartels Feb 12 '14 at 23:19 The definition "a function $f: \mathbb R\to \mathbb R$ is continuous if $f^{-1}(G)$ is open for all $G$ open in $\mathbb R$" is less intuitive than the epsilon-delta definition of a continuous function. Digital Gal $\begingroup$ Yes, but it is important. I think it is only taught once students have developed intuition for the epsilon-delta definition.
$\endgroup$ – David Corwin Jul 15 '10 at 23:41 $\begingroup$ "A function $f$ is continuous if and only if $\lim_n f(x_n) = f(\lim_n x_n)$ for all convergent sequences in the domain of definition of $f$". This is intuitive (once convergence of sequences is understood), convenient for proofs, and general (it holds for metric spaces). $\endgroup$ – Johannes Ebert Oct 23 '10 at 21:16 $\begingroup$ Maybe, but I find the following variant to be more intuitive than both of them: a function f is continuous at a point x if the preimage of any neighborhood of f(x) is a neighborhood of x. $\endgroup$ – ACL Apr 4 '11 at 7:17 $\begingroup$ Johannes's version generalizes beyond metric spaces if you generalize sequences to nets. $\endgroup$ – Toby Bartels Feb 12 '14 at 23:16 When we write tensor products, it's optional to indicate the ring over which we do it; we can write $M\otimes N$ or $M \otimes_R N.$ But for elements, we always write $x\otimes y$ without reference to $R.$ You must keep it in mind, and that can induce lapses. For example, $v\otimes u^2 - u\otimes uv$ may be $\ne 0$: it depends on the base ring, and that doesn't appear [in the notation]. Sometimes the problem is not the concept so much as the notation we use for it. Hideyuki Kabayakawa $\begingroup$ this is not a misleading alternate definition. perhaps a misleading notation, but I cannot think of a context where this is really a problem. $\endgroup$ – Martin Brandenburg May 24 '10 at 21:50 $\begingroup$ I give this +1 now that I have actually met a case where this notation could have caused trouble. Still this is the wrong question for this answer. $\endgroup$ – darij grinberg Apr 11 '11 at 15:09 $\begingroup$ I always write x ⊗_R y instead of just x ⊗ y. $\endgroup$ – Dmitri Pavlov Nov 24 '11 at 20:09 A discrete probability distribution is often defined as one for which the number of values that a random variable with that distribution can take is finite or countably infinite. I prefer to define it as one for which one has $\displaystyle \sum_x \Pr(X=x) = 1,$ where the sum is over all values $x$ for which $\Pr(X=x)>0.$ (One should not define it as one for which the support is finite or countably infinite. For example, suppose a probability distribution assigns positive probability to the singleton of every rational number between $0$ and $1,$ and the sum of those probabilities is $1.$ Then every real number between $0$ and $1$ (inclusive) is in the support, since every interval about every such number gets positive probability.) $\begingroup$ Is your definition of 'support' the usual one? I wouldn't know what the word meant without further context, since I expect the support of a function to be a subset of its domain, not of its co-domain; but, if I had to guess, I'd probably make the guess that it consisted of elements with positive-measure pre-image, not that it was given by what I think is your definition (that every neighbourhood has a positive-measure pre-image). $\endgroup$ – LSpice Aug 19 '18 at 18:24 $\begingroup$ @LSpice : The support of a measure on the set of Borel subsets of a topological space is the set of all points whose every open neighborhood is assigned positive measure. $\qquad$ $\endgroup$ – Michael Hardy Aug 19 '18 at 18:44 $\begingroup$ @LSpice : I don't see how you think the codomain is mentioned here.
$\endgroup$ – Michael Hardy Aug 19 '18 at 18:45 $\begingroup$ It seemed that you were speaking of the support (as a subset of $\mathbb R$) of a random variable $X$, which is a function whose codomain is (presumably) $\mathbb R$. $\endgroup$ – LSpice Aug 19 '18 at 19:59 $\begingroup$ @LSpice : I was talking about the support of a probability distribution, whose domain is the set of all Borel subsets of $\mathbb R. \qquad$ $\endgroup$ – Michael Hardy Aug 19 '18 at 20:00 "Prime number" is sometimes defined as a number with exactly two positive divisors, which are itself and $1.$ The deficiency of this characterization is only that it doesn't motivate the definition in the following way. $$ \begin{array}{cccccccccc} & & & & 60 \\ & & & \swarrow & & \searrow \\ & & 4 & & & & 15 \\ & \swarrow & \downarrow & & & \swarrow & & \searrow \\ 2 & & 2 & & 3 & & & & 5 \end{array} $$ One could continue factoring by pulling out $1$s, but that is uninformative in that it doesn't distinguish the number being factored from any other. The definition is motivated by the fact that the number $1$ cannot play the sort of role in this process that either composite or prime numbers play. (For Euclid this was not problematic since he didn't consider $1$ to be a number.) $\begingroup$ I don't understand what deficiency in the definition your diagram illustrates. What should the definition be? $\endgroup$ – LSpice Jan 18 '18 at 17:04 $\begingroup$ @LSpice : Consider what happens if you extend the diagram further by pulling out $1$s, thus: $$ \begin{array}{cccccccccccc} & & & & 60 \\ & & & \swarrow & & \searrow \\ & & 4 & & & & 15 \\ & \swarrow & \downarrow & & & \swarrow & & \searrow \\ 2 & & 2 & & 3 & & & & 5 \\ & & & & & & & \swarrow & & \searrow \\ & & & & & & 5 & & & & 1 \end{array} $$ You get nothing that distinguishes the numbers you're working with from any others. In other words, the number $1$ cannot play a role in this sort of thing in the way in which prime and composite numbers do. $\qquad$ $\endgroup$ – Michael Hardy Jan 18 '18 at 17:10 $\begingroup$ @LSpice : The point is that this answers the naive question: "Why isn't $1$ considered a prime number?" Why does $1$ play a role that is different from that of either prime or composite numbers? $\qquad$ $\endgroup$ – Michael Hardy Jan 18 '18 at 17:12 $\begingroup$ I don't learn anything from either diagram about why the existing definition is bad, but I'm probably not the best judge of what's clearest to a first-time learner, so that's probably irrelevant. What should the definition be? $\endgroup$ – LSpice Jan 18 '18 at 21:01 $\begingroup$ @LSpice : I am undecided as to the best form in which to state a definition for beginners. Maybe I would append a comment to it, on why the number $1$ should be treated differently. $\endgroup$ – Michael Hardy Jan 19 '18 at 0:57 I find simplicial homology very difficult. In particular, I find the idea of a simplicial complex very hard to comprehend, except in the case of an abstract simplicial complex. Although it's not equivalent, I much prefer the idea of what Hatcher calls a $\Delta$-complex, although I still have some trouble with that definition. $\pi=3.14$ cm. Tongue-in-cheek of course, but this can supposedly be found in books. Eivind Dahl $\begingroup$ Are there indeed books that say $\pi$ is dimensionful and give it in terms of cgs units? $\endgroup$ – Todd Trimble♦ Nov 30 '17 at 0:43 $\begingroup$ Let me, when I get back from travelling, try to dig up the book that says there are books which do this. 
From there we just have to trust the author I'm afraid. $\endgroup$ – Eivind Dahl Dec 1 '17 at 15:00 The first sentence in a probability talk is likely to be "Let $X$ be a random variable." As a non-probabilist who dabbles occasionally in probability, I find the notion of rv difficult to absorb; and then they have the expectation operator (integration), characteristic function (with a different meaning---indicator function means what characteristic function means in measure theory), and in general, it is a distinct language. David Handelman
CommonCrawl
The European Physical Journal C, March 2019, 79:182

Extending the predictive power of perturbative QCD

Bo-Lun Du, Xing-Gang Wu, Jian-Ming Shen, Stanley J. Brodsky

Open Access, Regular Article - Theoretical Physics. First Online: 28 February 2019.

Abstract The predictive power of perturbative QCD (pQCD) depends on two important issues: (1) how to eliminate the renormalization scheme-and-scale ambiguities at fixed order, and (2) how to reliably estimate the contributions of unknown higher-order terms using information from the known pQCD series. The Principle of Maximum Conformality (PMC) satisfies all of the principles of the renormalization group and eliminates the scheme-and-scale ambiguities by the recursive use of the renormalization group equation to determine the scale of the QCD running coupling \(\alpha _s\) at each order. Moreover, the resulting PMC predictions are independent of the choice of the renormalization scheme, satisfying the key principle of renormalization group invariance. In this paper, we show that by using the conformal series derived using the PMC single-scale procedure, in combination with the Padé Approximation Approach (PAA), one can achieve quantitatively useful estimates for the unknown higher-order terms from the known perturbative series. We illustrate this procedure for three hadronic observables \(R_{e^+e^-}\), \(R_{\tau }\), and \(\Gamma (H \rightarrow b {\bar{b}})\) which are each known to 4 loops in pQCD. We show that if the PMC prediction for the conformal series for an observable (of leading order \(\alpha _s^p\)) has been determined at order \(\alpha ^n_s\), then the \([N/M]=[0/n-p]\) Padé series provides quantitatively useful predictions for the higher-order terms.
We also show that the PMC + PAA predictions agree at all orders with the fundamental, scheme-independent Generalized Crewther relations which connect observables, such as deep inelastic neutrino-nucleon scattering, to hadronic \(e^+e^-\) annihilation. Thus, by using the combination of the PMC series and the Padé method, the predictive power of pQCD theory can be greatly improved.
The predictive power of pQCD theory thus depends heavily on how to eliminate both the renormalization scheme-and-scale ambiguities and how to predict contributions from unknown higher-order terms. It has become conventional to choose the renormalization scale \(\mu _r\) as the typical momentum flow of the process. The resulting prediction at any fixed order will then inevitably also depend on the choice of the renormalization scheme. The hope is to achieve a nearly scheme-and-scale independent prediction by systematically computing higher-and-higher order QCD corrections; however, this hope is in direct conflict with the presence of the divergent \(n! \alpha _s^n \beta _0^n\) "renormalon" series [3, 4, 5]. It is also often argued that by varying the renormalization scale, one will obtain information on the uncalculated higher-order terms. However, the variation of the renormalization scale can only provide information on the \(\beta \)-dependent terms which control the running of \(\alpha _s\); the variation of \(\mu _r\) gives no information on the contribution to the observable coming from the \(\beta \)-independent terms. We will refer to the \(\beta \)-independent contributions as "conformal" terms – since they match the contributions of a corresponding conformal theory with \(\beta =0\). Obviously, the naive procedure of guessing and varying the renormalization scale can lead to a misleading pQCD prediction, especially if the conformal terms in the higher-order series are more important than the \(\beta \)-dependent terms. For example, the large K-factors for certain processes are caused by large conformal contributions, as observed in the recent analysis of the \(\gamma \gamma ^*\rightarrow \eta _c\) transition form factor [6]. Even if a nearly scale-independent prediction is attained for a global quantity such as a total cross-section or a total decay width, the scale independence could be due to accidental cancellations among different orders, even though the scale dependence at each order could be very large. Worse, even if a prediction with a guessed scale agrees with the data, one cannot explain why it is reliable prediction, thus greatly depressing the predictive power of pQCD. 1.1 The PMC The "Principle of Maximum Conformality" (PMC) rigorously eliminates the conventional renormalization scheme-and-scale ambiguities [7, 8, 9, 10]. It extends the well-known Brodsky–Lepage–Mackenzie (BLM) scale-setting method [11] to all orders in pQCD. The basic PMC procedure is to identify all contributions which originate from the \(\{\beta _i\}\)-terms in a pQCD series; one then shifts the scale of the QCD running coupling at each order to absorb the \(\{\beta _i\}\)-terms and to thus obtain the correct scale for its running behavior as well as to set the number of active quark flavors \(n_f\) arising from quark loops in the gluon propagators. The PMC also agrees with the standard Gell–Mann–Low method (GM-L) [12] for fixing the renormalization scale of \(\alpha (Q^2)\) and the effective number of lepton flavors \(n_\ell \) in Abelian quantum electrodynamics (QED). One can choose any value for the initial renormalization scale \(\mu _r\) when applying the PMC: the resulting scales for the running QCD coupling at each order are in practice independent of its value; thus the PMC eliminates the renormalization scale ambiguity. 
Moreover, the PMC predictions are scheme-independent due to their conformal nature, and the divergent renormalon behavior of the resulting perturbative series does not appear. The PMC satisfies renormalization group invariance and all of the self-consistency conditions of the renormalization group equation (RGE) [13]. The transition scale between the perturbative and nonperturbative domains can also be determined by using the PMC [14, 15, 16], thus providing a physical procedure for setting the factorization scale for pQCD evolution. The PMC has now been successfully applied to many QCD measurements at the LHC as well as to other hadronic processes [17, 18, 19]. Within the framework of the PMC, the pQCD approximant can be written in the following form [9, 10], $$\begin{aligned} \rho _{n}(Q)|_{\mathrm{Conv.}}= & {} \sum ^{n}_{i=1} r_i(\mu ^2_r/Q^2) a^{p+i-1}(\mu _r) \end{aligned}$$ (1) $$\begin{aligned}= & {} r_{1,0}{a^p(\mu _r)} + \left[ r_{2,0} + p \beta _0 r_{2,1} \right] {a^{p+1}(\mu _r)} \nonumber \\&+ \big [ r_{3,0} + p \beta _1 r_{2,1} + (p+1){\beta _0}r_{3,1} \nonumber \\&+\frac{p(p+1)}{2} \beta _0^2 r_{3,2} \big ]{a^{p+2}(\mu _r)} {+} \big [ r_{4,0} {+} p{\beta _2}{r_{2,1}} \nonumber \\&+ (p+1){\beta _1}{r_{3,1}} + \frac{p(3+2p)}{2}{\beta _1}{\beta _0}{r_{3,2}} \nonumber \\&+ (p+2){\beta _0}{r_{4,1}}+ \frac{(p+1)(p+2)}{2}\beta _0^2{r_{4,2}} \nonumber \\&+ \frac{p(p+1)(p+2)}{3!}\beta _0^3{r_{4,3}} \big ]{a^{p+3}(\mu _r)} + \ldots ,\nonumber \\ \end{aligned}$$ (2) where \(a={\alpha _s}/\pi \) and Q represents the kinematic scale. The index \(p(\ge 1)\) indicates the \(\alpha _s\)-order of the leading-order (LO) contribution, \(\{r_{i,0}\}\) are conformal coefficients, and the \(\beta \)-pattern at each order is predicted by the non-Abelian gauge theory [20]. Following the standard PMC-s procedure, we obtain $$\begin{aligned} \rho _n(Q)|_{\mathrm{PMC}}=\sum _{i=1}^n r_{i,0}a^{p+i-1}(Q_{*}), \end{aligned}$$ (3) where \(Q_{*}\) is the determined optimal single PMC scale, whose analytical form can be found in Ref. [21]. We emphasize that the factorially divergent renormalon terms, such as \(n!\alpha _s^n \beta _0^n\), do not appear in the resulting conformal series; thus a convergent pQCD series can be achieved (see Footnote 1).

1.2 Padé Resummation

The Padé approximation approach (PAA) provides a systematic procedure for promoting a finite Taylor series to an analytic function [22, 23, 24]. In particular, the PAA can be used to estimate the \((n+1)\mathrm{th}\)-order coefficient by incorporating all known coefficients up to order n. It was shown in Ref. [25] that the Padé method provides an important guide for understanding the sequence of renormalization scales determined by the BLM method and its all-order extension, the PMC. Those scales are the optimal ones for evaluating each term in a skeleton expansion. The leading-order BLM/PMC sequence corresponds to the [0/1]-type PAA [26]. After applying the BLM/PMC, the summation over skeleton graphs is then similar to the summation of the perturbative contributions for a corresponding conformal theory. Since the divergent renormalon series does not appear in the conformal \(\beta =0\) perturbative series generated by the PMC, there is an opportunity to use a resummation procedure such as the Padé method to predict higher-order terms and to thus increase the precision and reliability of pQCD predictions.
In this paper we will test whether one can use the PAA to achieve reliable predictions for the unknown higher-order terms of a pQCD series by using the renormalon-free conformal series determined by the PMC. For this purpose, we will adopt the PMC single-scale approach (PMC-s) [21], which utilizes a single effective renormalization scale which matches the PMC series via the mean-value theorem. Other applications of resummation methods to pQCD, together with alternatives to the PAA, have been discussed in the literature [25, 26, 27, 28, 29, 30, 31]. However, in our analysis, we will apply the PAA to the scale- and scheme-independent conformal series, whose perturbative coefficients are free of divergent renormalon contributions.

1.3 Applying the PAA to pQCD

If we apply the PAA to the PMC prediction, the pQCD series can be rewritten in the following [N/M]-type form $$\begin{aligned} \rho ^{[N/M]}_n(Q)= & {} a^p \times \frac{b_0+b_1 a + \cdots + b_N a^N}{1 + c_1 a + \cdots + c_M a^M} \end{aligned}$$ (4) $$\begin{aligned}= & {} \sum _{i=1}^{n} C_{i} a^{p+i-1} + C_{n+1}\; a^{p+n}+\ldots , \end{aligned}$$ (5) where \(M\ge 1\) and \(N+M+1 = n\). Comparing Eq. (5) with the series (1) or (3), the coefficients \(C_{i}\) can be directly related to \(r_i\) or \(r_{i,0}\), respectively. Furthermore, by using the known \(\hbox {N}^{\mathrm{n-1}}\hbox {LO}\)-order pQCD series, the coefficients \(b_{i\in [0,N]}\) and \(c_{i\in [1,M]}\) can be expressed in terms of the coefficients \(C_{i\in [1,n]}\). Finally, we can use the coefficients \(b_{i\in [0,N]}\) and \(c_{i\in [1,M]}\) to predict the one-order-higher uncalculated coefficient \(C_{n+1}\) at the \(\hbox {N}^{\mathrm{n}}\hbox {LO}\)-order level. For example, if \([N/M]=[n-2/1]\), we have $$\begin{aligned} C_{n+1}=\frac{C_n^2}{C_{n-1}}; \end{aligned}$$ (6) if \([N/M]=[n-3/2]\), we have $$\begin{aligned} C_{n+1}=\frac{-C_{n-1}^3+2C_{n-2}C_{n-1}C_{n}-C_{n-3}C_{n}^2}{C_{n-2}^2-C_{n-3}C_{n-1}}; \end{aligned}$$ (7) and if \([N/M]=[n-4/3]\), we have $$\begin{aligned} C_{n+1}= & {} \{C_{n-2}^4-(3 C_{n-3} C_{n-1}+2 C_{n-4} C_{n}) C_{n-2}^2 \nonumber \\&\quad +\,2 [C_{n-4} C_{n-1}^2+(C_{n-3}^2+C_{n-5} C_{n-1}) C_{n}] C_{n-2} \nonumber \\&\quad -\,C_{n-5} C_{n-1}^3+C_{n-3}^2 C_{n-1}^2+C_{n-4}^2 C_{n}^2 \nonumber \\&\quad -\,C_{n-3} C_{n} (2 C_{n-4} C_{n-1}+C_{n-5} C_{n})\} \nonumber \\&\quad \big / \{C_{n-3}^3-\left( 2 C_{n-4} C_{n-2}+C_{n-5} C_{n-1}\right) C_{n-3} \nonumber \\&\quad +\,C_{n-5} C_{n-2}^2+C_{n-4}^2 C_{n-1}\}; \quad \mathrm{etc.} \end{aligned}$$ (8) In each case, \(C_{i<1}\equiv 0\). We need to know at least two \(C_i\) in order to predict the unknown higher-order coefficients; thus the PAA is applicable when we have determined at least the NLO terms (\(n=2\)) using the PMC. One can also use the full PAA (4) to estimate the sum of the whole series, i.e. to give the all-orders PAA prediction. As will be found later, the differences between the predictions of the truncated and full PAA series are small for convergent perturbative series. In the following, we will apply the PAA to three physical observables \(R_{e^+e^-}\), \(R_{\tau }\) and \(\Gamma (H\rightarrow b{\bar{b}})\), which are known at four loops in pQCD. We will show how the "unknown" terms predicted by the PAA vary as more known higher-order terms are used as input.
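To make the use of Eqs. (6)–(8) concrete, the following short sketch (ours, for illustration only) evaluates the predicted next coefficient \(C_{n+1}\) from a list of known coefficients; the numerical inputs in the usage line are placeholders, not the actual conformal coefficients of any observable studied here.

```python
def pade_next_coefficient(C, M):
    """Predict C_{n+1} from known series coefficients C = [C_1, ..., C_n]
    using an [N/M]-type Pade approximant with N + M + 1 = n.
    Closed forms of Eqs. (6)-(8) for M = 1, 2, 3; C_{i<1} is taken as 0."""
    n = len(C)

    def c(i):
        return C[i - 1] if i >= 1 else 0.0

    if M == 1:  # [n-2/1] type, Eq. (6)
        return c(n) ** 2 / c(n - 1)
    if M == 2:  # [n-3/2] type, Eq. (7)
        num = -c(n - 1) ** 3 + 2 * c(n - 2) * c(n - 1) * c(n) - c(n - 3) * c(n) ** 2
        den = c(n - 2) ** 2 - c(n - 3) * c(n - 1)
        return num / den
    if M == 3:  # [n-4/3] type, Eq. (8)
        num = (c(n - 2) ** 4
               - (3 * c(n - 3) * c(n - 1) + 2 * c(n - 4) * c(n)) * c(n - 2) ** 2
               + 2 * (c(n - 4) * c(n - 1) ** 2
                      + (c(n - 3) ** 2 + c(n - 5) * c(n - 1)) * c(n)) * c(n - 2)
               - c(n - 5) * c(n - 1) ** 3 + c(n - 3) ** 2 * c(n - 1) ** 2
               + c(n - 4) ** 2 * c(n) ** 2
               - c(n - 3) * c(n) * (2 * c(n - 4) * c(n - 1) + c(n - 5) * c(n)))
        den = (c(n - 3) ** 3
               - (2 * c(n - 4) * c(n - 2) + c(n - 5) * c(n - 1)) * c(n - 3)
               + c(n - 5) * c(n - 2) ** 2 + c(n - 4) ** 2 * c(n - 1))
        return num / den
    raise ValueError("only M = 1, 2, 3 are implemented here")

# A geometric series C_i = r**(i-1) is reproduced exactly by the [0/1] type,
# since then C_{n+1} = C_n**2 / C_{n-1} = r**n:
print(pade_next_coefficient([1.0, 0.5, 0.25], M=1))  # -> 0.125
```

For \(n=2\) the \([0/1]\) type coincides with Eq. (6), \(C_3=C_2^2/C_1\), which is the geometric-series estimate discussed below.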
The ratio \(R_{e^+e^-}\) is defined as $$\begin{aligned} R_{e^+ e^-}(Q)= & {} \frac{\sigma \left( e^+e^-\rightarrow \mathrm{hadrons} \right) }{\sigma \left( e^+e^-\rightarrow \mu ^+ \mu ^-\right) }\nonumber \\= & {} 3\sum _q e_q^2\left[ 1+R(Q)\right] , \end{aligned}$$ (9) where \(Q=\sqrt{s}\) is the \(e^+e^-\) collision energy. The pQCD approximants for R(Q) are labelled \(R_n(Q)= \sum _{i=1}^{n} r_i(\mu _r/Q)a^{i}(\mu _r)\). The pQCD coefficients at \(\mu _r=Q\) have been calculated in the \(\overline{\mathrm{MS}}\)-scheme in Refs. [32, 33, 34, 35]. For illustration we take \(Q=31.6\;\mathrm{GeV}\) [36]. The ratio \(R_{\tau }\) is defined as $$\begin{aligned} R_{\tau }(M_{\tau })= & {} \frac{\sigma (\tau \rightarrow \nu _{\tau }+\mathrm{hadrons})}{\sigma (\tau \rightarrow \nu _{\tau }+{\bar{\nu }}_e+e^-)}\nonumber \\= & {} 3\sum \left| V_{ff'}\right| ^2\left( 1+{\tilde{R}}(M_{\tau })\right) , \end{aligned}$$ (10) where \(V_{ff'}\) are Cabibbo–Kobayashi–Maskawa matrix elements, \(\sum \left| V_{ff'}\right| ^2 =\left( \left| V_{ud}\right| ^2+\left| V_{us}\right| ^2\right) \approx 1\) and \(M_{\tau }= 1.78\) GeV. The pQCD approximant is \({\tilde{R}}_{n}(M_{\tau })= \sum _{i=1}^{n}r_i(\mu _r/M_{\tau })a^{i}(\mu _r)\); the coefficients can be obtained by using the known relation of \(R_{\tau }(M_{\tau })\) to \(R(\sqrt{s})\) [37]. The decay width \(\Gamma (H\rightarrow b{\bar{b}})\) is defined as $$\begin{aligned} \Gamma (H\rightarrow b{\bar{b}})=\frac{3G_{F} M_{H} m_{b}^{2}(M_{H})}{4\sqrt{2}\pi } [1+{\hat{R}}(M_{H})], \end{aligned}$$ (11) where the Fermi constant \(G_{F}=1.16638\times 10^{-5}\;\mathrm{GeV}^{-2}\), the Higgs mass \(M_H=126\) GeV, and the b-quark \(\overline{\mathrm{MS}}\)-running mass is \(m_b(M_H)=2.78\) GeV [38]. The pQCD approximant is \({\hat{R}}_n(M_H)= \sum _{i=1}^{n}r_i(\mu _r/M_{H}) a^{i}(\mu _r)\), where the predictions for the \(\overline{\mathrm{MS}}\)-coefficients at \(\mu _r=M_H\) can be found in Ref. [39]. In each case the coefficients at any other scale can be obtained via QCD evolution. In doing the numerical evaluation, we have assumed the running of \(\alpha _s\) at the four-loop level. The asymptotic QCD scale is set using \(\alpha _s(M_Z)=0.1181\) [40], giving \(\Lambda _{\mathrm{QCD}}^{n_f=5}=0.210\) GeV. After applying the PMC-s approach, the optimal scale for each process can be determined. If the pQCD approximants are known up to the two-loop, three-loop, and four-loop levels, the corresponding optimal scales are \(Q_{*}|_{e^+e^-}=[35.36\), 39.68, 40.30] GeV, \(Q_{*}|_{\tau }=[0.90\), 1.01, 1.05] GeV (see Footnote 2), and \(Q_{*}|_{H\rightarrow b{\bar{b}}}=[61.38\), 57.41, 58.84] GeV, respectively. It is found that those PMC scales \(Q^*\) are completely independent of the choice of the initial renormalization scale \(\mu _r\).
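Before turning to the tables, here is a minimal sketch (ours, with placeholder inputs rather than the actual PMC conformal coefficients) of how the truncated conformal series, Eq. (3), and the corresponding full \([0/n-1]\)-type PAA value (the kind of number quoted in parentheses in Table 4 below) can be evaluated once \(a(Q_*)=\alpha _s(Q_*)/\pi \) is known:

```python
import numpy as np

def pade_0M(C):
    """Given series coefficients C = [C_1, ..., C_n] of sum_i C_i a^(i-1),
    return (b0, [c_1, ..., c_M]) of the [0/M] Pade form b0 / (1 + sum c_k a^k),
    with M = n - 1, by matching Taylor coefficients."""
    n = len(C)
    M = n - 1
    b0 = C[0]
    # Matching gives, for j = 2..n:  C_j + sum_{k=1}^{j-1} c_k C_{j-k} = 0,
    # a lower-triangular linear system for the c_k.
    A = np.zeros((M, M))
    rhs = np.zeros(M)
    for j in range(2, n + 1):
        for k in range(1, j):
            A[j - 2, k - 1] = C[j - k - 1]
        rhs[j - 2] = -C[j - 1]
    c = np.linalg.solve(A, rhs)
    return b0, c

def rho_truncated(C, a, p=1):
    """Truncated conformal series, Eq. (3): sum_i C_i a^(p+i-1)."""
    return sum(Ci * a ** (p + i) for i, Ci in enumerate(C))

def rho_full_paa(C, a, p=1):
    """Full [0/n-1]-type PAA resummation of the same series, Eq. (4) with N=0."""
    b0, c = pade_0M(C)
    return a ** p * b0 / (1.0 + sum(ck * a ** (k + 1) for k, ck in enumerate(c)))

# Placeholder inputs (not the actual PMC conformal coefficients):
C = [1.0, 0.8, 1.2, -0.5]
a = 0.04  # placeholder value for alpha_s(Q*)/pi
print(rho_truncated(C, a), rho_full_paa(C, a))
```

The \([0/M]\) denominator coefficients follow from matching Taylor coefficients, which gives a lower-triangular linear system; for a geometric input series the system returns \(c_1=-r\) and \(c_{k>1}=0\), reproducing the exact sum.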
Table 1 Comparison of the exact ("EC") \((n+1)\mathrm{th}\)-order conformal coefficients with the predicted ("[N/M]-type PAA") \((n+1)\mathrm{th}\)-order ones based on the known \(n\mathrm{th}\)-order approximant \(R_{n}(Q=31.6~\mathrm{GeV})\), where \(n=2,3,4\), respectively

| \(r_{n+1,0}\) | \(n+1=3\) | \(n+1=4\) | \(n+1=5\) |
| EC | \(-1.0\) | \(-11.0\) | – |
| PAA | [0/1]: \(+3.4\) | [0/2]: \(-9.9\) | [0/3]: \(-17.8\) |
| | – | [1/1]: \(+0.55\) | [1/2]: \(-18.0\) |
| | – | – | [2/1]: \(-120\) |

Table 2 Comparison of the exact ("EC") \((n+1)\mathrm{th}\)-order conformal coefficients with the predicted ("[N/M]-type PAA") \((n+1)\mathrm{th}\)-order ones based on the known \(n\mathrm{th}\)-order approximant \({\tilde{R}}_{n}(M_{\tau })\), where \(n=2,3,4\), respectively

| \(r_{n+1,0}\) | \(n+1=3\) | \(n+1=4\) | \(n+1=5\) |
| EC | \(+3.4\) | \(+6.8\) | – |
| PAA | [0/1]: \(+4.6\) | [0/2]: \(+4.9\) | [0/3]: \(+14.7\) |
| | – | [1/1]: \(+5.5\) | [1/2]: \(+11.5\) |
| | – | – | [2/1]: \(+13.5\) |

Table 3 Comparison of the exact ("EC") \((n+1)\mathrm{th}\)-order conformal coefficients with the predicted ("[N/M]-type PAA") \((n+1)\mathrm{th}\)-order ones based on the known \(n\mathrm{th}\)-order approximant \({\hat{R}}_{n}(M_H)\), where \(n=2,3,4\), respectively

| \(r_{n+1,0}\) | \(n+1=3\) | \(n+1=4\) | \(n+1=5\) |
| EC | \(-1.36\times 10^2\) | \(-4.32\times 10^2\) | – |
| PAA | [0/1]: \(+3.23\times 10^1\) | [0/2]: \(-7.26\times 10^2\) | [0/3]: \(+3.72\times 10^3\) |
| | – | [1/1]: \(+1.37\times 10^3\) | [1/2]: \(+3.20\times 10^3\) |
| | – | – | [2/1]: \(-1.37\times 10^3\) |

The remaining task for the PAA is to predict the higher-order conformal coefficients. We present a comparison of the exact \((n+1)\mathrm{th}\)-order conformal coefficients with the PAA predicted ones based on the known \(n\mathrm{th}\)-order approximants \(R_{n}(Q=31.6~\mathrm{GeV})\), \({\tilde{R}}_{n}(M_{\tau })\) and \({\hat{R}}_{n}(M_H)\) in Tables 1, 2 and 3, respectively. Here the [N/M]-type PAA is for \(N+M=n-1\) with \(N\ge 0\) and \(M\ge 1\). These tables show that the \([N/M]=[0/n-1]\)-type PAA provides the result closest to the known pQCD result. It is interesting to note that the \([0/n-1]\)-type PAA is consistent with the "Generalized Crewther Relations" (GSICRs) [41]. For example, the GSICR, which provides a remarkable all-orders connection between the pQCD predictions for deep inelastic neutrino-nucleon scattering and hadronic \(e^+e^-\) annihilation, shows that the conformal coefficients are all equal to 1; e.g. \({\widehat{\alpha }}_d(Q)=\sum _{i}{\widehat{\alpha }}^{i}_{g_1}(Q_*)\), where \(Q_*\) satisfies $$\begin{aligned} \ln \left. \frac{Q_*^2}{Q^2}\right| _{g_1}= & {} 1.308 + [-\,0.802\, +\, 0.039 n_f] {\widehat{\alpha }}_{g_1}(Q_*) \nonumber \\&+\, [16.100 - 2.584 n_f {+} 0.102 n_f^2] {\widehat{\alpha }}_{g_1}^2(Q_*) {+} \cdots .\nonumber \\ \end{aligned}$$ (12) By using the \([0/n-1]\)-type PAA – the geometric series – all of the predicted conformal coefficients are also equal to 1. The \([0/n-1]\)-type PAA also agrees with the GM-L scale-setting procedure to obtain scale-independent perturbative QED predictions; e.g., the renormalization scale for the electron-muon elastic scattering through one-photon exchange is set as the virtuality of the exchanged photon, \(\mu _r^2 = q^2 = t\).
By taking an arbitrary initial renormalization scale \(t_0\), we have $$\begin{aligned} \alpha _{em}(t) = \frac{\alpha _{em}(t_0)}{1 - \Pi (t,t_0)}, \end{aligned}$$ (13) where \(\Pi (t,t_0) = \frac{\Pi (t,0) -\Pi (t_0,0)}{1-\Pi (t_0,0)}\), which sums all vacuum polarization contributions, both proper and improper, to the dressed photon propagator. The PMC reduces in the \(N_C \rightarrow 0\) Abelian limit to the GM-L method [42], and the preferable \([0/n-1]\) type makes the PAA geometric series self-consistent with the GM-L/PMC prediction.

Table 4 Comparison of the exact ("EC") and the predicted ("PAA") pQCD approximants \(R_n(Q=31.6\;\mathrm{GeV})\), \({\tilde{R}}_{n}(M_\tau )\) and \({\hat{R}}_n(M_H)\) under conventional (Conv.) and PMC-s scale-setting approaches up to the \(n\mathrm{th}\)-order level. The \((n+1)\mathrm{th}\)-order PAA prediction equals the \(n\mathrm{th}\)-order known prediction plus the predicted \((n+1)\mathrm{th}\)-order term from the \([0/n-1]\)-type PAA (the values in the parentheses are results for the corresponding full PAA series). The PMC predictions are scale independent, and the errors for conventional scale-setting are estimated by varying the initial renormalization scale \(\mu _r\) within the region \([\mu _0/2, 2\mu _0]\), where \(\mu _0=Q\), \(M_\tau \) and \(M_H\), respectively

| | \(\mathrm{EC}\), \(n=2\) | \(\mathrm{PAA}\), \(n=3\) | \(\mathrm{EC}\), \(n=3\) | \(\mathrm{PAA}\), \(n=4\) | \(\mathrm{EC}\), \(n=4\) | \(\mathrm{PAA}\), \(n=5\) |
| \(R_n(Q)|_{\mathrm{PMC-s}}\) | 0.04745 | 0.04772 (0.04777) | 0.04635 | 0.04631 (0.04631) | 0.04619 | 0.04619 (0.04619) |
| \({\tilde{R}}_{n}(M_{\tau })|_{\mathrm{PMC-s}}\) | 0.1879 | 0.2035 (0.2394) | 0.2103 | 0.2128 (0.2134) | 0.2089 | 0.2100 (0.2104) |
| \({\hat{R}}_{n}(M_H)|_{\mathrm{PMC-s}}\) | 0.2482 | 0.2503 (0.2505) | 0.2422 | 0.2402 (0.2406) | 0.2401 | 0.2405 (0.2405) |
| \(R_n(Q)|_{\mathrm{Conv.}}\) | \(0.04763^{+\,0.00045}_{-\,0.00139}\) | \(0.04781^{+\,0.00043}_{-\,0.00053}\) | \(0.04648_{-\,0.00071}^{+\,0.00012}\) | \(0.04632_{-\,0.00025}^{+\,0.00018}\) | \(0.04617_{-\,0.00009}^{+\,0.00015}\) | \(0.04617_{-\,0.00001}^{+\,0.00007}\) |
| \({\tilde{R}}_{n}(M_{\tau })|_{\mathrm{Conv.}}\) | \(0.1527^{+\,0.0610}_{-\,0.0323}\) | \(0.1800^{+\,0.0515}_{-\,0.0330}\) | \(0.1832_{-\,0.0334}^{+\,0.0385}\) | \(0.1975_{-\,0.0296}^{+\,0.0140}\) | \(0.1988_{-\,0.0299}^{+\,0.0140}\) | \(0.2056_{-\,0.0247}^{+\,0.0029}\) |
| \({\hat{R}}_{n}(M_H)|_{\mathrm{Conv.}}\) | \(0.2406^{+\,0.0074}_{-\,0.0104}\) | \(0.2475^{+\,0.0027}_{-\,0.0066}\) | \(0.2425_{-\,0.0053}^{+\,0.0002}\) | \(0.2419_{-\,0.0040}^{+\,0.0002}\) | \(0.2411_{-\,0.0040}^{+\,0.0001}\) | \(0.2407_{-\,0.0040}^{+\,0.0002}\) |

Tables 1, 2 and 3 show that as more loop terms are input, the predicted conformal coefficients become closer to their exact values. To show this clearly, we define the normalized difference between the exact conformal coefficient and the predicted one as $$\begin{aligned} \Delta _{n} = \left| \frac{r_{n,0}|_{\mathrm{PAA}}-r_{n,0}|_\mathrm{EC}}{r_{n,0}|_{\mathrm{EC}}}\right| , \end{aligned}$$ where "EC" and "PAA" stand for exact and predicted conformal coefficients, respectively. By using the exact terms known up to the two-loop and three-loop levels, respectively, the normalized differences for the third-order and the fourth-order conformal coefficients, i.e.
those coefficients in the \(n+1=3\) and \(n+1=4\) columns of Tables 1, 2 and 3, become suppressed from \(440\%\) to \(10\%\) for \(R(Q=31.6~\mathrm{GeV})\), from \(35\%\) to \(28\%\) for \({\tilde{R}}(M_{\tau })\), and from \(124\%\) to \(68\%\) for \({\hat{R}}(M_H)\). There are large differences for the conformal coefficients if we only know the QCD corrections at the two-loop level; however, these decrease rapidly as more loop terms become known. Following this trend, the normalized differences for the fifth-order conformal coefficients should be much smaller than the fourth-order ones. Conservatively, if we set the normalized difference at five loops (\(\Delta _5\)) equal to that at four loops (\(\Delta _4\)), we can inversely predict the five-loop "\(\mathrm{EC'}\)" conformal coefficients: $$\begin{aligned}&r^{e^+ e^-}_{5,0}|_{\mathrm{EC'}} = -18.0\pm 1.8, \end{aligned}$$ (14) $$\begin{aligned}&r^{\tau }_{5,0}|_{\mathrm{EC'}} = 16.0\pm 4.5, \end{aligned}$$ (15) $$\begin{aligned}&r^{H\rightarrow b{\bar{b}}}_{5,0}|_{\mathrm{EC'}} = (6.92\pm 4.71)\times 10^3, \end{aligned}$$ (16) where the central values are obtained by averaging the two "EC" values derived from \(\frac{r_{5,0}|_\mathrm{PAA}}{(1+\Delta _4)}\) and \(\frac{r_{5,0}|_\mathrm{PAA}}{(1-\Delta _4)}\). The difference between the exact and predicted conformal coefficients is reduced by the \(\alpha _s/\pi \)-power suppression; thus the predictive power of the PAA should be most useful for total cross-sections and decay widths. We present the comparison of the exact results for \(R_{n}(Q=31.6~\mathrm{GeV})\), \({\tilde{R}}_{n}(M_{\tau })\) and \({\hat{R}}_{n}(M_H)\) with the \([0/n-1]\)-type PAA predicted ones in Table 4. The values in the parentheses are results for the corresponding full PAA series, which are calculated by using Eq. (4). Due to the fast pQCD convergence, the differences between the truncated and full PAA predictions are small, less than \(1\%\) for \(n\ge 4\). Similarly, we define the precision of the predictive power as the normalized difference between the exact approximant (\(\rho _n|_{\mathrm{EC}}\)) and the prediction (\(\rho _n|_{\mathrm{PAA}}\)); i.e. $$\begin{aligned} \left| \frac{\rho _{n}|_{\mathrm{PAA}}- \rho _{n}|_{\mathrm{EC}}}{\rho _{n}|_\mathrm{EC}}\right| . \end{aligned}$$ The PMC predictions are renormalization scheme-and-scale independent, and the pQCD convergence is greatly improved due to the elimination of renormalon contributions. Highly precise values at each order can thus be achieved [21]. In contrast, predictions using the conventional pQCD series (1) are scale dependent even at higher orders. We also present results using conventional scale-setting in Table 4; this confirms the conclusion that the conformal PMC-s series is much more suitable for applications of the PAA. By using the known (exact) approximants predicted by PMC-s scale-setting up to the two-loop and three-loop levels, respectively, the differences between the exact and predicted three-loop and four-loop approximants are observed to decrease from \(3.0\%\) to \(0.3\%\) for \(\rho _n=R_n(Q=31.6~\mathrm{GeV})\), from \(3\%\) to \(2\%\) for \(\rho _n={\tilde{R}}_n(M_{\tau })\), and from \(3.0\%\) to \(\sim 0\%\) for \(\rho _n={\hat{R}}_n(M_H)\), respectively. The normalized differences for \(R_4(Q=31.6~\mathrm{GeV})\), \({\tilde{R}}_4(M_{\tau })\) and \({\hat{R}}_4(M_H)\) are small.
If we conservatively set the normalized difference at five loops to match that of the four-loop predictions, then the predicted five-loop "\(\mathrm{EC'}\)" predictions are $$\begin{aligned}&R_5(Q=31.6~\mathrm{GeV})|_{\mathrm{EC'}} = 0.04619\pm 0.00014, \end{aligned}$$ (17) $$\begin{aligned}&{\tilde{R}}_5(M_{\tau })|_{\mathrm{EC'}} = 0.2100\pm 0.0042, \end{aligned}$$ (18) $$\begin{aligned}&{\hat{R}}_5(M_H)|_{\mathrm{EC'}} = 0.2405\pm 0.0001. \end{aligned}$$ (19)

1.4 Summary

The PMC provides first-principle predictions for QCD; it satisfies renormalization group invariance and eliminates the conventional renormalization scheme-and-scale ambiguities. Since the divergent renormalon series does not appear in the conformal (\(\beta =0\)) perturbative series generated by the PMC, there is an opportunity to use resummation procedures such as the Padé method to predict higher-order terms and thus to increase the precision and reliability of pQCD predictions. In this paper, we have shown that by applying the PAA to the renormalon-free conformal series derived by using the PMC single-scale procedure, one can achieve quantitatively useful estimates for the unknown higher-order terms based on the known perturbative QCD series. In particular, we have found that if the PMC prediction for the conformal series for an observable (of leading order \(\alpha _s^p\)) has been determined at order \(\alpha ^n_s\), then the \([N/M]=[0/n-p]\) Padé series provides an important estimate for the higher-order terms. The all-orders predictions of the \([0/n-p]\)-type PAA are in fact identical to the predictions obtained from the all-order GSICRs which connect observables, such as deep inelastic neutrino-nucleon scattering, to hadronic \(e^+e^-\) annihilation. These relations are fundamental, high precision predictions of QCD.

Fig. 1 Comparison of the exact ("EC") and the predicted ([0/n-1]-type "PAA") pQCD prediction for \(R_n(Q=31.6\;\mathrm{GeV})\) under the PMC-s scale-setting. It shows how the PAA predictions change when more loop-terms are included, where the five-loop "EC" prediction is from Eq. (17)

Tables 1, 2 and 3 show the difference between the exact and the predicted conformal coefficients at various loops, which decreases rapidly as additional higher-order loop terms are included. Table 4 shows that the PAA becomes quantitatively effective even at the NLO level for the pQCD approximant due to the strong \(\alpha _s/\pi \)-suppression of the conformal series. For example, when using the NLO results \(R_2(Q)\), \({\tilde{R}}_2(M_{\tau })\) and \({\hat{R}}_2(M_H)\) to predict the observables \(R_3(Q)\), \({\tilde{R}}_3(M_{\tau })\) and \({\hat{R}}_3(M_H)\) at NNLO, the normalized differences between the Padé estimates and the known results are only about \(3\%\). Taking \(R_{e^+e^-}\) as an explicit example, we show in Fig. 1 how the PAA predictions change when more loop terms are included. In some sense this is an infinite-order prediction for \(R_{e^+e^-}(Q=31.6\;\mathrm{GeV})\), and it is the most precise prediction one can make using our PMC+PAA method, given the present knowledge of pQCD. Thus by combining the PMC with the Padé method, the predictive power of the pQCD theory can be remarkably improved. As a final remark, we show that using the PAA based on the conformal series is consistent with the \({{\mathcal {N}}}=4\) supersymmetric Yang-Mills theory.
For this purpose, we present PAA predictions for the NNLO and \(\hbox {N}^{3}\hbox {LO}\) Balitsky–Fadin–Kuraev–Lipatov (BFKL) Pomeron eigenvalues. By using the PAA method together with the known LO and NLO coefficients given in Ref. [43], we find that the NNLO BFKL coefficient is \(0.86\times 10^4\) for \(\Delta =0.45\), where \(\Delta \) is the full conformal dimension of the twist-two operator. The exact NNLO BFKL coefficient has been discussed in planar \({{\mathcal {N}}}=4\) supersymmetric Yang–Mills theory [44] by using the quantum spectral curve integrability-based method [45, 46], which gives \(1.08\times 10^4\) [44]. Thus the normalized difference between those two NNLO values is only about \(20\%\). As a step forward, we predict the \(\hbox {N}^{3}\hbox {LO}\) coefficient of the Pomeron eigenvalue by using the [0/2]-PAA type and the known NNLO coefficient given in Ref. [44], which results in \(-3.07\times 10^5\). This value is also consistent with the \({{\mathcal {N}}}=4\) supersymmetric Yang-Mills prediction, since if we adopt the data-fitting prediction suggested in Ref. [47] to predict the \(\hbox {N}^{3}\hbox {LO}\) coefficient, we obtain \(-3.66\times 10^5\). The normalized difference between those two \(\hbox {N}^{3}\hbox {LO}\) values is also only about \(20\%\).

Footnotes

1. Only those \(\{\beta _i\}\)-terms that pertain to the RGE have been absorbed into the PMC scale. There may be cases in which the \(\{\beta _i\}\)-terms do not pertain to the RGE and should be treated as conformal coefficients, which may break the pQCD convergence.

2. Because the usually adopted analytic \(\alpha _s\)-running differs significantly at scales below a few GeV from the exact solution of the RGE [40], we use the exact numerical solution of the RGE throughout to evaluate \(R_{\tau }\).

Acknowledgements

This work is supported in part by the Natural Science Foundation of China under Grant No. 11625520 and No. 11847301, the Fundamental Research Funds for the Central Universities under the Grant No. 2018CDPTCG0001/3, and the Department of Energy Contract No. DE-AC02-76SF00515. SLAC-PUB-17306.

References

1. D.J. Gross, F. Wilczek, Phys. Rev. Lett. 30, 1343 (1973)
2. H.D. Politzer, Phys. Rev. Lett. 30, 1346 (1973)
3. M. Beneke, V.M. Braun, Phys. Lett. B 348, 513 (1995)
4. M. Neubert, Phys. Rev. D 51, 5924 (1995)
5. M. Beneke, Phys. Rep. 317, 1 (1999)
6. S.Q. Wang, X.G. Wu, W.L. Sang, S.J. Brodsky, Phys. Rev. D 97, 094034 (2018)
7. S.J. Brodsky, X.G. Wu, Phys. Rev. D 85, 034038 (2012)
8. S.J. Brodsky, X.G. Wu, Phys. Rev. Lett. 109, 042002 (2012)
9. M. Mojaza, S.J. Brodsky, X.G. Wu, Phys. Rev. Lett. 110, 192001 (2013)
10. S.J. Brodsky, M. Mojaza, X.G. Wu, Phys. Rev. D 89, 014027 (2014)
11. S.J. Brodsky, G.P. Lepage, P.B. Mackenzie, Phys. Rev. D 28, 228 (1983)
12. M. Gell-Mann, F.E. Low, Phys. Rev. 95, 1300 (1954)
13. S.J. Brodsky, X.G. Wu, Phys. Rev. D 86, 054018 (2012)
14. A. Deur, S.J. Brodsky, G.F. de Teramond, Phys. Lett. B 750, 528 (2015)
15. A. Deur, S.J. Brodsky, G.F. de Teramond, Phys. Lett. B 757, 275 (2016)
16. A. Deur, J.M. Shen, X.G. Wu, S.J. Brodsky, G.F. de Teramond, Phys. Lett. B 773, 98 (2017)
17. X.G. Wu, Y. Ma, S.Q. Wang, H.B. Fu, H.H. Ma, S.J. Brodsky, M. Mojaza, Rep. Prog. Phys. 78, 126201 (2015)
18. X.G. Wu, S.Q. Wang, S.J. Brodsky, Front. Phys. 11(1), 111201 (2016)
19. X.G. Wu, S.J. Brodsky, M. Mojaza, Prog. Part. Nucl. Phys. 72, 44 (2013)
20. H.Y. Bi, X.G. Wu, Y. Ma, H.H. Ma, S.J. Brodsky, M. Mojaza, Phys. Lett. B 748, 13 (2015)
21. J.M. Shen, X.G. Wu, B.L. Du, S.J. Brodsky, Phys. Rev. D 95, 094006 (2017)
22. J.L. Basdevant, Fortschr. Phys. 20, 283 (1972)
23. M.A. Samuel, G. Li, E. Steinfelds, Phys. Lett. B 323, 188 (1994)
24. M.A. Samuel, J.R. Ellis, M. Karliner, Phys. Rev. Lett. 74, 4380 (1995)
25. S.J. Brodsky, J.R. Ellis, E. Gardi, M. Karliner, M.A. Samuel, Phys. Rev. D 56, 6980 (1997)
26. E. Gardi, Phys. Rev. D 56, 68 (1997)
27. J.R. Ellis, I. Jack, D.R.T. Jones, M. Karliner, M.A. Samuel, Phys. Rev. D 57, 2665 (1998)
28. P.N. Burrows, T. Abraha, M. Samuel, E. Steinfelds, H. Masuda, Phys. Lett. B 392, 223 (1997)
29. J.R. Ellis, M. Karliner, M.A. Samuel, Phys. Lett. B 400, 176 (1997)
30. I. Jack, D.R.T. Jones, M.A. Samuel, Phys. Lett. B 407, 143 (1997)
31. D. Boito, P. Masjuan, F. Oliani, JHEP 1808, 075 (2018)
32. P.A. Baikov, K.G. Chetyrkin, J.H. Kuhn, Phys. Rev. Lett. 101, 012002 (2008)
33. P.A. Baikov, K.G. Chetyrkin, J.H. Kuhn, Phys. Rev. Lett. 104, 132004 (2010)
34. P.A. Baikov, K.G. Chetyrkin, J.H. Kuhn, J. Rittinger, Phys. Lett. B 714, 62 (2012)
35. P.A. Baikov, K.G. Chetyrkin, J.H. Kuhn, J. Rittinger, JHEP 1207, 017 (2012)
36. R. Marshall, Z. Phys. C 43, 595 (1989)
37. C.S. Lam, T.-M. Yan, Phys. Rev. D 16, 703 (1977)
38. S.Q. Wang, X.G. Wu, X.C. Zheng, J.M. Shen, Q.L. Zhang, Eur. Phys. J. C 74, 2825 (2014)
39. P.A. Baikov, K.G. Chetyrkin, J.H. Kuhn, Phys. Rev. Lett. 96, 012003 (2006)
40. C. Patrignani et al. (Particle Data Group), Chin. Phys. C 40, 100001 (2016)
41. J.M. Shen, X.G. Wu, Y. Ma, S.J. Brodsky, Phys. Lett. B 770, 494 (2017)
42. S.J. Brodsky, P. Huet, Phys. Lett. B 417, 145 (1998)
43. M.S. Costa, V. Goncalves, J. Penedones, JHEP 1212, 091 (2012)
44. N. Gromov, F. Levkovich-Maslyuk, G. Sizov, Phys. Rev. Lett. 115, 251601 (2015)
45. N. Gromov, V. Kazakov, S. Leurent, D. Volin, Phys. Rev. Lett. 112, 011602 (2014)
46. N. Gromov, V. Kazakov, S. Leurent, D. Volin, JHEP 1509, 187 (2015)
47. N. Gromov, F. Levkovich-Maslyuk, G. Sizov, JHEP 1606, 036 (2016)
© The Author(s) 2019. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). Funded by SCOAP3.

Authors and Affiliations: Bo-Lun Du (1), Xing-Gang Wu (1), Jian-Ming Shen (1), Stanley J. Brodsky (2). 1. Department of Physics, Chongqing University, Chongqing, People's Republic of China. 2. SLAC National Accelerator Laboratory, Stanford University, Stanford, USA.

Bo-Lun Du, Xing-Gang Wu, Jian-Ming Shen, Stanley J. Brodsky. Extending the predictive power of perturbative QCD. The European Physical Journal C, 2019, 79:182. DOI: 10.1140/epjc/s10052-019-6704-9. PDF: https://link.springer.com/content/pdf/10.1140%2Fepjc%2Fs10052-019-6704-9.pdf
CommonCrawl
SN Applied Sciences, January 2020, 2:87

An interval type-2 fuzzy TOPSIS using the extended vertex method for MAGDM

Iman Mohamad Sharaf

First Online: 14 December 2019. Part of the following topical collections: Engineering: Computational Sciences, Artificial Intelligence and Digital Service Platforms.

Abstract In this article, the technique of order preference by similarity to an ideal solution (TOPSIS) is modified to handle interval type-2 fuzzy sets (IT2FSs) using the extended vertex method for distance measure. While the existing TOPSIS techniques for IT2FSs depend on the defuzzification of the average decision matrix or the average weighted decision matrix from the very beginning, the proposed method maintains fuzziness in the preference technique up to the hilt to avoid any information distortion which might lead to false ranking. First, the vertex method for distance measure is extended to encompass IT2FSs. The extended vertex method is an efficient simple formula that requires few computations in contrast to other distance measures based on embedding type 1 fuzzy sets or α-cuts that need special algorithms and can be restrictive in applications that require high computations. Second, the fuzzy positive and negative ideal solutions are defined. Then, the relative degree of closeness to the ideal solutions is computed for each alternative using the extended vertex method. As the relative degree of closeness of an alternative increases, its preference increases. Therefore, the preference technique avoids the flaws of the existing techniques and the computations are reduced. Two illustrative examples are given and the results are compared with the results of the existing TOPSIS methods. In light of the results and comparisons, the role of the defined ideal solutions in ranking is clarified.

Keywords: Fuzzy multi-attributes group decision making; TOPSIS; Distance measure; Interval type-2 fuzzy sets

1 Introduction

In modern decision theory, multiple attributes group decision-making (MAGDM) problems play a vital role. In these problems, the best choice is chosen from a set of alternatives based on experts' assessments of the alternatives according to their multiple attributes [5]. In many cases, human evaluations and preferences are ambiguous and vague. In other cases, some of the evaluation attributes are so subjective and qualitative in nature that they cannot be expressed using exact numerical values [17]. The concept of fuzzy sets was introduced to handle vagueness, uncertainty, and imprecision in decision-making problems. Type-1 fuzzy sets (T1FSs) were the earliest proposed sets. They are characterized by a crisp membership function in the interval [0, 1]. In many situations it is hard to estimate the exact membership function of fuzzy sets. As a result, T1FSs decrease the flexibility and precision of decision-making in an uncertain environment [12]. Recently, MAGDM methods use more elaborate fuzzy sets, e.g. IT2FSs [4, 22] and intuitionistic fuzzy sets [7, 11, 16, 27, 30]. Type-2 fuzzy sets (T2FSs) are an extension of T1FSs. Zadeh [31] proposed T2FSs to express linguistic terms more efficiently than T1FSs. The membership degree of each element in a T2FS is represented by another fuzzy set defined over the interval [0, 1]. Therefore, the membership function of T2FSs is three dimensional. That provides more degrees of freedom in representing uncertainties in recent models [17].
T2FSs have proven to surpass T1FSs in many applications due to their ability to model uncertainties with greater accuracy [20]. However, T2FSs require complicated, massive, and tiresome operations [17]. This led to the introduction of fuzzy sets that require simpler and easier computations than T2FSs. Sambuc [24] introduced interval-valued fuzzy sets (IVFSs) in which the membership values are expressed by intervals. The exact membership degree lies within the considered interval [28]. Consequently, an IVFS is defined by an upper T1FS membership function and a lower T1FS membership function. Later, Liang and Mendel [19] introduced IT2FSs as a particular case of T2FSs whose secondary membership degree is equal to one. IVFSs can be considered as a special case of IT2FSs in which the membership values are equal in both the upper and lower fuzzy numbers. Although IVFSs and IT2FSs might appear to be similar, they are not totally equivalent. The information capacity of these fuzzy sets is not equal [21]. IT2FSs can represent concepts that cannot be represented by IVFSs. Actually, IT2FSs can be considered as a generalization of IVFSs [28]. For the points of similarity and difference between the two sets, the reader is referred to Niewiadomski [21] and Sola [28]. The technique of order preference by similarity to an ideal solution (TOPSIS) is one of the well-known methods to solve MAGDM problems. TOPSIS is preferred due to its simplicity, intuitiveness and limited computations [15]. It proved to be a useful practical tool for ranking and selecting among alternatives [23]. A TOPSIS solution is an alternative which is the closest to the positive ideal solution (PIS) and the farthest from the negative ideal solution (NIS). First, the weighted ratings are defuzzified into crisp values. Second, their distance from both the positive and negative ideal solutions is calculated. Then, a closeness coefficient is defined to determine the ranking order of the alternatives. Chu and Lin [6] proposed the conversion of the weighted normalized decision matrix to crisp values by defuzzification to change a fuzzy MAGDM problem into a crisp one. Several modifications have been introduced to fuzzy TOPSIS. These modifications are either in the defuzzification technique or in the preference technique. Defuzzification is simple and easy. Nevertheless, defuzzification loses the uncertainty of messages. On the other hand, a fuzzy pair-wise comparison is complex and difficult. However, it preserves fuzziness in messages [15]. Ashtiani et al. [1] proposed a triangular interval-valued fuzzy TOPSIS (IVF-TOPSIS) to solve MAGDM problems. Chen and Lee [5] presented an interval type-2 fuzzy TOPSIS (IT2F-TOPSIS) for fuzzy MAGDM problems. The method relies on the early defuzzification of the average weighted decision matrix. This leads to two drawbacks: it gives an incorrect preferred order of the alternatives in some situations, and the preferred order of the alternatives will change if additional alternatives are added [3]. To overcome these drawbacks, Chen and Hong [3] introduced a new ranking technique for IT2FSs and utilized it in TOPSIS. Still, early defuzzification is carried out for the attributes' fuzzy weights and the average fuzzy decision matrix. Rashid et al. [23] extended TOPSIS for generalized interval-valued trapezoidal fuzzy numbers. Yet, the heuristic expression used to calculate the difference between interval-valued trapezoidal fuzzy numbers was not justified [8]. Therefore, Dymova et al.
[8] introduced an interval type-2 fuzzy extension of the TOPSIS method using α-cuts to avoid the limitations and drawbacks of the existing methods. Ilieva [15] used the graded mean integration to defuzzify IT2FSs into two crisp values and then worked with their average value. Kumar and Garg [18] proposed a TOPSIS method for interval-valued intuitionistic fuzzy sets based on set pair analysis. Sharaf [25] proposed an IVF-TOPSIS using a similarity measure based on map distance for preference comparison. In this article, TOPSIS is modified to handle IT2FSs using the extended vertex method for distance measure. The existing TOPSIS methods for IT2FSs rely on the early defuzzification of the average decision matrix or the average weighted decision matrix. The proposed method maintains fuzziness in the preference technique to avoid the disadvantages of defuzzification which may lead to incorrect ranking. First, the vertex method for distance measure is extended to include IT2FSs. The proposed distance measure is simple and requires few computations. Meanwhile, the other distance measures based on embedding type 1 fuzzy sets or α-cuts need special algorithms and can be restrictive in applications that require high computations. The performance of the vertex method is compared with other distance measures. Second, the fuzzy positive and negative ideal solutions are defined for IT2FSs. Then, the relative degree of closeness to the ideal solutions is computed for each alternative using the proposed distance measure. As the relative degree of closeness of an alternative increases, its preference increases. Therefore, the preference technique depends on a fuzzy basis to avoid the flaws of the existing techniques resulting from the loss of information due to defuzzification, and the computations are reduced due to the simple and efficient formula of the extended vertex method for distance measure. The article is organized as follows. Different types of fuzzy sets, distance measures, and the classical TOPSIS are presented in Sect. 2. The extended vertex method is introduced in Sect. 3. The proposed TOPSIS method is introduced in Sect. 4. Two illustrative examples are given to demonstrate the approach, and the results are compared with the results of some existing TOPSIS methods in Sect. 5. Discussion is given in Sect. 6. Finally, the conclusion is given in Sect. 7.

2 Preliminaries

2.1 Interval type-2 fuzzy numbers

A type-2 fuzzy set is represented by $$\tilde{A} = \int_{\forall x \in X} \int_{\forall u \in J_{x} \subseteq [0,1]} \mu_{\tilde{A}}(x,u)/(x,u),$$ where \(\mu_{\tilde{A}}(x,u)\) is a type-2 membership function and ∫∫ denotes the union over all admissible x and u [8]. An IT2FS is a T2FS with \(\mu_{\tilde{A}}(x,u) = 1\). IT2FSs are represented using two trapezoidal fuzzy numbers as follows: $$\tilde{A} = [\tilde{A}^{L}, \tilde{A}^{U}] = [(a_{1}^{L}, a_{2}^{L}, a_{3}^{L}, a_{4}^{L}; w_{1}^{L}, w_{2}^{L}), (a_{1}^{U}, a_{2}^{U}, a_{3}^{U}, a_{4}^{U}; w_{1}^{U}, w_{2}^{U})],$$ where \(a_{1}^{L}, a_{2}^{L}, a_{3}^{L}, a_{4}^{L}, a_{1}^{U}, a_{2}^{U}, a_{3}^{U},\) and \(a_{4}^{U} \in R\) (the set of real numbers) are the reference points of the IT2FS, and \(w_{1}^{L}, w_{2}^{L}, w_{1}^{U}\) and \(w_{2}^{U} \in [0,1]\) denote the membership values [17].
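To fix ideas, a trapezoidal IT2FS as written above can be stored as two four-point trapezoids together with their membership heights. The following is a minimal sketch (the class and field names are ours, not from the paper), to be read alongside the aggregation operations defined next:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Trapezoid:
    """A trapezoidal fuzzy number (a1, a2, a3, a4; w1, w2)."""
    a: Tuple[float, float, float, float]  # reference points a1 <= a2 <= a3 <= a4
    w1: float = 1.0                       # membership value w1 in [0, 1]
    w2: float = 1.0                       # membership value w2 in [0, 1]

@dataclass
class IT2FS:
    """An interval type-2 fuzzy set [A^L, A^U] given by its lower and upper trapezoids."""
    lower: Trapezoid
    upper: Trapezoid

# Example instance: [(0.1, 0.2, 0.3, 0.4; 0.8, 0.8), (0.0, 0.2, 0.3, 0.5; 1.0, 1.0)]
A = IT2FS(lower=Trapezoid((0.1, 0.2, 0.3, 0.4), 0.8, 0.8),
          upper=Trapezoid((0.0, 0.2, 0.3, 0.5), 1.0, 1.0))
```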
A trapezoidal IVFS is the special case of a trapezoidal IT2FS in which \(w_{1}^{L} = w_{2}^{L}\) and \(w_{1}^{U} = w_{2}^{U}\). For two interval type-2 fuzzy numbers \(\tilde{A} = [(a_{1}^{L}, a_{2}^{L}, a_{3}^{L}, a_{4}^{L}; w_{1\tilde{A}}^{L}, w_{2\tilde{A}}^{L}), (a_{1}^{U}, a_{2}^{U}, a_{3}^{U}, a_{4}^{U}; w_{1\tilde{A}}^{U}, w_{2\tilde{A}}^{U})]\) and \(\tilde{B} = [(b_{1}^{L}, b_{2}^{L}, b_{3}^{L}, b_{4}^{L}; w_{1\tilde{B}}^{L}, w_{2\tilde{B}}^{L}), (b_{1}^{U}, b_{2}^{U}, b_{3}^{U}, b_{4}^{U}; w_{1\tilde{B}}^{U}, w_{2\tilde{B}}^{U})]\), the aggregation operations are defined as follows [5]:

$$\tilde{A} \oplus \tilde{B} = \left[\left(a_{1}^{L}+b_{1}^{L}, a_{2}^{L}+b_{2}^{L}, a_{3}^{L}+b_{3}^{L}, a_{4}^{L}+b_{4}^{L}; \min(w_{1\tilde{A}}^{L}, w_{1\tilde{B}}^{L}), \min(w_{2\tilde{A}}^{L}, w_{2\tilde{B}}^{L})\right), \left(a_{1}^{U}+b_{1}^{U}, a_{2}^{U}+b_{2}^{U}, a_{3}^{U}+b_{3}^{U}, a_{4}^{U}+b_{4}^{U}; \min(w_{1\tilde{A}}^{U}, w_{1\tilde{B}}^{U}), \min(w_{2\tilde{A}}^{U}, w_{2\tilde{B}}^{U})\right)\right].$$

$$\tilde{A} \otimes \tilde{B} = \left[\left(a_{1}^{L}b_{1}^{L}, a_{2}^{L}b_{2}^{L}, a_{3}^{L}b_{3}^{L}, a_{4}^{L}b_{4}^{L}; \min(w_{1\tilde{A}}^{L}, w_{1\tilde{B}}^{L}), \min(w_{2\tilde{A}}^{L}, w_{2\tilde{B}}^{L})\right), \left(a_{1}^{U}b_{1}^{U}, a_{2}^{U}b_{2}^{U}, a_{3}^{U}b_{3}^{U}, a_{4}^{U}b_{4}^{U}; \min(w_{1\tilde{A}}^{U}, w_{1\tilde{B}}^{U}), \min(w_{2\tilde{A}}^{U}, w_{2\tilde{B}}^{U})\right)\right].$$

For an arbitrary real number k,

$$k\tilde{A} = \tilde{A}k = \begin{cases} \left[\left(ka_{1}^{L}, ka_{2}^{L}, ka_{3}^{L}, ka_{4}^{L}; w_{1\tilde{A}}^{L}, w_{2\tilde{A}}^{L}\right), \left(ka_{1}^{U}, ka_{2}^{U}, ka_{3}^{U}, ka_{4}^{U}; w_{1\tilde{A}}^{U}, w_{2\tilde{A}}^{U}\right)\right] & \text{if } k \ge 0,\\[4pt] \left[\left(ka_{4}^{L}, ka_{3}^{L}, ka_{2}^{L}, ka_{1}^{L}; w_{1\tilde{A}}^{L}, w_{2\tilde{A}}^{L}\right), \left(ka_{4}^{U}, ka_{3}^{U}, ka_{2}^{U}, ka_{1}^{U}; w_{1\tilde{A}}^{U}, w_{2\tilde{A}}^{U}\right)\right] & \text{if } k \le 0. \end{cases}$$
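These aggregation rules translate directly into code. A sketch building on the IT2FN class above (function names are ours):

```python
def it2_add(A: IT2FN, B: IT2FN) -> IT2FN:
    """A ⊕ B: add reference points; combine membership values by min."""
    return IT2FN(
        tuple(x + y for x, y in zip(A.lower, B.lower)),
        tuple(x + y for x, y in zip(A.upper, B.upper)),
        tuple(min(u, v) for u, v in zip(A.w_lower, B.w_lower)),
        tuple(min(u, v) for u, v in zip(A.w_upper, B.w_upper)),
    )

def it2_mul(A: IT2FN, B: IT2FN) -> IT2FN:
    """A ⊗ B: multiply reference points; combine membership values by min."""
    return IT2FN(
        tuple(x * y for x, y in zip(A.lower, B.lower)),
        tuple(x * y for x, y in zip(A.upper, B.upper)),
        tuple(min(u, v) for u, v in zip(A.w_lower, B.w_lower)),
        tuple(min(u, v) for u, v in zip(A.w_upper, B.w_upper)),
    )

def it2_scale(k: float, A: IT2FN) -> IT2FN:
    """k·A: scale the reference points, reversing their order when k < 0."""
    lo = tuple(k * x for x in A.lower)
    up = tuple(k * x for x in A.upper)
    if k < 0:
        lo, up = lo[::-1], up[::-1]
    return IT2FN(lo, up, A.w_lower, A.w_upper)
```

The averaging used later in Steps 1 and 2 of the proposed method is then `it2_scale(1/k, ...)` applied to a fold of `it2_add` over the k decision makers' matrices.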
2.2 Distance measure

2.2.1 Concept of distance measures

Distance measures have been extensively studied due to their applications in multiple areas, e.g., risk analysis, data mining, signal processing, and pattern recognition [26, 27]. A distance measure depicts the difference between two fuzzy sets. Computing the difference between two fuzzy sets as a crisp number is crucial for ranking and preference. Yet, since the distance is computed in an imprecise domain, the vagueness itself raises a rational problem [13].

Definition 2.2.1.1 [13] A real function d is called a metric distance if it satisfies the following properties:

1. \(d(\tilde{A}, \tilde{B}) \ge 0\) for any two IT2FSs \(\tilde{A}\) and \(\tilde{B}\);
2. \(d(\tilde{A}, \tilde{B}) = d(\tilde{B}, \tilde{A})\) for any two IT2FSs \(\tilde{A}\) and \(\tilde{B}\);
3. \(d(\tilde{A}, \tilde{B}) + d(\tilde{B}, \tilde{C}) \ge d(\tilde{A}, \tilde{C})\) for any three IT2FSs \(\tilde{A}\), \(\tilde{B}\), and \(\tilde{C}\).

Remark [13] For any three IT2FSs \(\tilde{A}\), \(\tilde{B}\), and \(\tilde{C}\), we write \(\tilde{A} \prec \tilde{B} \prec \tilde{C}\) if and only if \(d(\tilde{A}, \tilde{B}) < d(\tilde{A}, \tilde{C})\), which means that \(\tilde{B}\) is closer to \(\tilde{A}\) than \(\tilde{C}\) is.

Most of the distance measures for IT2FSs are generalizations of the distances used for crisp sets, replacing the characteristic functions by the membership functions, e.g., the normalized Hamming distance, the normalized Euclidean distance, and the normalized Hamming distance based on the Hausdorff metric. Heidarzade et al. [13] demonstrated that these three distance measures are not appropriate for IT2FSs. Figueroa-Garcia and Hernandez-Perez [10] proposed a distance measure for triangular IT2FSs, i.e., those with triangular lower and upper membership functions, using their decomposition into α-cuts. However, the α-cut-based distance can be restrictive in applications that require high computational effort [9]. For this reason, Figueroa-Garcia et al. [9] proposed centroid-based distance measures for triangular IT2FSs; however, centroids are a form of defuzzification, regardless of the formula used for measuring distance based on them. Heidarzade et al. [13] proposed a distance measure for IT2FSs whose algorithm assumes n embedded type-1 fuzzy numbers within the surface of the footprint of uncertainty (FOU). By increasing n, the surface of the FOU is embedded with more type-1 fuzzy sets; therefore, the value of n has a direct impact on the difference between the upper and lower membership functions of the IT2FS. For details on distance measures for IT2FSs, the reader is referred to Zhang et al. [32], Figueroa-Garcia et al. [9], and Heidarzade et al. [13].

2.3 The classical TOPSIS

Given a decision matrix D that contains n alternatives and m attributes, the TOPSIS method, as initially developed by Hwang and Yoon [14], can be summarized as follows.

Step 1. Form the normalized decision matrix. In this step, the various attribute dimensions are transformed into non-dimensional attributes to allow comparison across attributes.

Step 2. Form the weighted normalized decision matrix. Since the criteria cannot be assumed to be of equal importance, the decision makers assign a set of weights \(w = (w_1, w_2, \ldots, w_m)\) to the criteria, and each criterion is multiplied by its associated weight.

Step 3. Determine the positive and negative ideal solutions:

$$\begin{aligned} v^{+} &= \left\{ \left(\max_j v_{ij} \mid i \in F_b\right), \left(\min_j v_{ij} \mid i \in F_c\right) \,\middle|\, i = 1, 2, \ldots, m \right\} = (v_1^{+}, v_2^{+}, \ldots, v_m^{+}),\\ v^{-} &= \left\{ \left(\min_j v_{ij} \mid i \in F_b\right), \left(\max_j v_{ij} \mid i \in F_c\right) \,\middle|\, i = 1, 2, \ldots, m \right\} = (v_1^{-}, v_2^{-}, \ldots, v_m^{-}), \end{aligned}$$

where \(F_b\) are the benefit criteria and \(F_c\) are the cost criteria.

Step 4. Calculate the separation measures.
The Euclidean distance is used to compute the separation of each alternative from the ideal solutions:

$$S_j^{+} = \sqrt{\sum_{i=1}^{m} (v_{ij} - v_i^{+})^2}, \qquad S_j^{-} = \sqrt{\sum_{i=1}^{m} (v_{ij} - v_i^{-})^2}, \qquad j = 1, 2, \ldots, n.$$

Step 5. Calculate the relative closeness to the ideal solution:

$$R_j = \frac{S_j^{-}}{S_j^{+} + S_j^{-}}, \qquad j = 1, 2, \ldots, n.$$

Step 6. Rank the preference order. The alternatives are ranked in descending order of relative closeness.

3 The extended vertex method

Chen [2] extended the TOPSIS method for group decision making under a fuzzy environment. To find the relative degree of closeness, the distance to both the PIS and the NIS must be calculated. Chen [2] proposed the vertex method to calculate the distance between two triangular type-1 fuzzy numbers. The vertex method is defined as follows [2]: for two triangular type-1 fuzzy numbers \(\tilde{m} = (m_1, m_2, m_3)\) and \(\tilde{n} = (n_1, n_2, n_3)\), the distance between them is

$$d(\tilde{m}, \tilde{n}) = \sqrt{\tfrac{1}{3}\left[(m_1 - n_1)^2 + (m_2 - n_2)^2 + (m_3 - n_3)^2\right]}.$$

Generalizing the concept to handle IT2FSs, the extended vertex method is defined as follows. Consider the fuzzy numbers \(\tilde{A} = [(a_1^L, a_2^L, a_3^L, a_4^L; w_{1\tilde{A}}^L, w_{2\tilde{A}}^L), (a_1^U, a_2^U, a_3^U, a_4^U; w_{1\tilde{A}}^U, w_{2\tilde{A}}^U)]\) and \(\tilde{B} = [(b_1^L, b_2^L, b_3^L, b_4^L; w_{1\tilde{B}}^L, w_{2\tilde{B}}^L), (b_1^U, b_2^U, b_3^U, b_4^U; w_{1\tilde{B}}^U, w_{2\tilde{B}}^U)]\). The sum of the squared distances between the vertices of the lower membership functions, \((a_1^L, 0), (a_2^L, w_{1\tilde{A}}^L), (a_3^L, w_{2\tilde{A}}^L), (a_4^L, 0)\) and \((b_1^L, 0), (b_2^L, w_{1\tilde{B}}^L), (b_3^L, w_{2\tilde{B}}^L), (b_4^L, 0)\), is

$$d^2(\tilde{A}^L, \tilde{B}^L) = (a_1^L - b_1^L)^2 + (a_2^L - b_2^L)^2 + (w_{1\tilde{A}}^L - w_{1\tilde{B}}^L)^2 + (a_3^L - b_3^L)^2 + (w_{2\tilde{A}}^L - w_{2\tilde{B}}^L)^2 + (a_4^L - b_4^L)^2.$$

The sum of the squared distances between the vertices of the upper membership functions, \((a_1^U, 0), (a_2^U, w_{1\tilde{A}}^U), (a_3^U, w_{2\tilde{A}}^U), (a_4^U, 0)\) and \((b_1^U, 0), (b_2^U, w_{1\tilde{B}}^U), (b_3^U, w_{2\tilde{B}}^U), (b_4^U, 0)\), is

$$d^2(\tilde{A}^U, \tilde{B}^U) = (a_1^U - b_1^U)^2 + (a_2^U - b_2^U)^2 + (w_{1\tilde{A}}^U - w_{1\tilde{B}}^U)^2 + (a_3^U - b_3^U)^2 + (w_{2\tilde{A}}^U - w_{2\tilde{B}}^U)^2 + (a_4^U - b_4^U)^2.$$
For the two interval type-2 fuzzy numbers \(\tilde{A}\) and \(\tilde{B}\) above, the distance between them is given by the formula

$$d(\tilde{A}, \tilde{B}) = \sqrt{\tfrac{1}{8}\left[\begin{aligned} & (a_1^L - b_1^L)^2 + (a_2^L - b_2^L)^2 + (a_3^L - b_3^L)^2 + (a_4^L - b_4^L)^2 + (a_1^U - b_1^U)^2 + (a_2^U - b_2^U)^2 + (a_3^U - b_3^U)^2 + (a_4^U - b_4^U)^2 \\ & \quad + (w_{1\tilde{A}}^L - w_{1\tilde{B}}^L)^2 + (w_{2\tilde{A}}^L - w_{2\tilde{B}}^L)^2 + (w_{1\tilde{A}}^U - w_{1\tilde{B}}^U)^2 + (w_{2\tilde{A}}^U - w_{2\tilde{B}}^U)^2 \end{aligned}\right]}.$$

The metric-distance properties \(d(\tilde{A}, \tilde{B}) \ge 0\), \(d(\tilde{A}, \tilde{B}) = d(\tilde{B}, \tilde{A})\), and \(d(\tilde{A}, \tilde{B}) + d(\tilde{B}, \tilde{C}) \ge d(\tilde{A}, \tilde{C})\) follow directly from the formula. If \(\tilde{A}\) and \(\tilde{B}\) are real numbers, the distance measure reduces to the Euclidean distance: if \(\tilde{A}\) is a real number, then \(a_1^L = a_2^L = a_3^L = a_4^L = a_1^U = a_2^U = a_3^U = a_4^U = a\) and \(w_{1\tilde{A}}^L = w_{2\tilde{A}}^L = w_{1\tilde{A}}^U = w_{2\tilde{A}}^U = 1\); similarly, if \(\tilde{B}\) is a real number, then all its reference points equal b and all its membership values equal 1. Substituting in the extended vertex formula gives

$$d(\tilde{A}, \tilde{B}) = \sqrt{\tfrac{1}{8}\left[8(a - b)^2\right]} = |a - b|.$$
Two IT2FSs \(\tilde{A}\) and \(\tilde{B}\) are identical if and only if \(d(\tilde{A}, \tilde{B}) = 0\).

(i) Let \(\tilde{A} = \tilde{B}\). Then \(a_k^L = b_k^L\) and \(a_k^U = b_k^U\) for \(k = 1, \ldots, 4\), and the corresponding membership values coincide: \(w_{1\tilde{A}}^L = w_{1\tilde{B}}^L\), \(w_{2\tilde{A}}^L = w_{2\tilde{B}}^L\), \(w_{1\tilde{A}}^U = w_{1\tilde{B}}^U\), and \(w_{2\tilde{A}}^U = w_{2\tilde{B}}^U\). Substituting in the vertex formula gives \(d(\tilde{A}, \tilde{B}) = 0\).

(ii) Conversely, if \(d(\tilde{A}, \tilde{B}) = 0\), then every squared term under the root must vanish, which implies \(a_k^L = b_k^L\) and \(a_k^U = b_k^U\) for \(k = 1, \ldots, 4\) together with the equality of all membership values; hence \(\tilde{A} = \tilde{B}\).
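The extended vertex distance is a short computation over the eight reference points and four membership values. A sketch building on the IT2FN class above, with the crisp-number reduction derived above used as a sanity check:

```python
from math import sqrt

def vertex_distance(A: IT2FN, B: IT2FN) -> float:
    """Extended vertex distance: 1/8-normalized sum of squared differences
    over the eight reference points and four membership values."""
    sq = sum((x - y) ** 2 for x, y in zip(A.lower + A.upper, B.lower + B.upper))
    sq += sum((u - v) ** 2
              for u, v in zip(A.w_lower + A.w_upper, B.w_lower + B.w_upper))
    return sqrt(sq / 8.0)

def crisp(x: float) -> IT2FN:
    """Embed a real number as a degenerate IT2FN."""
    return IT2FN((x, x, x, x), (x, x, x, x), (1.0, 1.0), (1.0, 1.0))

# Reduction to |a - b| for real numbers, as derived above:
assert abs(vertex_distance(crisp(0.3), crisp(0.8)) - 0.5) < 1e-12
```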
Wu and Mendel [29] proposed a vocabulary of 32 words for computing with words. They can be grouped into three classes. Class one contains the small-sounding words (none to very little, teeny-weeny, a smidgen, tiny, very small, very little, a bit, little, low amount, small, and somewhat small). Class two contains the medium-sounding words (some, some to moderate, moderate amount, fair amount, medium, modest amount, and good amount). Class three contains the large-sounding words (sizeable, quite a bit, considerable amount, substantial amount, a lot, high amount, very sizeable, large, very large, humongous amount, huge amount, very high amount, extreme amount, and maximum amount). For the corresponding IT2FSs of these words, the reader is referred to Wu and Mendel [29] or Heidarzade et al. [13].

The performance of the extended vertex method is compared with that of the signed distance and of the method of Heidarzade et al. [13]. Both the signed distance and the method of Heidarzade et al. [13] for trapezoidal IT2FSs are restricted to normalized IT2FSs with equal membership values, i.e., \(w_1^L = w_2^L\) and \(w_1^U = w_2^U\); the extended vertex method computes the distance between any two IT2FSs, normalized or not, and the membership values need not be equal. The distances between the FOU of the first word (none to very little) and the FOUs of the other 31 words are computed. Since the words are ranked from the smallest to the largest according to Wu and Mendel's centroid-based ranking, the distances should be in increasing order. The results are given in Table 1.

Table 1 Distances D(none to very little, ·) from the first word to each of the other 31 word FOUs (teeny-weeny through maximum amount), as computed by the signed distance, the method of Heidarzade et al. [13], and the extended vertex method. (The numeric entries of the table are not reproduced here.)

From Table 1, the distances obtained by the method of Heidarzade et al. [13] are in increasing order. The distances obtained by the signed distance are also in increasing order, except for the word pairs very small/very little, little/low amount, and substantial amount/a lot. The distances obtained by the extended vertex method are likewise in increasing order, except for the pairs fair amount/medium, substantial amount/a lot, and very sizeable/large. In every such case the discrepancy was at most 0.2 on the 0-to-10 scale. This is acceptable for fuzzy sets, as these words are nearly equivalent when defuzzified, and their order may differ according to the defuzzification technique used. The results indicate that the extended vertex method is appropriate for measuring distances between IT2FSs.

Regarding computational complexity, the extended vertex method has the shortest processing time. When implemented in MATLAB on a PC (Intel(R) Core(TM) i3-6100 CPU @ 3.70 GHz), the processing times were as follows: the signed distance takes 1.8 × 10⁻⁵ s; the method of Heidarzade et al. [13] takes 3.3 × 10⁻⁵ s using the minimum number of embedded T1FSs (only the upper and lower membership functions), while in practice a large number of embedded T1FSs is used, e.g., 10 or 20; and the extended vertex method takes 1.6 × 10⁻⁵ s. Since the distance between IT2FSs is calculated 2mn times in IT2F-TOPSIS (for m criteria and n alternatives), the extended vertex method reduces the computations.

4 The proposed TOPSIS

TOPSIS was first introduced by Hwang and Yoon [14] for real-valued data. Later, Chen [2] extended the method to the fuzzy environment using T1FSs, and Chen and Lee [5] modified the technique to use IT2FSs. Recent research has been devoted to fuzzy extensions of TOPSIS, but only a few studies handle IT2FSs [8]. In this section, the TOPSIS method is modified. First, the PIS and NIS proposed by Rashid et al. [23] for IVFSs are extended to IT2FSs. Then, the extended vertex method is used to calculate the distances between the alternatives and the ideal solutions.
For a MAGDM problem, let \(X = \{x_1, x_2, \ldots, x_n\}\) be the set of n alternatives and \(F = \{f_1, f_2, \ldots, f_m\}\) the set of m attributes, in the presence of k decision makers \(D_1, D_2, \ldots, D_k\). The set of attributes F is divided into two sets, the set of benefit attributes \(F_b\) and the set of cost attributes \(F_c\), such that \(F_b \cap F_c = \emptyset\). It is assumed that the data used are normalized, i.e., the fuzzy sets lie in the interval [0, 1].

Step 1: Construct the fuzzy decision matrices and the average decision matrix, where \(\tilde{f}_{ij} = \left(\frac{\tilde{f}_{ij}^{1} \oplus \tilde{f}_{ij}^{2} \oplus \cdots \oplus \tilde{f}_{ij}^{k}}{k}\right)\) is an IT2FS, \(1 \le i \le m\), \(1 \le j \le n\), and \(1 \le p \le k\).

Step 2: Construct the weighting matrices and the average weighting matrix, where \(\tilde{w}_{i} = \left(\frac{\tilde{w}_{i}^{1} \oplus \tilde{w}_{i}^{2} \oplus \cdots \oplus \tilde{w}_{i}^{k}}{k}\right)\) is an IT2FS, \(1 \le i \le m\) and \(1 \le p \le k\).

Step 3: Construct the weighted normalized decision matrix, where \(\tilde{v}_{ij} = \tilde{w}_{i} \otimes \tilde{f}_{ij}\), \(1 \le i \le m\) and \(1 \le j \le n\).

Step 4: Define the fuzzy PIS \(\tilde{\mathbf{V}}^{+}\) and the fuzzy NIS \(\tilde{\mathbf{V}}^{-}\), where

$$\tilde{v}_{i}^{+} = \begin{cases} \left[\left(\max_j v_{1ij}^{L}, \max_j v_{2ij}^{L}, \max_j v_{3ij}^{L}, \max_j v_{4ij}^{L}; \max_j w_{1ij}^{L}, \max_j w_{2ij}^{L}\right), \left(\max_j v_{1ij}^{U}, \max_j v_{2ij}^{U}, \max_j v_{3ij}^{U}, \max_j v_{4ij}^{U}; \max_j w_{1ij}^{U}, \max_j w_{2ij}^{U}\right)\right] & \text{if } f_i \in F_b,\\[4pt] \left[\left(\min_j v_{1ij}^{L}, \min_j v_{2ij}^{L}, \min_j v_{3ij}^{L}, \min_j v_{4ij}^{L}; \min_j w_{1ij}^{L}, \min_j w_{2ij}^{L}\right), \left(\min_j v_{1ij}^{U}, \min_j v_{2ij}^{U}, \min_j v_{3ij}^{U}, \min_j v_{4ij}^{U}; \min_j w_{1ij}^{U}, \min_j w_{2ij}^{U}\right)\right] & \text{if } f_i \in F_c, \end{cases}$$

and \(\mathbf{V}^{+} = [\tilde{v}_1^{+}\; \tilde{v}_2^{+}\; \ldots\; \tilde{v}_m^{+}]\).
$$\tilde{v}_{i}^{-} = \begin{cases} \left[\left(\min_j v_{1ij}^{L}, \min_j v_{2ij}^{L}, \min_j v_{3ij}^{L}, \min_j v_{4ij}^{L}; \min_j w_{1ij}^{L}, \min_j w_{2ij}^{L}\right), \left(\min_j v_{1ij}^{U}, \min_j v_{2ij}^{U}, \min_j v_{3ij}^{U}, \min_j v_{4ij}^{U}; \min_j w_{1ij}^{U}, \min_j w_{2ij}^{U}\right)\right] & \text{if } f_i \in F_b,\\[4pt] \left[\left(\max_j v_{1ij}^{L}, \max_j v_{2ij}^{L}, \max_j v_{3ij}^{L}, \max_j v_{4ij}^{L}; \max_j w_{1ij}^{L}, \max_j w_{2ij}^{L}\right), \left(\max_j v_{1ij}^{U}, \max_j v_{2ij}^{U}, \max_j v_{3ij}^{U}, \max_j v_{4ij}^{U}; \max_j w_{1ij}^{U}, \max_j w_{2ij}^{U}\right)\right] & \text{if } f_i \in F_c, \end{cases}$$

and \(\mathbf{V}^{-} = [\tilde{v}_1^{-}\; \tilde{v}_2^{-}\; \ldots\; \tilde{v}_m^{-}]\).

Step 5: Construct the ideal separation matrix \(\mathbf{S}^{+}\) and the anti-ideal separation matrix \(\mathbf{S}^{-}\) using the extended vertex method for distance measure.

Step 6: Calculate the relative degree of closeness of each alternative to the ideal solution and rank:

$$R(x_j) = \frac{S^{-}(x_j)}{S^{+}(x_j) + S^{-}(x_j)}.$$
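Steps 4–6 admit a compact implementation. The sketch below builds on the IT2FN and vertex_distance sketches above (the function name and signature are our own) and takes the already averaged and weighted decision matrix of Step 3 as input:

```python
from typing import List

def it2_topsis(V: List[List[IT2FN]], benefit: List[bool]) -> List[float]:
    """Steps 4-6 of the proposed method.

    V: m x n matrix of weighted ratings (m attributes, n alternatives);
    benefit[i]: True if attribute i is a benefit attribute, False if cost.
    Returns the relative closeness R(x_j) of each alternative.
    """
    m, n = len(V), len(V[0])

    def ideal(row: List[IT2FN], take_max: bool) -> IT2FN:
        # Component-wise max (or min) over the alternatives of one attribute.
        pick = max if take_max else min
        def comp(parts):
            return tuple(pick(c) for c in zip(*parts))
        return IT2FN(comp([v.lower for v in row]), comp([v.upper for v in row]),
                     comp([v.w_lower for v in row]), comp([v.w_upper for v in row]))

    v_pos = [ideal(V[i], benefit[i]) for i in range(m)]        # Step 4: fuzzy PIS
    v_neg = [ideal(V[i], not benefit[i]) for i in range(m)]    # Step 4: fuzzy NIS

    closeness = []
    for j in range(n):                                         # Steps 5-6
        s_pos = sum(vertex_distance(V[i][j], v_pos[i]) for i in range(m))
        s_neg = sum(vertex_distance(V[i][j], v_neg[i]) for i in range(m))
        closeness.append(s_neg / (s_pos + s_neg))
    return closeness
```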
5 Illustrative examples

In this section, two examples are solved. The first example is due to Rashid et al. [23]; the second is due to Chen and Hong [3]. The results of the proposed IT2F-TOPSIS are compared with their results.

5.1 Example 1

A manufacturing company intends to purchase robots for material handling. Decision makers D1, D2, and D3 evaluate three robots on six attributes: man–machine interface (f1), programming flexibility (f2), vendor's service contract (f3), purchase cost (f4), load capacity (f5), and positioning accuracy (f6). The benefit attributes are f1, f2, f3, and f5; the cost attributes are f4 and f6. Three of the attributes are subjective (f1, f2, f3), while the other three are objective (f4, f5, f6). Let \(X = \{x_1, x_2, x_3\}\) be the set of alternatives and F = {f1, f2, f3, f4, f5, f6} the set of attributes. The decision makers use two sets of linguistic terms: the weighting set W = {Very Low (VL), Low (L), Medium (M), High (H), Very High (VH)} and the rating set R = {Very Poor (VP), Poor (P), Fair (F), Good (G), Very Good (VG)}. The objective attributes are denoted by \(O_i^j\) in the decision matrices. For brevity, only the elements of the weighted normalized decision matrix are given; for the interpretation of the linguistic variables in terms of IT2FSs and complete details, see Rashid et al. [23].

Step 1: Construct the decision matrices and the average decision matrix.

Step 2: Construct the weighting matrices and the average weighting matrix.

Step 3: Construct the weighted normalized decision matrix:

$$\begin{aligned} \tilde{v}_{11} &= [(0.5408, 0.6094, 0.7157, 0.7516; 0.8, 0.8), (0.4464, 0.5577, 0.7727, 0.8577; 1, 1)],\\ \tilde{v}_{12} &= [(0.5408, 0.6094, 0.7157, 0.7516; 0.8, 0.8), (0.4464, 0.5577, 0.7727, 0.8577; 1, 1)],\\ \tilde{v}_{13} &= [(0.7264, 0.7981, 0.8818, 0.9080; 0.8, 0.8), (0.6446, 0.7564, 0.9287, 0.9900; 1, 1)],\\ \tilde{v}_{21} &= [(0.4965, 0.5469, 0.6216, 0.6484; 0.8, 0.8), (0.4285, 0.5104, 0.6619, 0.7233; 1, 1)],\\ \tilde{v}_{22} &= [(0.8046, 0.8705, 0.9244, 0.9390; 0.8, 0.8), (0.7470, 0.8426, 0.9569, 0.9900; 1, 1)],\\ \tilde{v}_{23} &= [(0.6408, 0.7040, 0.7778, 0.8010; 0.8, 0.8), (0.5704, 0.6673, 0.8193, 0.8733; 1, 1)],\\ \tilde{v}_{31} &= [(0.1622, 0.2019, 0.2772, 0.3082; 0.8, 0.8), (0.1059, 0.1686, 0.3192, 0.3978; 1, 1)],\\ \tilde{v}_{32} &= [(0.2180, 0.2644, 0.3416, 0.3727; 0.8, 0.8), (0.1534, 0.2286, 0.3837, 0.4590; 1, 1)],\\ \tilde{v}_{33} &= [(0.2399, 0.2870, 0.3755, 0.4108; 0.8, 0.8), (0.1682, 0.2465, 0.4236, 0.5100; 1, 1)],\\ \tilde{v}_{41} &= [(0.3649, 0.4170, 0.4967, 0.5262; 0.8, 0.8), (0.2897, 0.3737, 0.5378, 0.6048; 1, 1)],\\ \tilde{v}_{42} &= [(0.3798, 0.4331, 0.5181, 0.5510; 0.8, 0.8), (0.2977, 0.3841, 0.5648, 0.6358; 1, 1)],\\ \tilde{v}_{43} &= [(0.3880, 0.4393, 0.5295, 0.5632; 0.8, 0.8), (0.3049, 0.3924, 0.5774, 0.6500; 1, 1)],\\ \tilde{v}_{51} &= [(0.8324, 0.8747, 0.9198, 0.9363; 0.8, 0.8), (0.7938, 0.8474, 0.9546, 0.9900; 1, 1)],\\ \tilde{v}_{52} &= [(0.7551, 0.7944, 0.8278, 0.8436; 0.8, 0.8), (0.7194, 0.7693, 0.8610, 0.8852; 1, 1)],\\ \tilde{v}_{53} &= [(0.7466, 0.7855, 0.8278, 0.8436; 0.8, 0.8), (0.7111, 0.7605, 0.8797, 0.9043; 1, 1)],\\ \tilde{v}_{61} &= [(0.6203, 0.6705, 0.7673, 0.8137; 0.8, 0.8), (0.5563, 0.6047, 0.8608, 0.9800; 1, 1)],\\ \tilde{v}_{62} &= [(0.4926, 0.5282, 0.5865, 0.6238; 0.8, 0.8), (0.4270, 0.4703, 0.6528, 0.7000; 1, 1)],\\ \tilde{v}_{63} &= [(0.4785, 0.5127, 0.5580, 0.5776; 0.8, 0.8), (0.4157, 0.4576, 0.5916, 0.6322; 1, 1)]. \end{aligned}$$

Step 4: Define the fuzzy positive ideal solution \(\mathbf{V}^{+} = [\tilde{v}_1^{+}\; \tilde{v}_2^{+}\; \tilde{v}_3^{+}\; \tilde{v}_4^{+}\; \tilde{v}_5^{+}\; \tilde{v}_6^{+}]\) and the fuzzy negative ideal solution \(\mathbf{V}^{-} = [\tilde{v}_1^{-}\; \tilde{v}_2^{-}\; \tilde{v}_3^{-}\; \tilde{v}_4^{-}\; \tilde{v}_5^{-}\; \tilde{v}_6^{-}]\).
$$\begin{aligned} \tilde{v}_{1}^{+} &= [(0.7264, 0.7981, 0.8818, 0.9080; 0.8, 0.8), (0.6446, 0.7564, 0.9287, 0.9900; 1, 1)],\\ \tilde{v}_{2}^{+} &= [(0.8046, 0.8705, 0.9244, 0.9390; 0.8, 0.8), (0.7470, 0.8426, 0.9569, 0.9900; 1, 1)],\\ \tilde{v}_{3}^{+} &= [(0.2399, 0.2870, 0.3755, 0.4108; 0.8, 0.8), (0.1682, 0.2465, 0.4236, 0.5100; 1, 1)],\\ \tilde{v}_{4}^{+} &= [(0.3649, 0.4170, 0.4967, 0.5262; 0.8, 0.8), (0.2897, 0.3737, 0.5378, 0.6048; 1, 1)],\\ \tilde{v}_{5}^{+} &= [(0.8324, 0.8747, 0.9198, 0.9363; 0.8, 0.8), (0.7938, 0.8474, 0.9546, 0.9900; 1, 1)],\\ \tilde{v}_{6}^{+} &= [(0.4785, 0.5127, 0.5580, 0.5776; 0.8, 0.8), (0.4157, 0.4576, 0.5916, 0.6322; 1, 1)],\\ \tilde{v}_{1}^{-} &= [(0.5408, 0.6094, 0.7157, 0.7516; 0.8, 0.8), (0.4464, 0.5577, 0.7727, 0.8577; 1, 1)],\\ \tilde{v}_{2}^{-} &= [(0.4965, 0.5469, 0.6216, 0.6484; 0.8, 0.8), (0.4285, 0.5104, 0.6619, 0.7233; 1, 1)],\\ \tilde{v}_{3}^{-} &= [(0.1622, 0.2019, 0.2772, 0.3082; 0.8, 0.8), (0.1059, 0.1686, 0.3192, 0.3978; 1, 1)],\\ \tilde{v}_{4}^{-} &= [(0.3880, 0.4393, 0.5295, 0.5632; 0.8, 0.8), (0.3049, 0.3924, 0.5774, 0.6500; 1, 1)],\\ \tilde{v}_{5}^{-} &= [(0.7466, 0.7855, 0.8278, 0.8436; 0.8, 0.8), (0.7111, 0.7605, 0.8610, 0.8852; 1, 1)],\\ \tilde{v}_{6}^{-} &= [(0.6203, 0.6705, 0.7673, 0.8137; 0.8, 0.8), (0.5563, 0.6047, 0.8608, 0.9800; 1, 1)]. \end{aligned}$$

Step 5: Construct the ideal and anti-ideal separation matrices using the extended vertex method for distance measure:

$$\begin{aligned} \sum_{i=1}^{m} d(\tilde{v}_{i1}, \tilde{v}_i^{+}) &= 0.1745 + 0.3053 + 0.0915 + 0 + 0 + 0.2178 = 0.7891,\\ \sum_{i=1}^{m} d(\tilde{v}_{i2}, \tilde{v}_i^{+}) &= 0.1745 + 0 + 0.0323 + 0.0203 + 0.0872 + 0.0388 = 0.3531,\\ \sum_{i=1}^{m} d(\tilde{v}_{i3}, \tilde{v}_i^{+}) &= 0 + 0.1539 + 0 + 0.0306 + 0.0864 + 0 = 0.2709. \end{aligned}$$

$$\begin{aligned} \sum_{i=1}^{m} d(\tilde{v}_{i1}, \tilde{v}_i^{-}) &= 0 + 0 + 0 + 0.0306 + 0.0864 + 0 = 0.1170,\\ \sum_{i=1}^{m} d(\tilde{v}_{i2}, \tilde{v}_i^{-}) &= 0 + 0.3053 + 0.0602 + 0.0104 + 0.0112 + 0.1809 = 0.5680,\\ \sum_{i=1}^{m} d(\tilde{v}_{i3}, \tilde{v}_i^{-}) &= 0.1745 + 0.1522 + 0.0915 + 0 + 0 + 0.2178 = 0.6360. \end{aligned}$$

Step 6: Calculate the relative degree of closeness of each alternative to the fuzzy ideal solutions:

$$R(x_1) = \frac{0.1170}{0.1170 + 0.7891} = 0.1291, \quad R(x_2) = \frac{0.5680}{0.5680 + 0.3531} = 0.6167, \quad R(x_3) = \frac{0.6360}{0.6360 + 0.2709} = 0.7013.$$

From the results, \(R(x_3) > R(x_2) > R(x_1)\), so the ranking is \(x_3 > x_2 > x_1\) and x3 is the best alternative. The result coincides with the results of Rashid et al. [23].
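As a quick numerical check, the closeness coefficients above can be reproduced from the separation sums of Step 5 (a throwaway snippet; the dictionary keys are just labels):

```python
S_pos = {"x1": 0.7891, "x2": 0.3531, "x3": 0.2709}
S_neg = {"x1": 0.1170, "x2": 0.5680, "x3": 0.6360}
R = {x: S_neg[x] / (S_pos[x] + S_neg[x]) for x in S_pos}
print(R)  # {'x1': 0.1291..., 'x2': 0.6167..., 'x3': 0.7013...}  ->  x3 > x2 > x1
```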
5.2 Example 2

A company wants to hire a system analyst. Three decision makers D1, D2, and D3 rate the applicants on two attributes: emotional steadiness (f1) and oral communication skills (f2). The decision makers use two sets of linguistic terms: the weighting set W = {Very Low (VL), Low (L), Medium-Low (ML), Medium (M), Medium-High (MH), High (H), Very High (VH)} and the rating set R = {Very Poor (VP), Poor (P), Medium Poor (MP), Medium (M), Medium Good (MG), Good (G), Very Good (VG)}. For the details of the IT2FS representation of the linguistic ratings and the attributes' weights, the reader is referred to Chen and Hong [3]. For comparison, the normalized ratings are used directly in the decision matrices, as given by Chen and Hong [3].

Step 1: Construct the decision matrices and the average decision matrix:

$$\begin{aligned} \tilde{f}_{11} &= [(0, 0.03, 0.03, 0.17; 1, 1), (0, 0.03, 0.03, 0.17; 1, 1)],\\ \tilde{f}_{12} &= [(0.07, 0.23, 0.23, 0.43; 1, 1), (0.07, 0.23, 0.23, 0.43; 1, 1)],\\ \tilde{f}_{13} &= [(0, 0.07, 0.07, 0.1; 1, 1), (0, 0.07, 0.07, 0.1; 1, 1)],\\ \tilde{f}_{14} &= [(0, 0.07, 0.07, 0.1; 1, 1), (0, 0.07, 0.07, 0.1; 1, 1)],\\ \tilde{f}_{21} &= [(0, 0.03, 0.03, 0.17; 1, 1), (0, 0.03, 0.03, 0.17; 1, 1)],\\ \tilde{f}_{22} &= [(0, 0.07, 0.07, 0.1; 1, 1), (0, 0.07, 0.07, 0.1; 1, 1)],\\ \tilde{f}_{23} &= [(0.07, 0.23, 0.23, 0.43; 1, 1), (0.07, 0.23, 0.23, 0.43; 1, 1)],\\ \tilde{f}_{24} &= [(0.3, 0.5, 0.5, 0.7; 1, 1), (0.3, 0.5, 0.5, 0.7; 1, 1)]. \end{aligned}$$

Step 2: Construct the weighting matrices and the average weighting matrix:

$$\begin{aligned} \tilde{w}_{1} &= [(0.63, 0.83, 0.83, 0.97; 1, 1), (0.63, 0.83, 0.83, 0.97; 1, 1)],\\ \tilde{w}_{2} &= [(0.63, 0.83, 0.83, 0.97; 1, 1), (0.63, 0.83, 0.83, 0.97; 1, 1)]. \end{aligned}$$

Step 3: Construct the average weighted decision matrix:
$$\begin{aligned} \tilde{v}_{11} &= [(0, 0.0249, 0.0249, 0.1649; 1, 1), (0, 0.0249, 0.0249, 0.1649; 1, 1)],\\ \tilde{v}_{12} &= [(0.0441, 0.1909, 0.1909, 0.4171; 1, 1), (0.0441, 0.1909, 0.1909, 0.4171; 1, 1)],\\ \tilde{v}_{13} &= [(0, 0.0581, 0.0581, 0.0970; 1, 1), (0, 0.0581, 0.0581, 0.0970; 1, 1)],\\ \tilde{v}_{14} &= [(0, 0.0581, 0.0581, 0.0970; 1, 1), (0, 0.0581, 0.0581, 0.0970; 1, 1)],\\ \tilde{v}_{21} &= [(0, 0.0249, 0.0249, 0.1649; 1, 1), (0, 0.0249, 0.0249, 0.1649; 1, 1)],\\ \tilde{v}_{22} &= [(0, 0.0581, 0.0581, 0.0970; 1, 1), (0, 0.0581, 0.0581, 0.0970; 1, 1)],\\ \tilde{v}_{23} &= [(0.0441, 0.1909, 0.1909, 0.4171; 1, 1), (0.0441, 0.1909, 0.1909, 0.4171; 1, 1)],\\ \tilde{v}_{24} &= [(0.1890, 0.4150, 0.4150, 0.6790; 1, 1), (0.1890, 0.4150, 0.4150, 0.6790; 1, 1)]. \end{aligned}$$

Step 4: Define the fuzzy positive ideal solution \(\mathbf{V}^{+} = [\tilde{v}_1^{+}\; \tilde{v}_2^{+}]\) and the fuzzy negative ideal solution \(\mathbf{V}^{-} = [\tilde{v}_1^{-}\; \tilde{v}_2^{-}]\):

$$\begin{aligned} \tilde{v}_{1}^{+} &= [(0.0441, 0.1909, 0.1909, 0.4171; 1, 1), (0.0441, 0.1909, 0.1909, 0.4171; 1, 1)],\\ \tilde{v}_{2}^{+} &= [(0.1890, 0.4150, 0.4150, 0.6790; 1, 1), (0.1890, 0.4150, 0.4150, 0.6790; 1, 1)],\\ \tilde{v}_{1}^{-} &= [(0, 0.0249, 0.0249, 0.0970; 1, 1), (0, 0.0249, 0.0249, 0.0970; 1, 1)],\\ \tilde{v}_{2}^{-} &= [(0, 0.0249, 0.0249, 0.0970; 1, 1), (0, 0.0249, 0.0249, 0.0970; 1, 1)]. \end{aligned}$$

Step 5: Construct the ideal and anti-ideal separation matrices:

$$\begin{aligned} \sum_{i=1}^{m} d(\tilde{v}_{i1}, \tilde{v}_i^{+}) &= 0.1737 + 0.3887 = 0.5624,\\ \sum_{i=1}^{m} d(\tilde{v}_{i2}, \tilde{v}_i^{+}) &= 0 + 0.3966 = 0.3966,\\ \sum_{i=1}^{m} d(\tilde{v}_{i3}, \tilde{v}_i^{+}) &= 0.1869 + 0.2180 = 0.4049,\\ \sum_{i=1}^{m} d(\tilde{v}_{i4}, \tilde{v}_i^{+}) &= 0.1869 + 0 = 0.1869. \end{aligned}$$

$$\begin{aligned} \sum_{i=1}^{m} d(\tilde{v}_{i1}, \tilde{v}_i^{-}) &= 0.0339 + 0.0339 = 0.0678,\\ \sum_{i=1}^{m} d(\tilde{v}_{i2}, \tilde{v}_i^{-}) &= 0.1997 + 0.0235 = 0.2232,\\ \sum_{i=1}^{m} d(\tilde{v}_{i3}, \tilde{v}_i^{-}) &= 0.0235 + 0.1997 = 0.2232,\\ \sum_{i=1}^{m} d(\tilde{v}_{i4}, \tilde{v}_i^{-}) &= 0.0235 + 0.4119 = 0.4354. \end{aligned}$$

Step 6: Calculate the relative degree of closeness of each alternative to the ideal solution:

$$R(x_1) = \frac{0.0678}{0.0678 + 0.5624} = 0.1076, \quad R(x_2) = \frac{0.2232}{0.2232 + 0.3966} = 0.3601, \quad R(x_3) = \frac{0.2232}{0.2232 + 0.4049} = 0.3554, \quad R(x_4) = \frac{0.4354}{0.4354 + 0.1869} = 0.6997.$$
From the results, \(R(x_4) > R(x_2) > R(x_3) > R(x_1)\), so the ranking is \(x_4 > x_2 > x_3 > x_1\). This ranking coincides with the ranking of Chen and Lee [5]. The ranking of Chen and Hong [3] is \(x_4 > x_2 = x_3 > x_1\); therefore, the best and the worst alternatives are the same as in their result. However, the second alternative is not equivalent to the third: it is preferred to the third by a slight margin. According to Chen and Hong [3], the second and third alternatives should be equally preferred, and this was supported by the results of their proposed IT2F-TOPSIS. As justified by Chen and Hong [3], this is due to the decision makers' similar evaluations of the alternatives with respect to the given attributes, together with the equal weights. The decision makers' evaluations of the second alternative with respect to emotional steadiness (f1) are {MP, MP, P} and with respect to oral communication skills (f2) are {P, VP, P}, while the evaluations of the third alternative with respect to emotional steadiness are {P, VP, P} and with respect to oral communication skills are {MP, MP, P}. Both attributes are given equal weights {MH, H, MH}. This is evident in the average weighted decision matrix, where \(\tilde{v}_{12} = \tilde{v}_{23}\) and \(\tilde{v}_{13} = \tilde{v}_{22}\). On the other hand, the influence of the ideal solutions was not taken into consideration. In this example, the positive ideal solution for emotional steadiness differs from that for oral communication skills, i.e., \(\tilde{v}_1^{+} \ne \tilde{v}_2^{+}\). Consequently, the distance between \(\tilde{v}_{12}\) and \(\tilde{v}_1^{+}\) differs from the distance between \(\tilde{v}_{23}\) and \(\tilde{v}_2^{+}\); similarly, the distance between \(\tilde{v}_{13}\) and \(\tilde{v}_1^{+}\) differs from the distance between \(\tilde{v}_{22}\) and \(\tilde{v}_2^{+}\). This difference affects the ranking and leads to the preference of the second alternative over the third. It confirms that early defuzzification may affect the results and give an incorrect preference, and it emphasizes the role of the defined ideal solutions in ranking.

6 Discussion

When proposing the PIS and the NIS, some researchers use the absolute ideal solutions \(\{v^{+} = (1,1,1,1;1,1)\) and \(v^{-} = (0,0,0,0;1,1)\}\), which represent the perfect PIS and NIS that can be attained. Other researchers use relative ideal solutions, as in the proposed IT2F-TOPSIS, which are related to the performance of the available alternatives with respect to the selected attributes. Resolving the same example with the absolute ideal solutions \(\{\tilde{v}_1^{+} = \tilde{v}_2^{+} = (1,1,1,1;1,1)\) and \(\tilde{v}_1^{-} = \tilde{v}_2^{-} = (0,0,0,0;1,1)\}\), the distances from the ideal solutions are the same for the second and third alternatives, and they are equally preferred.
Subsequently, it can be concluded that the relative ideal solutions are more discriminating than the absolute ideal solutions in IT2F-TOPSIS.

7 Conclusion

In this article, an IT2F-TOPSIS was proposed using the extended vertex method for distance measure. While the existing TOPSIS techniques for IT2FSs depend on the defuzzification of the average decision matrix or the average weighted decision matrix in the early steps, the proposed method maintains fuzziness in the preference technique to avoid the information distortion that might lead to incorrect results. First, the vertex method is extended to cover IT2FSs. This distance measure is a simple formula that requires few computations, whereas the other distance measures are either inappropriate or require extensive computations. The performance of the extended vertex method was examined using the 32 words for computing with words proposed by Wu and Mendel [29]. The results indicate that the method is efficient in measuring the distance between IT2FSs; in addition, it has the shortest processing time among the distance measures for trapezoidal IT2FSs. Second, the fuzzy positive and negative ideal solutions are defined. Then, the relative degree of closeness to the ideal solutions is computed for each alternative; as the relative degree of closeness of an alternative increases, its preference increases. Two illustrative examples were solved using the proposed IT2F-TOPSIS. In the first example, the ranking coincides with the ranking of the IVF-TOPSIS proposed by Rashid et al. [23]. In the second example, the ranking coincides with the ranking of the IT2F-TOPSIS proposed by Chen and Lee [5]. Compared with the ranking of Chen and Hong [3], however, the second alternative was not equivalent to the third one but proved to be better, which can be attributed to maintaining fuzziness throughout the solution steps. It was also found that the defined ideal solutions have an impact on the ranking: the relative ideal solutions are more discriminating than the absolute ideal solutions in IT2F-TOPSIS.

Compliance with ethical standards: The author declares that there is no conflict of interest.

References

[1] Ashtiani B, Haghighirad F, Makui A, Montazer GA (2009) Extension of fuzzy TOPSIS method based on interval-valued fuzzy sets. Appl Soft Comput 9:457–461
[2] Chen C-T (2000) Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets Syst 114:1–9
[3] Chen S-M, Hong J-A (2014) Fuzzy multiple attributes group decision making based on ranking interval type-2 fuzzy sets and the TOPSIS method. IEEE Trans Syst Man Cybern Syst 44(12):1665–1673
[4] Chen S-M, Kuo L-W (2017) Autocratic decision making using group recommendations based on interval type-2 fuzzy sets, enhanced Karnik–Mendel algorithms, and the ordered weighted aggregation operator. Inf Sci 412–413:174–193
[5] Chen S-M, Lee L-W (2010) Fuzzy multiple attributes group-decision making based on the interval type-2 TOPSIS method. Expert Syst Appl 37:2790–2798
[6] Chu T-C, Lin Y-C (2003) A fuzzy TOPSIS method for robot selection. Int J Adv Manuf Technol 21:284–290
[7] Das S, Kar S, Pal T (2016) Robust decision making using intuitionistic fuzzy numbers. Granul Comput 2(1):41–54
[8] Dymova L, Sevastjanov P, Tikhonenko A (2015) An interval type-2 fuzzy extension of the TOPSIS method using alpha cuts. Knowl-Based Syst 83:116–127
[9] Figueroa-Garcia JC, Chalco-Cano Y, Roman-Florez H (2015) Distance measures for interval type-2 fuzzy numbers. Discrete Appl Math 197:93–102
[10] Figueroa-Garcia JC, Hernandez-Perez G (2014) On the computation of the distance between interval type-2 fuzzy numbers using α-cuts. In: IEEE conference on Norbert Wiener in the 21st century
[11] Garg H (2016) A new generalized improved score function of interval-valued intuitionistic fuzzy sets and application in expert systems. Appl Soft Comput 38:988–999
[12] Ghorabaee MK (2016) Developing an MCDM method for robot selection with interval type-2 fuzzy sets. Robot Comput Integr Manuf 37:221–232
[13] Heidarzade A, Mahdavi I, Mahdavi-Amiri N (2016) Supplier selection using a clustering method based on a new distance for interval type-2 fuzzy sets: a case study. Appl Soft Comput 38:213–231
[14] Hwang CL, Yoon K (1981) Multiple attributes decision making methods and applications. Springer, Berlin Heidelberg
[15] Ilieva G (2016) TOPSIS modification with interval type-2 fuzzy numbers. Cybern Inf Technol 16(2):60–68
[16] Jiang Y, Xu Z, Shu Y (2017) Interval-valued intuitionistic multiplicative aggregation in group decision making. Granul Comput 2:387–407
[17] Kahraman C, Öztayşi B, Sari IU, Turanoglu E (2014) Fuzzy analytic hierarchy process with interval type-2 fuzzy sets. Knowl-Based Syst 59:48–57
[18] Kumar K, Garg H (2018) TOPSIS method based on the connection number of set pair analysis under interval-valued intuitionistic fuzzy set environment. Comput Appl Math 37:1319–1329
[19] Liang Q, Mendel J (2000) Interval type-2 fuzzy logic systems: theory and design. IEEE Trans Fuzzy Syst 8(5):535–550
[20] McCulloch J, Wagner C, Aickelin U (2013) Extending similarity measures of interval type-2 fuzzy sets to general type-2 fuzzy sets. In: IEEE international conference on fuzzy systems, Hyderabad, India
[21] Niewiadomski A (2007) Interval-valued and interval type-2 fuzzy sets: a subjective comparison. IEEE, London
[22] Qin J (2017) Interval type-2 fuzzy Hamy mean operators and their application in multiple criteria decision making. Granul Comput 2:249–269
[23] Rashid T, Beg I, Husnine SM (2014) Robot selection by using generalized interval-valued fuzzy numbers with TOPSIS. Appl Soft Comput 21:462–468
[24] Sambuc R (1975) Function Φ-flous, application à l'aide au diagnostic en pathologie thyroidienne. Thèse de Doctorat en Médecine, University of Marseille, Marseille, France
[25] Sharaf IM (2018) TOPSIS with similarity measure for MADM applied to network selection. Comput Appl Math 37(4):4104–4121
[26] Singh P (2014) Some new distance measures for type-2 fuzzy sets and distance measure based ranking for group decision making problems. Front Comput Sci 8(5):741–752
[27] Singh S, Garg H (2017) Distance measures between type-2 intuitionistic fuzzy sets and their applications to multi-criteria decision-making process. Appl Intell 46:788–799
[28] Sola HB, Fernandez J, Hagras H, Herrera F, Pagola M, Barrenechea E (2015) Interval type-2 fuzzy sets: toward a wider view on their relationship. IEEE Trans Fuzzy Syst 23(5):1876–1882
[29] Wu D, Mendel JM (2009) A comparative study of ranking methods, similarity measures and uncertainty measures for interval type-2 fuzzy sets. Inf Sci 179:1169–1192
[30] Xu Z, Gou X (2017) An overview of interval-valued intuitionistic fuzzy information aggregations and applications. Granul Comput 2:13–39
[31] Zadeh LA (1975) The concept of a linguistic variable and its applications to approximate reasoning. Inf Sci 8:199–249
[32] Zhang H, Zhang W, Mei C (2009) Entropy of interval-valued fuzzy sets based on distance and its relationship with similarity measure. Knowl-Based Syst 22:449–454
Structural and Magnetic Phase Transitions in Manganese Arsenide Thin-Films Grown by Molecular Beam Epitaxy

Felix T. Jaeckel

Phase transitions play an important role in many fields of physics and engineering, and their study in bulk materials has a long tradition. Many of the experimental techniques involve measurements of thermodynamically extensive parameters. With the increasing technological importance of thin-film technology, there is a pressing need to find new ways to study phase transitions at smaller length scales, where the traditional methods are insufficient. In this regard, the phase transitions observed in thin films of MnAs present interesting challenges. As a ferromagnetic material that can be grown epitaxially on a variety of technologically important substrates, MnAs is an interesting material for spintronics applications. In the bulk, the first-order transition from the low-temperature ferromagnetic $\alpha$-phase to the $\beta$-phase occurs at 313 K. The magnetic state of the $\beta$-phase has remained controversial. A second-order transition to the paramagnetic $\gamma$-phase takes place at 398 K. In thin films, the anisotropic strain imposed by the substrate leads to the interesting phenomenon of coexistence of $\alpha$- and $\beta$-phases in a regular array of stripes over an extended temperature range.

In this dissertation these phase transitions are studied in films grown by molecular beam epitaxy on GaAs (001). The films are confirmed to be of high structural quality and almost purely in the $A_0$ orientation. A diverse set of experimental techniques, germane to thin-film technology, is used to probe the properties of the films: temperature-dependent X-ray diffraction and atomic-force microscopy (AFM), as well as magnetotransport, give insight into the structural properties, while the anomalous Hall effect is used as a probe of magnetization during the phase transition. In addition, reflectance difference spectroscopy (RDS) is used as a sensitive probe of electronic structure. Inductively coupled plasma etching with BCl$_3$ is demonstrated to be effective for patterning MnAs.

We show that the evolution of electrical resistivity in the coexistence regime of the $\alpha$- and $\beta$-phases can be understood in terms of a simple model. These measurements allow accurate extraction of the order parameter "phase fraction" and thus permit us to study the hysteresis of the phase transition in detail. Major features in the hysteresis can be correlated with the ordering observed in the array of $\alpha$- and $\beta$-stripes. As the continuous ferromagnetic film breaks up into isolated stripes of $\alpha$-phase, a hysteresis in the out-of-plane magnetization is detected from measurements of the anomalous Hall effect. The appearance of out-of-plane domains can be understood from simple shape-anisotropy arguments. Remarkably, an anomaly of the Hall effect at low fields persists far into the $\beta$-phase. Signatures of the more elusive $\beta$- to $\gamma$-transition are found in the temperature dependence of the resistivity, the out-of-plane lattice constant, and the reflectance difference spectra. The transition temperature is significantly lowered compared to the bulk, consistent with the strained state of the material. The negative temperature coefficient of resistivity, as well as its anisotropic changes, lends support to the idea of antiferromagnetic order within the $\beta$-phase.
Committee: Kevin Malloy (Chair), Stephen Boyd, Robert Ducan, Abdel-Rahman El-Emawy

Keywords: Phase Transitions, Manganese Arsenide, Thin-Films, Molecular Beam Epitaxy, Magnetotransport, X-Ray Diffraction, Atomic Force Microscopy, Reflectance Difference Spectroscopy, Phase Coexistence

Recommended citation: Jaeckel, Felix T. "Structural and Magnetic Phase Transitions in Manganese Arsenide Thin-Films Grown by Molecular Beam Epitaxy." (2011). https://digitalrepository.unm.edu/phyc_etds/28
What's chiral about the Dirac points in graphene?

One of the main interesting properties of graphene is the appearance of Dirac points in its energy band structure, i.e. the presence of points where the valence and conduction bands meet at conical intersections, at which the dispersion is (locally) the same as for massless fermions moving at the speed of light. These Dirac points occur at the vertices of the standard hexagonal Brillouin zone, and they come in two sets of three, normally denoted $K$ and $K'$. Surprisingly, however, these two sets (despite looking much the same) turn out to be inequivalent: as far as the lattice translations know, the three $K$ points are exactly the same, but there's nothing in the lattice translation symmetry that relates them to the $K'$ points. Even more interestingly, these two Dirac points are generally referred to as having a specific chirality (as an example, see Phys. Rev. Lett. 107, 166803 (2011)).

Now, I understand why the two are not fully equivalent (as a simple insight, if you rotate the lattice by 60° about a carbon, you don't get the same lattice, which you do after 120°) and why they're related by a reflection symmetry (obvious in the usual hexagonal Brillouin-zone diagram, and also from the fact that if you tag the two inequivalent carbons as in this question, a 60° rotation is equivalent to a reflection on a line); since the two are inequivalent but taken to each other by a mirror symmetry, I understand why the designation as 'chiral' applies.

Nevertheless, I normally associate the word chiral (when restricted to 2D objects) with something that can be used to fix an orientation of the plane, or, in other words, something associated with a direction of rotation of that plane, and this is where my intuitive understanding of these $K$ and $K'$ points stops. So, in short: in what way are the $K$ and $K'$ points of graphene associated with an orientation and/or a direction of rotation on the plane, in both the real and the dual lattices? Similarly, and more physically, what is it about the $K$ and $K'$ points that makes them respond differently to chiral external drivers, such as magnetic fields or circularly polarized light?

Tags: condensed-matter, symmetry, graphene, chirality — asked by Emilio Pisanty

Comment (Stephan): I think the chirality of the 'Dirac points' usually refers to a property of the spinors of the Dirac fermions which emerge from an effective relativistic description of the low-energy physics of graphene. There is a derivation of this in section II B of this review (Rev. Mod. Phys. 81, 109 (2009)).

Comment (Emilio Pisanty): @Stephan Yeah, I've seen that one and it has plenty of useful information, but I feel it still lacks some intuitive oomph to really explain that chirality in real-world terms, if you know what I mean.

Answer (lmr):

I know, this question is a bit dated, but I stumbled across the same problem and would like to share my insight. If you have found an answer yourself, please provide it; I would be highly interested. As you pointed out yourself, there are two sublattices, and the honeycomb lattice itself is not a Bravais lattice. To construct the graphene lattice, you use a unit cell containing two carbon atoms. Those two inequivalent atoms give rise to an additional degree of freedom (is the electron at site A or B?).
It is often referred to as pseudospin, since it can be treated mathematically in the same way as the spin-1/2 property. This is where chirality comes into play: in graphene, the direction of the pseudospin is linked to the direction of the electron/hole momentum. An electron's pseudospin is parallel to its momentum, while for holes it is antiparallel. Hence, the electron and hole states are called chiral states.

To sum up, the two sublattices give rise to a spin-like property, called pseudospin, which is locked to an electron's or hole's momentum. This is why the property of chirality is introduced. Two good reads for more detail are this article (https://www.nature.com/articles/nphys384) and this lecture on graphene's electronic band structure and Dirac fermions.
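As a concrete numerical illustration of the two valleys and their opposite chirality — this sketch is not from the original thread, and its conventions (carbon–carbon distance set to 1, hopping t = 2.7 eV) are common textbook choices — the nearest-neighbour tight-binding model shows both the gap closing and the opposite pseudospin winding at $K$ and $K'$:

```python
import numpy as np

# Nearest-neighbour tight-binding model of graphene (two-site unit cell):
# E±(k) = ±t|f(k)|,  f(k) = 1 + exp(i k·a1) + exp(i k·a2).
t = 2.7                                        # hopping amplitude in eV (typical value)
a1 = np.array([1.5,  np.sqrt(3) / 2])          # Bravais lattice vectors,
a2 = np.array([1.5, -np.sqrt(3) / 2])          # carbon-carbon distance set to 1

def f(k):
    return 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)

K  = np.array([2 * np.pi / 3,  2 * np.pi / (3 * np.sqrt(3))])
Kp = np.array([2 * np.pi / 3, -2 * np.pi / (3 * np.sqrt(3))])
print(abs(f(K)), abs(f(Kp)))                   # both ~0: the bands touch at K and K'

# The chirality shows up as the winding of the phase of f(k) on a small
# loop around each Dirac point: +1 at K and -1 at K'.
for k0 in (K, Kp):
    s = np.linspace(0, 2 * np.pi, 400)
    loop = k0[:, None] + 1e-3 * np.vstack([np.cos(s), np.sin(s)])
    phases = np.unwrap([np.angle(f(loop[:, i])) for i in range(loop.shape[1])])
    print(round((phases[-1] - phases[0]) / (2 * np.pi)))   # +1, then -1
```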