Knowledge graph prediction of unknown adverse drug reactions and validation in electronic health records
Daniel M. Bean1, Honghan Wu1, Ehtesham Iqbal1, Olubanke Dzahini2,3, Zina M. Ibrahim1,5, Matthew Broadbent2, Robert Stewart2,4 & Richard J. B. Dobson (ORCID: orcid.org/0000-0003-4224-9245)1,5
An Author Correction to this article was published on 06 March 2018.
Unknown adverse reactions to drugs available on the market present a significant health risk and limit accurate judgement of the cost/benefit trade-off for medications. Machine learning has the potential to predict unknown adverse reactions from current knowledge. We constructed a knowledge graph containing four types of node: drugs, protein targets, indications and adverse reactions. Using this graph, we developed a machine learning algorithm based on a simple enrichment test and first demonstrated this method performs extremely well at classifying known causes of adverse reactions (AUC 0.92). A cross validation scheme in which 10% of drug-adverse reaction edges were systematically deleted per fold showed that the method correctly predicts 68% of the deleted edges on average. Next, a subset of adverse reactions that could be reliably detected in anonymised electronic health records from South London and Maudsley NHS Foundation Trust were used to validate predictions from the model that are not currently known in public databases. High-confidence predictions were validated in electronic records significantly more frequently than random models, and outperformed standard methods (logistic regression, decision trees and support vector machines). This approach has the potential to improve patient safety by predicting adverse reactions that were not observed during randomised trials.
Hospital admissions resulting from adverse drug reactions (ADRs) have been projected to cost the National Health Service £466m1. A meta-analysis of US hospitals estimated the incidence of serious ADRs at 6.7%2. ADRs are therefore a significant risk to patient health, treatment compliance and healthcare costs. ADRs are also a key factor in the cost-benefit analysis of pharmacological treatments. This analysis is critical to the decision making process for drug licensing and prescription. Although ADRs are monitored during clinical trials, practical limitations on sample size and study population mean not all ADRs of a drug will be detected before it is approved for use.
Ongoing pharmacovigilance and monitoring of drugs for post-marketing side effects is therefore essential. Spontaneous reports of ADRs are sent to regulatory bodies such as the US Food and Drug Administration (via the FDA Adverse Event Reporting System, FAERS3), the World Health Organisation (via VigiBase4), the UK Medicines and Healthcare Products Regulatory Agency (via the Yellow Card scheme5) or the European Medicines Agency (via EudraVigilance6). These reports may eventually end up in drug product inserts, or could result in a drug being withdrawn from the market. Unfortunately, post-marketing surveillance is limited by under-reporting of ADRs due to time constraints, limited training in reporting procedures and the low perceived impact of an individual report, amongst other factors7. Until enough reports emerge for a previously unknown ADR to be recognised as such, these unknown ADRs pose a risk to patients, limit the accuracy of cost-benefit analysis and lead to unexpected healthcare costs. The ability to predict ADRs is therefore highly desirable, and has been the subject of numerous previous studies (reviewed in8,9). In silico prediction of the safety profile of a candidate molecule has the advantage of extremely high throughput and is increasingly a part of the lead optimisation pipeline in drug discovery10. Similar methods can be applied to marketed drugs, and may benefit from the increased data available on the drug. The aims of the research reported in this paper were to predict additional (unknown) ADRs for drugs currently in use, and to verify those predictions using information extracted from anonymised electronic health records (EHRs). "Knowledge graphs" represent facts as edges between nodes that represent entities (e.g. people, drugs) or concepts (e.g. actor, migraine). Regardless of the specific technology used to create them, representing facts as a graph allows both highly efficient querying and automated reasoning. We constructed a knowledge graph containing publicly available data on drugs, their target proteins, clinical indications and known ADRs. In the context of this graph, unknown ADRs for drugs are missing edges between drug nodes and ADR nodes. This graph is the input to our prediction algorithm, which predicts unknown ADRs by inferring missing edges in the graph. Edges may be absent from the graph for three main reasons: 1) the drug does not cause the ADR, 2) the drug can cause the ADR and this fact is known but missing from the source database, 3) the drug can cause the ADR but is not yet known to. The aim of the algorithm presented in this paper is to place new edges in the graph that fall into categories 2 or 3. Importantly, correct predictions in these two classes are equivalent in terms of validating the prediction algorithm, even though the category 2 edges (known but missing from the graph) are known elsewhere. The correct prediction of edges in category 2 does not directly contribute to patient care, but as these databases are widely used for research purposes it is valuable to detect missing information.
Predicting unknown ADRs
ADR prediction has been the subject of numerous previous publications, which have been reviewed thoroughly8,9. Existing approaches can be subdivided into two key objectives: firstly, to predict ADRs for a lead compound before marketing, and secondly, to make predictions that add new ADRs to the existing profile.
The work presented here falls into the relatively uncommon category of predicting new ADRs for drugs in the post-marketing period. In this same category of study, Cami et al.11 trained a logistic regression classifier using structural properties of the drug-ADR network together with chemical and taxonomic properties of drugs as features to predict unknown ADRs for marketed drugs. The authors tested the predictive performance of their model in a simulated prospective framework using snapshots from a commercial database of spontaneous ADR reports; the best model achieved an AUROC of 0.87 with a sensitivity of 0.42. We use the same classification method as one of the benchmarks for the performance of our algorithm. Rahmani et al.12 predicted unknown ADRs by applying a random walk algorithm to a network with drug and ADR nodes, where drug-ADR edges represent known ADRs and drug-drug edges indicate drug target similarity, but did not validate new ADRs in any real-world clinical data. Bresso et al.13 constructed a database of drug, ADR and target knowledge and used decision trees and inductive logic programming to predict ADR profiles (rather than individual ADRs) and validated predicted ADRs using FAERS3. These previous studies demonstrate the ability of existing machine learning algorithms to predict new ADRs for marketed drugs, but are limited in terms of validation. Spontaneous reports are one of the foundations of post-marketing pharmacovigilance and are widely used as validation data in ADR prediction, however these databases depend on reports being submitted, and further on the accuracy of those reports. Under-reporting of ADRs7 significantly limits the use of such databases both for ADR detection and as a validation set for a prediction task. To address this issue, electronic health records (EHRs) can potentially be analysed to detect ADRs8, removing the dependency on reporting. Data mining from EHRs has been used to detect novel ADRs14, or combined with spontaneous reports to increase confidence in detected drug-ADR signals15. Reliably extracting ADR mentions from the free text of EHRs is challenging – a single concept may be described in several different specific ways (e.g. synonyms, acronyms or shorthand), or may be mentioned in a historical context. We therefore focus on a subset of ADRs for which the NLP pipeline is validated16. Basis for the prediction algorithm The workflow of the prediction algorithm is shown in Fig. 1. Intuitively, the drugs that cause a given ADR should have certain features in common that are related to the mechanism(s) by which they cause the ADR, such as protein targets, transporters or chemical features. One way to identify these features is by a simple enrichment test17,18. The enrichment test is a simplistic way to mimic the human reasoning process. For example, for any given ADR of interest the known causes are likely to also cause nausea, but we wouldn't predict that therefore any drug that can cause nausea could be a cause of our ADR of interest because we know most drugs can cause nausea. In other words nausea is not a specific feature of the known causes of the ADR and we would ignore it. The result is achieved in our method by testing for the enrichment of each feature (e.g. also causing nausea or targeting a specific protein) for all the known causes of an ADR of interest vs all other drugs. Features that are found to be significantly enriched for the known causes of an ADR are used in the predictive model, other features are not included. 
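To make this intuition concrete, the short sketch below applies a one-sided Fisher's exact test to two hypothetical 2x2 contingency tables (the counts are invented for illustration and are not taken from the paper's graph): a near-universal feature such as "also causes nausea" comes out as not enriched among the known causes of an ADR of interest, whereas a specific protein target shared by most of the known causes but few other drugs does.

```python
# Illustrative sketch of the enrichment intuition (hypothetical counts, not the
# paper's data): is a candidate feature over-represented among the known causes
# of an ADR compared with all other drugs in the graph?
from scipy.stats import fisher_exact

def enrichment_p(in_causes, n_causes, in_others, n_others):
    """One-sided Fisher's exact test for over-representation of a feature
    among the known causes of an ADR."""
    table = [[in_causes, n_causes - in_causes],
             [in_others, n_others - in_others]]
    _, p = fisher_exact(table, alternative="greater")
    return p

# 20 hypothetical known causes of the ADR of interest, 500 other drugs.
# "Also causes nausea" is near-universal, so it is not enriched (large p-value)...
print(enrichment_p(in_causes=18, n_causes=20, in_others=430, n_others=500))
# ...whereas a target hit by most known causes but few other drugs is strongly
# enriched (very small p-value) and would be kept as a predictor.
print(enrichment_p(in_causes=15, n_causes=20, in_others=25, n_others=500))
```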
Figure 1. Overview of the prediction algorithm. (a) Starting from a knowledge graph containing all publicly available information on the ADR being predicted, an enrichment test is used to identify predictive features of the drugs known to cause the ADR. The total adjacency of every drug with all predictors of each type (the columns of the matrix) is calculated from the graph. Blue nodes are drugs, red nodes are ADRs, orange nodes are targets, green nodes are indications. (b) The features (adjacency matrix from (a)) are scaled and weighted to produce a final score for every drug. (c) The optimum weight vector from (b) is learned from the knowledge graph to maximize an objective function. The predictions of this optimized model are tested in EHRs.
This set of features can be thought of as a "meta-drug" with only the enriched features of the known causes of an ADR. Any drug can now be scored for its similarity to this profile, and we expect the known causes of this ADR to score relatively highly. Any drug that is not currently known to cause the ADR but that also has a high similarity to the enriched "meta-drug" profile is predicted to be a potential new cause. This process is repeated for every ADR to generate new predictions. The features produced from the knowledge graph (in conjunction with the enrichment test) can be used for classification by any standard machine learning algorithm such as logistic regression (LR), decision trees (DT) and support vector machines (SVM). These three methods are used as a benchmark for the method presented here. Our method is most similar to LR; indeed the hypothesis functions can be stated equivalently. The significant difference between our method and LR is the objective function used to optimise the feature weights. In LR the weight vector is selected to maximise the (log) likelihood, whereas our method optimises Youden's J statistic (see Methods). The performance of the prediction algorithm on the constructed knowledge graph was assessed in 3 ways: 1) ability to correctly classify the known causes of each ADR, 2) ability to predict (replace) edges deleted from the graph, 3) ability to predict ADRs not present in the graph but observed in EHRs. The presented prediction algorithm performed well across all tests, indicating that automated reasoning from knowledge graphs representing publicly available knowledge can be used to accurately predict unknown ADRs that have been observed in clinical practice. Filling in the blanks in our knowledge of ADRs would potentially reduce risks to patients and associated healthcare costs.
Construction of the drug knowledge graph
Public data on drug targets, indications and ADRs were retrieved and integrated to create the drug knowledge graph. Only marketed drugs with at least one edge of each type were retained in the graph. The final graph contains 70,382 edges (clinical indication, protein target, adverse reactions) for 524 drugs (Table 1). The distributions of numbers of known causes per ADR and known ADRs per drug were both highly skewed (Supplementary Figure S1), with a median of 3 known causes per ADR and 85 known ADRs per drug. The most common ADR was nausea, which is a known reaction to 88% of drugs in the graph, followed by headache (86%) and rash (81%). 32% of all ADR nodes in the graph have only a single known cause, for example Acrodynia (discolouration and pain of hands and feet, a rare side effect of Riluzole).
Table 1. Size of the drug knowledge graph.
Raw data was filtered to retain only marketed drugs with at least one known ADR, target and indication. Model performance in a simulated prediction task The goal of the prediction algorithm is to use knowledge about drugs known to cause an ADR to predict new causes, which is equivalent to adding edges in the knowledge graph. To simulate this task, a proportion of the existing drug-ADR edges for each ADR are deleted from the graph before training a predictive model. The performance of the model is evaluated using the proportion of the deleted edges that it correctly placed. Importantly, and unlike a standard k-fold cross validation, the test set of drugs is included in the training data, but as true negatives. This is an exact simulation of the intended use-case: adding new edges to nodes already present in the graph. This procedure was performed over 10 folds for each ADR in the graph (meaning in each fold 10% of the "known cause" edges from this ADR to drug nodes are deleted). Deleted edges are replaced before beginning the next fold. Over all ADRs, 67.3% of deleted edges were correctly predicted by the trained models. As a benchmark, we also tested the performance of several standard machine learning methods that have previously been used for ADR prediction. The benchmark methods used were logistic regression (LR, used for ADR prediction in11), decision trees (DT, as used in13) and support vector machines (SVM, as used in19). Our method gave substantially better performance than all the benchmark methods (67.3% correctly predicted vs. 20.0% for DT, 14.5% for SVM, 14.3% for LR, Fig. 2a). Furthermore, the performance of our method was consistently high independent of the number of known causes of each ADR (Fig. 2b). The performance of all four methods was compared to the expected performance of random guessing (Fig. 2c). The distance of each point from the diagonal (where model performance equals random) was used as a measure of confidence in the ability of the optimised model to outperform a random model in the EHR validation. Our method outperformed random for all ADRs, whereas all other methods were no better than random for a proportion of ADRs (DT 7.9%, SVM 41.9%, LR 27.5%). This large difference in performance is partly due to the high proportion of models that do not make any new predictions for the other methods. Over all folds for all ADRs, our method makes new predictions for 99.7% of models, compared to 92.5% for DT, 52.4% for SVM and 65.1% for LR. Trained models outperform random and standard models in simulated prediction tasks. (a) Distribution of the proportion of deleted edges that was correctly predicted by each method for each ADR, as an average over all folds. (b) Average proportion of deleted edges correctly predicted by each algorithm for all ADRs. (c) Proportion of deleted edges predicted by trained models compared to the expected proportion achieved by a random model. Solid diagonal line represents identical performance. Points above the line indicate the trained model performed better than random. DT = Decision Trees, LR = Logistic Regression, SVM = Support Vector Machines. The benchmark methods used here are widely used for classification. We found that our method performed similarly to LR, DT and SVM at classifying the known causes of ADRs (Supplementary Note S1 and Supplementary Figure S2). 
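The edge-deletion evaluation described above can be summarised schematically. The sketch below is an illustration of the procedure rather than the authors' released implementation; `build_model`, `predict_causes` and the graph's `without_edges` method are hypothetical stand-ins for retraining an ADR-specific model on the reduced graph and listing the drugs it newly predicts as causes.

```python
# Schematic sketch of the per-ADR edge-deletion evaluation (hypothetical API).
import random

def edge_deletion_recovery(known_causes, graph, build_model, predict_causes,
                           n_folds=10, seed=0):
    """Delete ~1/n_folds of the drug-ADR edges per fold, retrain on the reduced
    graph (the deleted drugs stay in the graph as presumed negatives), and
    report the overall fraction of deleted edges the model puts back."""
    rng = random.Random(seed)
    drugs = list(known_causes)
    rng.shuffle(drugs)
    folds = [drugs[i::n_folds] for i in range(n_folds)]
    recovered = deleted = 0
    for held_out in folds:
        reduced = graph.without_edges(held_out)   # hypothetical graph operation
        model = build_model(reduced)              # retrain on the reduced graph
        predictions = set(predict_causes(model))  # drugs newly predicted as causes
        recovered += len(predictions & set(held_out))
        deleted += len(held_out)
    return recovered / deleted if deleted else float("nan")
```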
Model validation in Electronic Health Records The EHR at the South London and Maudsley NHS Foundation Trust was used to validate drug-ADR associations predicted by the algorithm. ADR mentions were identified from the free text of the EHR using a published NLP pipeline developed previously using the same EHR16. Reports were only considered a validation of a drug-ADR association when patients were prescribed a single drug and then reported the ADR within 30 days. To evaluate the performance of the prediction algorithm we identified a set of ADRs for validation for which 1) onset would be expected to occur within 30 days, 2) the ADR concept in the knowledge graph can be reliably detected in the EHR text with the NLP pipeline and 3) a predictive model could be built from the knowledge graph. Applying these criteria left a set of 10 ADRs for validation (Table 2). Importantly, as shown in Table 2, these test cases were not selected based on confidence in the predictive model. Table 2 ADRs for which we attempted to validate novel predicted drug associations in the EHR. The "known" column refers to the total number of drugs in the knowledge graph with an edge to each ADR. The predictive performance of each model was first assessed by comparing the number of new predictions made by the trained model that were validated in the EHR to the performance of random models (Table 3). The number of new predictions that were validated in EHR data was greater than expected by chance for all tested models, however for Alopecia and Stevens-Johnson Syndrome a considerable proportion of random models did perform at least as well (14% for Stevens-Johnson syndrome, 36% for alopecia). The confidence grouping derived from cross validation performance compared to random models performed well overall in identifying models that are outperformed by random in <5% of cases in EHR validation (Table 3). The only exceptions were for neuroleptic malignant syndrome and pericarditis. Table 3 Validation of trained models in EHR data. N = number of drugs predicted to cause the ADR that were tested in the EHR data. V = number of predicted drugs that were associated with the ADR (validated) in EHR data. E = expected number of validated predictions given N and the proportion of all drugs that are associated in the EHR. Random models generate N predictions for each ADR, and the trained model is considered significant if <5% of 100,000 random models had an equal or greater validation rate. Comparison to existing methods As a benchmark, we applied machine learning methods that have previously been successfully applied to the ADR prediction task for which well documented implementations are readily available. The predictions generated by previously published methods will depend on the input features, so these methods were trained on the same input features as used with our method, and their prediction performance was evaluated with the same EHR pipeline. In all cases, new predictions were taken as the false positives from the model and the validation rate was compared to random chance. The benchmark methods are the same as those used earlier in the simulated prediction task: LR, used for ADR prediction in11, DT, as used in13 and SVM, as used in19. The results for all methods are shown in Table 4. Table 4 Prediction performance compared to other methods. By definition the method developed in this paper makes predictions for all 10 of the validation ADRs. 
The average percent of random models with better performance is calculated considering only the ADRs with at least one validated prediction. LR = logistic regression, DT = decision trees, SVM = support vector machines. The 10 ADRs used for validation were partly selected because the predictive model made new predictions that could be tested, so all 10 have new predictions; however, our method also generated new predictions for more ADRs overall. There was only one ADR for each of the three alternative methods that performed better than randomly selecting the same number of drugs at least 95% of the time (galactorrhoea for SVM and LR, pericarditis for DT). On average over all validation ADRs with new predictions, our method outperformed random 92.3% of the time, compared to 75.4% for the next best method (LR). Therefore, the method presented here both produces new predictions for more ADRs, and the validation rate of new predictions is much better.
Examples of validated ADR predictions
The overall top 10 highest-scoring predicted ADRs that were validated in EHR data are shown in Table 5. As a secondary validation of these predictions, the European Medicines Agency EudraVigilance database of spontaneous ADRs was queried for reports of the same association (Table 5). An advantage of the prediction method used here is that the enriched features may provide a molecular mechanism for the ADR17.
Table 5. The ten highest-scoring predicted ADRs that were not present in the drug knowledge graph and were validated in EHRs. The number of reports of each drug-ADR pair ("Drug + ADR") and the total number of reports of all ADRs for each drug ("Drug (all)") are shown for both the EHR used for validation and the EudraVigilance database. The total ADR reports for each drug in the EHR only includes the 10 ADRs used for validation. The EudraVigilance reports include all cases for all ADRs reported in the dataset up to August 2017 (accessed October 2017). Note that the ratio of "Drug + ADR" to "Drug (all)" is expected to be much larger in the EHR as only 10 ADRs are considered, vs all ADRs for EudraVigilance.
Of the top 10 predictions, 3 are for akathisia and 4 are for pulmonary embolism. Akathisia is a movement disorder and extrapyramidal side effect characterised by a feeling of restlessness and a compulsion to move. A pulmonary embolism is a blockage of the pulmonary artery, which supplies blood to the lungs. An embolism elsewhere can cause pulmonary embolism if the clot dislodges and reaches the lung. All three drugs associated with akathisia in Table 5 are tricyclic antidepressants (TCAs), and the indications "depression" and "major depression" were predictors in the model. Extrapyramidal side effects, including akathisia, have been associated with TCAs in case reports20,21, as well as with the related class of selective serotonin reuptake inhibitor (SSRI) drugs22. Extrapyramidal side effects are listed as possible rare ADRs in the data sheets for imipramine and amitriptyline. As noted by Vandel et al. in their review of these case reports20, given the widespread prescription of TCAs, the incidence of extrapyramidal side effects must be very low to have resulted in only a small number of reports. The underlying mechanism of akathisia remains unclear but is thought to involve dysregulation (hypo-activity) of dopaminergic neurotransmission23, which can result from serotonin potentiation by TCAs or SSRIs.
This theory is consistent with the predictive protein targets used in the model, which include several serotonin and dopamine receptor subtypes. The four drugs in Table 5 associated with pulmonary embolism are unrelated to each other in their primary action: clomipramine (TCA), lamotrigine (anticonvulsant), donepezil (cholinesterase inhibitor), haloperidol (antipsychotic). Of these, haloperidol is associated with venous thromboembolism in the knowledge graph, and pulmonary embolism is associated generally with antipsychotics as a class of drugs. One case report was found in which thrombosis occurred following clomipramine treatment24. The prediction that these drugs can cause pulmonary embolism was largely based on their other known ADRs. Examples of predictive ADRs in the model with direct relevance to pulmonary embolism include deep vein thrombosis, venous embolism, thrombocytosis, thrombophlebitis and increased prothrombin levels. At the time a drug is approved for use, only a subset of the possible adverse reactions to that drug will be known from clinical trials. Electronic health records are a vast repository of actual patient outcomes, however much of this data is only contained in the free text. In this paper, we have developed a prediction algorithm that uses publicly available data on drugs, which would be available at the time of marketing, that can predict ADRs observed in health records that are not found in public databases. With this algorithm, we demonstrate a pharmacovigilance pipeline using drug-ADR associations extracted from the free text of an EHR to verify predictions made using a knowledge graph of publicly available data. A significant distinction between the existing approaches to ADR prediction is whether the predictions are generated for drug-like molecules currently in development, or for drugs that have undergone clinical trials. From a modelling perspective, the key difference between these two situations is the availability of a side effect profile for the drug (albeit a likely incomplete one). Intrinsic structural properties of the drug-ADR network alone can achieve surprisingly high performance in predicting additional ADRs, but the integration of additional data improves performance11. ADR predictions for lead molecules have tended to focus on chemical features (such as widely-used quantitative structure-activity relationship models25,26), possibly also including gene expression profiles27 or drug targets28,29. This study is focused on predicting additional ADRs for drugs in the post-marketing phase. In general the method presented could be used to make predictions for lead molecules, but the drug knowledge graph used in this paper is limited to targets, indications and ADRs. The lack of any chemical features means there would be very little input data remaining in the present knowledge graph for a lead molecule. Integrating chemical features into the knowledge graph could also improve the predictions for marketed drugs, as well as allowing predictions to be made for lead drugs. For example, certain ADRs are related to specific chemical subgroups in the drug molecule17,30,31. One way to achieve this integration would be to represent the presence of different chemical substructures as facts in the graph, which have previously been used to predict side effect profiles30. Alternatively, Shao et al. 
showed that representing the molecular structure of drugs as a graph and then applying pattern mining techniques to identify features outperformed more widely used methods such as molecular fingerprints for ADR prediction31. Drug structure graphs (or features of these graphs) could be represented within the knowledge graph and used to generate predictions, possibly increasing predictive value. Cami et al.11 used 16 molecular features of drugs as covariates in their ADR prediction model, several of which are continuous values (e.g. molecular weight, rotatable bond count) rather than binary facts used here. An expanded version of the knowledge graph could incorporate these continuous features in the graph as edge weights. Including chemical features would change the input data, so the performance would need to be re-evaluated. Previous studies18,27 found that including GO annotation of target proteins or differentially expressed genes improved classifier performance in the ADR prediction task. The combined GO annotation and ontology forms a knowledge graph, and as such this data could straightforwardly be integrated in the drug knowledge graph constructed in this study. It is therefore possible that expanding the input knowledge graph (for example with chemical features of drugs) would improve the accuracy of the predicted ADRs. Verifying predicted ADRs using EHRs rather than relying on spontaneous reports has several advantages, the most significant being that this approach overcomes the under-reporting issues of spontaneous report databases. However, there are some general limitations. The most important limitation is that we have to assume patients are complying with their prescriptions, and also that they are not taking any medications not captured in the EHR (which could be the true cause of the ADR). This is particularly problematic for outpatients. As with spontaneous reports, these associations alone do not prove a causal link between a drug and an ADR and we cannot truly establish (and report) a causal link without further manual investigation to rule out other possible causes. To mitigate these limitations, we focused on patients who were only prescribed a single drug and then reported the ADR within 30 days. This increases our confidence that the prescribed drug is associated with the ADR, however it is not a perfect solution. Some ADRs can have chronic onset (such as amenorrhoea, galactorrhoea, alopecia) or may be reported "out of sync" with drug prescriptions, i.e. an ADR could have been caused by a previous medication that was stopped, but the ADR was only recorded after another prescription was given. Considering these limitations, we consider the drug-ADR associations presented here as strong candidates that require further clinical validation. Considering only those patients prescribed a single drug also excludes a significant proportion of patients who are prescribed multiple drugs concomitantly. Technically the prediction algorithm could straightforwardly extend to predict ADRs for combinations of drugs, increasing its utility in likely real-world contexts where many patients take multiple medications. Including drug combinations is practically challenging as the size of the prediction problem would increase exponentially. Finally, the validation is dependent on reliable NLP to extract ADR mentions from the free text of EHRs. As the pipeline used for validation was developed and validated using the same EHR16, we are confident that the error rate is low. 
The predictive models generated using our method are essentially sets of drug properties for each type of information in the knowledge graph, along with a weight for each type. The basis for a given drug-ADR prediction is therefore very clear, and the sets of predictors – particularly the predictive targets – may provide valuable mechanistic information for future drug development. As new drugs are developed and added to the knowledge graph, the model optimisation process should be repeated. Although it is possible to use the previous model to calculate a score for a new drug and any ADR, adding edges to the graph would affect all the underlying enrichment calculations that are used to identify predictors. This is computationally expensive and may also result in previous predictions changing. However, this is also a reasonable feature by analogy to the human reasoning process – when we learn new information, it may require us to revise previous predictions. Beyond the task of ADR prediction, the algorithm used here is a general-purpose method that can be applied to any knowledge graph. Further work is needed to determine its performance in other contexts. Machine learning algorithms could become valuable tools for pharmacovigilance, which is a critical element of drug safety. Systems pharmacology methods such as that presented here could be used to predict and understand ADRs that were not observed in clinical trials but are possible given the observed ADR profile and other known properties of the drug. These predictions may not warrant inclusion in drug safety leaflets for patients, but could be provided to clinicians to promote reporting of these predicted ADRs. Looking further ahead, we can envisage a learning healthcare system in which ADRs are automatically detected in patient records and are reported to relevant regulatory bodies if the specific drug-ADR association is missing from the system's knowledge graph. Such a system lowers the burden on clinicians' time and mitigates the under-reporting problems of current post-marketing surveillance, potentially improving patient safety. These reports could be used to dynamically generate ever more accurate ADR predictions.
Knowledge graph construction
The knowledge graph was constructed as a Neo4j 3.0 database containing publicly available data on marketed drugs. Data on drug targets were retrieved from DrugBank version 4.5.0 (www.drugbank.ca)32. Adverse drug reactions and indications extracted from drug package inserts were retrieved from SIDER (www.sideeffects.embl.de)33 on 09/06/2016. Drugs were matched between datasets using PubChem compound identifiers. Target proteins were identified using Uniprot identifiers; indications and adverse drug reactions were identified using Unified Medical Language System (UMLS) terms. The constructed graph contains 4 types of node (drug, ADR, indication, target) and 3 types of edge representing each relationship. Every edge in the graph has a drug node at exactly one end. Only drugs with at least one edge of each type were retained in the final knowledge graph (i.e. each drug must have at least one known target, indication and ADR). The complete public knowledge graph used in this study is provided in Supplementary Table S1.
Prediction algorithm
The prediction algorithm is a binary classifier which is used to place edges in a graph. For each ADR node, the knowledge graph is queried to find known causes.
For each of the three types of drug knowledge in the graph (target, indication and ADR), Fisher's exact test is used to identify enriched properties of the known causes of the ADR vs. all other drugs in the graph (excluding the ADR being modelled). Enriched properties are those with a p-value for enrichment <0.05 following false discovery rate correction. These identified properties are nodes in the knowledge graph, and the raw features used to make predictions are the adjacency of all drug nodes in the graph with these (enriched) nodes. Standard feature scaling is applied to each feature type separately to scale values to the range 0–1. Feature scaling is important as drugs typically have many more known ADRs than targets or indications. The scaled features can be represented with matrix D, where each row corresponds to a drug and each column corresponds to a predictor node type. The value D_{i,j} is therefore the total (scaled) adjacency of drug i with all predictors of type j. The score for each drug is calculated by multiplying D by a weight vector w (to be learnt from the data), which contains the weight of each predictor type in the same order as the columns of D. A drug is predicted to cause the ADR if its score is greater than a threshold. New predictions from the model are any drugs that are not known causes of the ADR that score higher than the threshold, and there may not be any new predictions for a given model. The feature weights and score threshold are selected to maximise the objective function. In general, any objective function could be used to determine the threshold, based on the cost context of false positives vs false negatives, and the weights for each predictor type could be varied over any given range. For this study, the weights for all predictor types were kept within the range 0–1 with L2 normalisation. Youden's J statistic was used as the objective function, which is defined as J = Sensitivity + Specificity − 1 (equation 1). Sensitivity is the true positive rate, meaning the proportion of all positive examples that was correctly identified. Specificity is the true negative rate, meaning the proportion of all negative examples that was correctly identified.
$$J = \frac{TP}{P} + \frac{TN}{N} - 1 = \mathrm{Sensitivity} + \mathrm{Specificity} - 1 \qquad (1)$$
At least one predictor for each node type was required before training a model. This requirement for predictors of all types to be identified meant that it was not possible to build a model for all ADRs in the training data. Relaxing this constraint would allow a greater number of predictive models to be built. The prediction algorithm is implemented as an open source python library which is available and documented in detail at https://github.com/KHP-Informatics/ADR-graph. A schematic sketch of the scoring and weight-selection step is also given at the end of this section.
Benchmark methods
Machine learning methods that have previously been successfully applied to the ADR prediction task and for which well documented implementations are readily available were used as benchmarks. The selected methods were logistic regression (LR), support vector machines (SVM) and decision trees (DT), all of which are implemented in scikit-learn34 version 0.17.1. SVM classifiers used the RBF kernel and default settings to emulate the SVM classifiers used in19. DTs were configured to emulate the method of Bresso et al.13 by setting the minimum samples per leaf to 5. LR classifiers used L2 regularisation and default settings.
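A minimal sketch of the scoring and weight-selection step described under "Prediction algorithm" above is given below. It assumes the scaled feature matrix D (one row per drug, one column per predictor type) and the binary label vector y (1 for known causes) have already been built from the enrichment step; the coarse grid search is an illustrative simplification, not the released implementation.

```python
# Minimal sketch of scoring drugs with D @ w and selecting weights/threshold by
# maximising Youden's J (illustrative simplification, assumed inputs D and y).
import numpy as np

def youden_j(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return tp / np.sum(y_true == 1) + tn / np.sum(y_true == 0) - 1

def fit_weights(D, y, n_grid=20):
    """Grid-search L2-normalised weights over the predictor types and the score
    threshold, keeping the combination that maximises Youden's J."""
    best_j, best_w, best_thr = -np.inf, None, None
    grid = np.linspace(0.0, 1.0, n_grid)
    combos = np.array(np.meshgrid(*[grid] * D.shape[1])).reshape(D.shape[1], -1).T
    for raw in combos:
        norm = np.linalg.norm(raw)
        if norm == 0:
            continue
        w = raw / norm                    # keep weights in [0, 1], L2-normalised
        scores = D @ w
        for thr in np.unique(scores):
            j = youden_j(y, (scores > thr).astype(int))
            if j > best_j:
                best_j, best_w, best_thr = j, w, thr
    return best_j, best_w, best_thr

# New predictions are drugs without a known edge to the ADR whose score D @ w
# exceeds the selected threshold.
```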
All benchmark machine learning methods were trained on the same input data as described for our method (normalised adjacency with enriched properties of known causes of each ADR). False positives from all models were taken as new predictions and validated using the same EHR pipeline. For each ADR, 100,000 random models were generated that made the same number of new predictions as the trained model for the same ADR. Random models selected drugs uniformly at random from the list of all drugs that are not known to cause the ADR and are prescribed in the EHR used for validation. The predictions from these random models are validated in the EHR using the same pipeline as for the trained models. The proportion of random models with at least as many validated predictions as the trained model was used as the performance metric for trained models (Supplementary Figure S3); a schematic sketch of this comparison is given at the end of this section. Trained models with more validated predictions than 95% of random models selecting the same number of drugs were considered significant.
Cross-validation
To simulate the prediction task, the edges for each ADR node were divided into 10 folds. In each fold, 1/10th of the edges to drugs from an ADR were deleted and a model was then optimised using the resulting graph. The proportion of deleted edges that are correctly predicted by the model is determined. Importantly, only edges (not nodes) are deleted from the graph in each fold. This means the drug nodes that were previously connected to the ADR remain in the graph and are considered true negatives when the model is trained. This is a more accurate simulation of the intended prediction task than a standard cross-validation where the test set is completely held out. Cross validation performance was used to derive a confidence score for the predictions for each ADR generated on the full graph. The raw confidence is the average proportion of deleted edges that was correctly predicted over all folds, relative to the expected performance of a random model. If the relative performance equals 1, the model performs as expected by random chance. These raw scores were binned into confidence groups, where high confidence models score better than the median and mid confidence models score better than random but less than the median. Any ADRs with performance less than or equal to random are assigned low confidence. ADRs with fewer than 6 successful folds were not analysed, and were assigned low confidence by default.
Validation in electronic health records and EudraVigilance
De-identified patient records were accessed through the Clinical Record Interactive Search (CRIS)35 at the Maudsley NIHR Biomedical Research Centre, South London and Maudsley NHS Foundation Trust. This is a widely used clinical database with a robust data governance structure which has received ethical approval for secondary analysis (Oxford REC C 08/H0606/71+5). Free text from these mental health electronic health records was processed using a Natural Language Processing pipeline described previously16. Briefly, ADR mentions are identified using a dictionary of related terms (including synonyms and misspellings) and further processed to identify negation or other experiencers, favouring precision over recall. In this pipeline, ADRs are any adverse events in the record that could be an ADR, although causality is not established16. When an ADR mention is detected in the free text, we associate it with all drugs prescribed within the previous 30 days based on the typical onset of the ADRs used for validation.
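The random-model comparison referred to above can be written as a simple empirical significance test. The sketch below uses hypothetical inputs and is not the authors' pipeline: it repeatedly draws the same number of drugs as the trained model predicted, from the pool of drugs not known to cause the ADR and prescribed in the EHR, and reports the fraction of random draws that achieve at least as many EHR-validated predictions.

```python
# Sketch of the empirical significance test against random models
# (hypothetical inputs).
import random

def random_model_fraction(predicted, validated_in_ehr, eligible_drugs,
                          n_random=100_000, seed=0):
    """predicted: drugs newly predicted by the trained model;
    validated_in_ehr: set of drugs associated with the ADR in the EHR;
    eligible_drugs: drugs not known to cause the ADR and prescribed in the EHR."""
    rng = random.Random(seed)
    observed = len(set(predicted) & validated_in_ehr)
    at_least_as_good = 0
    for _ in range(n_random):
        draw = rng.sample(eligible_drugs, len(predicted))
        if len(set(draw) & validated_in_ehr) >= observed:
            at_least_as_good += 1
    # The trained model is considered significant if fewer than 5% of random
    # models validate at least as well.
    return at_least_as_good / n_random
```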
Only ADR mentions with a single associated prescription were considered in the validation; reports from patients prescribed more than one drug in the previous 30 days are ignored. The EudraVigilance database was queried via www.adrreports.eu on 04/10/2017. All ADR reports up to August 2017 were retrieved. The data on drug indications, ADRs and targets are publicly available (see "Knowledge graph construction") and the knowledge graph produced from this data is available as Supplementary Table S1. The prediction algorithm is implemented as an open source python library, available and documented with the knowledge graph at https://github.com/KHP-Informatics/ADR-graph. The anonymised EHR data are available for secondary research via CRIS35 subject to approval by the CRIS Oversight Committee in adherence with strict patient-led governance35. A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.
References
Pirmohamed, M. et al. Adverse drug reactions as cause of admission to hospital: prospective analysis of 18 820 patients. BMJ 329, 15–19 (2004).
Lazarou, J., Pomeranz, B. H. & Corey, P. N. Incidence of adverse drug reactions in hospitalized patients: A meta-analysis of prospective studies. JAMA 279, 1200–1205 (1998).
U. S. Food and Drug Administration. The FDA Adverse Events Reporting System. Available at: https://open.fda.gov/data/faers/ (accessed 2017).
Uppsala Monitoring Centre. VigiBase. Available at: https://www.who-umc.org/vigibase/vigibase/ (accessed 2017).
Medicines and Healthcare Products Regulatory Agency. The Yellow Card Scheme. Available at: https://yellowcard.mhra.gov.uk/ (accessed 2017).
European Medicines Agency. European Database of Suspected Adverse Drug Reaction Reports. Available at: http://www.adrreports.eu/ (accessed 2017).
Hazell, L. & Shakir, S. A. W. Under-reporting of adverse drug reactions: A systematic review. Drug Safety 29, 385–396 (2006).
Ho, T.-B., Le, L., Thai, D. T. & Taewijit, S. Data-driven approach to detect and predict adverse drug reactions. Curr. Pharm. Des. 22, 3498–3526 (2016).
Boland, M. R. et al. Systems biology approaches for identifying adverse drug reactions and elucidating their underlying biological mechanisms. Wiley Interdiscip. Rev. Syst. Biol. Med. 8, 104–122 (2016).
Whitebread, S., Hamon, J., Bojanic, D. & Urban, L. In vitro safety pharmacology profiling: an essential tool for successful drug development. Drug Discov. Today 10, 1421–1433 (2005).
Cami, A., Arnold, A., Manzi, S. & Reis, B. Predicting Adverse Drug Events Using Pharmacological Network Models. Sci. Transl. Med. 3, 114ra127 (2011).
Rahmani, H., Weiss, G., Méndez-Lucio, O. & Bender, A. ARWAR: A network approach for predicting Adverse Drug Reactions. Comput. Biol. Med. 68, 101–108 (2016).
Bresso, E. et al. Integrative relational machine-learning for understanding drug side-effect profiles. BMC Bioinformatics 14, 207 (2013).
Liu, M. et al. Comparative analysis of pharmacovigilance methods in the detection of adverse drug reactions using electronic medical records. J. Am. Med. Informatics Assoc. 1–8 (2012).
Harpaz, R. et al. Combing signals from spontaneous reports and electronic health records for detection of adverse drug reactions. J. Am. Med. Informatics Assoc. 20, 413–419 (2013).
Iqbal, E. et al. ADEPt, a semantically-enriched pipeline for extracting adverse drug events from free-text electronic health records. PLoS One 12, 1–16 (2017).
Duran-Frigola, M. & Aloy, P. Analysis of Chemical and Biological Features Yields Mechanistic Insights into Drug Side Effects. Chem. Biol. 20, 594–603 (2013).
Huang, L. C., Wu, X. & Chen, J. Y. Predicting adverse side effects of drugs. BMC Genomics 12(Suppl 5), S11 (2011).
Liu, M. et al. Large-scale prediction of adverse drug reactions using chemical, biological, and phenotypic properties of drugs. J. Am. Med. Informatics Assoc. 19, e28–e35 (2012).
Vandel, P., Bonin, B., Leveque, E., Sechter, D. & Bizouard, P. Tricyclic antidepressant-induced extrapyramidal side effects. Eur. Neuropsychopharmacol. 7, 207–212 (1997).
Gill, H. S., DeVane, C. L. & Risch, S. C. Extrapyramidal Symptoms Associated With Cyclic Antidepressant Treatment: A Review of the Literature and Consolidating Hypotheses. J. Clin. Psychopharmacol. 17 (1997).
Lane, R. M. SSRI-Induced extrapyramidal side-effects and akathisia: implications for treatment. J. Psychopharmacol. 12, 192–214 (1998).
Loonen, A. J. M. & Stahl, S. M. The Mechanism of Drug-induced Akathisia. Trends Psychopharmacol. 16, 7–10 (2011).
Eikmeier, G., Kuhlmann, R. & Gastpar, M. Thrombosis of cerebral veins following intravenous application of clomipramine. J. Neurol. Neurosurg. & Psychiatry 51, 1461 (1988).
Cherkasov, A. et al. QSAR modeling: Where have you been? Where are you going to? Journal of Medicinal Chemistry 57, 4977–5010 (2014).
Frid, A. A. & Matthews, E. J. Prediction of drug-related cardiac adverse effects in humans-B: Use of QSAR programs for early detection of drug-induced cardiac toxicities. Regul. Toxicol. Pharmacol. 56, 276–289 (2010).
Wang, Z., Clark, N. R. & Ma'ayan, A. Drug-induced adverse events prediction with the LINCS L1000 data. Bioinformatics 32, 2338–2345 (2016).
Pérez-Nueno, V. I., Souchet, M., Karaboga, A. S. & Ritchie, D. W. GESSE: Predicting Drug Side Effects from Drug-Target Relationships. J. Chem. Inf. Model. 55, 1804–1823 (2015).
Yamanishi, Y., Pauwels, E. & Kotera, M. Drug side-effect prediction based on the integration of chemical and biological spaces. J. Chem. Inf. Model. 52, 3284–3292 (2012).
Pauwels, E., Stoven, V. & Yamanishi, Y. Predicting drug side-effect profiles: a chemical fragment-based approach. BMC Bioinformatics 12, 169 (2011).
Shao, Z., Hirayama, Y., Yamanishi, Y. & Saigo, H. Mining Discriminative Patterns from Graph Data with Multiple Labels and Its Application to Quantitative Structure-Activity Relationship (QSAR) Models. J. Chem. Inf. Model. 55, 2519–2527 (2015).
Law, V. et al. DrugBank 4.0: shedding new light on drug metabolism. Nucleic Acids Res. 42, D1091–D1097 (2014).
Kuhn, M., Letunic, I., Jensen, L. J. & Bork, P. The SIDER database of drugs and side effects. Nucleic Acids Res. 44, D1075–D1079 (2016).
Pedregosa, F. & Varoquaux, G. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12 (2011).
Fernandes, A. C. et al. Development and evaluation of a de-identification procedure for a case register sourced from mental health electronic records. BMC Med. Inform. Decis. Mak. 13, 71 (2013).
Acknowledgements
This paper represents independent research funded by the National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
This research was supported by researchers at the National Institute for Health Research University College London Hospitals Biomedical Research Centre, and by awards establishing the Farr Institute of Health Informatics Research at UCL Partners, from the Medical Research Council, Arthritis Research UK, British Heart Foundation, Cancer Research UK, Chief Scientist Office, Economic and Social Research Council, Engineering and Physical Sciences Research Council, National Institute for Health Research, National Institute for Social Care and Health Research, and Wellcome Trust (grant MR/K006584/1).
Department of Biostatistics and Health Informatics, Institute of Psychiatry Psychology and Neuroscience, King's College London, 16 De Crespigny Park, London, SE5 8AF, United Kingdom: Daniel M. Bean, Honghan Wu, Ehtesham Iqbal, Zina M. Ibrahim & Richard J. B. Dobson
South London and Maudsley NHS Foundation Trust, Denmark Hill, London, SE5 8AZ, United Kingdom: Olubanke Dzahini, Matthew Broadbent & Robert Stewart
Institute of Pharmaceutical Science, King's College London, 5th Floor, Franklin-Wilkins Building, 150 Stamford Street, London, SE1 9NH, United Kingdom: Olubanke Dzahini
Institute of Psychiatry, Psychology and Neuroscience, King's College London, 16 De Crespigny Park, London, SE5 8AF, United Kingdom: Robert Stewart
Farr Institute of Health Informatics Research, UCL Institute of Health Informatics, University College London, London, WC1E 6BT, United Kingdom: Zina M. Ibrahim & Richard J. B. Dobson
D.B., H.W. and R.J.B.D. designed the study. O.D. advised on validation. H.W. and E.I. processed EHR data. D.B. designed the prediction algorithm, performed analysis and prepared the manuscript. R.J.B.D., Z.M.I., M.B. and R.S. supervised the project. All authors reviewed the manuscript. Correspondence to Richard J. B. Dobson.
MB and RS have received recent research funding from Roche, Janssen, and GSK. The other authors declare no competing financial interests.
A correction to this article is available online at https://doi.org/10.1038/s41598-018-22521-4.
Supplementary Table S1
Bean, D.M., Wu, H., Iqbal, E. et al. Knowledge graph prediction of unknown adverse drug reactions and validation in electronic health records. Sci Rep 7, 16416 (2017). https://doi.org/10.1038/s41598-017-16674-x
A meso-level empirical validation approach for agent-based computational economic models drawing on micro-data: a use case with a mobility mode-choice model
New Frontiers in Economics: the Agent-Based Approach
Alperen Bektas (ORCID: orcid.org/0000-0002-4476-5916)1, Valentino Piana1 & René Schumann1
SN Business & Economics volume 1, Article number: 80 (2021)
A Correction to this article was published on 25 May 2021. This article has been updated.
The complex nature of agent-based modeling may favor descriptive accuracy over analytical tractability. That leads to an additional layer of methodological issues regarding empirical validation, which is an ongoing challenge. This paper offers a replicable method to empirically validate agent-based models, a specific indicator of "goodness-of-validation" and its statistical distribution, leading to a statistical test in some way comparable to the p value. The method involves an unsupervised machine learning algorithm hinging on cluster analysis. It clusters the ex-post behavior of real and artificial individuals to create meso-level behavioral patterns. By comparing the balanced composition of real and artificial agents among clusters, it produces a validation score in [0, 1] which can be judged thanks to its statistical distribution. In synthesis, it is argued that an agent-based model can be initialized at the micro-level, calibrated at the macro-level, and validated at the meso-level with the same data set. As a case study, we build and use a mobility mode-choice model by configuring an agent-based simulation platform called BedDeM. We cluster the choice behavior of real and artificial individuals with the same ex-ante given characteristics. We analyze these clusters' similarity to understand whether the model-generated data contain observationally equivalent behavioral patterns as the real data. The model is validated with a specific score of 0.27, which is better than about 95% of all possible scores that the indicator can produce. By drawing lessons from this example, we provide advice for researchers to validate their models if they have access to micro-data.
Modeling economies as complex systems has been attracting many scholars (Hamill and Gilbert 2016). Agent-based (AB) models are one of the modeling tools for complex systems, which can provide a realistic way to model economies; thus, their usage has been growing in the field of economics (as well as in other disciplines) during the last 3 decades (Fagiolo et al. 2019; Hamill and Gilbert 2016). AB models consist of autonomous and decentralized entities (agents); each can have dynamic behavior and heterogeneous characteristics (Geanakoplos et al. 2012). The dynamic behavior of heterogeneous agents is governed by decision-making mechanisms (rules) derived from established empirical and theoretical foundations (Dawid et al. 2014). Thus, agents do not necessarily make decisions based on the assumption of a representative agent who is intertemporally optimizing an objective function under rational expectations (Colander et al. 2008). The uses of these models in economics are collected under a common umbrella that we refer to as agent-based computational economics (ACE) (Tesfatsion 2002). AB models have certain features that distinguish them from neoclassical ones (Arthur 1994). Economists often point to such features as a reason to use them (Hamill and Gilbert 2016).
First of all, AB models have a bottom–up perspective. The macro-dynamics in these models are the emergent properties of micro-level interactions and agents' behavior, which is not constrained by equilibrium and hyper-rationality (Heckbert et al. 2010). These emergent properties at the macro-level can be used to analyze complex and decentralized systems quantitatively (Duffy 2006). As Arthur (2006) states, emerging properties often feed back into micro-level decisions, which leads to a perpetual novelty in the behavior. Thanks to the bottom–up perspective, AB models are capable of modeling each individual's micro-behavior separately, which allows us to have a high level of heterogeneity (Dawid et al. 2012). Secondly, AB models can contain non-trivial interactions, which are governed by ex-ante defined rules of behavior. These interactions are often non-linear, which makes tracing the emergent macro-patterns harder (Windrum et al. 2007). The interactions can give rise to information exchange and adaptation, which makes AB models realistic, since individual decisions in the real world are largely based on incomplete information and preferences, meaning that decision-making can evolve as new information arrives (Farmer and Foley 2009). Demonstrating how well the model Data Generating Process (mDGP) represents the real-world Data Generating Process (rwDGP) is an asset for AB models, as it is for other economic models (Fagiolo et al. 2007; Klügl 2008; Bianchi et al. 2007; Murray-Smith 2015; Beisbart and Saam 2019). One way to do that is to compare the data generated by the mDGP and the rwDGP statistically; we call this procedure empirical validation (Windrum et al. 2007). In contrast to neoclassical models, AB models favor descriptive accuracy over analytical tractability, owing to the potential (by no means necessary) presence of non-linearities, macro-micro feedback and heterogeneous interactions (Fagiolo et al. 2007). That makes the relationship and the comparison between AB model-generated data and real data problematic, which leads to complexity and, consequently, methodological problems regarding the empirical validation of AB models (Heckbert et al. 2010). Although there have been contributions in the last decade, such as Barde (2020), Lamperti (2018a), and Guerini and Moneta (2017), we still do not have standardized empirical validation methods for AB models, which inevitably leads to a lack of robustness in terms of validation (Fagiolo et al. 2019). That was recognized by AB modelers themselves and shown as one of the reasons for the reluctance of neoclassical economists to move to the AB camp, even though they recognized the significance of the AB critique (e.g., heterogeneity, learning, interactions, etc.) and have tried to update their models accordingly (Windrum et al. 2007). Previous research recommends the involvement of machine learning techniques as empirical validation methods (Fagiolo et al. 2019; Barde and Van Der Hoog 2017), which allows us to perform more thorough comparisons of mDGP-generated data and rwDGP-generated data. The present paper has been motivated by this research and proposes an unsupervised machine-learning algorithm, specifically cluster analysis (Russell and Norvig 2002), as an empirical validation method. The method focuses on AB models that use micro-data as input and produce results accordingly to address questions from the real world. It aims to compare model-generated data and real data at the meso-level.
To do this, it suggests clustering the ex-post behavior of real individuals and artificial agents who have the same ex-ante given characteristics. Then, it quantitatively assesses how well the clusters overlap in a multidimensional latent space. Thus, the behavioral patterns in model-generated data and real data are compared. The method is discussed in detail in the next section. To apply the method as a case study, we build an AB model by configuring an AB simulation platform called Behavior Driven Demand Model (BedDeM) (Nguyen and Schumann 2019). The model and its features are explained thoroughly in "Case study". The rest of the paper is organized as follows. In "Methods", which consists of three subsections, we first discuss the theoretical background of the validation of AB models in light of existing literature. Then, we touch on the recently introduced validation approaches. After that, we explain the proposed method and discuss how it could expand the existing literature. In "Case study", we build an AB model to apply the method as a case study. "Results" shows and interprets the validation results of the case study. "Discussion" discusses the value of the method and its applicability to other AB models. It gives practical advice for researchers who want to apply this validation method to their AB models. It also discusses what kind of AB models could be assessed by the method and provides some example models for the sake of clarity. Finally, the paper ends with the future works and conclusions sections. Theoretical background of validation of AB models In this section, we follow a general-to-specific path to discuss the validation of AB models in the light of existing literature. First, we introduce the types of validation techniques (stages) for AB models in general terms. We utilize a procedure to validate AB models, which was introduced by Klügl (2008), and discuss the validation stages that are ordered in that procedure. Then, we discuss one of these stages (the last one), called empirical validation, in detail, since we introduce a novel method for that stage in this paper. One of the major valuable aspects of using AB models is to explain and understand a real-world phenomenon that is costly and sometimes difficult to analyze in the real world (e.g., through field experiments, real laboratory experiments, etc.) (Xiang et al. 2005). As Farmer and Foley (2009, p. 686) state, "AB models allow for the creation of a kind of artificial (virtual) universe in which many players act in complex and realistic ways". Thus, such models enable the analysis—in silico—of the future status of the original system under novel conditions. Assessing how well the artificial universe (i.e., the AB model) represents a proportion of the original system (i.e., the part of the real world that is aimed to be modeled) is an asset that potentially makes the modeling results more credible (Klügl 2008). This assessment is called validation in the literature (Windrum et al. 2007; Bianchi et al. 2007). If the model is validated, the answers derived from the model can be utilized to answer questions directed to the original system (Klügl 2008). Fig. 1 A general procedure to validate AB models (Klügl 2008) Klügl (2008) introduced a framework (see Fig. 1) that places different validation stages in order to validate AB models. Some stages in the framework are also discussed in Balci (1994) separately (i.e., without being part of a framework). The framework starts with face validity.
In that stage, the modelers are expected to consult domain experts to assess whether the model behaves reasonably. The experts provide subjective judgments on the accuracy of the model. Sensitivity analysis comes next, where the impact of different parameters on the model output is assessed. It is assumed that the relationship between a parameter and the output occurring in the model should also occur similarly in the original system. Once such impacts are analyzed, appropriate values are assigned to the parameters during calibration. Calibration aims to find the "optimal" parameter set, i.e., the one that makes the output of the model resemble the output of the original system. In general, AB model parameters are calibrated to aggregated (macro) patterns (Guerini and Moneta 2017). The plausibility check comes after calibration, where human experts assess the plausibility of the model outcome (e.g., dynamics and trends of the different output values of model runs). It is technically the same as the previously discussed face validity, as Klügl (2008) states. Finally, statistical tests are applied to compare model-generated data and real data, a stage named empirical validation. Empirical validation is the last stage of the procedure in Fig. 1 and aims to compare the data coming from the rwDGP and the mDGP statistically. Assume that we have real data generated by the rwDGP, which contains different data points in a time-series. The data points can be at the micro-level, as the expression in (1) denotes (Pyka and Fagiolo 2007; Windrum et al. 2007), where I represents the population of individuals whose heterogeneous behaviors are observed and contained in the vector z over a finite time-series of length n. For instance, for a mobility mode-choice model, z would be individual-level mobility mode-choice behavior:
$$(z)_i = \{ z_{i,t},\; t = t_0,\ldots , t_n\} \quad i \in I,\ n \in \mathbb {N} \tag{1}$$
$$(Z) = \{ Z_{t},\; t = t_0,\ldots , t_n\} \quad n \in \mathbb {N}. \tag{2}$$
The data points that the rwDGP generates at the micro-level can be aggregated to obtain macro-data points, as denoted in (2) (Pyka and Fagiolo 2007; Windrum et al. 2007), where the vector Z contains macro-data points of a population (i.e., I) over a time series. For instance, a household's consumption behavior is represented by a micro-level data point, while the aggregation over all households in a population I is represented by a macro-level data point, which can then be used as a component of GDP. Modelers aim to approximate the values of z or Z, for which the optimal micro (\( \theta \), e.g., agent preferences) and macro (\( \Theta \), e.g., the environment) parameters must be found during calibration. Once the optimal parameters are set, which is the step immediately before empirical validation in Fig. 1, the output of the model can be compared empirically to real data from the original system (Fagiolo et al. 2007; Guerini and Moneta 2017). As Klügl (2008, p. 6) states, "calibration and validation must use different data sets for ensuring that the model is not merely tuned to reproduce given data, but may also be valid for inputs that it was not given to before". However, having two data sets from the same original system is not often possible. In such cases, the available data can be used on all available levels, as Klügl (2008) asserts.
For instance, a model can use micro-data as input, be calibrated at the macro-level, and be validated at the meso-level. Therefore, the same data set can be exploited at different levels without over-fitting. In this section, we first discuss recently introduced validation methods. Then, we explain why our method is related to the discussed methods and how it could extend them. Lamperti (2018b) has offered an information theoretic criterion called the General Subtracted L-divergence (GSL-div) as a validation method for AB models. The method measures the similarity between model-generated and real-world time-series. It assesses the extent of a model's capability to mimic patterns (e.g., distributional features of time-series such as changes of values from one point in time to another) occurring in real-world time-series. It is related to our method, because our method also aims to compare the similarity among patterns occurring in real data and model-generated data. However, GSL-div focuses only on aggregated time-series data, as Fagiolo et al. (2019) indicate, while our method focuses rather on meso-level behavioral patterns that are constructed from micro attributes. We discuss the advantages of the meso-level approach later. The author states that the GSL-div can overcome certain shortcomings of the method of simulated moments (MSM), e.g., it does not need to resort to any likelihood function and provides a better representation of the behavior of complex time-series. The method could technically be applied to any AB model that produces time-series data. A detailed explanation of the method, illustrative examples, and case studies can be found in Lamperti (2018a, 2018b). Barde (2020, 2016) has introduced another information theoretic criterion as a validation method for AB models. The method is called the Markovian information criterion (MIC). It follows the minimum description length (MDL) principle, which hinges on the efficiency of data compression to measure the accuracy of a model's output (Grünwald and Grunwald 2007). It first uses model-generated data to create a Markov transition matrix for the model, and then uses the real data to produce a log score for the model on the data. The method uses the Kullback–Leibler (KL) divergence to measure the distance between real and model-generated data; thus, the accuracy of the mDGP is assessed. As the author states, the method does not include estimation; instead, it is applied to already calibrated models to assess their output. It is related to our method in that respect. However, similar to GSL-div, the application level of our method is different from that of MIC, as we explain in detail in the following section. Grazzini and Richiardi (2015) discuss estimation methods for dynamic stochastic general equilibrium (DSGE) models and analyze whether such methods can also be applied to AB models. The authors mention the simulated minimum distance (SMD) methods, such as the method of simulated moments (MSM), as natural approaches to the estimation of AB models. Such methods aim to estimate model parameters by minimizing the distance between aggregates of the model output and of the real data. Our approach differs from these methods, because it focuses on the last step of the procedure of Klügl (2008) (see Fig. 1). In other words, it is applied to already calibrated models, similarly to the method of Barde (2016). Thus, the estimation methods in the class of SMD can only be complementary to our method.
As we discussed in the future works section, in a future paper, we plan to couple an SMD method with our method to apply together on an AB model. Differently from the previously discussed methods, Guerini and Moneta (2017) offer a method that aims to compare causal relationships in model-generated data and real-world data to validate AB models. The method hinges on estimating Structural Vector Autoregressive (SVAR) models through real and artificial time-series and comparing them to get a validation score. Our method does not rely on time-series and we compare relationships at meso-level, while the method of Guerini and Moneta (2017) focuses only on aggregate time-series. To conclude, as Fagiolo et al. (2019, p. 14) state in their critical review, "all these recently developed validation methods focus only on aggregate time-series, while most of AB models have been able to replicate both micro and macro stylized facts". Some of the discussed methods could be applied in principle at the micro-level, but there is no "proof-of-concept" yet. Besides, applications of such methods at the micro-level could lead to over-fitting if a model gets micro-data as input and its parameters are estimated to fit individual behavior one-to-one (e.g., fitting behavior of artificial agent to its real counterpart). Considering the increasing availability of micro-data, the number of AB models using micro-data as input increases (Macal and North 2014; Hamill and Gilbert 2016). Therefore, in this paper, we offer a meso-level validation method for the models drawing on micro-data. The method involves an unsupervised machine-learning algorithm along the lines suggested by Fagiolo et al. (2019) and Barde (2016). They represent contributions regarding machine-learning involvement on the side of estimation (van der Hoog 2019). However, such involvement is still lacking on the side of validation. Our method could expand the existing validation methods towards the direction of machine-learning and encourage future contributions. The further text is structured as follows: we discuss the overall concept of our method in the next section in detail. We also discuss for what kind of AB models the method could be applied and provide some example models from recent research in "An overview of AB models that might be validated with our method". The overall concept of the meso-level validation method This section introduces a meso-level empirical validation method for AB models drawing on micro-data first as a broad methodological choice, and then, we describe it in detail. In broad terms, we sharply distinguish the different phases and goals of the relationship between real (empirical) data and the model in the following way: the meso-level is exclusively used for validation, whereas the micro-level is used for input micro-data into the agents in terms of parameters (not of outcomes of their decision-making process, because this could lead to over-fitting) and the macro-level for calibration. By this distinction, we radically eliminate any source of overlap between what is given to the model as input, what is used for calibrating its overall results and macro–micro-parameters, and what is used for validation. More specifically, our method consists of sequential steps for which we created an overall concept as in Fig. 2. We explain each step one after another, according to their sequence in the concept. 
The main goal of the concept is to compare model-generated data and real data at the meso-level to understand how well the mDGP can produce the behavioral patterns that occur in real data. It produces a quantitative score on a spectrum according to which we can assess validity. Fig. 2 The overall concept of the proposed empirical validation method The overall concept takes two data sets as input. The first data set contains information regarding the ex-ante characteristics of artificial individuals (i.e., agents) and their ex-post behavior, generated by the mDGP. The second data set involves information regarding the ex-ante characteristics of real individuals and their ex-post behavior, generated by the rwDGP. Both data sets contain information at the individual level, since the mDGP of AB models produces data at the individual level (i.e., micro-level). Individuals are clustered according to their characteristics and behavior in the data sets, and these clusters are compared quantitatively at the meso-level. An essential point for the comparison is that the real data should be the same data used to initialize the model. In this case, individuals in the real data are mapped to artificial agents one-to-one; thus, the number of real and artificial individuals becomes equal, which is a prerequisite for applying the validation method. The data sets can differ in what, model-wise, is the ex-post behavior, because an artificial agent might behave differently from a real individual with the same characteristics. The variables constituting the ex-ante characteristics should ideally be the ones influencing the ex-post behavior. In this way, the clusters involve a combination of the variables describing individuals' characteristics and their consequent behavior. Hence, by comparing clusters, we can study the behavioral patterns (e.g., the relationship between the characteristics and the behavior) in model-generated data and real data. Instead of clustering the artificial and real data sets separately, we merge them, as indicated in Fig. 2, and cluster them together to analyze the balance in the clusters (i.e., how many real and how many artificial individuals are in each cluster). Individuals in the merged data are placed in a multidimensional latent space based on their attributes (i.e., ex-ante characteristics and ex-post behavior). The latent space is represented by a symmetric distance matrix.Footnote 2 Several metrics exist to create that matrix, such as Euclidean, Manhattan, Gower, etc. (Bektas and Schumann 2019a). In the overall concept, we utilize the Gower distance metric, since it can handle different column typesFootnote 3 (e.g., categorical, numerical, ordinal, etc.) to place instances in the latent space (Gower 1971). For instance, the merged data might contain attributes of households that are categorical, such as income level, or numerical, such as age. The Gower distance can determine the positions of the individuals in the latent space based on these columns without any transformation, while other metrics such as the Euclidean distance accept only numerical ones (Bektas and Schumann 2019a). As for the clustering algorithm, we utilized the k-medoids clustering algorithm, since it is compatible with the latent space created by the Gower distance metric (Bektas and Schumann 2019a); a minimal sketch of these two steps is given below. However, k-medoids is an unsupervised algorithm; thus, we need to find ex-ante the optimal number of clusters.
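To make these two steps concrete, the following is a minimal, illustrative Python sketch of (i) building a Gower distance matrix for mixed-type attributes and (ii) partitioning instances with a simple k-medoids routine. It is not the code used in this study: the column names, the toy data and the naive Voronoi-style k-medoids update are assumptions made purely for illustration, and a production analysis could rely on a dedicated Gower or PAM implementation instead.

```python
import numpy as np
import pandas as pd

def gower_matrix(df, numeric_cols, categorical_cols):
    """Pairwise Gower distances for a mix of numeric and categorical columns.

    Numeric columns contribute range-normalized absolute differences,
    categorical columns contribute simple mismatch (0/1); the result is the
    average contribution over all columns (Gower 1971)."""
    n = len(df)
    dist = np.zeros((n, n))
    for col in numeric_cols:
        x = df[col].to_numpy(dtype=float)
        col_range = x.max() - x.min()
        if col_range > 0:                      # a constant column contributes 0
            dist += np.abs(x[:, None] - x[None, :]) / col_range
    for col in categorical_cols:
        x = df[col].to_numpy()
        dist += (x[:, None] != x[None, :]).astype(float)
    return dist / (len(numeric_cols) + len(categorical_cols))

def k_medoids(dist, k, n_iter=100, seed=0):
    """Naive k-medoids (Voronoi iteration) on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)        # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):                                   # recompute each medoid
            members = np.flatnonzero(labels == j)
            if members.size:
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(dist[:, medoids], axis=1), medoids

# Toy merged data set with hypothetical mixed-type attributes
df = pd.DataFrame({
    "daily_distance_km": [5.0, 7.5, 42.0, 38.0, 3.0, 55.0],
    "income_level":      ["low", "low", "high", "high", "low", "medium"],
    "mode_choice":       ["soft", "soft", "car", "car", "public", "car"],
})
D = gower_matrix(df, ["daily_distance_km"], ["income_level", "mode_choice"])
labels, medoids = k_medoids(D, k=2)
print(labels)
```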
The literature offers goodness-of-fit metrics [e.g., the Average Silhouette Width (ASW), the Calinski and Harabasz index (CH) and the Pearson version of Hubert's \(\Gamma \) (PH) (Campello and Hruschka 2006)] that provide quantitative measurement scores regarding the quality of a clustering for different numbers of clusters. The ASW is one of the most widely used approaches; it measures how well an instance is matched with its own cluster (Maulik and Bandyopadhyay 2002; Bektas and Schumann 2019a). As a goodness-of-fit measure, it reflects how well intra-cluster homogeneity and inter-cluster dissimilarity are maximized (Rousseeuw 1987). The idea for pre-specifying the optimal number of clusters is to try different k values in an interval and appoint the one with the highest ASW value as the optimal number of clusters. For each k, the ASW value of the clustering is calculated according to Eq. (3), which gives the Silhouette value of instance i:
$$\text {Sil}_i = \frac{b_i - a_i}{\max \{a_i,b_i\}}. \tag{3}$$
The quantity \(a_i\) represents the average dissimilarity of i to all other objects in its own cluster (the smaller the value, the better the assignment). The quantity \(b_i\) reflects the smallest average dissimilarity of instance i to the objects of any other cluster (i.e., the dissimilarity to the closest cluster other than its own). Equation (3) returns values between \(-1\) and 1. Values close to 1 indicate that instance i is assigned to the proper cluster. The average of the Sil values over all instances (the ASW) gives an idea of the quality of the clustering (Rousseeuw 1987). After the instances are placed in the latent space and the optimal number of clusters is found, the k-medoids algorithm (see Algorithm 1) partitions the instances into k (the optimal number of) clusters. To understand how well clusters from real and artificial data overlap, we compare the quantities of artificial and real individuals in the clusters according to the indicator (4). In the formulation of the indicator (4), R represents the number of real instances, A represents the number of artificial instances, and N is the optimal number of clusters. The indicator finds the dissimilarity in the balance of artificial and real instances for each cluster. Finally, it returns a normalized score in a spectrum between zero and one. The indicator uses the L1 norm (i.e., least absolute deviation), similarly to the Manhattan distance, since it gives equal importance to all clusters, which might have different dissimilarities (i.e., balance differences).Footnote 4 Besides, the L1 form is preferable for high-dimensional data applications (Aggarwal et al. 2001):
$$\frac{\sum _{k=1}^{N}\frac{\left| R_k - A_k \right| }{R_k + A_k}}{N}. \tag{4}$$
If an artificial agent behaves observationally equivalently to the real individual with whom it shares the same characteristics, they are placed at the same position in the latent space; thus, they are expected to be in the same cluster. If all artificial agents behave observationally equivalently to the real individuals with whom they share the same characteristics, it is expected that the clusters would contain fifty percent artificial and fifty percent real instances (as in the simple experiment in Online Appendix A). In this case, the indicator's outcome (4) becomes zero, which indicates a perfect match. In other words, a zero score demonstrates that the behavioral patterns in the real data perfectly overlap with the ones from the artificial data; a minimal computational sketch of the indicator is given below.
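As a complement to the formula, here is a minimal sketch (not the authors' code) of how the goodness-of-validation indicator in (4) can be computed from cluster labels and a flag marking which individuals are real; the toy labels below are invented purely to show the mechanics.

```python
import numpy as np

def goodness_of_validation(labels, is_real):
    """Indicator (4): average over clusters of |R_k - A_k| / (R_k + A_k).

    0 means every cluster is perfectly balanced between real and artificial
    individuals (strongest validation); 1 means every cluster contains only
    one kind of individual (weakest validation)."""
    labels = np.asarray(labels)
    is_real = np.asarray(is_real, dtype=bool)
    terms = []
    for k in np.unique(labels):
        members = labels == k
        r = int(np.sum(members & is_real))       # real instances in cluster k
        a = int(np.sum(members & ~is_real))      # artificial instances in cluster k
        terms.append(abs(r - a) / (r + a))
    return float(np.mean(terms))

# Toy example: 12 individuals, 3 clusters; True = real, False = artificial
labels  = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
is_real = [True, True, False, False,
           True, False, True, False,
           True, True, True, False]
print(goodness_of_validation(labels, is_real))   # two balanced clusters, one unbalanced
```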
Conversely, if an artificial agent produces different ex-post behavior from his real counterpart, they are placed in different positions in the latent space. Thus, they are expected to be in different clusters. That leads to unbalanced clusters and, consequently, a weak validation score according to the indicator in (4). The overall concept is completed with the determination of the place of the score in the distribution of all possible scores it could theoretically take, which allows us to interpret it. To determine a meaningful threshold, we obtain all possible scores it can have and their frequency in the exhaustive list of all possible cases, which is the state space. The state space contains all possible alternative ways in which a total can be distributed,Footnote 5 with Page (2012) demonstrating a Java algorithm to obtain them under a broad variety of restrictions. In the case at hand, we study the scores that the indicator (4) generates in all possible subdivisions of the total number of artificial agents and of the total number of real individuals in the clusters. Accordingly, we obtain the distribution of possible scores, which allows us to judge the specific score—that a model achieves in the previous steps—relative to all other possible scores. Overall, this procedure builds on the idea that a validated model should produce results "indistinguishable" from real data. Going beyond the inter-personal qualitative procedure proposed in Piana (2013), we deliver a method with a quantitative indicator of "goodness-of-validation," taking values from zero to one. The method can be used for AB models that take micro-data as input and produce results accordingly. We discuss such models and provide examples in "Discussion". The method provides these models with two advantages: avoiding the over-fitting of micro-level validation and offering a more detailed validation than the macro-level, as recommended in Fagiolo et al. (2019). In the next section, we apply the method to a specific model in the personal mobility domain, implemented on a certain simulation platform. This section consists of three subsections. In the first, we describe an AB simulation platform called Behavior Driven Demand Model (BedDeM) that we configured to build a specific model. The platform building process is discussed in Nguyen and Schumann (2019),Footnote 6 and a use case is addressed in Bektas et al. (2018) and Bektas and Schumann (2019b). In the second subsection, we discuss the model building process of configuring the generic platform with empirical (real) data. The proposed validation method is applied to the built model, and the results are discussed in the next section. In the third subsection, we discuss the specific variables that constitute individuals' ex-ante characteristics and ex-post behavior in the built model. These variables are used to place real and artificial individuals in a multidimensional latent space to compare meso-level patterns. The simulation platform The BedDeM platform has been developed as a generic tool that can be configured to address specific issues from different research domains (e.g., household consumption, mobility, tourism, etc.). It builds on key theoretical tenets of multi-agent cognitive systems, in which heterogeneous and autonomous agents are capable of making choices (decisions). It enables modeling the micro-behavior of each individual (agent) separately.
The core element of the BedDeM platform is an agent-based simulator, written in Java based on the RePast library (Nguyen and Schumann 2019), complemented by key concepts from Triandis' Theory of Interpersonal Behavior (TIB) (Triandis 1979), described in "Agent's decision-making mechanism". TIB explains the origin of individual behavior and is utilized as the decision-making framework (component) in the platform. Hence, agents make their choices (decisions) according to the determinants contained in the TIB. Overview of agent's design BedDeM consists of autonomous agents that have (not necessarily) heterogeneous characteristics and preferences. Agents are assigned tasks and are supposed to choose an option to perform their tasks according to the ex-ante defined behavioral rules (e.g., the decision-making mechanism). Tasks and options are specified according to the application domain. For instance, agents might choose a tourist place to visit or choose a mobility mode to perform their trips, depending on the configuration. Fig. 3 Overview of agent's design (Nguyen and Schumann 2019) When an agent performs a task, he first collects information. The perception module (see Fig. 3) gets information about the present state of the environment and combines it with other agents' opinions. It then brings the information to the decision-making module for reasoning. The obtained information is combined with heterogeneous preferences and also with past decisions stored in memory. As agents maintain their local state (the individual-level memory, see Fig. 3), decision-making becomes time-inseparable. In the end, the agent lists all available options to perform the task and chooses the most preferable one according to his individual reasoning. After the choice, he informs the other agents about it, and this information can be used in the others' decision-making modules. Agent's decision-making mechanism Triandis' theory of interpersonal behavior (TIB) Since micro-behavior is the main output of the mDGP, it should be well specified to obtain emergent properties consistent with the original system. While the BedDeM platform was being constructed, the origin of individual behavior was addressed first. The idea was to obtain an established theory that depicts the origin of individual behavior and to use this theory as the agents' decision-making mechanism. In cognitive science, such theories exist, e.g., Ajzen's theory of planned behavior (TPB) (Ajzen et al. 1991) and Ajzen and Fishbein's theory of reasoned action (TRA) (Chang 1998). These theories state that an individual's intention to act is the key determinant of behavior (Bektas et al. 2018). Several AB models and platforms have attempted to incorporate these theories (Nguyen and Schumann 2019). Fig. 4 Triandis' TIB model (Triandis 1979) Triandis extended these theories in his TIB model (see Fig. 4). He added two new components: habits and facilitating conditions. According to TIB, the frequency of past behavior forms a habit that partly impacts current behavior. Hence, the current behavior is determined by the current status of the environment (e.g., economic parameters) and the previous decisions held in the individual memory. The theory, as well as other empirical research, states that intention is moderated by habit, which leads to non-deliberate decision-making (Verplanken et al. 1994; Bamberg et al. 2003).
As Nguyen and Schumann (2019) state, TIB includes all aspects of TRA and TPB, as well as additional components such as habits that potentially improve its predictive power and descriptive accuracy. Although there is no proof of which theory is better suited to building an AB platform, TIB was chosen for the BedDeM platform, since it provides a more comprehensive understanding of the origins of individual behavior. Implementation of TIB as agent decision-making mechanism The full implementation of the TIB model as an agent decision-making mechanism is illustrated in Fig. 5. When a task is assigned to an agent, he first gets information from the environment to obtain the available options for performing the task. Then, for each determinant (d) (i.e., box in Fig. 5) in the first layer, the agent sorts the available options (opt) into a list according to their scores (see Eq. (5)). The score \(R_{d}(\text {opt})\) is calculated by comparing the relevant property of an option with those of the other options. To calculate the scores in the first level, either a real numerical system (for quantitative determinants such as price) or a ranking function (for determinants such as emotions) is utilized. Both numerical values and rankings can come from empirical data or be calibrated through experts' assessment (Nguyen and Schumann 2019):
$$R_d(\text {opt}) = \sum \limits _{c=1}^{C} \left( \frac{R_c(\text {opt})}{\sum \limits _{o=1}^{O} R_c(o)} \cdot w_c \right) \tag{5}$$
where \(R_d(\text {opt})\) is the score of an option (opt) at determinant d, C is the set of the children of d (i.e., the determinants connected with d in the previous level), O is the set of all available options, and \(w_c\) is the weight of child determinant c. Fig. 5 Agent's decision-making mechanism with mapping to TIB (Bektas et al. 2018) Once all options are ranked in lists according to each determinant in the first layer, the lists are merged and normalized with associated weights (\(w_d\)) to pass to the next layer (see Eq. (5)). The score of each option according to each determinant is multiplied by the associated weight, which becomes the new score of the option. The weights in decision-making represent the importance of a determinant. For instance, if time-separable decision-making is desired, the weight of habit can be set to zero, which means that the memory (i.e., past decisions) does not impact the current behavior. Once all decision-making steps are merged, the agent ends up with a list of options sorted according to their scores. Depending on the configuration, he can deterministically choose the best option in the list, or probabilities can be derived from the scores so that he chooses an option stochastically (a toy numerical sketch of this scoring rule is given below). More detailed information regarding the platform and its decision-making mechanism can be found in Nguyen and Schumann (2019). We are currently applying the platform in the mobility domain. Through configuration, the BedDeM platform becomes an AB mobility mode-choice model, which aims to generate heterogeneous mobility demand at the household level. The model allows for mode-choices for mobility trips based on price and non-price signals through its decision-making mechanism.
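The option-scoring rule in Eq. (5) can be illustrated with a small, self-contained sketch. This is not the BedDeM Java implementation: the determinants ("price", "comfort"), their weights and the first-level scores below are hypothetical values chosen only to show how the normalized, weighted child scores are aggregated into a ranking of options.

```python
def determinant_score(option, options, child_scores, child_weights):
    """Eq. (5): R_d(opt) = sum over children c of ( R_c(opt) / sum_o R_c(o) ) * w_c."""
    total = 0.0
    for c, w_c in child_weights.items():
        denom = sum(child_scores[c][o] for o in options)   # normalization over all options
        total += (child_scores[c][option] / denom) * w_c
    return total

options = ["car", "public_transport"]

# First-level scores of each option under two child determinants (hypothetical values);
# in practice these would come from empirical data or expert calibration.
child_scores = {
    "price":   {"car": 0.4, "public_transport": 0.6},
    "comfort": {"car": 0.8, "public_transport": 0.5},
}
child_weights = {"price": 0.7, "comfort": 0.3}   # importance of each child determinant

scores = {o: determinant_score(o, options, child_scores, child_weights) for o in options}
ranking = sorted(options, key=scores.get, reverse=True)
print(scores)
print(ranking)   # a deterministic agent would choose the first entry
```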
The model can generate yearly data that can be interpreted at the granularity of the historical evolution of mobility, which largely hinges on aggregate kilometers traveled and emissions produced per mode, including possible decarbonization trajectories (Bektas et al. 2018). Input of the model We utilize the "Mobility and Transport Micro-census (MTMC)" (ARE/BfS 2017) data of the Swiss statistical office to build the model. The data are at the micro-level and can be easily mapped to the agents. The data contain information regarding Swiss households' socio-economic characteristics (e.g., location, income level, car/travelcard ownership, etc.) and daily mobility activities. We map the real respondents one-to-one to agents; thus, each agent represents a real Swiss household and carries all of its characteristics, including mobility activities. Besides, all the exogenous variables that are used to shape the environment, such as fuel prices, reflect the Swiss system. Output of the model Each agent in the model is assigned a task list (i.e., the trips of the real household) to perform. Agents evaluate the existing options (e.g., car, public transportation, soft mobility, etc.) according to the decision-making mechanism introduced in the previous section and choose a mode for each of their trips. The model simulates each agent's micro-behavior separately and generates micro-level heterogeneous mobility mode-choices as the core output. The output can be aggregated to obtain macro-patterns (i.e., the modal split), to which the model has already been calibrated (Nguyen and Schumann 2019), including with data from the Swiss Household Energy Demand Survey (Weber et al. 2017). Each agent has a weight-to-universe value, which is used as an upscaling factor to obtain macro-patterns. Through the aggregation, various sorts of outputs can be derived, e.g., total emissions and kilometers traveled per mode; thus, the model can be used, for instance, to test climate change policies in silico. Variable selection for the case study To apply the validation method, the variables identifying the ex-post behavior and the ex-ante characteristics of individuals should be chosen and given to the validation method as input (see Fig. 2). As ex-post behavior, mode-choice should be among the chosen variables (the last variable in the list below). As for the variables constituting the ex-ante characteristics, we identified the ones influencing mode-choice in our previous research (Bektas and Schumann 2019a) and use them in this case study (the first four variables in the list below). The full list of the used variables is:
Number of cars in the household
Number of daily trips
Having a half-fare travelcard
Daily distance
Mode-choice.
We apply the overall concept step-by-step and discuss the results of each step sequentially. We commenced with a merged data set containing 3000 artificial and 3000 real (MTMC) individuals described by the chosen variables. Before clustering the individuals, we determined the optimal number of clusters. We utilized the Average Silhouette Width (ASW) score, which provides a statistical basis for determining the optimal number of clusters. As illustrated in Fig. 6, we clustered the individuals in the merged data set into different numbers of clusters within the interval [2, 15] and, for each, calculated the ASW score. The results show that we get the highest cluster quality when we cluster the individuals into six clusters (a schematic version of this selection loop is sketched below).
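The selection of the number of clusters can be sketched as a simple loop over candidate k values, computing the ASW for each clustering of a precomputed distance matrix and keeping the k with the highest value. The sketch below is illustrative only: a random numeric toy data set stands in for the actual Gower matrix of the 6000 individuals, a naive k-medoids routine like the one sketched earlier is redefined for self-containment, and scikit-learn is assumed to be available for the silhouette computation.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def k_medoids(dist, k, n_iter=100, seed=0):
    """Naive k-medoids (Voronoi iteration) on a precomputed distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(dist[:, medoids], axis=1)

# Toy symmetric distance matrix standing in for the Gower matrix of the merged data set
rng = np.random.default_rng(42)
points = rng.normal(size=(120, 4))
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

asw_per_k = {}
for k in range(2, 16):                 # candidate cluster numbers, as in the interval [2, 15]
    labels = k_medoids(dist, k)
    asw_per_k[k] = silhouette_score(dist, labels, metric="precomputed")

best_k = max(asw_per_k, key=asw_per_k.get)   # k with the highest average silhouette width
print(best_k, round(asw_per_k[best_k], 3))
```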
In other words, we obtain the best intra-cluster homogeneity and inter-cluster heterogeneity by dividing the individuals into six clusters. We used this score as a statistical grounding and proceeded with the obtained optimal number of clusters. Fig. 6 ASW values with different cluster numbers We placed the individuals into a multidimensional latent space based on the chosen variables, divided them into six clusters according to their positions in the latent space, and analyzed the composition of these clusters. For each cluster, we counted the artificial and real individuals in order to compute the indicator. The obtained quantities are shown in Table 1. Table 1 Quantity of artificial agents and real instances in the clusters and corresponding scores After obtaining the quantities of artificial and real instances in the clusters, we applied the indicator (4), as the overall concept indicates, and obtained the value 0.2750. By construction, it is between zero and one; the lower, the stronger the validation. But how should this specific value be judged in general (e.g., independently of the number of clusters)? As anticipated in "Methods", we iterate the computation of the indicator (4) over all possible cases (i.e., balance combinations), which is the Cartesian product of two identical state spaces. An example of such a case is the situation where the 3000 real agents are all in one cluster. That can be matched by the situation in which all 3000 artificial agents are in the same cluster (good) or in another cluster (bad). Alternatively, 2750 artificial agents are in that cluster, or in another, and so on. There are many thousands of such cases, but Page (2012) provides a computational method to elicit all of them. It computes not only how many there are but also enumerates which ones they are. Mathematically speaking, it generates the weak compositions of 3000 into 6 parts. Since the full number is far too high to be computed in a reasonable time, we first quantize and then fit the results with a continuous function. We quantize the 3000 individuals into 20 groups of 150 units each (in a procedure similar to bootstrapping). We perform Page's algorithm on what Piana et al. (2020) would call shapes (20, 6): a state space listing the ways in which 20 units (in our case, groups of agents) can be separated into six classes (in our case, clusters). The code to compute this state space is distributed as complementary material to Piana et al. (2020), drawing on Page (2012), McGhee (2008) and McGhee (2006). Fig. 7 Density distribution of all possible validation scores For each of its rows, the outcome of the indicator can be computed. With 20 groups (i.e., a quantum size of 150 for the 3000 individuals) and 6 clusters, we obtained two state spaces (for artificial and for real individuals), each with around 50,000 combinations. We applied the indicator to all possible combinations of artificial and real individual distributions over the clusters. Then, we obtained the density distribution of all possible scores (i.e., outcomes of the indicator), which is illustrated in Fig. 7. Thus, we defined the space within which our model's validation score lies, which enables us to judge the score. Having computed all possible scores, we could easily judge the specific score that the model achieves: we counted the number of cases with a score equal to or lower than 0.2750 and divided it by the total number of cases to obtain a percentage. By this means, we calculate the area under the curve (integral) in Fig. 7.
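A schematic version of this computation is sketched below. It is not the Java code of Page (2012): it enumerates the weak compositions of 20 groups into 6 clusters by stars and bars and then, instead of the exhaustive pairing used in the paper, approximates the distribution of indicator values by randomly sampling pairs of compositions (one composition for the real individuals, one for the artificial ones). Treating clusters that are empty on both sides as contributing 0 to the indicator is an assumption of this sketch.

```python
import itertools
import random
import numpy as np

def weak_compositions(total, parts):
    """All ordered ways to split `total` identical units into `parts` (possibly empty) groups."""
    for bars in itertools.combinations(range(total + parts - 1), parts - 1):
        comp, prev = [], -1
        for b in bars:
            comp.append(b - prev - 1)
            prev = b
        comp.append(total + parts - 2 - prev)
        yield tuple(comp)

def indicator(real_counts, artificial_counts):
    """Indicator (4) on per-cluster counts; clusters empty on both sides contribute 0."""
    r = np.asarray(real_counts, dtype=float)
    a = np.asarray(artificial_counts, dtype=float)
    terms = np.zeros_like(r)
    nonempty = (r + a) > 0
    terms[nonempty] = np.abs(r - a)[nonempty] / (r + a)[nonempty]
    return terms.mean()

GROUPS, CLUSTERS = 20, 6                     # 3000 individuals quantized into 20 groups of 150
state_space = list(weak_compositions(GROUPS, CLUSTERS))
print(len(state_space))                      # 53,130 compositions per side

rng = random.Random(0)
scores = [indicator(rng.choice(state_space), rng.choice(state_space))
          for _ in range(100_000)]           # Monte Carlo approximation of the score density

observed = 0.2750
share_at_least_as_good = sum(s <= observed for s in scores) / len(scores)
print(round(share_at_least_as_good, 3))      # fraction of cases scoring as well as the model
```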
The results show that approximately 4.2% of all possible cases would produce a score equal to or lower than the model's score of 0.2750. We interpret this as the model being validated at the conventional threshold of 5%. The findings show that the model built for the case study satisfactorily represents the original system at the meso-level for the given variables. The artificial agents in the model behave observationally similarly to the real individuals who have the same ex-ante given characteristics. In other words, the mDGP mimics the rwDGP by producing observationally similar data (behavior) from the given input data. To judge the validation score that we obtained, we created the density distribution of all possible scores, as explained in "The overall concept of the meso-level validation method". We simplified the computation by setting the quantum size to 150 to reduce the high computational time. We tested whether the quantum size is relevant for the density distribution by applying another quantum size (300) (see Fig. 8 in Online Appendix C). We compared the functional forms of the density distributions to see whether different quantum sizes lead to different curves. It was observed that both the 150 and 300 quantum sizes produce almost identical curves. We report this to demonstrate that the result is robust to changes in this simplifying assumption. Additionally, in Online Appendix B, we provide two state spaces without quantization for cases with a lower number of agents. Discussion of the potential application of the meso-level validation approach to further models In this subsection, we discuss certain insights that we gained during the implementation of the case study and that may be useful to other researchers who want to empirically validate an AB model drawing on micro-data. Many AB models do not draw upon real data, and for that group, the method cannot be applied. However, if the modeler's golden rule laid down in Piana (2004) is followed, and agents are given rules that can be directly embedded in questionnaires to real people, then by actually carrying out such surveys, the modeler can have at her/his disposal micro-data with which to initialize the artificial agents. Indeed, this is often the case: AB models are frequently built and initialized with micro-data, since they aim to model the heterogeneous behavior of each individual separately and in a highly realistic way (Dawid et al. 2012). Such micro-data contain information at the individual level and can be mapped to artificial agents. Thus, artificial agents get their ex-ante characteristics from real individuals, and they are supposed to generate ex-post behavior according to the behavioral rules of the mDGP. As long as the ex-post behavior of real individuals is known, the proposed empirical validation method can be applied to any AB model drawing on micro-data for which a meso-level can be computed, at which agents can be clustered and compared. For instance, in the model that we built for the case study, agents represent households and the model produces mobility mode-choice behavior. Since we also know the behavior of the real households with the same characteristics, we could apply the method in a rather straightforward way. In another AB model, agents might represent real firms, and the micro-data containing information regarding real firms may come from accounting systems and declarations to the statistical offices.
Real and artificial firms' behavior is clustered with their characteristics as the proposed method suggests and can be compared at the meso-level. Conversely, if in a macroeconomic AB model, there is a wide range of types of agents (firms, households, financial institutions, public institutions, etc.), our procedure might become too cumbersome if applied to all such types. In other words, the method is not dependent on the domain (scope), but its applicability is restricted to AB models for which a meso-level can be computed from available micro-data, possibly of only one type. In the procedure of clustering, one needs to select the variables upon which clustering occurs and determine the optimal number of clusters. After that, the application of the indicator in (4) can be carried out. The variables should be available for both the artificial and the real agents; they should be relevant for the main behavior that the model is called to describe. In our use case, we used the variables that a previous analysis demonstrated having a large impact on the behavior. However, if one cannot proceed with such an analysis, one might take the neutral stance of taking all common variables across real and artificial agents. The optimal number of clusters can be obtained as we did (by taking the number of clusters for which ASW is maximal), but any method that would single out a non-arbitrary number of clusters might be used, if appropriate. Finally, one needs to compute the probability of the goodness-of-validation to be higher than a certain threshold, much alike the p value. This probability is to be computed using the procedure indicated before.Footnote 7 An overview of AB models that might be validated with our method Keeping into account its general requirements, our methodology can be applied to many AB models such as the ones introduced in Axtell et al. (2014), Nelson et al. (2015), de Koning and Filatova (2020), and Klein et al. (2020). One should not expect that the authors did utilize our novel methodology to validate their models, and thus, their current validation method is inevitably different from those we are proposing. However, the description of the data they utilized for their AB model suggests applicability. Moreover, in their text, they commit to a certain vision we share: "we seek two classes of data to feed analysis and modeling: micro-data and event data. This fine resolution is necessary if we assume heterogeneous decision-making, a hallmark of agent-based modeling. Aggregate statistics are insufficient. We need to have realistic household socio-demographic variables and resource endowments." (Nelson et al. 2015, p. 1). "We live in the era of 1-to-1 computational instantiations of many complex systems, and agent-based computing is a way for economics to join this zeitgeist of digital synthesis" (Axtell et al. 2014, p. 3). Axtell et al. (2014) provides the methodological explanation of an AB model of a metropolitan housing market, which has been extended to the national level by Geanakoplos et al. (2012), which in turn has been considered the best model to cover such issue by Carstensen (2015). Klein et al. (2020) describe an AB model of the diffusion of electric vehicles. The initialization of its agents comes from micro-data collected by their original experiment (a conjoint-analysis) conducted with 552 people, representing the German population. 
"Parametrization and initialization of the characteristics and behavior of consumer agents was done using empirical data from our own study. Using these data, each consumer agent of the ABS was then initialized based on the corresponding characteristics of one real participant from our empirical study. Note, we also simulated larger populations in our sensitivity analysis. However, owing to relatively stable results, we decided to use 552 exactly matching consumers, which significantly reduced the time of each simulation run. Additionally, this allowed the direct initialization of each consumer agent using the responses of exact one consumer from our empirical study" (Klein et al. 2020, p. 12). de Koning and Filatova (2020) do not only describe an AB model to explore how urban housing markets evolve in the presence of climate-driven floods and behavioral biases on the agent level for which an ad-hoc survey of 600 respondents has been utilized to initialize the agents, but it explicitly calls for multi-scale validation. It falls short of singling out the meso-level as particularly appropriate for validation, which is our novel claim. Moreover, in recognizing that "there is no definite answer as to how much empirical validation is enough in order to make a model useful for its purpose" (p. 139), it implicitly valorizes our attempt to provide a metrics and a quantitative test with a threshold that can give a satisfactory interruption of a potentially never-ending cycle of reparametrizations ("Validation can be a continuous iterative process", as this paper puts it at p. 139). Indeed, it is important to remark that after calibration and validation, and finally, our models need to produce results. For instance, after validating the model that we built by configuring the BedDeM platform, we have been generating 320 alternative scenarios of mobility evolution 2015–2050 for Switzerland (currently delivered in an internal document for the funding agency). The present work proposes an unsupervised machine learning algorithm—cluster analysis—as a meso-level empirical validation method for AB models drawing on micro-data. The model aims to cluster the ex-post behavior of real and artificial individuals with the same ex-ante given characteristics. It produces a validation score in [0, 1] by comparing the similarity among clusters. The clusters do not only contain the ex-post behavior of real and artificial individuals but also their ex-ante given characteristics that influence the behavior. Hence, comparing clusters enables us to compare behavioral patterns in model-generated data and real data. To provide an instance of application of the method, we referred to an AB model that aims to model heterogeneous mobility mode-choice behavior. The specific model obtained a satisfactory validation score that shows that, in this case, the mDGP can mimic the rwDGP successfully for the given variables. More, in general, the proposed empirical validation method has certain advantages. First, it fully leverages the specificity of agent-based models covering highly heterogeneous agents and their potential multi-level aggregation. An agent-based model can be initialized at the micro-level, be calibrated at the macro-level, and be validated at the meso-level with the same data set and for the same time frame. 
A procedure that is often used in time-series to calibrate the model for a first segment of time periods and then validate in out-of-sample successive time suffers from the necessity of assuming that there are no structural breaks over time. This assumption may not be particularly suitable for models looking for emerging properties, high non-linearities, and, indeed, structural breaks. The second advantage is that with the meso-level validation, we can compare the behavioral patterns that the mDGP and the rwDGP generate, respectively. It is not easy with macro-level validation, because it compares only the aggregates. Therefore, the relationship between the ex-ante given characteristics and ex-post generated behavior cannot be easily compared. In short, we offer to the community of researchers devising and using agent-based models a method to empirically validate them, which is a crucial intermediate step in the overall useful application of this highly promising approach. We envisage different dimensions in the frame of future works to take the present work forward. First, as discussed in the related work section, the simulated minimum distance (SMD) methods, including the method of simulated moments (MSM), can be complementary to our method. A model's parameters can be estimated by an SMD method at the macro-level and its output can be validated by our method at the meso-level. We plan to research about the coupling of an SMD method and our method to use them together on the same model. Second, as our method aims to validate AB models drawing on micro-data, we aim to introduce a new technique generating synthetic micro-data from macro-aggregates for the modelers having limited access to micro-data. Third, we aim to assess the impact of the number of agents and the optimal number of clusters on the method's results in detail. We plan to apply the method on AB models having micro-data from different original systems and domains. Finally, we aim to explore the situations in which the output of an AB model is observationally similar with the real data at the macro-level but not at the meso-level. The supporting information regarding the data source can be found in Bektas and Schumann (2019a) and ARE/BfS (2017). Code availability The source code of the agent-based simulation platform is available online at BedDeM (2020). The Java code that is used to create state spaces (morphospaces) and the density distribution can be found in Page (2012). A Correction to this paper has been published: https://doi.org/10.1007/s43546-021-00089-y Unsupervised machine learning detects previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision. Cluster analysis is considered as one of the most common unsupervised machine-learning algorithms (Kassambara 2017). We provide to readers an example latent space, which we generated for the experiment in Online Appendix A. Following the dominant language conventions in the distance metrics and cluster domains, columns mean attributes throughout the article. Rows are cases (agents) which are clusterised. The L2 norm puts more emphasis on the clusters with large balance discrepancies. We provided a numerical example in Online Appendix B regarding how the total can be distributed among clusters. We provide also example state spaces for different clustering configurations. The source code of the platform is available online at BedDeM (2020). One can download the state space from App. 1 of Piana et al. 
(2020) or compute it by executing the Java code, downloadable from Online Appendix B of the same paper. Aggarwal CC, Hinneburg A, Keim DA (2001) On the surprising behavior of distance metrics in high dimensional space. In: International conference on database theory. Springer, pp 420–434 Ajzen I et al (1991) The theory of planned behavior. Organ Behav Hum Decis Process 50(2):179–211 ARE/BfS (2017) Verkehrsverhalten der Bevölkerung Ergebnisse des Mikrozensus Mobilität und Verkehr 2015. Federal Office for Spatial Development and Swiss Federal Statistical Office Arthur WB (1994) Increasing returns and path dependence in the economy. University of Michigan Press Arthur WB (2006) Out-of-equilibrium economics and agent-based modeling. Handb Comput Econ 2:1551–1564 Axtell R, Farmer D, Geanakoplos J, Howitt P, Carrella E, Conlee B, Goldstein J, Hendrey M, Kalikman P, Masad D et al (2014) An agent-based model of the housing market bubble in metropolitan Washington, DC. In: Whitepaper for Deutsche Bundesbank's spring conference on "Housing markets and the macroeconomy: challenges for monetary policy and financial stability Balci O (1994) Validation, verification, and testing techniques throughout the life cycle of a simulation study. Ann Oper Res 53(1):121–173 Bamberg S, Ajzen I, Schmidt P (2003) Choice of travel mode in the theory of planned behavior: the roles of past behavior, habit, and reasoned action. Basic Appl Soc Psychol 25(3):175–187 Barde S (2016) Direct comparison of agent-based models of herding in financial markets. J Econ Dyn Control 73:329–353 Barde S (2020) Macroeconomic simulation comparison with a multivariate extension of the Markov information criterion. J Econ Dyn Control 111:103795 Barde S, Van Der Hoog S (2017) An empirical validation protocol for large-scale agent-based models. Bielefeld Working Papers in Economics and Management BedDeM (2020) Github—silab-group/beddem\_simulator. https://github.com/SiLab-group/beddem_simulator. Accessed 16 July 2020 Beisbart C, Saam NJ (2019) Computer simulation validation. Springer Bektas A, Schumann R (2019a) How to optimize Gower distance weights for the k-medoids clustering algorithm to obtain mobility profiles of the swiss population. In: 2019 6th Swiss conference on data science (SDS). IEEE, pp 51–56, ISBN:978-1-7281-3105-4 Bektas A, Schumann R (2019b) Using mobility profiles for synthetic population generation. In: Proceedings of the social simulation conference 2019, Mainz, Germany Bektas A, Nguyen K, Piana V, Schumann R (2018) People-centric policies for decarbonization: testing psycho-socio-economic approaches by an agent-based model of heterogeneous mobility demand. In: Computing in economics and finance (CEF) conference Bianchi C, Cirillo P, Gallegati M, Vagliasindi PA (2007) Validating and calibrating agent-based models: a case study. Comput Econ 30(3):245–264 Campello RJ, Hruschka ER (2006) A fuzzy extension of the silhouette width criterion for cluster analysis. Fuzzy Sets Syst 157(21):2858–2875 Carstensen CL (2015) An agent-based model of the housing market. Steps toward a computational tool for policy analysis. University of Copenhagen, MSc-szakdolgozat Chang MK (1998) Predicting unethical behavior: a comparison of the theory of reasoned action and the theory of planned behavior. J Bus Ethics 17(16):1825–1834 Colander D, Howitt P, Kirman A, Leijonhufvud A, Mehrling P (2008) Beyond DSGE models: toward an empirically based macroeconomics. 
Am Econ Rev 98(2):236–40 Dawid H, Gemkow S, Harting P, Van der Hoog S, Neugart M (2012) The eurace@ unibi model: an agent-based macroeconomic model for economic policy analysis. Bielefeld working papers in economics and management Dawid H, Gemkow S, Harting P, van der Hoog S, Neugart M (2014) Agent-based macroeconomic modeling and policy analysis: the eurace@ unibi model. Bielefeld Working Papers in Economics and Management Duffy J (2006) Chapter 19 agent-based models and human subject experiments. Volume 2 of handbook of computational economics Fagiolo G, Moneta A, Windrum P (2007) A critical guide to empirical validation of agent-based models in economics: methodologies, procedures, and open problems. Comput Econ 30(3):195–226 Fagiolo G, Guerini M, Lamperti F, Moneta A, Roventini A (2019) Validation of agent-based models in economics and finance. In: Computer simulation validation. Springer, pp 763–787 Farmer JD, Foley D (2009) The economy needs agent-based modelling. Nature 460(7256):685 Geanakoplos J, Axtell R, Farmer JD, Howitt P, Conlee B, Goldstein J, Hendrey M, Palmer NM, Yang CY (2012) Getting at systemic risk via an agent-based model of the housing market. Am Econ Rev 102(3):53–58 Gower JC (1971) A general coefficient of similarity and some of its properties. Biometrics 857–871 Grazzini J, Richiardi M (2015) Estimation of ergodic agent-based models by simulated minimum distance. J Econ Dyn Control 51:148–165 Grünwald PD, Grunwald A (2007) The minimum description length principle. MIT Press Guerini M, Moneta A (2017) A method for agent-based models validation. J Econ Dyn Control 82:125–141 Hamill L, Gilbert GN (2016) Agent-based modelling in economics. Wiley Online Library Heckbert S, Baynes T, Reeson A (2010) Agent-based modeling in ecological economics. Ann N Y Acad Sci 1185(1):39–53 van der Hoog S (2019) Surrogate modelling in (and of) agent-based models: a prospectus. Comput Econ 53(3):1245–1263 Kassambara A (2017) Practical guide to cluster analysis in R: unsupervised machine learning, vol 1. Sthda Klein M, Lüpke L, Günther M (2020) Home charging and electric vehicle diffusion: agent-based simulation using choice-based conjoint data. Transp Res Part D: Transp Environ 88:102475 Klügl F (2008) A validation methodology for agent-based simulations. In: Proceedings of the 2008 ACM symposium on applied computing. ACM, pp 39–43 de Koning K, Filatova T (2020) Multi-scale validation of an agent-based housing market model. In: Advances in social simulation. Springer, pp 135–140 Lamperti F (2018a) Empirical validation of simulated models through the GSL-div: an illustrative application. J Econ Interact Coord 13(1):143–171 Lamperti F (2018b) An information theoretic criterion for empirical validation of simulation models. Econom Stat 5:83–106 Macal C, North M (2014) Introductory tutorial: agent-based modeling and simulation. In: Proceedings of the winter simulation conference 2014. IEEE, pp 6–20 Maulik U, Bandyopadhyay S (2002) Performance evaluation of some clustering algorithms and validity indices. IEEE Trans Pattern Anal Mach Intell 24(12):1650–1654 McGhee G (2008) Convergent evolution: a periodic table of life, pp 17–31 McGhee GR (2006) The geometry of evolution: adaptive landscapes and theoretical morphospaces. Cambridge University Press Murray-Smith DJ (2015) Testing and validation of computer simulation models. Springer, Cham. https://doi.org/10:978-3 Nelson JB, Kennedy WG, Greenberg AM (2015) Agents and decision trees from microdata. 
Applied Mathematics and Numerical Analysis Seminar 25/01/2023, 16:00 — 17:00 — Room P3.10, Mathematics Building Victor Ortega, Departamento de Matemática Aplicada, Universidad de Granada, Spain and CEMAT, Faculdade de Ciências, Universidade de Lisboa, Portugal Some stability criteria in the periodic prey-predator Lotka-Volterra model In this talk, we present some stability results in a classical model concerning population dynamics, the nonautonomous prey-predator Lotka-Volterra model under the assumption that the coefficients are $T$-periodic functions \begin{equation}\label{sysLV} \left\lbrace \begin{array}{l} \dot{u}= u(a(t) - b(t)\,u - c(t)\,v), \\ \dot{v}= v(d(t) + e(t)\,u - f(t)\,v), \end{array} \right. \end{equation} where $u\gt 0$, $v\gt 0$. The variables $u$ and $v$ represent the population of a prey and its predator, respectively. Some instances of this kind of dynamics are: snowshoe hare and lynx canadensis, paramecium and didinium, fish population and fishermen, etc. The periodicity of this model takes into account changes of the environment in which the predation process takes place, for instance seasonality or variations of the temperature in laboratory conditions. In the system \eqref{sysLV} the coefficients $b(t)$, $c(t)$, $e(t)$ and $f(t)$ are positive. The coefficients $c(t)$ and $e(t)$ describe the interaction between $u$ and $v$; $a(t)$ and $b(t)$ describe the growth rate for the prey $u$; $d(t)$ and $f(t)$ represent the analogous for the predator $v$. Solutions for the system \eqref{sysLV} with both components positive are called coexistence states and the necessary and sufficient conditions for their existence are well understood, see [2]. After reviewing those conditions, we present some results concerning the stability of a special kind of coexistence state, positive $T$-periodic solutions. In [3] the author gave a sufficient condition for the uniqueness and asymptotic stability of the positive $T$-periodic solution. This criterion is formulated in terms of the $L^1$ norm of the coefficients of a planar linear system associated with \eqref{sysLV}. On the other hand, in [1], assuming that the system \eqref{sysLV} has no sub-harmonic solutions of second order (periodic solutions with minimal period $2T$), the authors proved that there exists at least one asymptotically stable $T$-periodic solution. Here the result is formulated in terms of the $L^\infty$ norm. Our result, in [4], gives an $L^p$ criterion, building a bridge between the two previous results. This is joint work with Carlota Rebelo (Departamento de Matemática and CEMAT, Faculdade de Ciências, Universidade de Lisboa, Portugal). Acknowledgements: This work was partially supported by the Spanish Ministerio de Universidades and Next Generation Funds of the European Union. [1] Z. Amine, R. Ortega, A periodic prey-predator system, Journal of Mathematical Analysis and Applications, 185(2): 477-489, 1994. [2] J. López-Gómez, R. Ortega and A. Tineo, The periodic predator-prey Lotka-Volterra model, Adv. Differential Equations, 1(3): 403-423, 1996. [3] R. Ortega, Variations on Lyapunov's stability criterion and periodic prey-predator systems, Electronic Research Archive, 29(6): 3995-4008, 2021. [4] V. Ortega, C. Rebelo, A $L^p$ stability criterion in the periodic prey-predator Lotka-Volterra model, In preparation, 2023.
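Purely as a numerical companion to this abstract (not part of the talk), the following is a minimal sketch that integrates a $T$-periodic prey-predator system of the form \eqref{sysLV}; the coefficient functions, initial data and period below are illustrative assumptions only.

# Minimal sketch: integrate a T-periodic prey-predator Lotka-Volterra system.
# The coefficient functions below are illustrative assumptions, not those of the talk.
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0  # period of the coefficients

def a(t): return 3.0 + 0.5 * np.sin(2 * np.pi * t / T)   # prey growth rate
def b(t): return 1.0                                      # prey self-limitation
def c(t): return 1.0 + 0.2 * np.cos(2 * np.pi * t / T)    # predation rate
def d(t): return -1.0                                     # predator decay rate
def e(t): return 1.0                                      # conversion rate
def f(t): return 0.5                                      # predator self-limitation

def rhs(t, z):
    u, v = z
    return [u * (a(t) - b(t) * u - c(t) * v),
            v * (d(t) + e(t) * u - f(t) * v)]

# Integrate over many periods and sample the state stroboscopically at times nT.
sol = solve_ivp(rhs, (0.0, 50 * T), [1.0, 1.0], dense_output=True, rtol=1e-9)
samples = sol.sol(np.arange(40, 51) * T)
print(samples.T)  # successive states at times nT

If a stable positive $T$-periodic coexistence state exists for the chosen coefficients, the stroboscopic samples printed at the end settle to a fixed point of the period map.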
Euripides J. Sellountos, CEMAT, Instituto Superior Técnico Boundary Element Methods in flow problems governed by Navier-Stokes equations In this presentation, recent advances of the Boundary Element Method (BEM) in Computational Fluid Dynamics (CFD) will be discussed. Unlike other methods, BEM is a multi-angle numerical technique that permits approaching a partial differential equation (PDE) in completely different ways. In the Navier-Stokes equations in particular, many different test functions can be used in the weak form, such as the Laplace, the Stokeslet, the convective parabolic-diffusion or other convective fundamental solutions, among others. Apart from that, it has recently been found that hypersingular BEM formulations of the Navier-Stokes equations have a broad area of applicability, as they provide the gradients of the field. These gradients can further be applied to numerous tasks, such as improvement of the system's condition number, enforcing continuity, and computation of wall quantities such as wall vorticities, strain and stress tensors, and pressure, among others. However, the derivation of such equations is not always simple since they are accompanied by extra terms, mainly in convection. Another important finding is that hypersingular equations permit the use of constant elements, simplifying immensely the preparation of the computational model. Another part of the talk will be dedicated to the transformation of the BEM system to a Finite Element (FEM) or Finite Volume (FVM) equivalent in terms of sparsity. A system produced by BEM with domain unknowns cannot be solved efficiently, but with proper transformations it can be changed to a sparse system, which can be solved remarkably faster. Other accelerating techniques like hypersingular BEM/Fast Multipole Method (FMM) and the meshless Local Boundary Integral Equation (LBIE) method will be discussed. Yassine Tahraoui, CMA-FCT, Universidade Nova de Lisboa On the optimal control and the stochastic perturbation of a third grade fluid Most studies on fluid dynamics have been devoted to Newtonian fluids, which are characterized by the classical Newton's law of viscosity. However, there exist many real fluids with nonlinear viscoelastic behavior that does not obey Newton's law of viscosity. My aim is to discuss two problems related to a class of non-Newtonian fluids of differential type: namely, the optimal control of incompressible third-grade fluids in 2D via Pontryagin's maximum principle, and the strong well-posedness, in the PDE and probabilistic senses, of 3D stochastic third-grade fluids in the presence of multiplicative noise driven by a Q-Wiener process. The talk is based on recent works with Fernanda Cipriano (CMA, Univ. NOVA de Lisboa). 27/07/2022, 16:00 — 17:00 — Mathematics Building Thomas Eiter, Weierstrass Institute for Applied Analysis and Stochastics, Berlin, Germany Resolvent estimates for the flow past a rotating body and existence of time-periodic solutions Anna Lancmanová, Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic, and CEMAT On the development of a numerical model for the simulation of air flow in the human airways The main motivation for this study is the air flow in the human respiratory system, although similar problems are also common in other areas of biomedical, environmental or industrial fluid mechanics. Detailed experimental studies of the respiratory system in humans and animals are very challenging and even impossible in many cases due to various medical, technical or ethical reasons.
This leads to the development of more and more realistic mathematical and numerical models of the flow in airways, including the complex geometry of the problem, but also various fluid- and bio-mechanical features. The main difficulties are not just in the geometrical complexity of the computational domain with several levels of branching, but also in the need to prescribe mathematically suitable, yet sufficiently realistic, boundary conditions for the computational model. This leads to a complex multiscale problem, whose solution requires a large amount of complicated and time-consuming numerical calculations. In this work we consider simplified simulations in a two-dimensional rigid channel coupled with a one-dimensional extended flow model derived from a 3D fluid-structure interaction (FSI) model under certain conditions. For this purpose we built a simple test code employing an immersed boundary method and a finite difference discretization. At this stage the air flow in human airways is considered incompressible, described by the Navier-Stokes equations. This simple code was developed with the aim of testing and improving boundary conditions using reduced order models. The incompressible model will later be replaced by a compressible one, to be able to evaluate the impact of intensive pressure changes in human airways while using realistic, patient-specific airway geometry. The main idea is to use different dimensional models, 3D(2D), 1D and 0D, with different levels of complexity and accuracy, and to couple them into a single working model. In the present talk, first results of the 2D-1D coupled toy model will be presented, focusing on the main features of the computational setup, coupling strategy and parameter sensitivity. In addition, a long-term outlook for the more complex 3D-1D(-0D) model will be discussed. Acknowledgment: Center for Computational and Stochastic Mathematics - CEMAT (UIDP/04621/2022 IST-ID). Thi Minh Thao Le, University of Tours, France Multiple Timescales in Microbial Interactions The purpose of this work is the theoretical and numerical study of an epidemiological model of multi-strain co-infection. Depending on the situation, the model is written as ordinary differential equations or reaction-advection-diffusion equations. In all cases, the model is written at the host population level on the basis of a classical susceptible-infected-susceptible (SIS) system. The infecting agent is structured into N strains, which differ according to five traits: transmissibility, clearance rate of single infections, clearance rate of double infections, probability of transmission of strains, and co-infection rates. The resulting system is a large system ($N^2 + N + 1$ equations) whose complete theoretical study is generally inaccessible. This work is therefore based on a simplifying assumption of trait similarity - the so-called quasi-neutrality assumption. In this framework, it is then possible to implement Tikhonov-type time scale separation methods. The system is thus decomposed into two simpler subsystems. The first one is a so-called neutral system - i.e., the values of the traits of all strains are equal - which supports a detailed mathematical analysis and whose dynamics turn out to be quite simple. The second one is a "replication equation" type system that describes the frequency dynamics of the strains and contains all the complexity of the interactions between strains induced by the small variations in the trait values.
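Purely as an illustration of the form referred to here (the notation is an assumption of this note, not taken from the talk), a generic replicator system driven by pairwise invasion fitnesses $\lambda_{ij}$, with the convention $\lambda_{ii}=0$, can be written as $$\dot z_i \;=\; z_i\Big(\sum_{j=1}^{N}\lambda_{ij}\,z_j \;-\; \sum_{k=1}^{N}\sum_{j=1}^{N} z_k\,\lambda_{kj}\,z_j\Big), \qquad i=1,\dots,N,$$ where $z_i$ denotes the frequency of strain $i$; only the $N(N-1)$ off-diagonal fitnesses enter, matching the count of pairwise interaction fitnesses discussed in this abstract.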
The first part explicitly determines the slow system in an aspatial (non-spatial) framework for N strains using a system of ordinary differential equations and justifies that this system describes the complete system well. This system is a replication system that can be described using the $N(N-1)$ fitnesses of interaction between the pairs of strains. It is shown that these fitnesses are a weighted average of the perturbations of each trait. The second part consists in using explicit expressions of these fitnesses to describe the dynamics of the pairs (i.e. the case $N = 2$) exhaustively. This part is illustrated with many simulations, and applications to vaccination are discussed. The last part consists in using this approach in a spatialized framework. The SIS model is then a reaction-diffusion system in which the coefficients are spatially heterogeneous. Two limiting cases are considered: the case of an asymptotically small diffusion coefficient and the case of an asymptotically large diffusion coefficient. In the case of slow diffusion, we show that the slow system is a system of "replication equation" type, describing again the temporal but also spatial evolution of the frequencies of the strains. This system is of the reaction-advection-diffusion type, the additional advection term explicitly involving the heterogeneity of the associated neutral system. In the case of fast diffusion, classical methods of aggregation of variables are used to reduce the spatialized SIS problem to a homogenized SIS system on which we can directly apply the previous results. Sílvia Barbeiro, CMUC, Department of Mathematics, University of Coimbra Learning stable nonlinear cross-diffusion models for image restoration Image restoration is one of the major concerns in image processing with many interesting applications. In recent decades there has been intensive research around the topic and hence new approaches are constantly emerging. Partial differential equation based models, namely of non-linear diffusion type, are well-known and widely used for image noise removal. In this seminar we will start with a concise introduction to diffusion and cross-diffusion models for image restoration. Then, we will discuss a flexible learning framework in order to optimize the parameters of the models, improving the quality of the denoising process. This is based on joint work with Diogo Lobo. (A schematic sketch of a classical nonlinear diffusion filter of this type is given after the following abstract.) Arnab Roy, Basque Center of Applied Mathematics, Bilbao, Spain Existence of strong solutions for a compressible viscous fluid and a wave equation interaction system In this talk, we consider a fluid-structure interaction system where the fluid is viscous and compressible and where the structure is a part of the boundary of the fluid domain and is deformable. The reference configuration for the fluid domain is a rectangular cuboid with the elastic structure being the top face. The fluid is governed by the barotropic compressible Navier–Stokes system, whereas the structure displacement is described by a wave equation. We show that the corresponding coupled system admits a unique, locally-in-time strong solution for an initial fluid density and an initial fluid velocity in $H^3$ and for an initial deformation and an initial deformation velocity in $H^4$ and $H^3$ respectively.
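As flagged in the Barbeiro abstract above, the following is a minimal sketch of a classical nonlinear (Perona-Malik type) diffusion filter; it is a generic textbook scheme given only to illustrate what a nonlinear diffusion denoiser looks like, not the cross-diffusion learning framework of the talk, and all parameter values are illustrative assumptions.

# Minimal Perona-Malik style nonlinear diffusion denoising sketch (explicit scheme).
# Generic textbook method for illustration only; parameters are arbitrary choices.
import numpy as np

def nonlinear_diffusion(img, n_steps=50, dt=0.15, kappa=0.3):
    u = img.astype(float).copy()
    for _ in range(n_steps):
        # neighbour differences (Neumann boundary via edge padding)
        p = np.pad(u, 1, mode="edge")
        dn = p[:-2, 1:-1] - u   # north
        ds = p[2:, 1:-1] - u    # south
        de = p[1:-1, 2:] - u    # east
        dw = p[1:-1, :-2] - u   # west
        # edge-stopping function g(s) = 1 / (1 + (s/kappa)^2): smooths flat regions, preserves edges
        g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# toy usage: denoise a noisy step image and compare mean squared errors
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = nonlinear_diffusion(noisy)
print(float(np.mean((noisy - clean) ** 2)), float(np.mean((denoised - clean) ** 2)))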
Pierre-Alexandre Bliman, INRIA, Sorbonne Université, Université Paris-Diderot SPC, CNRS, Laboratoire Jacques-Louis Lions, Paris, France Modelling, analysis, observability and identifiability of epidemic dynamics with reinfections In order to understand if counting the number of reinfections may provide supplementary information on the evolution of an epidemic, we consider in this paper a general SEIRS model describing the dynamics of an infectious disease including latency, waning immunity and infection-induced mortality. We derive an infinite system of differential equations that provides an image of the same infection process, but counting also the reinfections. Well-posedness is established in a suitable space of sequence-valued functions, and the asymptotic behavior of the solutions is characterized, according to the value of the basic reproduction number. This allows one to determine several mean numbers of reinfections related to the population at endemic equilibrium. We then show how jointly using measurements of the number of infected individuals and of the number of primo-infected individuals provides observability and identifiability for a simple SIS model, for which neither of these two measures is sufficient on its own to ensure the same properties. This is a joint work with Marcel Fang. More details may be found in the report https://arxiv.org/abs/2011.12202. 02/03/2022, 16:00 — 17:00 — Online Irene Marín Gayte, Instituto Superior Técnico, CEMAT Minimal time optimal control problems This talk is devoted to the theoretical and numerical analysis of some minimal time optimal control problems associated with linear and nonlinear differential equations. We start by studying simple cases concerning linear and nonlinear ODEs. Then, we deal with the heat equation. In all these situations, we analyze the existence of solutions, we deduce optimality results and we present several algorithms for the computation of optimal controls. Finally, we illustrate the results with several numerical experiments. Jesús Bellver Arnau, Laboratoire Jacques-Louis Lions and INRIA, Paris Dengue outbreak mitigation via instant releases In the fight against arboviruses, the endosymbiotic bacterium Wolbachia has become in recent years a promising tool as it has been shown to prevent the transmission of some of these viruses between mosquitoes and humans. This method offers an alternative strategy to the more traditional sterile insect technique, which aims at reducing or entirely suppressing the population instead of replacing it. In this presentation I will introduce an epidemiological model including mosquitoes and humans. I will discuss optimal ways to mitigate a Dengue outbreak using instant releases, comparing the use of mosquitoes carrying Wolbachia and that of sterile mosquitoes. This is a joint work with Luis Almeida (Laboratoire Jacques-Louis Lions), Yannick Privat (Université de Strasbourg) and Carlota Rebelo (Universidade de Lisboa). 27/10/2021, 16:00 — 17:00 — Room P3.10, Mathematics Building Online Pierre-Alexandre Bliman, INRIA and Laboratoire Jacques-Louis Lions, Paris Minimizing epidemic final size through social distancing How to apply partial or total containment measures during a given finite time interval, in order to minimize the final size of an epidemic - that is, the cumulative number of cases infected during its course? We provide here a complete answer to this question for the SIR epidemic model.
Existence and uniqueness of an optimal strategy are proved for the infinite-horizon problem corresponding to control on an interval $[0,T]$, $T\gt 0$ (1st problem), and then on any interval of length $T$ (2nd problem). For both problems, the best policy consists in applying the maximal allowed social distancing effort until the end of the interval $[0,T]$ (1st problem), or during a whole interval of length $T$ (2nd problem), starting at a date that is not systematically the closest date and that may be computed by a simple algorithm. These optimal interventions have to begin before the proportion of susceptible individuals crosses the herd immunity level, and lead to limit values of that proportion smaller than this threshold. More precisely, among all policies that stop at a given distance from the threshold, the optimal policies are the ones that realize this task with the minimal containment duration. Numerical results are presented that provide the best possible performance for a large set of basic reproduction numbers and lockdown durations and intensities. Details and proofs of the results are available in [BDPV,BD]. This is a joint work with Michel Duprez (Inria), Yannick Privat (Université de Strasbourg) and Nicolas Vauchelet (Université Sorbonne Paris Nord). [BDPV] Bliman, P.-A., Duprez, M., Privat, Y., and Vauchelet, N. (2020). Optimal immunity control by social distancing for the SIR epidemic model. Journal of Optimization Theory and Applications. https://link.springer.com/article/10.1007/s10957-021-01830-1 [BD] Bliman, P.-A., and Duprez, M. (2021). How best can finite-time social distancing reduce epidemic final size? Journal of Theoretical Biology 511, 110557. https://www.sciencedirect.com/science/article/pii/S0022519320304124 Paulo Amorim, Instituto de Matemática - Universidade Federal do Rio de Janeiro Predator-prey dynamics with hunger structure We present, analyse and simulate a model for predator-prey interaction with hunger structure. The model consists of a nonlocal transport equation for the predator, coupled to an ODE for the prey. We deduce a system of 3 ODEs for some integral quantities of the transport equation, which generalises some classical Lotka-Volterra systems. By taking an asymptotic regime of fast hunger variation, we find that this system provides new interpretations and derivations of several variations of the classical Lotka-Volterra system, including the Holling-type functional responses. We next establish a well-posedness result for the nonlocal transport equation by means of a fixed-point method. Finally, we show that in the basin of attraction of the nontrivial equilibrium, the asymptotic behaviour of the original coupled PDE-ODE system is completely described by solutions of the ODE system [SIAM J. Appl. Math., 80(6), 2631-2656 (2020)]. Henrique Oliveira, Instituto Superior Técnico, Department of Mathematics and CMAGSD Mathematical Models in Epidemiology. The COVID-19 case. In this talk we give an overview of the continuous and discrete mathematical models in use in mathematical epidemiology. We analyse the evolution of COVID-19 in Portugal. Carlota Rebelo, Departamento de Matemática FCUL and CEMAT, Lisboa, Portugal Some results on predator-prey and competitive population dynamics Mathematical analysis is a useful tool to give insights into very different mathematical biology problems. In this talk we will consider predator-prey and competition population dynamics models.
We will give an overview of recent results in the case of seasonally forced models, without entering into technical details. First of all we consider predator-prey models with or without Allee effect and prove results on extinction or persistence. We will give some examples such as models including competition among predators, prey-mesopredator-superpredator models and Leslie-Gower systems. When the Allee effect is considered, we deal with the cases of strong and weak Allee effect. Then we consider competition models of two species, giving conditions for the extinction of one or both species and for coexistence. This talk is based on joint works with I. Coelho, M. Garrione, C. Soresina and E. Sovrano. [1] I. Coelho and C. Rebelo, Extinction or coexistence in periodic Kolmogorov systems of competitive type, submitted. [2] M. Garrione and C. Rebelo, Persistence in seasonally varying predator-prey systems via the basic reproduction number, Nonlinear Analysis: Real World Applications, 30, (2016) 73-98. [3] C. Rebelo and C. Soresina, Coexistence in seasonally varying predator-prey systems with Allee effect, Nonlinear Anal. Real World Appl. 55 (2020), 103140, 21 pp. Erida Gjini, Instituto Superior Técnico Understanding the dynamics of co-colonization systems with multiple strains The high number and diversity of microbial strains circulating in host populations pose challenges to human health and have inspired extensive research on the mechanisms that maintain such biodiversity. While much of the theoretical work focuses on strain-specific and cross-immunity interactions, another less explored mode of pairwise interaction is via altered susceptibilities to co-colonization (co-infection) in hosts already colonized by one strain. Diversity in such interaction coefficients enables strains to dynamically create their niches for growth and persistence, and 'engineer' their common environment. How such a network of interactions with others mediates collective coexistence remains analytically puzzling and computationally difficult to simulate. Furthermore, the gradients modulating stability-complexity regimes in such multi-player endemic systems remain poorly understood. In this seminar I will present results from an epidemiological study where we analyze mathematically such an interacting system and the eco-evolutionary dynamics that emerge. Adopting a slow-fast dynamic decomposition of the original SIS model, we obtain a model reduction coinciding with a version of the replicator equation from evolutionary game theory. This enables us to highlight the key coexistence principles and the critical shifts in multi-strain dynamics potentiated by mean-field gradients. Johan Gielis, Genicap Beheer BV (www.genicap.com); The Antenna Company International (www.antennacompany.com); University of Antwerp, Bio-Engineering Sciences Gielis Transformations in mathematics, the natural sciences and technological applications The Gielis Transformation (GT) defines measures and unit elements specific to the shape, extending Euclidean geometry and challenging current notions of curvature, complexity and entropy. Global anisotropies or (quasi-) periodic local deviations from isotropy or Euclidean perfection in many forms that occur in nature can be effectively dealt with by applying Gielis transformations to the basic forms that show up in Euclidean geometry, e.g. circle and spiral. Anisotropic versions of the classical constant mean curvature and minimal surfaces have been developed. In mathematical physics it has led to developing analytical solutions to a variety of boundary value problems with Fourier-like solutions for anisotropic domains. GTs have been used in over 100 widely different applications in science, education and technology. In the field of design and engineering they have been used, among others, for the optimization of wind turbine blades, antennas, metamaterials, nanoparticles and lasers.
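As a concrete companion to this abstract, the following minimal sketch evaluates the Gielis transformation of the circle (commonly known as the superformula); the parameter values are arbitrary choices made only for illustration.

# Minimal sketch: Gielis transformation of the circle ("superformula").
# r(phi) = (|cos(m*phi/4)/A|**n2 + |sin(m*phi/4)/B|**n3) ** (-1/n1)
# Parameter values are illustrative assumptions only.
import numpy as np

def gielis_radius(phi, m=5, A=1.0, B=1.0, n1=0.3, n2=0.3, n3=0.3):
    term = (np.abs(np.cos(m * phi / 4.0) / A) ** n2
            + np.abs(np.sin(m * phi / 4.0) / B) ** n3)
    return term ** (-1.0 / n1)

phi = np.linspace(0.0, 2.0 * np.pi, 400)
r = gielis_radius(phi)
x, y = r * np.cos(phi), r * np.sin(phi)   # a five-fold symmetric, flower-like curve
print(x[:3], y[:3])

Varying m changes the rotational symmetry of the resulting curve, while n1, n2 and n3 control how strongly the shape departs from the circle.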
Constantino Pereira Caetano, Instituto Nacional de Saúde Doutor Ricardo Jorge Modelling the transmission dynamics of SARS-CoV-2 in Portugal On 11 March 2020, the World Health Organization declared the COVID-19 global public health emergency a pandemic [1]. Since the appearance of the first cases in Wuhan, China, several countries have employed mathematical and statistical techniques to ascertain the course of the disease spread. The most common mathematical tools available to model such phenomena are systems of differential equations. The most notable are the SIR and SEIR models, first developed by Kermack and McKendrick (1927). These models have been used to study an array of different epidemic questions. At the start of the pandemic, these models were employed to nowcast and forecast the national spread of SARS-CoV-2 in China. In [2] the authors create scenarios of transmissibility reduction and mobility reduction associated with the measures employed by the Chinese government. Similar models were also used to estimate the proportion of susceptible individuals in a population, i.e. the level of case ascertainment in a given country [3]. This topic is very important since it has been shown that a high percentage of infected individuals do not develop symptoms [4] but are still able to infect others [5]. The main purpose of these modelling techniques has been to evaluate the impact of contagion mitigation measures, such as the closure of schools and lockdowns [6]. In Portugal, the team at the Department of Epidemiology of the Instituto Nacional de Saúde Doutor Ricardo Jorge has been, since the start of the epidemic, developing reports with an array of different statistical and mathematical procedures [7], in order to present a clear picture of the evolution of the epidemic, with the objective of supporting public health policy making. Part of this work involved building a SEIR-type model with heterogeneous mixing among age groups. This model was key to providing some evidence on the impact of the lockdown in Portugal from 22 March until 4 May. Using data from Google mobility reports [8], the model showed that a decrease in transmission was expected after the implementation of the lockdown, which was not yet noticeable due to the delay between infection and case notification. With the recent increase of the daily incidence of COVID-19 cases and with the opening of schools, public health decision makers need to know what the expected impact on the Portuguese health system will be, and what non-pharmaceutical interventions (NPIs) can be adopted in order to compensate for such an increase. Several epidemiologists state that higher and faster contact tracing might be the best and most efficient measure to compensate for such an increase. The team is currently developing a new model that takes into account several NPIs, such as contact tracing, case ascertainment, mask usage, shielding of vulnerable (elderly) individuals, and closure/opening of schools, among others.
The main objective is to provide possible scenarios for the magnitude of the impact of these measures. Joint work with: Maria Luísa Morgado, Departamento de Matemática, UTAD & CEMAT IST; Paula Patrício, Centro de Matemática e Aplicações & Departamento de Matemática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa; Baltazar Nunes, Instituto Nacional de Saúde Doutor Ricardo Jorge. [1] ECDC: Event Background-COVID-19. [2] Wu, J. T., Leung, K., & Leung, G. M. (2020). Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. The Lancet, 395(10225), 689–697. doi: 10.1016/s0140-6736(20)30260-9 [3] Maugeri A, Barchitta M, Battiato S, Agodi A. Estimation of Unreported Novel Coronavirus (SARS-CoV-2) Infections from Reported Deaths: A Susceptible-Exposed-Infectious-Recovered-Dead Model. J Clin Med. 2020;9(5):1350. Published 2020 May 5. doi:10.3390/jcm9051350 [4] Instituto Nacional de Saúde Dr. Ricardo Jorge (2020). Relatório de Apresentação dos Resultados Preliminares do Primeiro Inquérito Serológico Nacional COVID-19. Available: (acesso a 25/08/2020) [5] Huang L-S, Li L, Dunn L, He M. Taking Account of Asymptomatic Infections in Modeling the Transmission Potential of the COVID-19 Outbreak on the Diamond Princess Cruise Ship. medRxiv. 2020:2020.04.22.20074286. [6] Prem, K., Liu, Y., Russell, T., Kucharski, A. J., Eggo, R. M., Davies, N., Jit, M., Klepac, P. (2020). The effect of control strategies that reduce social mixing on outcomes of the COVID-19 epidemic in Wuhan, China. The Lancet Public Health. doi: 10.1101/2020.03.09.20033050 [7] Nunes B, Caetano C, Antunes L, et al. Evolução do número de casos de COVID-19 em Portugal. Relatório de nowcasting. Inst. Nac. Saúde Doutor Ricardo Jorge. 2020. [8] Relatórios de mobilidade da comunidade da COVID-19. Sandra Pinelas, Academia Militar, Departamento de Ciências Exatas e Engenharia Oscillatory behavior of a mixed type difference equation with variable coefficients In this talk, we present a study on the oscillatory behaviour of the mixed type difference equation with variable coefficients $$\Delta x(n) = \sum_{i=1}^l p_i(n)x(\tau_i(n)) + \sum_{j=1}^m q_j(n)x(\sigma_j(n)),\, n \geq n_0.$$ Marília Pires, Departamento de Matemática, Escola de Ciências e Tecnologia, Universidade de Évora An alternative stabilization in numerical simulations of Oldroyd-B type fluids The numerical simulation of non-Newtonian viscoelastic fluid flows is a challenging problem. One of the approaches often adopted to stabilize the numerical simulations is based on the addition of a stress diffusion term into the transport equation for the viscoelastic stress tensor. The additional term affects the solution of the problem and special care should be taken to keep the modified model consistent with the original problem. In this work the influence of numerical stabilization using artificial stress diffusion is analyzed in detail and a new alternative is presented. Instead of the classical addition of an artificial stress diffusion term, a modified additional term is used which is only present during the transient phase and should vanish when approaching the stationary case. The steady solution is not affected by such a vanishing artificial term; however, the stability of the numerical method is improved. This is joint work with Tomás Bodnár (Institute of Mathematics, Czech Academy of Sciences and Faculty of Mechanical Engineering, Czech Technical University in Prague, Czech Republic).
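To make the idea in the last abstract concrete, the classical stabilization adds a diffusion term to the Oldroyd-B constitutive equation for the extra stress tensor $\boldsymbol{\tau}$. Written in one common form (the precise vanishing term used in the talk may differ; the time-dependent coefficient below is only an illustrative assumption), the stabilized transport equation reads $$\frac{\partial \boldsymbol{\tau}}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol{\tau} - (\nabla\mathbf{u})\,\boldsymbol{\tau} - \boldsymbol{\tau}\,(\nabla\mathbf{u})^{T} \;=\; \frac{1}{\lambda}\bigl(2\mu_p\,\mathbf{D}(\mathbf{u}) - \boldsymbol{\tau}\bigr) \;+\; \kappa(t)\,\Delta\boldsymbol{\tau},$$ where $\lambda$ is the relaxation time, $\mu_p$ the polymeric viscosity, $\mathbf{D}(\mathbf{u})$ the symmetric part of the velocity gradient and $\kappa(t)\ge 0$ the artificial stress diffusion coefficient. The classical approach keeps $\kappa$ constant, which perturbs the steady solution; the alternative discussed above lets the added term act only during the transient, for instance by taking $\kappa(t)\to 0$ as the solution approaches the stationary state, so that the converged solution satisfies the original, diffusion-free equation.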
DISCUSS: "Mass Psychosis" Trending on Twitter The Duckingstool by Charles Stanley Reinhart The concept of a 'mass psychosis', an unrecognised force which is influencing humanity in its darkest hour, has been gaining traction on social media. It might provide a useful mechanism to explain how so many people could be fooled by a fake pandemic, but is it in danger of letting certain narrative kingpins and zealots off the hook? In The Last American Vagabond's The Daily Wrap Up, Ryan Cristián covers Mass Formation Psychosis' recent public exposure. He says that the media's attempt to recast this issue as 'far right' is 'waking people up to the illusion of the two party paradigm'. He adds that mass formation is a 'large part' if not the 'entirety' of the covid phenomenon, but adds the caveat that there are 'a lot of other factors involved'. Cristián proceeds to underpin and refresh important facts and plot-holes in the fast-decaying Covid narrative – vital information to keep at the forefront of our minds as face-saving, retouching and whitewashing picks up apace. Please watch his video below: The term 'mass formation' comes from the work of Prof. Mattias Desmet, a psychoanalyst from Ghent University, whose theories on Mass Formation, or Mass Psychosis Formation, were recently expounded by Dr Robert Malone on the Joe Rogan Experience. The Joe Rogan podcast, which aired on 1 January and has since been scrubbed from Youtube and Twitter, featured Dr Malone talking about his recent Twitter ban due to his stance on covid vaccination. Dr Malone's endorsement of Prof. Desmet's ideas received many millions of views before the podcast was taken down, and has caused #MassPychosis to trend on twitter. You can view the Joe Rogan podcast on Bitchute below: And here is a very interesting interview from August 2021 between Prof. Mattias Desmet and Dr. Reiner Fuellmich: At the heart of 'mass formation', says Prof. Mattias, are many contributing factors, including a lack of 'meaning making' in our lives and excess 'free-floating anxiety' within society. This can result in a sort of group hypnosis within social groups, which may be seen in its most extreme form in historical authoritarian regimes. For those familiar with Jung, this might tie into many of his ideas. For example, his concept of the 'Shadow': the Personal Shadow (the hidden dark side of an individual) and the Collective Shadow (the hidden dark side of society) and the necessity to understand both in order to be healthy and balanced people/societies, or risk falling foul of the unacknowledged shadow's negative influence. Everyone carries a shadow, and the less it is embodied in the individual's conscious life, the blacker and denser it is. At all counts, it forms an unconscious snag, thwarting our most well-meant intentions. Could an open discussion of Mass Formation Psychosis pave the way to becoming more aware of our personal and collective 'shadow'? Might it provide a graceful climb down for fervent adherents of the cult of covid, and expedite the way out of this mess of untruths founded on untruths founded on myths founded on lies? Will it allow us to place judgement to one side, and forge a new path based on self-knowledge, transparency, devotion to truth, facts and actual science? Or perhaps, as the covid narrative noticeably struggles, we should be cautious of fast-trending alt. narratives coming to the fore? While Prof. 
Desmet's theories are indeed interesting and provide much potential insight into many aspects of the covid phenomenon (and ourselves), a fast-trending hashtag is a likely target for spin. What could that spin be? As the pandemic narrative self-destructs, could mass psychosis provide a moral hall of mirrors? Could it let some very culpable and unsavoury characters off the hook with an easy plea of temporary group insanity? If too much blame is laid at the feet of a sociological phenomenon, might we become drunk on new-found forgiveness and dewy-eyed reconciliation with our fellow man, and lose our vigilance? Might we turn around to discover the history books have been quietly rewritten, that evil truths have been airbrushed away and COVID erected in their place, and that a globalist agenda has been stealthily working behind the scenes? If we can avoid the traps, there is potentially a lot to be gained by a greater understanding of 'mass formation', integrated into a wider pursuit of truth and justice. Please, discuss your thoughts below. Tweet from Consent Factory: MassFormationPsychosis is a red herring. https://twitter.com/consent_factory/status/1478686570173632514 Fact Checker Reply to rob2 Yes, but is it as much of a red herring as "GloboCap"? Reply to Fact Checker I'm always annoyed by his "GloboCap" angle but I believe that's his sincere, personal conviction, not intent to mislead. What name would you give it other than GloboCap? It is globalised capitalism after all. Oh wait it's probably communism to you people "you people" Way to hate your neighbor, Neighbor! Still, you ask a legitimate question: it's global, authoritarian tyranny enabled by millions and millions of cogs whose lives depend on its success. It's certainly not the exercise of genuine, free trade. I dunno…AuthGlob? truthsayer https://www.independent.co.uk/arts-entertainment/tv/news/joe-rogan-covid-vaccine-antivax-b1840060.html Lost in a dark wood Reply to truthsayer Presumably, he received a word from his sponsors. This is getting confusing. Alex Jones attacks Trump because Trump makes a tactical decision to oppose mandates; rather than committing political suicide {1} by being cast as an "anti-vaxxer". Yet Jones promotes someone who supports (by implication) the mandating of the jab for "unhealthy" children. My guess is that this has nothing whatsoever to do with Jones being sued for millions over Sandy Hoax. https://twitter.com/Lukewearechange/status/1479134785566089226 Dr. Malone is right, they literally brag about bringing forward the Great Reset! Attached: Clip of Malone being interviewed by Infowars https://unityprojectonline.com Working together to STOP COVID-19 Vaccine Mandates for Healthy Children K-12 – Dr. Peter McCullough – Dr. Robert Malone – Dr. Paul Alexander – Dr. Aaron Kheriaty {1} I remember the UK Labour Party committing political suicide in the early 1980s so as to clear the decks for Tony Blair and "New Labour"; so I know what it looks like. Reply to Lost in a dark wood I would have thought the Infowars crew would double check the numbers – so as to not send out any inadvertent messages.
https://www.infowars.com/posts/the-defenestration-of-dr-robert-malone The Defenestration of Dr. Robert Malone by John Mac Ghlionn | BrownStone Institute January 6th 2022, 1:33 pm Robert Malone is a wise man, an honest man, and a highly credible man. This is Infowars shilling for psychiatry. There was once a time when Infowars would have aligned with the libertarian position of Thomas Szasz (see earlier post). This is that psychiatry is a foundational element in the "medical tyranny takeover operation". https://www.bitchute.com/video/wBAgNr9xlV4Q Must Watch: Covid Mass Formation Psychosis Exposed By Top Psychiatrist Dr. Mark McDonald of http: // markmcdonaldmd.com joins guest host Owen Shroyer on The Alex Jones Show to expose the mass mental illness infecting the population as they face COVID fear propaganda from a medical tyranny takeover operation. Trump did nothing except waste 4/5 years of people hopes. AJ needs to be current and at the moment trump is not current. saying you disagree about the mandates but have screamed your anti vaccine since early 2013 then does another one of his famous u turns and tells family's to jab their children when people are getting injured or dying is worse than asking Christians to become m*slem. Many people i no who liked trump have totally seen what he is about.now and woken up to the fraud. At this rate. without the control op alt media support, i doubt he will get in as another shill will be sold to the dummies and the cycle will be repeated. He served his purpose. Reply to entitled2 Trump is very much current for several reasons. a) see my posts on Continuity of Government (a.k.a. "Devolution") and Trump being a "wartime president" fighting an "existential war" against the "invisible enemy". b) the Republican Party is being radically reformed by Trump and MAGA, such that it is now effectively Trump's party. For instance, Ted Cruz is the latest RINO to get his ass kicked. Trump is doing all this in order to win the war against the deep state; and in fighting this war, he gets some choice over the battlefield. In this case, he chose mandates because he's far more certain of winning that battle. It's pretty straightforward really so tell your Lincoln Project friends to fuck off to where they belong. And if you find anywhere that Jones acknowledges the above, please post below. I should have added that another part of the chosen battlefield is therapeutics, such as HCQ. "Give me therapeutics every time" says Trump. That's the message that got through to MAGA supporters, and it's why so many of them declined the jab. But the Lincoln Project won't mention that! You been singing Simon parkes hoaxs nonsense about fed this and fed that and Q this when it was current, it lost it current save the children this lies only person watching trump now is not well people Maga is not a Christian party more kabbalah and fake occult.,upset many Christians when they found out what maga really meant!! even worse he let bannon free when he stole of maga supporters. that is low. The TDS worn of many people even the old voters had enough of the lies and nothing came to fruition predictions except what the oppsite lot said which helped drive voters away from the system to see that it is all lies. The voters like lemming follow what is trendy and mr go get the vaccine and adf puppet and deep state fake fight has come to an end like Q. Your the type who cried when boy zone slit up and was offered counseling. I suggest you smell the roses you been had. 
you did disappear after the fake selections where he lost it must of been heartbreaking for you after 4/5 years of lies. it theatre. .Your better than this. I said a year ago that Parkes is a fraud; so you're wrong with your first statement. I didn't bother reading any of the rest of your crap. you changed your tune when others pointed out Simon was a shill. lets get that correct,. you never pointed out anything really just followed what is trendy and usually wrong. later on you act like a expert like now with pied piper corbin when he was called out 2 years ago. i dont write crap your just hurt and lost and suffering from disillusions. Re Simon Parkes – Hence, he's a fraud. https://off-guardian.org/2021/01/09/sometimes-you-drain-the-swamp-sometimes-the-swamp-drains-you/#comment-302292 I've only recently started following Simon Parkes, but in his latest update he claims to have spoken to the real Q. Of course, as anybody who's been following Q posts would know, this would breach the "no outside comms" principle. I call out Corbyn NOW because he's advocating violence and criminality in the same way as the Jan 6 operatives. People not familiar with this sort of stuff can then see the parallels. https://www.revolver.news/2021/12/damning-new-details-massive-web-unindicted-operators-january-6 Meet Ray Epps, Part 2: Damning New Details Emerge Exposing Massive Web Of Unindicted Operators At The Heart Of January 6 Your use of the asterisk with the word Muslim and your mentions of "people finally woke up to trump" is the all the proof I need that you shouldn't ever be allowed to vote! EVER! Rrriiiiiiight if you're watching Alex Jones beyond 2003 you need to seriously wake the fuck up and smell the cocoa! Rancid Esther pushing the mass psychosis narrative. https://www.bbc.co.uk/news/av/uk-19937792 Jimmy Savile scandal: 'We were all culpable', says Rantzen A report in the Sun claims ChildLine founder and former BBC presenter Esther Rantzen had been alerted to Jimmy Savile's abuse of children. Ms Rantzen told the BBC she had heard only rumours, and therefore said nothing at the time because "a rumour is not evidence." She added: "There has never been a child that has reported abuse to me that I have not taken action to protect." ^ 13 October 2012 Rantzen was in bed the with head of the BBC at the time. She says she couldn't have said anything to anyone. Bit strange she did not say those words "savile is child rapist" and instead said "harder, faster, deeper" Paul Vonharnish We can immerse one another in the turgid morbidities of psychoanalysis or observe the environmental consequences of broadcast technology… > "While the attention of a terrified world has been riveted on a virus, and while concern about radiation has been focused on 5G on the ground, the assault on the heavens has reached astronomical proportions. During the past two years, the number of satellites circling the earth has increased from 2,000 to 4,800, and a flood of new projects has brought the number of operating, approved, and proposed satellites to at least 441,449. And that number only includes low-earth-orbit (LEO) satellites that will reside in the ionosphere." Another excerpt: > ALTERATION OF THE EARTH'S ELECTROMAGNETIC ENVIRONMENT "What everyone is completely blind to is the effect of all the radiation from satellites on the ionosphere, and consequently on the life force of every living thing. The relationship of electricity to qi and prana has escaped the notice of modern humans. 
Atmospheric physicists and Chinese physicians have yet to share their knowledge with one another. And at this time, such a sharing is crucial to the survival of life on Earth. "The pure Yang forms the heaven, and the turbid Yin forms the earth. The Qi of the earth ascends and turns into clouds, while the Qi of the heaven descends and turns into rain." So the Yellow Emperor's Classic of Internal Medicine described the global electric circuit 2,400 years ago — the circuit that is generated by the ionosphere and that flows perpetually between the Yang (positive) heaven and the Yin (negative) earth. The circuit that connects us to earth and sky and that flows through our meridians giving us life and health. A circuit that must not be polluted with frequencies emitted by a hundred thousand satellites, some of whose beams will have an effective power of up to ten million watts. That is sheer insanity, and so far no one is paying attention. No one is even asking whether the satellites have anything to do with the profound and simultaneous decline, planetwide, in the number of insects and birds, and with the pandemic of sleep disorders and fatigue that so many are experiencing. Everyone is so focused on a virus, and on antennas on the ground, that no one is paying attention to the holocaust descending from space." (Arthur Firstenberg) It would be considerably more helpful if the civilian public would consider that they, and every public "servant" are being driven insane by electromagnetic noise and stimulation. Complete article: 441449-Low-Earth-Orbit-Satellites.pdf (cellphonetaskforce.org) Reply to Paul Vonharnish Psychiatrist Mark McDonald observes a concerning rise in sadism emerging from the psychosis of fear. See the interview with Mike Ryan on 24th December 2021 on the devastating effects on the human mind of the pandemic of fear; Asia Pacific Today, found on Rumble. https://www.asiapacifictoday.tv/ Florin Flueras I prefer to see all this covidianism as religion, on the lines described by Illich and Agamben. A very old religion that now entered a fundamentalist phase, actualizing the Antichrist: "According to Illich, God's Incarnation brought a new possibility of a love based being, outside of norms and identities – "not under the law, but under grace."(Paul). A new type of freedom was introduced as possibility into the world, and a new type of evil. The church started a process of institutionalisation and perversion of love, the transformation of love into rules. The good, the love, the faith were transformed into norms, services and commodities. This process was continued and amplified by institutions like school and medicine. They dispossess humans of their capacities and practises of learning, healing, creating. The power of love, of the body to affect and to be affected was constantly captured and diminished. Institutionalisation "deaden the heart and shackle the imagination", it transforms everything into anti-love. Medicine leads this process. it became a totalitarian religion, with its life-god at the core, and it made people acquire medical, machine-like, bodies. People got disembodied, the sense of themselves lost and the connection between people's feelings and nature dismantled – "the poetic, performative quality of existence was erased and forgotten in field after field." Our world became the negative actualization of the Christ, the culmination of a new evil that appeared as a corruption of love – the Antichrist. 
This anti-love, antichrist process will probably quickly eat humanity, the environment and the world, bringing up the Apocalypse." https://florinflueras.substack.com/p/who-are-the-antivaxxers Reply to Florin Flueras shaydeegrove When is some interviewer going to ask Mattias Desmet about the relationship between his "mass formation" (always in quotes!) concept and his previous research on alexithymia (no quotes, but needed)? In particular, his definition of "free-floating." And where are the citations to his peer-reviewed treatise on "mass formation?" Who gives a shit….It sort of fits the anti-Covid narrative…Whatever.. This is a brilliant summary of what is happening in the minds of the Covidian herd. Great to send to people at the moment as it is topical. Its a British presenter Marc Malone (Malones are loving Convid) so is good to send to Brits to wake them up as there are still MANY who are completely under the Convid spell, but there are clear signs that a shift is underway too. Also if they go to Malones other stuff he talks good sense and has done his homework Cults, Mass Hypnosis + A Way Out hughman I went to Vegas.. Honest brokers everywhere.. I went back home… Act accordingly! I knew it, Reply to Hank Classic Hank. They should replace school desks with restaurant tables. That way, when kids sit down, they can take their masks off I think it has achieved it's purpose already. and we are whales caught on the beach with the tide out. Dr. Mark McDonald called this out months and months ago, calling it "mass delusional psychosis". That's what it is–believing something that is provably false–and manifesting complete inability to acknowledge the falsehood of your beliefs even when presented with evidence that disproves them. Reply to KMS The thing is, because of MSM, most ordinary people aren't actually aware of the reality of anything that has gone down over the last 2 years (or before). So they are making a judgement based on false premises. Those who have been hypnotised are the experts and those in govt and this is what has fed the media. So I guess you could say mass hypnosis of the masses by default. p.brooksmcginis No More Nuclear Weapons Here is a free book on Nuclear Weapons & why we need to Ban Nuclear Weapons. https://sun.iwu.edu/~rwilson/nuclearwar-book.pdf Only Evil Nations threaten life on Mother Earth with Nuclear Weapons. Nuclear War #1. https://www.youtube.com/watch?v=Y-FWBZLmlBo Nuclear War is Ultra-Most Evil. No More Nuclear Bombs #2. https://www.youtube.com/watch?v=NS6XJr7qnzA Nuclear Weapons are Stupid & Pure Evil https://www.youtube.com/watch?v=xLFmg1DS3Bo All War is Evil. No More War The Ending Nuclear Moment https://www.youtube.com/watch?v=NgrrCy8cnvU When the nukes start flying, when we see the mushroom cloud growing on the horizon, when reality comes crashing down in the most overt way possible, when the realization slowly dawns that this really is the end, none of our old stuff will matter anymore. It will not matter if you are American, Russian or Chinese. It will not matter if your skin is darker or lighter. It will not matter if you feel like a man or a woman or both or neither. It will not matter if your politics are left, right or center. It will not matter who you voted for. All that will matter, in that final moment, is that it is ending. 
We will behold that final moment standing alongside progressives and conservatives, racists and radlibs, socialists and soldiers, communists and cops, and all our irreconcilable differences will suddenly dissolve into nothing. Against the suddenly visible backdrop of total annihilation, the existence of any human anywhere is a miracle, and the existence of life on this planet is a priceless gift. We won't even care whose fault it was, whether it was deliberate or accidental, or whether it was the result of some malfunction, miscommunication, or misunderstanding. All we will care about is that it is ending. And in that final moment we will hug our loved ones tight, whether we are Christian or atheist, Jew or Arab, Indian or Pakistani, anti-vaxxer or Antifa. And in that final moment we will say, in our heart of hearts, with our innermost voices, "Oh, I see it now! I see how easy it is to stand together! I see how small our differences are compared to this great commonality! I see where we went wrong, and how very easy it would be to fix it!" And in that final moment we will say, "We see it now! We see the mistakes we made, and made and made and kept on making! We understand our fundamental error! Just give us one do-over and we can correct it immediately! Could we have a do-over please? Could we have a do-over please?" In that final moment, we will ask, "Could we have a do-over please?" A TIME LAPSE MAP OF EVERY NUCLEAR EXPLOSION SINCE 1945https://www.bitchute.com/video/5sRDjyb35PpH/ Nuclear Weapons are Evil Against All of Life Here is a Free Book on "No More War" http://www.ratical.org/ratville/CAH/warisaracket.html Disband the American Military! No More War! War is Evil. America is ruled by Pure Evil at the very Top. Stop paying these monsters Income Taxes. Reply to p.brooksmcginis Yes the amount of money they collect and then weaponise against the public needs to form a MASSIVE part of the conversation going forward. Why do we give these wankers all this money, only for them to spend it on arms and weapons and protection for themselves and to serve us shit, shit and more shit. Time for that to change. All War is Evil No More War Here in Canada, PM Trudeau is working to keep that mass hysteria going. As he blames the unvaxxed for the political choice made to shut businesses and close schools yet again.. Since the vaxx/jab/toxic swill can't stop transmission, shouldn't he be blaming big pharma? Trudeau Covers For Jab Failure to Work as Sold, Resorts to Identity Politics Mass psychoses are always trending on Twitter. That's one of the primary purposes of anti-social media. Robert Malone is now speaking out against mRNA biotech. Where was he when he was helping to invent this Frankenscience? Does this count as an example? John the First Reply to niko The word 'trending' in a mass-democracy is a euphemism for mass-hysteria and even mass-psychosis. Social media is a euphemism for Mob-media. These euphemistic words are invented by democratic propagandists. The reason why the theory of mass-psychosis has been picked up, why it has been lifted out of academic obscurity, after crowd-psychology long ago has been buried by mass-democracy itself, is because a mass democracy for all parties, pro and con, is about the experience of mass-sensations. Bob the Hod Maybe it just counts as an example of people changing their opinion because they realise that they were wrong? If more people were willing to do that we wouldn't be in this mess. Better late than never to the truth. 
Reply to Bob the Hod Yes, let's all learn and grow, but maybe if more people were willing to make some better choices to begin with we wouldn't be in such a mess. If people make killing a career and then write their memoirs confessing their 'mistakes' I'd say better never than late with that. sean ryan "Where was he when he was helping to invent this Frankenscience?" The Law of Unintended Consequences. Plus, most Researchers are allowed to do only the work they are funded for. And that funding is quite strict & constraining. The world is currently seeing near-record levels of wealth concentrations. Meaning the same relatively small groups are controlling most everything. People are forced to either comply with the systematic, wealth-ordered hierarchies, and remain in gainful employment, or starve. Reply to sean ryan I can't imagine a lot of professional class people like Malone facing starvation, rather than less money, status, etc. And whatever circumstances, it seems hard choices are demanded of us now, perhaps in large part from not having made them before. We have to applaud Malone for what he has done. The call came and he stepped up and delivered. Hes getting better and better over time as well, plus he has great character, Contrast this with all the British doctors and scientists. Bunch of spineless useless good for nothing money grabbers with their snouts in the trough and their soul left at their birth beds. This country suffers from SERIOUS issues with the truth. Meet an Englishman and youve met a liar. Its a real problem. Reply to Mucho The public school boys are especially worthy of avoiding. Toy Aussie Great thread on Robert Malone's background. https://threadreaderapp.com/thread/1448762599277936650.html Meanwhile, here in Australia, 'experts' want us to be even more uncomfortable just as the summer heat increases …. "Industrial respirator masks should be widely adopted by the public to counter the explosion of Omicron cases in Australia because cloth and surgical masks, while much better than no mask at all, allow particles with COVID-19 to spread. This is the stance of a number of infection control experts who believe the more tightly fitting P2 or N95 masks, available at hardware stores, supermarkets and chemists, should be subsidised or made free by the government." Yet when in a cafe/restaurant, you can take off your mask when seated – must put them on again when standing. So what use is a tight fitting mask? https://static.ffx.io/images/$zoom_0.738%2C$multiply_2.1164%2C$ratio_1.5%2C$width_756%2C$x_0%2C$y_0/t_crop_custom/q_62%2Cf_auto/e5f077f1dee5c08a6226aa4ced7182532f2e609b Reply to Kika You simply make yourself a medical exemption card and get on with life… Paul _too Reply to Edith Better still just ignore the lies and go about your life without a mask nor an exemption card. I'm aware it's been hassle for people in some countries but in the UK there's no excuse to be openly showing support for easily-provable government lies. In nearly 2 years nobody except supermarket 'guards' has said a word to me for doing so. Politely said to the supermarket workers I'm exempt and carried on as usual. The government's own guidelines tell you all you have to do is say you're exempt if for any reason you don't want to gimp up out in public and yet people are still afraid of (mistakenly) disobeying something they haven't even taken 2 minutes to read the actual guidelines for. Mass psychosis can be quickly cured with a Exorcism.. 
I would expect large demonstrations from the "Fully Vaxed" against Djokovic at the Australian Open. It would fit the Gov. narrative to make an example out of him as a whistleblower. J.Butties Reply to Wil Well it turned out that PM Scutt Worrison predicted that if Djorko did not have proper exemption he would be on the next plane out. Now the PM would have full details of his visa prior to arrival. Low and behold he is now confined to a room with no phone and his visa is 'faulty'. You cannot push back at the front line obviously. Examples have to be made. Set Up naah! That's an odd narrative. I've been following it a bit. It's contrary. All over the place. Almost as if it's being pushed to whip up hysteria I doubt it will come to this, given that the unvaxxed numbers are too great, but the psychiatric concept of "psychosis" is the tool which would be used to incarcerate and "treat" the refuseniks. It would be applied by those people we consider to be the epitome of insanity; namely, the medical doctors who are fully aligned with the "pandemic". https://www.nhs.uk/mental-health/conditions/psychosis/overview The 2 main symptoms of psychosis are: hallucinations – where a person hears, sees and, in some cases, feels, smells or tastes things that do not exist outside their mind but can feel very real to the person affected by them; a common hallucination is hearing voices delusions – where a person has strong beliefs that are not shared by others; a common delusion is someone believing there's a conspiracy to harm them Getting help for others: If you think the person's symptoms are severe enough to require urgent treatment and could be placing them at possible risk, you can: . . . call 999 and ask for an ambulance Note that "delusion" in this context is defined not by reference to objective reality (whatever that might be), but in terms of commonly shared beliefs. Hence, the belief that vaccines (especially mRNA jabs) are often harmful may at some point be classed as a psychotic delusion. Likewise for mistrust of authority. Furthermore, since it is the divergence from the majority which is considered to be a sign of psychosis, the term "mass psychosis" is an oxymoron if applied to an entire population. Seems like the symptoms you voters suffer from. 😀 The Mass Sheeple are easily led. The danger is that this latest "best" Psy-op Plandemic is working so well that it may be played out up and through the 2024 US elections. The "Forever Wars" $Money stream will continue as always, but this "Money/Power grab ($$Testing/$$Hospital Income/$$Vaccine manufacture/application/$$Testing/$$Testing/$$Variants-test-jab-jab,etc., etc.) can be used indefinitely if the "Sheeple' REFUSE to admit being played by their Political Tribe… There's, unfortunately, NO VACCINE for STUPID !!!!!! https://www.fox13now.com/news/local-news/entrata-chair-emails-tech-ceos-claiming-covid-vaccine-part-of-sterilization-plot-by-the-jews Reply to banana It's most likely a psyop to cast anti-vaxxers as psychotic. Same with Piers Corbyn calling for the burning down of MP's offices; and the clown harassing children while they queue for the panto. Freecus 'Mass psychosis' is a symptom that can be rapidly eradicated with 'mass non-compliance'. Technical discussion of the symptom has its place but we need direct action to create the world we desire. 
Mr Y Reply to Freecus That strange thing previously known as knowledge would have done wonders … Breeding the next generation of psychosis in the workplace: https://www.weforum.org/agenda/2021/10/unilever-leena-nair-future-of-work-soft-skills-hard-skills?utm_source=twitter&utm_medium=social_scheduler&utm_term=Leadership&utm_content=05/01/2022+03:00 Empathy and vulnerability? How about snowflakery and a tendency to scream racism/sexism/whateverism when you don't get your way? Does Klaus Schwab seem particularly empathic and vulnerable? Unilever seem especially embedded in the WEF nexus and I hope people will have nothing to do with them. That talk of "unlearning" sounds rather like Ewen Cameron's experiments in 'de-patterning'…. Reply to Edwige Sounds like 're-education' to me. Reimagining HellWhat should be done with our deposed rulers?https://markgresham.substack.com/p/reimagining-hell George Mc "Australian fury over Djokovic vaccine waiver" FURY! "Australians have reacted angrily to news that tennis player Novak Djokovic will play in the Australian Open, after being exempted from vaccination rules. All players and staff at the tournament must be vaccinated or have an exemption granted by an expert independent panel. Djokovic has not spoken about his vaccination status, but last year said he was "opposed to vaccination"." How dare he! All his million fellow Australians are laying down their lives for the patriotic cull! Why does Djokovic think he has the right to stay alive? Reply to George Mc It's all media driven as usual, they seem to think the unjabbed are diseased and dangerous typhoid Mary's, it's just so fucking stupid. Clutching at straws Buyer remorse and jealousy. Seansaighdeor He is Serbian not Australian. BBC fury over Djokovic vaccine waiver. FIFY. Ignore. Reply to Seansaighdeor Sport is Australia's national religion. Nothing may stand in its way – even a fake pandemic. "Novak Djokovic is being held in a room with police out front after landing in Melbourne for the Australian Open, his father said Wednesday amid reports that a visa mix-up could jeopardize the top-ranked Serb's entry into the country." Whole sports-mad country waiting to see what happens next. Another distraction – perhaps to deflect the mounting anger of the long queues of sheeple waiting to have a covid 'test'. Test kit supplies running low. Oh dear! Maybe the name "Australian Open" should be changed to "Australian Closed". Janey B We know the media lie about the Covid fraud, so why assume they are telling the truth here? They lie all the time. They lie about polls,and probably about this too. The bigger mystery is why "smart" people even bother with them or ANY of their lies. 🙄 Mass formation has been going on in the Occident for quite some time, having rendered people, who once upon a time had the guts and balls to win for themselves somewhat decent living conditions, completely docile, sheep-fucking-like. Freedom must be fought for in an ongoing manner, or else it will be lost. The question now is whether peoples elsewhere will allow to be subjugated the same way as the West has been. Some seem to be determined not to go down without a fight: https://centralasia.media/news:1754444 Owen Jones frets inthe Graud: "The UK is in danger of becoming a police state masquerading as a democracy" Can he be thinking of covid measures being used for such a pretext? Of course not! 
That would be a "Tory thing" to do: "Note the Tory MPs who have defined temporary restrictions to prevent the spread of a deadly virus as transforming Britain into a police state." No, here's what bothers Owen: "Hysteria whipped up against groups like Insulate Britain masks the dark side of the Conservatives' police and courts bill" Insulation and climate protest! That's where it's at! Owen Jones has been a tool for a long time but since his targets were on the deplorable side of the culture wars nobody noticed until their own oxen were gored. Good news travels fast …. https://world-signals.com/news/2021/12/31/french-rebels-massively-destroy-5g-networks/?fbclid=IwAR06rgPRrtdWHmUocPedo3-X_1TDysUG-fEcUjXxOWalAJF6LGqH9adAQIQ Reply to Jacques OT and random- Jacques, you've not offerred to punch anyones fizzog in, in a long time now… are you the same Jacques? did they set you limits on language? could OG host a wee gaming platform, like those old PS boxing games, mannequin style. we could blooter-fest away instead of the + / – trend… get some interplay on the go, just for sheer entertainment if nothing else, "and Sofey takes down Topmoc with a headburster, as Czaques crumples before Reserchey's relentless body shots…" just a daft idea to entertain myself… vis "punch yer face in" jacques of old ; ) Sgt Oddball – Now *This*… I would *LIKE* to see!… – Howaboutit, Sophie and Sam?… …- Imma bet both moneycircus and wardropper sport a *Wicked* limit-break final combo, 'heid… …"FAARUKOOOOONNNN… – *PAAANNCHIIIIIIIIII!!!!!*…" Reply to Sgt Oddball seems like another lost cause but better than laying down and dying. If there's any psychosis, it's among the self-professed elite. Looks like they might be projecting again… One belief that would explain a lot is if the elite believe in reincarnation. This would explain: 1) How they can conceive and stick to plans that unfold over centuries (like the overthrow of "throne and altar" – although certain thrones somehow seem always to escape). 2) Why they can inflict such cruelty and not be troubled by it – hey, we;ve killed you but never mind, it's just flesh and your spirit will be back. 3) Why they keep promoting belief systems like Eastern mysticism that believe in reincarnation. 4) Why they occasionally promote past-life regression and don't dismiss it as mere crankiness (although there may be an element of cover for mind-control alters here too). Of course none of this is meant to excuse, only possibly explain. a better explanation is "Big Pharma and the mainstream media are largely owned by two asset management companies: BlackRock and Vanguard without them there would likely be no Big Pharma? These investment giants are substantial [Time Warner, Comcast, Disney, and News Corp, and NYT] stockholders, so they can influence/control of all media narratives, including elections, politics, bug scares, vaccine prevention, and whatever and can get their message delivered to audiences in nearly every nation state and media outlet in audience accessible language formats. The presidents of these investment giants have more power than all of the legislators and presidents of the nations in the global world. If you want the scamdemic explained you might ask members of the families that own the stock in companies that own copyright, patent and government contracts (which give monopoly power over in nearly every product produced or service provided world over)? 
Members of the Rothschild, Orsini, Bush, British Royal, Dupont, Morgan, Vanderbilt, and Rockerfeller families might enlighten you. Since these guys own what we call Big Pharma, they probably know why the companies they invest in are promoting whatever.. The elite doesn't believe in a soul, so there is nothing to reincarnate. As there is no soul, the life essence is not lost when it 'left' the body. The body is just an avatar in this reality. There is no death. That's why they try to upload all their memories into a computer and play in the metaverse. That's how they can live forever and have pleasure all the time (which is not possible in a body). Reply to Terrestrial Only they can't, they just wish they could Theospophy and Luciferianism all believe / promote reincarnation so maybe one reason. Of course that doesn't make it true though… Please keep your "religion" with your congregation or church or whatever it is. susan mullen Queen Elizabeth awarded honorary Knighthood in the British Empire to 3 vax executives, one each to Moderna, Pfizer, and BioNTech for their service during Covid 19. "Officers of the Order of the British Empire (OBE)"…"Dr. Melanie Jane Ivarsson. Senior Vice President, Chief Development Officer, Moderna Therapeutics, United States of America. For services to Public Health during COVID-19."…["I am passionate about delivering innovative new medicines to patients and am excited by the potential of mRNA as a new class of medicines, said Dr. Ivarsson."]…The other two, Pfizer and BioNtech got "Companion (CMG)"…"Dr. Alexander Roderick MacKenzie. Chief Development Officer and Executive Vice President, Global Product Development, Pfizer. For services to Public Health during Covid-19.…Sean de Gruchy Marett. Chief Operating Officer, BioNtech. For services to the development of a Covid-19 vaccine."…["Order of the Companions Honour: This is awarded for service of conspicuous national importance and is limited to 65 people. Recipients are entitled to put the initials CH after their name"]…12/31/21, "New Year Honours list in full as Covid heroes and sports stars recognised," UK Metro, Emma Brazell… I live in the US but it's my opinion that Queen Elizabeth is the most powerful person in the world, certainly more powerful than any US president. The entire US political class is fine with this, they love monarchies. Selling out US peasants to "the Crown" is exciting for them, makes them feel important. A fairytale that has nothing to do with money of course: https://www.abc.net.au/news/2022-01-05/how-did-novak-djokovic-get-covid-vaccination-exemption/100738684 Ha, ha, ha, ha, ha. And Bill Gates should get the Nobel Peace prize. Vagabard Well, I'll be cheering him on anyhow Pro-truther Djokovic gets an exemption for the Australian tennis Open https://www.dailymail.co.uk/news/article-10368957/Debate-erupts-anti-vaxxer-Novak-Djokovic-receives-medical-exemption-Australian-Open.html …some say it's a 'return to the land of common sense' As Queensland senator Matt Canavan puts it: 'Natural immunity by multiple studies is much, much stronger than the immunity you get from having a vaccination,' 'We've got to get back to a sensible world here and move on with life and thankfully, with the seemingly less lethal Omicron variant, I think we're very close to that, and here perhaps is just another small step to ending the pandemic and returning, as I say, to the land of commonsense.' 
Hope he wins Reply to Vagabard As an avowed Vegan and elite athlete, Djokovic is more aware than 90% of the population what is good or potentially dangerous for his body. Despite what he said to the officials that will be his thinking. PS. I don't give a shit who wins. It's corporate sport for the profit of a select few. JohnEss I live in the 'land of common sense' and know there is zero chance of any ordinary Aussie obtaining even a temporary exemption from having the kill shot. If you also live here, then you also know that. It's a bung, pure and simple. Money changes hands; doctors sign forms and the entitled sportsman gains entry. Morrison and all the VIC snouts in the trough wouldn't have it any other way. Another day; another distraction; another display of democracy. All in plain sight. All under the noses of the fools who worship the idiot box; the very fools who brought us to this place of subjugation. "Mankind is getting dumber by the day" Reply to JohnEss Another thread unravels from their blanket of deception John Ess. Said blanket is sufficiently threadbare as to be see-through to those who see…. It's been that way for a long time, of course. Which is why the blanket-holders feel safe. Perhaps it's not quite a settled matter yet, although Djokovic has apparently already arrived. Maybe the 'bung' will ultimately outweigh any political posturing. Time will tell Australia could refuse Novak Djokovic entry over vaccine row – PM https://www.bbc.co.uk/news/world-australia-59884038 Djokovic has landed in Australia and is awaiting a decision. This morning, I see that the great player's visa has been cancelled, in response to "outrage" at the decision to allow him in by virtue of an exemption from taking the kill shot. Daily Mail: https://www.dailymail.co.uk/news/article-10372875/Novak-Djokovics-visa-CANCELLED-held-isolation-Melbourne.html "He's not the Messiah; he's a very naughty billionaire…" Yet more theatre to enable the voters to feel justified in voting for the same pollies they hated until the grubmint did the right thing by the people; proving beyond all doubt, that "we're all in this together." Did someone say "tennis racket"? Who needs one. Fundamental human rights are our exemption from experimental treatments, but those running the movement in Aus seem to be as dismissive of human rights as the oppressors. Universal Human Rights: Our Most Sacred Trust Films an documentaries about Nazi Germany mostly focus on the war and especially on the Nazi's particular barbarity. It's easy to believe German fascism was an aberration. What happened was not an aberration. Many seem to think German fascism is the only type of fascism, seemingly unaware of the Spanish, Portuguese, Italian Variants, as well as southern American iterations… It's perfectly rational behaviour to wear a facemask, if authorities told you it would protect you. It's quite rational behaviour submit to a series of injection, if authorities said it would protect you…A lot of people's behaviours these days does seem a bit "psychotic", but i think it's probable all quite rational… Those behaviours are rational if you have blind faith in the people telling you to do them and that it's for your own good. Having blind faith in those people, on the other hand, is completely irrational. New Nane The lack of films on the horrors of the Soviet gulags is a reflection of Hollyweird's attitudes and biases. Hollyweird and the media are the primary drivers of the scamdemic. 
jimbojames
Rational means using your brain to assess a set of variables. There is nothing rational about doing what you're told or blindly following orders. All fascism is an aberration for the masses. It is simply another method of control. As Upton Sinclair said, "Fascism is capitalism plus murder."

This looks like it could be significant. The beginnings of a pushback by the travel industry against draconian testing measures. Something they should have done from the outset, of course. But better late than never. Covid: Travel firms call for removal of testing rules

This is how the brainwashing works. Every time you use the term "mass psychosis", it is YOU who are being programmed into believing that "psychosis" is a meaningful concept; and that the White Coats are the guardians of meaning. It is this deification of White Coats which is at the root of the current mess. We have psychiatrists (ffs) telling us not to believe in viruses. How about not believing in psychiatry? It could start by not mindlessly repeating psychiatric garbage such as "psychosis" and "mental illness". https://citizenfreepress.com/breaking/mass-formation-psychosis Via Paul Joseph Watson

Over the last century, lots of people have written about the likely consequences of this "psychiatric" delusion; for instance, G.K. Chesterton in Eugenics and Other Evils (1922). But probably the clearest statement is by Thomas Szasz. Thomas Szasz, New Preface to The Myth of Mental Illness (~2010): Formerly, when Church and State were allied, people accepted theological justifications for state-sanctioned coercion. Today, when Medicine and the State are allied, people accept therapeutic justifications for state-sanctioned coercion. This is how, some two hundred years ago, psychiatry became an arm of the coercive apparatus of the state. And this is why today all of medicine threatens to become transformed from personal therapy into political tyranny. https://www.szasz.com/Szasz50newpreface.pdf https://www.szasz.com

This is how the brainwashing works. C1A M*SSAD M15 connected. Fake ex conspiracy theorist who through watching dis infor wars found politics…… Fake Christian Paul- Fake Conservative Paul who Fakes interviews with Fake people and clearly is suffering from Mk ultra herself is now telling her mass psychosis viewers about brainwashing! LOL. Irony at its best, but in reality this is a wonderful, sad example of mass psychosis.

I once sat next to a young woman from Nigeria who said she was studying psychology at the University. She said "After studying sex perverts for three years I get sent home where they call me a psychologist". There's an element of truth in Jung's views regarding different types of psychology, but I daren't say what it is. Freud was getting there initially with his 1896 The Aetiology of Hysteria, but it was quickly reversed and covered up. The problem with psychotherapy-derived psychology is that it is typically drawn from the extremes of society, and each culture can produce characteristic extremes. Protestantism, for example, has fire and brimstone, pulpit bashing loonies, and this produces a unique type of neurosis.

World's Largest Big Asset Management firms & Big Banks (AUM = assets under management):
BlackRock: $9,464B
Vanguard: $8,400B
UBS: $4,432B
Fidelity: $4,230B
State Street: $3,860B
Morgan Stanley: $3,274B
JP Morgan: $2,996B
Allianz: $2,953B
The Capital Group: $2,600B
Goldman Sachs: $2,372B
Bank of New York Mellon: $2,310B
PIMCO: $2,200B
TOTAL AUM: OVER $49 TRILLION!
These Largest Big Asset Management firms & Big Banks operate like a true CARTEL. Largely existing as the largest shareholders/investors of each other, in a highly-complex & extremely convoluted schema of cross-ownership. Take Allianz (FWB:alv), for example, which is largely owned by: Vanguard, BlackRock and Fidelity. And JP Morgan (NYSE:jpm) which is largely owned by: Vanguard, BlackRock, State Street, the Capital Group, Fidelity, Morgan Stanley, et al. And Morgan Stanley (NYSE:ms) which is largely owned by: BlackRock, Vanguard, State Street, JP Morgan, Fidelity, et al. NOTICING A PATTERN HERE? These mega Big Asset Management firms & Big Banks exist as the largest owners of the largest "competing" corporations, in most every single industry. From Big Oil to Big Chem to Big Pharma to Big Med to Big Insurance to Big Ag to Big Auto to Big Energy (both "green" & "brown") to Big Food (both "conventional" and "organic") to Big Tech to Big Media to Big Entertainment to Big News to Big Everything. THESE ARE THE CONSTRUCTORS AND OPERATORS OF THE CORPORATO-SOCIALIST NEO-FEUDAL SOCIETY IN WHICH WE CURRENTLY LIVE. A WHOLLY CENTRALLY-PLANNED ECONOMY DESIGNED SOLELY TO BENEFIT THE NEO-FEUDAL LORDS. BY ESSENTIALLY CONTROLLING ENTIRE ECONOMIES, THEY CONTROL POLITICS (of both the "left" and "right") AND GOVERNANCE. CORPORATO-SOCIALIST NEO-FEUDAL Can you say Capitalism? Around 1318 trans-national enterprises control 80% of all large-scale industry and trade; within them, a core group of 147 almost independent ones controls nearly 40% through convoluted links. -Zurich U 2014. 17 investment enterprises control assets worth $41 trillion. -Peter Phillips 2018. That was 4 years ago. So, now it is 12 enterprises owning $49 trillion. Reply to mgeo Now consider this. Even many, if not most. "socialist" and "communist" countries have wealth-hoarding billionaire "capitalists". -ism = a reliance on something a : act : practice : process; focus b : manner of action or behavior characteristic of a (specified) person or thing. Like alcoholism is a reliance on alcohol, capitalism is a reliance on capital. "Communist" China has a billionaire capitalist class. "Socialist" Russia has a billionaire capitalist class. "Socialist" North Korea has a billionaire capitalist class. "Socialist" Vietnam has a billionaire capitalist class. In each of those countries, those wealth-hoarding billionaires use their massive wealth to unduly influence & control politics & governance. To their advantage. Regardless of stated political or economical ideology. Just as is done in the U.S. & much of Europe. China is currently one of the most attractive economies (if not the most attractive) for billionaire investors from Europe & the U.S. Despite common narratives to the contrary, "Socialism" and "capitalism" are not necessarily opposing ideologies, nor opposing forces. Numerous historical figures, from Plutarch to Vilfredo Pareto to Adam Smith to Thomas Jefferson to Thomas Paine to Albert Einstein, and so many more have pointed to the fact that wealth, capital & assets historically tend to concentrate in the hands of a few. ANY system can be corrupted. "Socialists" need to be very careful, because with central planning & control being a primary tenet of most "socialist" models, they can easily be misled into allowing Corporate Lords to instill themselves as that central planning & controlling authority. Keep in mind, the Federal Reserve is the primary central planning authority in the U.S. 
(creating policy on interest rates, employment rates, money supply, etc), and it is owned by the largest national banks. One can take a look at WTID inequality charts and see how most every country is currently experiencing vast wealth & income inequality. Those Big Asset Management firms & Big Banks listed above are not confined solely to the U.S. and/or Europe. They are truly global in nature. Active in the largest economies. Prior to the American Revolution, mega-corporations, like the East India Co., the South Sea Co., and others dominated entire economies and nations. It's been estimated that the East India Co. alone effectively maintained control over 24 percent of the entire global population leading to the 18th century. We have returned to that same schema of corporate control. Imperialism is a little different, as the threat is overt. One economist estimated the total wealth England/Britain extracted from India at a 2-digit figure in trillions of dollars. Sorry, I'm not sure I understand your stance. Imperialism is a policy or ideology of extending rule over peoples and other countries, for extending political and economic access, power and control, often through employing hard power, especially military force, but also soft power. Soft power can be achieved by corporate influence & control (hence monetary influence & control). These are transnational firms. With even larger transnational footprints & close inter-connected relationships. To be clear, I'm not claiming we experiencing the exact same scenario as 18th century Britain, but there are a good number of disturbing parallels. And people need to be ware of those. The failure to identify similar patterns is dangerous. I.e. those whom fail to learn lessons of the past….. Much of East India Company's influence over Britain was covert. It used it's vast wealth to vastly influence public policy, behind the scenes. Often via manipulating systems of public debt. In re: to "Imperialism is a little different, as the threat is overt." I warn people against rigid, constraining, "either/or" dichotomous thinking habits. I would not agree that Imperialism necessarily be overt. I would claim that Imperialism can be either overt or covert. Or both, at the same time. It's not necessarily a zero-sum game. The Declaration of Independence was not a declaration merely against the corrupted British government, but similarly against the mega-corporations whom were exercising undue influence & effective control against that British government. The Boston Tea Party was not an act merely against the corrupted British government, but similarly against the mega-corporations whom were exercising undue influence & effective control against that British government. I would argue that Imperialism is in existence today, by these firms. But I also allow others their own opinions. "The one who engages in conversation," Cicero wrote, "should not debar others from participating in it, as if he were entering upon a private monopoly; but, as in other things, so in a general conversation he should think it not unfair for each to have his turn." I'm just not sure your point. But I always enjoy productive, civil & reasonable conversation. A late comment: VI Lenin wrote "Imperialism. The Highest Stage of Capitalism." I think it was the Situationist International (1960s) who countered "Imperialism. The First Stage of Capitalism." They were making the point that you first had to conquer a people / nation before you could exploit it… This. Thank you. 
People seem to forget the fundamental role of land revenue in capitalism. I used to pilot a small plane, have kept up a little: Planes use 4.2 to 4.4 gigahertz to determine altitude. 5G becomes active tomorrow on 3.7 to 3.9 gigaherz. 3.9 may be too close to 4.2; pilots have expressed concern. Reply to Penelope Has air pressure gone out of fashion? Orthus, radio altimeters. …After two years of patient observation I've come to the irrevocable conclusion that what we are witnessing is merely a mass outbreak of 'Chūnibyō' (or '8th-grader syndrome') amongst the Davostani elites… – Mercifully however it is, thus far, *Exclusively Isolated* to them… https://tvtropes.org/pmwiki/pmwiki.php/Main/Chuunibyou https://en.wikipedia.org/wiki/Ch%C5%ABniby%C5%8D I tend to discount all this "mass psychosis" bit cuz I am not seeing it. It doesn't seem to be present in small town USA or in rural areas. Certainly I've come across a FEW people who adhere to the covid line, but I would describe them as PUNITIVE– not fearful. The attitude of the few coviders I've seen is You-can't-think-differently-than I-do. I-will make-you-conform. i.e, different pathology than mass psychosis. Question: WHY is it different in bigger cities? Apparently you all are seeing something like mass psychosis in bigger cities. What is it about bigger cities that breeds this? Close proximity and large electronic billboards. The media is everywhere you look. It leads to groupthink. Wheat Cracker You should consider reading the essay "The Authoritarian Personality in the 21st Century". Groupthink is everywhere, even in supposed "rogue" and "outlaw" groups. I have spent decades in "underground" & "outsider" groups and most everyone conforms. The human species is largely a gregarious one. The human species has "evolved" to become highly insecure. Each of which tend to lead to group identity. But bigger cities mean more traffic, more crowds, less space, more congestion, more competition, more hassles, etc. Tempers flare when there is less space to move freely. Plus more people, greater density usually means more diversity. And whilst I value diversity, truth is people like to identify and congregate in groups, often based on identities. And more diversity in identities means more conflict (again, often stemming from groupthink). Bigger cities are more self-destructing. Due to the density, crowd, space, congestion issues mentioned above. Plus often increases in crime. They're also more often more complex and costlier, which add stress factors. But try being a true "outsider" in a small town if you want to experience small town mass psychosis. Hopelessness. NickM You-can't-think-differently-than I-do. I-will make-you-conform" is similar to the mass psychosis called Nazism, which bodily genocided nonconformists; and McCarthyism, which culturally genocided U$ communists. Reply to NickM The "communists" committed a lot more genocide than the others mentioned. And they are right behind the present genocide which threatens to dwarf all it's predecessors. They are the centers of economy. More functions are being carried out in the city, more capital circulation, more workplaces, more police, more company representatives, denser Spectacle. The city is a constant thought-stopper in the cultic sense. Too many responsibilities, too much sensory input, all the random people one must constantly deal with. People in cities have no time or opportunity to step away, there's always the next round of ever increasing bills to keep up with. 
There are numerous reports of magnets sticking to vaxxed arms, but it's not universal. I think it impresses people and could potentially open a lot of minds if we could figure out under what circumstances this magnetism occurs. Does the vaxxed person have to be indoors, perhaps close to an outlet or an electronic device? Close to a car w the engine running and the hood up? In some places the magnetic effect is common, but I don't know enough vaxxed people where I am to be able to investigate this very thoroughly. Does anyone have any ideas? We don't want to wait for 5G. 5G Dangers. This document alone should set alarm bells ringing. If you search for the vax + 5G it's all a conspiracy like 3G and 4G was if you know what I mean. https://ecfsapi.fcc.gov/file/1053072081009/5G%20Radiation%20Dangers%20-%2011%20Reasons%20To%20Be%20Concerned%20_%20ElectricSense.pdf Slow death rays. Theobalt And this is an FCC gov. official document, public record. I don't think we have to look very far… any antennas near you? Some officials at gov. are obviously bought and paid for, but they still have these official studies out…. I wonder how long that is going to last… I also liked the link I got from Google strait to the CIA web site, in winter 2020, when I wanted to know how many people died per year in different countries, just curious… watching the WHO report of Covid deaths with the other eye. Reply to Theobalt Antennas are everywhere and growing but it seems to have been halted. I witnessed a frenzy in 2020 alone with engineers rushing to install the new equipment. I have CAT cables attached to the PC and rarely use wireless. I'm getting the feeling the removal of Chinese equipment was a bluff, when in fact it was already in place and just needed the finishing touches to get it ready for activation. All the feelings we can get is from the info they have not filtered yet… I'm pretty sure they are keeping buzy… as Musk said, data is the new gold… my ISP premiums are a testimonial to that Blackmail comes next. Alphabet agencies already have our data stored ready for analysis. In the 1940s, the US Navy studied radio frequencies waves and came up with a list of about 10 or 12 potential dangers. Yet the subject remains taboo – as does geoengineering/climate engineering, which also goes back to the 1940s. Tony yes I'm quite sure that 5G is harmful, and it seems likely that it may interact with magnetic components of the vaxx like graphene oxide/reduced graphene oxide. All meant to facilitate their human/AI connections– or at the least a thorough surveillance/control system. But at the mo I was hoping to find a set of circumstances under which we could show magnetic effects of the vaxx right now– before 5G– cuz I think it wd be a cautionary tale to those who are considering the vaxx. Well I will certainly run around the office asap with a low mass magnet, see if I can interest some of my colleagues into a little novelty game… will take notes… vaxxed: magnet held between 5-10 seconds on average, d-vaxxed: 30-60 seconds, t-vaxxed: had to pull really hard on the magnet to retrieve it and hurt the subject a little It was a big thing, but seems to have died down. Perhaps, they modified the jab. Tim Drayton It seems from various reports by independent researchers who have looked into their contents that the injections do not all contain the same things, so magnets may only stick to the arms of those who have received an injection with a certain ingredient. 
It could also depend on whether the injection was made into muscle or a blood vessel. Reply to Tim Drayton and, why are (so many?) unvexed magnetic too? one of my pals got a horrendous fright, he could stick a corona of those wee magnets to his head and move it about.. hard core anti-plandemic free person he is. Extensive list of published research showing the dangers of the electromagnetic waves – microwaves and millimeter waves as used by all wireless tech – on the human body. Published Scientific Research on 5G, Small Cells Wireless and Health https://committees.parliament.uk/writtenevidence/2230/html/ A Russian Review on Millimeter Waves declassified by the CIA in 2015 "Biological Effect of Millimeter Waves" reported multiple research findings and concluded that: "Morphological, functional and biochemical studies conducted in humans and animals revealed that millimeter wave caused changes in the body manifested in structural alterations in the skin and internal organs, qualitative and quantitative changes of the blood and bone marrow composition and changes of the conditioned reflex activity, tissue respiration, activity of enzymes participating in the process of tissue respiration and nucleic metabolism. The degree of unfavorable effect of millimeter waves depended on the duration of the radiation and individual characteristics of the organism." They want to pound you with millimeter waves 247 and have your house fully wired up with millimeter wave devices, on top of all the microwaves they already pump out selecting the most nefarious frequencies to broadcast on. Further important info: First 10 mins of this clip essential, explains how regulatory capture facilitates these crimes against humanity – the rollout of these silent weapons in broad daylight with no oversight – they make it up as they go along and deny the existence of sub thermal radiation – science that proves the harm. 5G – Kevin Mottus https://www.youtube.com/watch?v=Me1YfVZgHlA The industry admits formally it spends ZERO dollars on safety testing its products: US Senator Blumenthal Raises Concerns on 5G Wireless Technology Health Risks at Senate Hearing Mucho, Thank you for the three links. I especially liked Kevin Mottus' presentation. I've added all three links to my file. We are under so many attacks that I think we must remove the stolen wealth from the control of the billionaire/trillionaire class. It is that concentration of wealth and ownership in so few hands that gives them power over us. Be Afraid! Be Very Afraid! Oops! I posted the wrong link. THIS is what you should be afraid of: That is scary shit, isn't it! Reply to DM: Don't worry. Most of them are now double-jabbed and fully boosted. If you encounter one, just make a dash for a flight of stairs. a couple of days ago I was walking in a pool between 2 senior doctors who were so very distressed about Qld health reemploying unvax worker….the world is going to end apparently as they spread covid far and wide,,,,funny they didn't for the last 18 months… If I had any respect left for the profession….and believe me I have very little…what was left disappeared in that moment…..obviously the vax has obliterated any ethics they previously had, if they had any, and one of them knew I am unvax and will remain that way…their arrogance is unlimited but their own health isn't…I shall be interested to see how long they last. I assume English doctors are just as brainwashed as the Aussies, and it seems that about 50% of them might be waking up. 
https://www.dailymail.co.uk/news/article-10243769/Just-40-frontline-NHS-staff-England-booster-Covid-vaccine.html Just 40% of frontline NHS staff in England had a booster Covid vaccine at start of November and fewer than three in 10 care home staff are triple-jabbed now — despite being first in queue for third doses Caption: All staff groups involved in direct patient care reported a Covid booster uptake below 50 per cent. Doctors were the ones most likely to get a third vaccine dose, followed by qualified clinical staff, a group that includes midwives and paramedics, nurses, and finally support staff such as healthcare assistants coming in last. Seems the more well paid here will conform. the miller school zombie town thank you that was soooooo emotional all white coats should know satan has your back go forth and capture souls for the beast system Reply to gordan utter boak. Voodoo and ritual – now, with your white coat – you are a Figure of Authority. Ha ha. Jeeez. I think we are in a situation of survival and self preservation don't you?… moral debates?… who ever offered one to me always had a material agenda…. Pfff come on… what are we trying to achieve here Here is a link to a youtube by After Skool and Academy of ideas on mass psychosis : https://www.youtube.com/watch?v=09maaUaRT4M Croach It's bs. Most people are simply ignorant of reality. They're buried under a shitonne of disinformation and have been conditioned for generations not to engage their brains when it comes to big complex issues that they feel they can have no influence on. Exiting learned helplessness can be utterly terrifying for most people. Has stunk like a divide and rule psyop from the start. Remember – punch up or shut up. Reply to Croach Croach, yes you're right; there's such a lot of passivity– such an attitude that judging important topics is simply outside of their competence. Sometimes if you dig a little you find that they DO have an opinion and that it's fairly rational, but there's a reluctance to assert an opinion: "Who am I to know?" attitude. In case people missed it, I'll repost this request. The Definition and Determination of "Psychosis" I've so far had only a superficial skim of a fraction of the material. With regard to the concept of "psychosis", if there is anywhere provided an objective definition and means of determination, please post a summary and link below. Rogerthecat I saw the original YouTube video, which remarkably is still out there. Desmet goes to great lengths to describe the phenomenon of Mass Formation. He describes this as a type of mass hypnosis and relates how quite sober, rational people can be made to behave out of character by hypnosis and that surgical procedures, some very major, can also be carried out under hypnosis without the subject being aware of any sensation. In answer to a question he also confirms that to his knowledge there appears to be a limit to what hypnosis can achieve. There appears to be an inner moral consciousness which prevents or blocks subjects from carrying out acts of extreme harm to others. This is a crucial point. When questioned about the degree of culpability a person under hypnosis might have towards acts they have committed he suggested that it is very likely that they can understand right from wrong. If this is the case, the pretext that a participant in Mass Formation might use to abdicate themselves from crimes because they were not "conscious" would fail. 
This would also apply if the claim was made that they were in some way the unwitting "victim" of mass hypnosis when the acts were committed. In short, there should be no defence against acts of atrocity committed by participants, influencers and leaders. In the trial of Eichmann the "I was only obeying orders" defence was deemed to have no merit. Reply to Rogerthecat He could have called it mass hypnosis/delusion. Scholars want to be remembered for the new terms they concoct.
The 2016 ARML Competition, Problem 7

Problem (this is the quantity every solution below evaluates): three numbers $x,$ $y,$ $z$ are chosen independently and uniformly at random from $[0,1];$ find the probability that $\max(x,y,z)-\min(x,y,z)\le\dfrac{2}{3},$ i.e., that all three lie within $\dfrac{2}{3}$ of one another.

Solution 1

Using "$\sim$" to indicate that a random variable has the probability density function that follows the tilde, for $a\in[0,1]:$

$\max(x,y)\sim 2a,\quad \max(x,y,z)\sim 3a^2,\ \text{etc.}$

This means $\min(x,y,z)\sim 3a^2-6a+3=3(1-a)^2.$ Convolve to get the density of the range $\max(x,y,z)-\min(x,y,z):$ $6a-6a^2.$ Integrate from $0$ to $\dfrac{2}{3}.$

Answer: $\dfrac{20}{27}=0.7407\ldots$

Solution 2

Consider the ordering $x\ge y\ge z.$ The probability to be calculated is $3!$ times the probability for the assumed case. The $3!$ comes from permuting the chosen ordering. Thus,

$\displaystyle \begin{align} P&=3!\int_{z=0}^1\int_{y=z}^{\min(1,z+2/3)}\int_{x=y}^{\min(1,z+2/3)}dx\,dy\,dz\\ &=6\left(\int_{z=0}^{1/3}\int_{y=z}^{z+2/3}\int_{x=y}^{z+2/3}dx\,dy\,dz+\int_{z=1/3}^1\int_{y=z}^1\int_{x=y}^1dx\,dy\,dz\right)\\ &=6\left(\frac{2}{27}+\frac{4}{81}\right)\\ &=\frac{20}{27}\approx 0.741 \end{align}$

Solution 3

Let $w=\max(x,y,z)-\min(x,y,z).$ The distribution has for pdf $f(w)=3-3(1-w)^2-3w^2$ and for cdf $F(w)=3w^2-2w^3,$ so that

$\displaystyle F\left(\frac{2}{3}\right)=\frac{20}{27}.$

Expanded Solution 3

... we will follow three approaches: one a formal proof, one an intuitive snap derivation, one a Monte Carlo assessment.

Approach 1

We have $6$ permutations of the set $\{x,y,z\}:$

$\{x\ge y\ge z,\,x\ge z\ge y,\,y\ge x\ge z,\,y\ge z\ge x,\,z\ge x\ge y,\,z\ge y\ge x\},$

all equiprobable with probability $\dfrac{1}{6}.$ So getting the marginal distribution of the middle in order, defined as $\{y:\,x\ge y\ge z\},$ that is, lying between $x$ (the max) and $z$ (the min), gives us:

$\displaystyle f_y(x\ge y\ge z)=\int_0^1\int_0^1\mathbb{1}_{x\ge y\ge z}\,dx\,dz=y-y^2.$

Now the cdf of $W,$ i.e., $P(w\gt \max-\min),$ corresponds to

$\displaystyle P(x\ge y\ge z)=6\left(\int_0^w(y-y^2)\,dy\right)=6\left(\frac{1}{2}w^2-\frac{1}{3}w^3\right)=3w^2-2w^3.$

(How did we derive the double integral? $\displaystyle \int_0^1\mathbb{1}_{x\ge y}\,dx=1-y,$ then step 2, $\displaystyle \int_0^1(1-y)\mathbb{1}_{y\ge z}\,dz=y-y^2.$)

Approach 2 (the snap derivation)

We have the cdf of $\min(x,y,z),$ $F_{\min}(w)=1-(1-w)^3,$ and that of $\max(x,y,z),$ $F_{\max}(w)=w^3.$ The pdf's are as follows: $p_{\min}(w)=3(1-w)^2,$ $p_{\max}(w)=3w^2.$ The pdf of a uniform variable lying between the two (assuming only 3 draws):

$\displaystyle p_w(w)=3-p_{\min}(w)-p_{\max}(w)=6(w-w^2).$

Why? I don't know exactly, but it snaps out immediately.

Approach 3 (Monte Carlo)

We option traders are mandated to never accept a theoretical result without the Monte Carlo check, just in case something slipped through the derivations. And, believe me, things slip through derivations. (A minimal sketch of such a check is given at the end of this section.)

Proof 4

Consider points lying in a unit cube in the first octant with one corner at the origin. To find the volume of the region of points where the inequality holds, take cross-sections for each value of $z$ from $0$ to $1.$ For any particular value of $z,$ the values of $x$ and $y$ must satisfy

$0 \le x \le 1,$ $0 \le y \le 1,$ $\displaystyle -\frac{2}{3}\le y-x\le\frac{2}{3},$ $\displaystyle z-\frac{2}{3}\le x\le z+\frac{2}{3},$ $\displaystyle z-\frac{2}{3}\le y\le z+\frac{2}{3}.$

Depending on the value of $z,$ the resulting region is a square of area $\dfrac{4}{9}$ (when $z=0$ or $1$), a hexagon of area $\dfrac{4}{9}+\dfrac{4z}{3}$ when $0\lt z\lt\dfrac{1}{3},$ a hexagon of area $\dfrac{8}{9}$ when $\dfrac{1}{3}\le z\le\dfrac{2}{3},$ or a hexagon of area $\dfrac{4}{9}+\dfrac{4(1-z)}{3}$ when $\dfrac{2}{3}\lt z\lt 1.$ (The original solution includes a figure showing the region where $z=\dfrac{1}{6}.$)
In the cases where $z$ is not between $\dfrac{1}{3}$ and $\dfrac{2}{3},$ the area changes linearly with $z$ and hence can be averaged to $\dfrac{1}{2}\left(\dfrac{4}{9}+\dfrac{8}{9}\right)=\dfrac{2}{3}.$ This area corresponds to $\dfrac{2}{3}$ of the $z$-values, with the other $\dfrac{1}{3}$ having area $\dfrac{8}{9},$ for a total probability of

$\displaystyle \frac{2}{3}\cdot\frac{2}{3}+\frac{1}{3}\cdot\frac{8}{9}=\frac{20}{27}.$

This is Problem 7 from the 2016 American Regions Mathematics League (ARML) competition. Solution 1 is by Barry Vanhoff; Solution 2 is by Amit Itagi; Solution 3 is by N. N. Taleb; Proof 4 is the official proof available at the site of the Competition. N. N. Taleb generously offered to provide details for his derivation, and I gratefully took him up on the offer; Expanded Solution 3 is the result of that interaction. Mike Lawler offered an exciting 3d illustration of the relevant region. Mike later used 3d printing to work out the problem with his young sons; see his blog.

Geometric Probability
Geometric Probabilities
Are Most Triangles Obtuse?
Eight Selections in Six Sectors
Three Random Points on a Circle
Barycentric Coordinates and Geometric Probability
Stick Broken Into Three Pieces (Trilinear Coordinates)
Stick Broken Into Three Pieces. Solution in Cartesian Coordinates
Bertrand's Paradox
Birds On a Wire (Problem and Interactive Simulation)
Birds on a Wire: Solution by Nathan Bowler
Birds on a Wire. Solution by Mark Huber
Birds on a Wire: a probabilistic simulation. Solution by Moshe Eliner
Birds on a Wire. Solution by Stuart Anderson
Birds on a Wire. Solution by Bogdan Lataianu
Buffon's Noodle Simulation
Averaging Raindrops - an exercise in geometric probability
Averaging Raindrops, Part 2
Rectangle on a Chessboard: an Introduction
Marking And Breaking Sticks
Random Points on a Segment
Semicircle Coverage
Hemisphere Coverage
Overlapping Random Intervals
Random Intervals with One Dominant
Points on a Square Grid
Flat Probabilities on a Sphere
Probability in Triangle
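As mentioned in Approach 3 of Expanded Solution 3 above, the $20/27$ result can be checked by simulation; no simulation code survives in this copy, so the following is a minimal sketch of such a check. The use of Python with NumPy, the particular seed, and the number of trials are assumptions of mine, not part of the original solutions.

import numpy as np

# Monte Carlo check of P(max(x, y, z) - min(x, y, z) <= 2/3)
# for x, y, z independent Uniform(0, 1); the exact answer is 20/27.
rng = np.random.default_rng(0)   # assumed seed, for reproducibility
n = 1_000_000                    # assumed number of trials

samples = rng.random((n, 3))                         # n triples of uniforms
spread = samples.max(axis=1) - samples.min(axis=1)   # range of each triple
p_hat = np.mean(spread <= 2 / 3)

print(p_hat, 20 / 27)   # p_hat should be close to 0.7407...

With a million trials the standard error is roughly $\sqrt{p(1-p)/n}\approx 4\times 10^{-4},$ so the estimate should agree with $20/27\approx 0.7407$ to about three decimal places.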
Recent questions tagged beta

If \(\sin \theta+\cos \theta=1\), then what is the value of \(\theta\) \(\left(0^{\circ}<\theta<90^{\circ}\right)\)? asked Mar 8, 2022 in Mathematics by ♦Gauss Diamond (71,587 points) | 251 views
Which of these investments is seen as riskiest? asked Nov 3, 2021 in General Knowledge by ♦MathsGee Platinum (163,814 points) | 217 views | riskiest
Let \(\alpha=111.371^{\circ}\) and \(\beta=37^{\circ} 28^{\prime} 17^{\prime\prime}\). asked Sep 19, 2021 in Mathematics by ♦MathsGee Platinum (163,814 points) | 185 views
Is \(\dfrac{\sin 30^{\circ} \times \tan 60^{\circ}}{\tan 30^{\circ} \times \cos 60^{\circ}}=3\)?
Solve the equation: \(2 \cos ^{2} \theta-5 \cos \theta=3 ; \ 0^{\circ} \leq \theta \leq 360^{\circ}\)
How would you differentiate \(\cos x \sin x\)?
How would you differentiate \(\cos z+\csc z\)?
Prove the identity $\frac{\cos \alpha}{\sin \alpha} \cdot \frac{1}{\cos ^{2} \alpha} \cdot \sin ^{2} \alpha=\tan \alpha$ asked Aug 28, 2021 in Mathematics by Siyavula Bronze Status (8,302 points) | 437 views
Prove that $\tan \lambda \cdot \cos \lambda=\sin \lambda$
Prove that $1-\sin ^{4} \alpha=\cos ^{2} \alpha\left(1+\sin ^{2} \alpha\right)$
Simplify $\frac{1}{\sin ^{2} \theta}\left(\frac{1}{\cos ^{2} \theta}-1\right)$ asked Aug 28, 2021 in Mathematics by Siyavula Bronze Status (8,302 points) | 92 views
Simplify $\tan ^{2} \beta\left(1+\frac{1}{\tan ^{2} \beta}\right)$
Simplify $\frac{\cos ^{2} \alpha}{\sin ^{2} \alpha}\left(\frac{1}{\cos ^{2} \alpha}-1\right)$
Simplify the expression $\sin x\left(\cot ^{2} x+1\right)$
Show that $\sin \theta(\csc \theta-\sin \theta)=\cos ^{2} \theta$
Verify that $\tan ^{2} x\left(\csc ^{2} x-1\right)=1$
Verify that $\cos ^{2} \theta-\sin ^{2} \theta=1-2 \sin ^{2} \theta$
Verify that $-\sin (-x) \cos (-x) \tan x \csc x=\sin x$
Show that $\frac{\sec \alpha}{\tan \alpha}=\csc \alpha$
Simplify $\frac{\sin ^{2} x+\cos ^{2} x}{\sin x}$
Simplify $\sin x \cot x$
Prove: $\tan x+\cot x=\sec x \csc x$
Prove: $\tan x+\cos x=\sin x(\sec x+\cot x)$
Verify the identity: $\sin \theta \cot \theta=\cos \theta$.
Multiply $(\sin \theta+2)(\sin \theta-5)$.
Add $\frac{1}{\sin \theta}+\frac{1}{\cos \theta}$.
Write $\sec \theta \tan \theta$ in terms of $\sin \theta$ and $\cos \theta$, and then simplify.
Write $\tan \theta$ in terms of $\sin \theta$.
If $\cos \theta=\frac{1}{2}$ and $\theta$ terminates in quadrant IV, find the remaining trigonometric ratios for $\theta$.
If $\sin \theta=\frac{3}{5}$ and $\theta$ terminates in quadrant II, find $\cos \theta$.
If $\sin \theta=-\frac{3}{5}$ and $\cos \theta=\frac{4}{5}$, find $\tan \theta$ and $\cot \theta$.
If $\sin \theta=\frac{3}{5}$, then $\csc \theta=\frac{5}{3}$, because
$\sin 43^{\circ} \cos 23^{\circ}-\cos 43^{\circ} \sin 23^{\circ}$ is equal to asked Aug 7, 2021 in Mathematics by ♦MathsGee Platinum (163,814 points) | 319 views
Prove that $\sin 2 \alpha=2 \sin \alpha \cdot \cos \alpha$ asked May 26, 2021 in Mathematics by ♦MathsGee Platinum (163,814 points) | 417 views
Prove that $\sin (\alpha+\beta)=\sin \alpha \cdot \cos \beta+\cos \alpha \cdot \sin \beta$
Determine the general solution to: $3 \sin \theta \cdot \sin 22^{\circ}=3 \cos \theta \cdot \cos 22^{\circ}+1$
Given that $\cos 61^{\circ}=p$, express the following in terms of $p$:
Simplify $\frac{\tan (-420^{\circ}) \cdot \cos 156^{\circ} \cdot \cos 294^{\circ}}{\sin 492^{\circ}}$
Prove that $\tan^2\theta -\sin^2 \theta =\tan^2\theta \cdot \sin^2\theta$ asked Apr 8, 2021 in Mathematics by ♦Gauss Diamond (71,587 points) | 719 views
Prove that $\cos (2\theta) = \cos^2 \theta - \sin^2 \theta$
The alpha helix and the beta pleated sheet are both common polypeptide forms found in what level of protein structure? asked Jun 13, 2020 in General Knowledge by ♦MathsGee Platinum (163,814 points) | 1,659 views | polypeptide
Given $f(x)=2\sin{x}$ and $g(x)=\cos{(x+30^{\circ})}$ for $x \in [0^{\circ}; 360^{\circ}]$ asked May 24, 2020 in Mathematics by ♦MathsGee Platinum (163,814 points) | 1,102 views
Solve for $x$: $\text{cosec}\,2x = 2.114$ for $2x \in [0^{\circ};180^{\circ}]$
Complete the following identity: $1 -\sin^{2}3x= ...$
Simplify the trig expression fully without using a calculator
Which law in probability theory states the following? If we have a large enough number of samples, their histogram function converges their true probability density function. (for a continuous random variable) I know that "In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed." But this law is just about expected value. It does not include variance or probability density function. probability statistics probability-theory JulieJulie $\begingroup$ I don't think there's a law so much as some rules of thumb. First off, you need to assume at least the form of distribution and just estimate the parameters. Otherwise, the longer and thinner the tails, the more data you would need. So I don't think there's a general result that applies to arbitrary distributions. $\endgroup$ – Gregory Grant Jun 16 '15 at 15:37 $\begingroup$ thanks for your reply, I have a question regarding their standard deviation: can we use law of large number to say that with large enough samples, their sample standard deviation converges their standard deviation? $\endgroup$ – Julie Jun 16 '15 at 15:41 $\begingroup$ You can use convergence properties of estimates of the standard deviation, but they depend on assuming at least the form of the distribution. Though I guess you could use Chebychev's inequality. Are you familiar with Chebychev? $\endgroup$ – Gregory Grant Jun 16 '15 at 15:56 $\begingroup$ Thanks I just read about it in Wikipedia, it says "under Chebyshev's inequality a minimum of just 75% of values must lie within two standard deviations of the mean and 89% within three standard deviations" how can this rule say that from large samples we can estimate standard deviation? $\endgroup$ – Julie Jun 16 '15 at 16:01 $\begingroup$ Chebyshev won't help, alas: it assumes that your distribution has a finite variance, which isn't one of the hypotheses you gave. As Wikipedia says, "Let $X$ (integrable) be a random variable with finite expected value $\mu$ and finite non-zero variance $\sigma^2$... " [emphasis mine] $\endgroup$ – John Hughes Jun 16 '15 at 16:54 I worry that the statement you've made doesn't quite make sense. Exactly what is the definition of "the probability density function of the samples"? For instance, if you take the uniform probability on the unit interval, after $k$ samples, you'll might say that that "prob. density of the samples" is a function that's $1/k$ at each of the $k$ sample points...but such a function won't converge to the everywhere-one function, for the limit of these individual functions can be nonzero on at most a countable number of points. As for your followup question about standard deviation, I believe that the answer is "no," for there are distributions whose standard deviation is infinite, but the sample-standard-deviation will always be finite, hence not "close" to the SD of the underlying distribution. John HughesJohn Hughes $\begingroup$ Thanks, I edited the question. suppose that the random variable is a continuous random variable not discrete. $\endgroup$ – Julie Jun 16 '15 at 16:02 $\begingroup$ Right ... that's exactly the case where things don't make sense: with a finite collection of samples, your "sample distribution" will be discrete, and hence won't converge to the distribution of the RV. 
If you mean "the distribution is from a known, few-parameter family like Gaussians, and my "sample dist." is the best-fit Gaussian for the samples," that's an altogether different question. Once again: What definition are you using for "the probability density function of the samples"? Only once we know that can we hope to answer your question carefully. $\endgroup$ – John Hughes Jun 16 '15 at 16:15 $\begingroup$ Thank you, yes you are right many parameters can affect the answer. Suppose that we do not know the distribution form and we have a large number of samples, using which we want to estimate the PDF of the underlying continuous random variable. $\endgroup$ – Julie Jun 16 '15 at 16:23 $\begingroup$ In that case, my answer stands: your statement doesn't make sense until you define what you mean by "the probability density function of the samples". Sorry I can't fix this, but I really don't know what it means. $\endgroup$ – John Hughes Jun 16 '15 at 16:32 $\begingroup$ thanks, I also found this en.wikipedia.org/wiki/… $\endgroup$ – Julie Jun 16 '15 at 21:17 Probably the result closest to what you're saying would the Glivenko-Cantelli theorem: https://en.wikipedia.org/wiki/Glivenko%E2%80%93Cantelli_theorem. This states that the empirical distribution function of a random sample converges in a certain sense to the true distribution function as the sample size tends to infinity. dsaxtondsaxton $\begingroup$ thank a lot. what is the relationship between the Glivenko-Cantelli theorem and this en.wikipedia.org/wiki/…? $\endgroup$ – Julie Jun 17 '15 at 19:05 $\begingroup$ The first statement there refers to pointwise convergence of the empirical distribution function, which is to say the proportion of values not exceeding any t converges to the true probability. This is nothing but the law of large numbers, so the interesting statement is the second which says that the distance between the true and observed functions (under the supremum norm definition of distance) itself goes to zero. The second statement is exactly the Glivenko-Cantelli theorem. $\endgroup$ – dsaxton Jun 17 '15 at 20:33 Your statement is rather imprecise. One could assume the following (but perhaps this is restrictive) : the density $f(x)$ has support on $[a,b]$. Let $g_{n,m}(x)$ be the histogram constructed by taking $n$ samples and dividing the support interval in $m$ segments of same length $h=(b-a)/m$. Let $x_i$ be the center point of each histogram segment. Then $g_{n,m}(x_i)$ is a Binomial $B(n,p)$ variable with $p=\int_{x_i-h/2}^{x_i+h/2} f(x) dx = I_{x_i,h}\approx h f(x_i)$ This approximation holds if $h \to 0$ and the function has bounded derivative. Let $$w_{n,m}(x_i)=\frac{g_{n,m}(x_i)}{nh}$$ be the normalized histogram. Then $$E\left(w_{n,m}(x_i)\right)= \frac{ I_{x_i,h}}{h}\approx f(x_i)$$ $$Var\left(w_{n,m}(x_i)\right)= \frac{1}{n h^2} I_{x_i,h}(1-I_{x_i,h})\approx \frac{ f(x_i)}{n h}$$ Then, if the above condition holds, and $h\to 0$, and $n h \to \infty$, the histogram is asymptotically unbiased, and it's variance tends to zero, hence it converges in mean square (and hence in probability). leonbloyleonbloy Not the answer you're looking for? Browse other questions tagged probability statistics probability-theory or ask your own question. Definition of random Is probability and the Law of Large Numbers a huge circular argument? Is the Law of Large Numbers empirically proven? 
Convolutional neural network-based automatic heart segmentation and quantitation in 123I-metaiodobenzylguanidine SPECT imaging Shintaro Saito ORCID: orcid.org/0000-0002-0948-17151, Kenichi Nakajima ORCID: orcid.org/0000-0001-7188-87462, Lars Edenbrandt3, Olof Enqvist4,5, Johannes Ulén5 & Seigo Kinuya6 EJNMMI Research volume 11, Article number: 105 (2021) Cite this article Since three-dimensional segmentation of cardiac region in 123I-metaiodobenzylguanidine (MIBG) study has not been established, this study aimed to achieve organ segmentation using a convolutional neural network (CNN) with 123I-MIBG single photon emission computed tomography (SPECT) imaging, to calculate heart counts and washout rates (WR) automatically and to compare with conventional quantitation based on planar imaging. We assessed 48 patients (aged 68.4 ± 11.7 years) with heart and neurological diseases, including chronic heart failure, dementia with Lewy bodies, and Parkinson's disease. All patients were assessed by early and late 123I-MIBG planar and SPECT imaging. The CNN was initially trained to individually segment the lungs and liver on early and late SPECT images. The segmentation masks were aligned, and then, the CNN was trained to directly segment the heart, and all models were evaluated using fourfold cross-validation. The CNN-based average heart counts and WR were calculated and compared with those determined using planar parameters. The CNN-based SPECT and conventional planar heart counts were corrected by physical time decay, injected dose of 123I-MIBG, and body weight. We also divided WR into normal and abnormal groups from linear regression lines determined by the relationship between planar WR and CNN-based WR and then analyzed agreement between them. The CNN segmented the cardiac region in patients with normal and reduced uptake. The CNN-based SPECT heart counts significantly correlated with conventional planar heart counts with and without background correction and a planar heart-to-mediastinum ratio (R2 = 0.862, 0.827, and 0.729, p < 0.0001, respectively). The CNN-based and planar WRs also correlated with and without background correction and WR based on heart-to-mediastinum ratios of R2 = 0.584, 0.568 and 0.507, respectively (p < 0.0001). Contingency table findings of high and low WR (cutoffs: 34% and 30% for planar and SPECT studies, respectively) showed 87.2% agreement between CNN-based and planar methods. The CNN could create segmentation from SPECT images, and average heart counts and WR were reliably calculated three-dimensionally, which might be a novel approach to quantifying SPECT images of innervation. Estimating sympathetic nervous activity using 123I-metaiodobenzylguanidine (MIBG) is a valuable adjunct for assessing the severity, prognosis, and effects of treatment for heart failure, arrhythmogenic disease, and neurological diseases such as dementia with Lewy bodies and Parkinson's disease [1,2,3,4,5,6,7,8]. The heart-to-mediastinum ratio (HMR) and washout rate (WR) in planar images are common indicators of sympathetic nervous activity [9]. Some studies have shown good reproducibility using 123I-MIBG planar images [9,10,11]. However, depending on the method of regions of interest (ROI) definition, up to about 40% of results might located lying in a gray zone around the cut-off, through which normal and abnormal innervation are differentiated in the clinical context [12]. 
In Japan, the HMR and WR have been calculated from planar images using smartMIBG, a semiautomated ROI setting software developed under collaboration with FUJIFILM Toyama Chemical Co. Ltd., Tokyo, Japan [9], whereas ROI has also been set manually according to American Society of Nuclear Cardiology and European recommendations [13,14,15]. Single-photon emission computed tomography (SPECT) generates three-dimensional (3D) images that are potentially useful to discriminate organ and background activities that overlap the heart. Degrees of segmental defects can also be scored using the 17-segment model applied in myocardial perfusion imaging (MPI) [1]. However, 3D 123I -MIBG distribution seemed to be heterogeneous based on SPECT studies [16]. Besides, segmental uptake differs among 123I-MIBG SPECT images of individuals. The normal database for 123I-MIBG sympathetic imaging shows relatively decreased activity in the inferior wall, and this was more prominent in late images [17]. To set three-dimensional ROI using the conventional method is difficult in practice. Here, we present an artificial intelligence (AI) method based on convolution neural networks (CNNs) to define cardiac lesions and calculate heart counts without a manual setting. Deep learning algorithms, in particular CNNs, have become the methodology of choice for analyzing medical images [18]. The deep learning approach has been applied to assess conditions such as cardiovascular diseases and prostate cancer using radiology and nuclear medicine [19, 20]. The CNN can directly identify patterns in 3D SPECT images, which allows the classification of each pixel into anatomical components in the image. However, 3D CNN segmentation and automatic calculation of heart counts for 123I-MIBG SPECT have not been reported because cardiac uptake is quite variable and sometimes significantly reduced in patients with severe heart failure and dementia with Lewy bodies. The present study aimed to create a segmentation method and to calculate heart counts and WR in 123I-MIBG SPECT images using CNN. We also compared this novel approach with conventional quantitation based on planar images. We assessed 51 consecutive patients with heart and neurological diseases by 123I-MIBG planar and SPECT imaging at Kanazawa University Hospital during 2018 and 2019. We selected data from 48 patients with visible lung and liver uptake to evaluate standard organ segmentation of 123I-MIBG images. One patient had low accumulation in the liver parenchyma due to a giant liver cyst, and two others had low accumulation in the lungs partly due to leakage at antecubital injection sites. Table 1 shows the characteristics of the 48 patients (male, n = 32; female, n = 16; average age, 68.4 ± 11.7; range, 26–84 years; weight, 61.1 ± 13.5; range, 28.8–101 kg; body mass index, 23.0 ± 4.1; range, 16–33). Neurological diseases in 27 patients comprised Parkinson's disease (n = 4), dementia with Lewy bodies (n = 2), familial amyloid polyneuropathy (n = 6), and other neurological diseases including progressive supranuclear palsy and related movement disorders (n = 15). Heart diseases in 21 patients comprised chronic heart failure (n = 13), arrhythmia (n = 5), and cardiomyopathy (n = 3). Cardiac 123I-MIBG uptake was considerably reduced to HMR of < 1.5 in 17 patients. The left ventricular ejection fraction (EF) measured by echocardiography (n = 38) was 56.1% ± 17.3% (24–77%), whereas EF was not available in 10 patients with neurological diseases. 
Table 1 Clinical characteristics of the patients

123I-MIBG imaging

Anterior planar and SPECT images were acquired using an Anger camera (Siemens Healthcare, Tokyo, Japan) equipped with a low-medium-energy (LME) collimator from 15–20 (early phase) and 180–240 (late phase) min after the patients received an intravenous injection of 123I-MIBG (111 MBq, FUJIFILM Toyama Chemical Co. Ltd., Tokyo, Japan). The 123I energy was centered at 159 keV with a window of 15% or 20%. Planar images were acquired for 5 min under conditions of a 256 × 256 matrix, 2.4-mm pixels, and zoom factor 1.0, and SPECT images were acquired for 30 s per view under conditions of a 64 × 64 matrix, 6.6-mm pixels, zoom factor 1.45, 60 projections, and a 360° circular orbit with a rotation radius of 24 cm. The SPECT data were reconstructed using filtered back projection (FBP).

Planar image analysis

Early (E) and late (L) average heart counts in planar images (planar HE and HL, unit counts/pixel) and average mediastinal counts (planar ME and ML, unit counts/pixel) were calculated using semiautomated smartMIBG software to set the ROI, as described in detail elsewhere [9]. In brief, the software algorithm uses a circular heart ROI and a mediastinal ROI that is 10% of the width of the body and 30% of the height of the mediastinum. After pointing to the center of the heart, all processing is automated, and manual modifications can be added as required. Early and late heart counts in planar images were calculated using the following formulae for planar HBC, planar H, and planar HMR. Planar HBC and planar H were divided by a decay correction factor (DCF) and by the injected dose (MBq)/kg body weight (BW). The DCF was calculated as $0.5^{\Delta t/13}$, where $\Delta t$ is the time (h) between early and late imaging and 13 h is the physical half-life of 123I. If the interval between early and late imaging was 3 h, the DCF was 0.85. The timing of early imaging was then set at zero (namely DCF = 1).
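For illustration only (not part of the original analysis pipeline), the decay-correction factor described above reduces to a one-line computation, which reproduces the worked example in the text:

```python
def dcf(delta_t_hours: float, half_life_hours: float = 13.0) -> float:
    """Decay-correction factor 0.5 ** (delta_t / T_half), with T_half = 13 h for 123I."""
    return 0.5 ** (delta_t_hours / half_life_hours)

print(round(dcf(3.0), 2))  # 0.85, matching the 3-h example above
```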
$$\text{Planar H}_{\mathrm{BC}}\ (\text{with background correction, BGC}) = (\text{Planar H} - \text{M})\,/\,\text{DCF}\,/\,(\text{injected dose}/\text{kg BW})$$

$$\text{Planar H}\ (\text{without BGC}) = \text{Planar H}\,/\,\text{DCF}\,/\,(\text{injected dose}/\text{kg BW})$$

$$\text{Planar HMR} = \text{Planar H}\,/\,\text{Planar M}$$

Washout rates (WR, %) were calculated using the following formulae for planar WRBC, planar WRNC, and planar WRHMR:

$$\text{Planar WR}_{\mathrm{BC}}\ (\text{with BGC}) = \frac{(\text{Planar H}_{\mathrm{E}} - \text{M}_{\mathrm{E}}) - (\text{Planar H}_{\mathrm{L}} - \text{M}_{\mathrm{L}})/\text{DCF}}{\text{Planar H}_{\mathrm{E}} - \text{M}_{\mathrm{E}}} \times 100$$

$$\text{Planar WR}_{\mathrm{NC}}\ (\text{without BGC}) = \frac{\text{Planar H}_{\mathrm{E}} - \text{Planar H}_{\mathrm{L}}/\text{DCF}}{\text{Planar H}_{\mathrm{E}}} \times 100$$

$$\text{Planar WR}_{\mathrm{HMR}} = \frac{\text{Planar H}_{\mathrm{E}}/\text{M}_{\mathrm{E}} - \text{Planar H}_{\mathrm{L}}/\text{M}_{\mathrm{L}}}{\text{Planar H}_{\mathrm{E}}/\text{M}_{\mathrm{E}}} \times 100$$

Segmentation based on CNN

We used the following two-step model: Early and late images were registered using uptake in the liver and lungs that is highly visible in both images. The heart was directly segmented using both images as input and a single volume as output. All models were trained and evaluated using fourfold cross-validation. We trained the CNN to segment the lungs and liver on early and late SPECT images using ADAM [21] and a negative log-likelihood loss with an initial learning rate of 0.001. Images in each cross-validation fold were divided 80%/20% into training and validation sets, respectively, using the CNN architecture described in Fig. 1. The batch size was 150 and the model stopped training when the validation loss remained stable for 10 epochs. The resulting segmentations were converted to binary masks and used to register early and late images with Elastix [22]. The advanced mean square metric was used with the full image sampler and 200 iterations of gradient descent.

Architecture of CNN used to segment lungs and liver. Convolution layers do not use padding. Input shape to network is 72 × 72 × 72 pixel cube; output shape is 8 × 8 × 8 pixel cube.

Heart segmentation

To create a target segmentation for a given early and late image pair, manual heart segmentation masks were aligned using the transformation computed above. The result was fractional labeling with heart probabilities between 0 and 1 depending on whether or not the two aligned segmentations agreed. Using this target volume, the CNN was trained taking the two aligned SPECT volumes as input. We used the same training pipeline as described [23], but to avoid excluding uptake from the heart due to under-segmentation, the background loss was set to 0 for all pixels within 1.2 cm (2 pixels) from the heart that did not overlap with lungs or the liver in either of the aligned masks.
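The fractional target labelling described above can be sketched in a few lines; the arrays below are hypothetical stand-ins for the two aligned manual heart masks and this is not the authors' implementation:

```python
import numpy as np

# Hypothetical aligned binary heart masks from the early and late manual segmentations
early_mask = np.zeros((64, 64, 64), dtype=np.float32)
late_mask = np.zeros((64, 64, 64), dtype=np.float32)
early_mask[20:40, 22:38, 25:41] = 1.0
late_mask[22:42, 22:38, 25:41] = 1.0

# Fractional heart probability: 1 where both masks agree, 0.5 where only one labels heart, 0 elsewhere
target = 0.5 * (early_mask + late_mask)
print(np.unique(target))  # [0. 0.5 1.]
```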
CNN-based calculation of average heart counts and WR

We calculated SPECT early and late average heart counts per pixel (Early HCNN and Late HCNN) and the SPECT washout rate (WRCNN) using CNN-based heart segmentation. The SPECT HCNN and WRCNN were determined by taking the average counts in the heart VOI from early and late images without background or reference volumes. The SPECT HCNN and WRCNN were calculated as:

$$\text{SPECT H}_{\mathrm{CNN}}\ (\text{without BGC}) = \text{H}_{\mathrm{CNN}}\,/\,\text{DCF}\,/\,(\text{injected dose}/\text{kg BW})$$

$$\text{SPECT WR}_{\mathrm{CNN}}\ (\text{without BGC}) = \frac{\text{Early H}_{\mathrm{CNN}} - \text{Late H}_{\mathrm{CNN}}/\text{DCF}}{\text{Early H}_{\mathrm{CNN}}} \times 100$$

Comparison of CNN-based and conventional quantitation

We investigated correlations between SPECT HCNN and planar HBC, planar H, and planar HMR for each early and late image. We also investigated correlations between SPECT WRCNN and planar WRBC, planar WRNC, and planar WRHMR. Cutoff values for the planar WR parameters to distinguish normal from abnormal, determined from standard values created using JSNM working group databases (n = 62), were: planar WRBC 34.0%, planar WRNC 30.1%, and planar WRHMR 14.2% [24]. The cutoff for SPECT WRCNN was determined from the linear regression line describing the relationship between planar and SPECT WR. We divided images into normal and abnormal groups according to the cutoff values for SPECT WRCNN and the planar WR parameters, and then analyzed agreement between them.

Data are expressed as means and standard deviation (SD). Differences in average heart counts and WR between SPECT and planar images were analyzed using t tests and two-way analysis of variance. Differences among WR were also analyzed by Bland–Altman plot [25]. Relationships between SPECT and planar methods were assessed by linear regression analysis. Agreement between automated and manual segmentations was estimated using the Sørensen–Dice (Dice) index as numbers of overlapping voxels. All data were statistically analyzed using JMP version 14 (SAS Institute Inc., Cary, NC, USA). Values with p ≤ 0.05 were considered statistically significant.

Segmentation on images using CNN

Figure 2 shows examples of CNN-based segmentation. The CNN method correctly identified cardiac regions in patients with normal and reduced uptake. Additionally, the heart, liver, and lungs were appropriately segmented in a natural anatomical form, as in the original organs. The CNN method did not generate sub-diaphragmatic artifacts, and liver and heart segmentation did not overlap in any patients. However, in one patient the CNN did not appropriately segment these organs due to high accumulation in an expanded renal pelvis, and these data were excluded from further statistical analysis. The automatic segmentation had a Sørensen–Dice (Dice) index for early and late SPECT images of 0.63 ± 0.15, recall of 0.82 ± 0.15, and precision of 0.54 ± 0.19.

CNN-based segmentation images with 123I-MIBG SPECT data. Patients with normal (A) and reduced (B) uptake. Heart segmentation is correctly identified without anatomical CT images. Liver and lungs are naturally segmented as original organs. Contrast-enhanced X-ray CT images, which were performed for different purposes, are shown as an anatomical reference. H, heart; LL, left lung; Lv, liver; RL, right lung.
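For reference, the overlap metrics reported above (Dice, recall, precision) and the CNN-based washout rate amount to a few lines of array arithmetic. The sketch below assumes binary NumPy masks and scalar average counts and is purely illustrative of the definitions, not the code used in the study:

```python
import numpy as np

def overlap_metrics(pred, ref):
    """Dice, recall and precision between automatic (pred) and manual (ref) binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    dice = 2.0 * tp / (pred.sum() + ref.sum())
    recall = tp / ref.sum()          # share of the manual segmentation that is recovered
    precision = tp / pred.sum()      # share of the automatic segmentation that is correct
    return dice, recall, precision

def spect_wr_cnn(early_h, late_h, dcf):
    """SPECT washout rate (%) without background correction."""
    return (early_h - late_h / dcf) / early_h * 100.0
```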
SPECT H CNN versus planar H BC , planar H, and planar HMR The average heart counts were compared between SPECT images using CNN and planar images using the conventional method for early and late imaging. The correlation between SPECT HCNN and planar HBC with background correction was close (SPECT HCNN = 10.3 + 4.25 × planar HBC; R2 = 0.862, p < 0.0001; Fig. 3A). Correlations were also good between SPECT HCNN and planar H without background correction, and between SPECT HCNN and planar HMR (R2 = 0.827 and 0.729, p < 0.0001, respectively; Fig. 3B and C). Correlations were positive between SPECT HCNN and the planar parameters HBC, H, and HMR even in patients with reduced myocardial 123I-MIBG uptake with HMR < 1.5, (R2 = 0.460–0.498, p < 0.0001 for all; Additional file 1: Figure S1). Relationship of average heart counts calculated from SPECT images using CNN and from conventional early and late planar images. SPECT HCNN vs. planar HBC (A), planar H (B), and planar HMR (C). Red circles and blue squares, early and late images, respectively. Shaded area, confidence of fit. SPECT WR CNN versus planar WR BC , planar WR NC , and planar WR HMR We compared washout rates in SPECT images determined using CNN and in planar images determined using the conventional method. Correlations were significant between SPECT WRCNN and planar WR parameters (R2 = 0.584, 0.568 and 0.507, p < 0.0001; Fig. 4). The systematic error between SPECT WRCNN and planar WRBC was on the borderline of significance as shown in Bland–Altman plots (p = 0.052). The SPECT WRCNN showed systematically higher values compared with planar WRNC and planar WRHMR (p = 0.006 and p < 0.0001, respectively). The cutoff value of SPECT WRCNN determined by linear regression with the upper limit of the normal range (34%) by the planar WR [24], was 30%. We assigned the patients to groups with normal and abnormal WR based on these cutoff values of SPECT WRCNN and planar WR parameters (Table 2). Although six outliers remained, agreement between SPECT WRCNN and planar WRBC was good at 41 (87.2%) of 47 (Table 2A). The agreement rates between SPECT WRCNN and planar WRNC and planar WRHMR were 78.7% and 72.3%, respectively (Table 2B and C). Relationships between washout rates calculated from SPECT images using CNN and planar images using conventional methods: linear regression lines (upper panels) and Bland–Altman plots (lower panels). SPECT WR vs. planar WRBC (A), planar WRNC (B), and planar WRHMR (C). Shaded area, confidence of fit; dotted lines, 95% confidence intervals Table 2 Washout rates determined from SPECT and planar images using CNN-based and standard methods, respectively While 3D quantitation for sympathetic nerve imaging is potentially useful, the feasibility of artificial intelligence for 123I-MIBG studies has not been verified. Therefore, the present study aimed to achieve segmentation and accurate quantitative values using CNN. The CNN segmented organs in 3D and calculated heart counts even when cardiac accumulation was low. The method presented herein could serve as a good foundation for 3D quantitative assessments. Advantages of SPECT over planar image acquisition Although sympathetic nervous activity associated with 123I-MIBG has usually been estimated using planar imaging, the usefulness of HMR for diagnosis and prognosis has been confirmed. However, since HMR is a crude parameter based simply on cardiac and mediastinal regions, the planar method has inherently limited objectivity. 
Since anatomical structures including the heart are three-dimensional, the data obtained from two-dimensional images cannot perfectly separate these structures. In contrast, the 3D approach is fundamentally more appropriate for evaluating actual myocardial activity because it avoids organ overlap, and the myocardial wall excluding the LV cavity can be identified. We compared the new approach using SPECT images with conventional planar quantitation, but we could not strictly define myocardial walls. Since perfusion studies with 99mTc-labeled tracers and X-ray CT studies were not included in the protocol for this study, the whole heart was segmented by the CNN algorithm. Further development will be required to strictly segment the myocardial wall. The SPECT approach is also feasible as more institutions now have cadmium-zinc telluride SPECT cameras. Solid-state SPECT is capable of 3D evaluation with high-resolution and sensitivity, image acquisition is rapid, and radiation exposure is low due to a low injected dose, whereas planar images are not readily available. Therefore, the determination of total tracer uptake in organs using 3D images is an essential step and might lead to improved objectivity and diagnostic accuracy. Comparison with literature Chen et al. assessed global quantitation of cardiac uptake using 123I-MIBG SPECT [26]. They calculated the SPECT HMR using a ratio of mean counts between cardiac and mediastinal volumes of interest (VOI), determined on transaxial images, and then compared them with the planar HMR. However, defining heart VOI using the SPECT quantitation tool includes some manual procedures. The shape of the heart VOI is an oval that does not precisely reflect the contour of the heart under examination. Here, we did not use a predefined heart model but automatically segmented the location of the heart and measured counts using the CNN. Since the CNN was trained on manual organ segmentation, the heart VOI was determined in a naturally shaped heart. Although the shape of heart cannot be traced in patients with extremely low cardiac activity, the CNN-segmented heart was placed on the approximate location of the cardiac region, and the average counts would not have significantly differed from those determined using a manually traced heart region. Heart segmentation and quantitation The most crucial issue with heart segmentation using only SPECT images is the prevalence of low cardiac uptake in early images. We used a two-step approach to overcome this. Registration and final segmentation can be achieved using different methods, but we believe the two-step approach makes the model more robust and ensures consistent heart volumes for the two images. The accumulation of 123I-MIBG is usually high in scintigraphy of the liver and heart, and moderate in the lungs. Since the distribution profile of 123I-MIBG is similar regardless of camera types, the CNN constructed herein will probably be applicable to other vendors, but further study will be required for confirmation. Automated segmentation failed for one of our patients due to high tracer retention in an expanding renal pelvis. Unusually high or low accumulation in other locations, for example, the renal pelvis, large liver defects, extraordinary anatomical structures, can result in segmentation error. Although we already confirmed useful segmentation methods in most situations, adjustments might be required to minimize the frequency of errors. 
Training models on patients with atypical distribution might also improve performance. That is, the results will become more stable when the CNN is trained more on the anatomical locations of organs, as well as variations including regions of high accumulation outside the liver and heart. The correlation of heart counts between CNN-based SPECT and conventional planar images was good (r2 = 0.73–0.86), whereas the correlation between CNN-based WR and planar WR parameters was lower than the CNN-based SPECT and planar heart counts (r2 = 0.51–0.58). Since WR is calculated as the subtraction and ratio of small values in reduced myocardial 123I-MIBG uptake, fluctuations in quantitation might have occurred at the higher range of WR. This variation resulted in the lower correlations between the CNN-based WR and planar WR parameters compared with normal 123I-MIBG uptake. However, the patients were separated well into normal and abnormal groups according to cutoff values for CNN-based WR and planar WR parameters. Future directions for 123 I-MIBG imaging Since the data obtained from this study are relative quantitation, an absolute quantitation method using CNN should be established. For example, the standardized uptake value (SUV) can be calculated if data can be acquired with SPECT-CT and appropriate reconstruction method. To obtain better segmentation, additional anatomical information incorporating X-ray computed tomography with SPECT might be useful. Thereafter, a new three-dimensional index for globally measuring the total amount of 123I-MIBG might be developed. Such a novel quantitative approach will improve the uncertainty of the conventional method regarding two-dimensional quantitation and could be the next step towards absolute quantitation using the CNN. Including data from different cameras and reconstruction methods in CNN training would also improve the accuracy of segmentation. This study had some limitations. Since we included a relatively small patient cohort, further investigations of larger patient cohorts are needed to develop more accurate segmentation. This study included patients with cardiac and neurological diseases, and some of them have yet to be finally diagnosed and/or their prognoses have yet to be confirmed. Clinical 123I-MIBG innervation studies in Japan have included both neurological and cardiac diseases. The present study aimed to create a methodology for 3D heart segmentation and the quantitation of both types of diseases. Therefore, consecutive patients with various backgrounds were selected to ensure that the CNN methods are broadly applicable, although disease-specific analyses, final diagnoses, and prognoses could not be included. To create the CNN architecture, three patients with indistinguishable lungs and liver were not included because the method relies on visualizing the contours of organs. Poor segmentation in one patient was due to excessive accumulation at another location. Such circumstances might be addressed by fusing SPECT-CT imaging with novel CNN-based segmentation. However, since X-ray CT has not been routinely applied for sympathetic nerve imaging at our institution, modifications of the study protocol will be required for further investigation. The CNN can be trained to determine organ contours and to automatically calculate heart counts and washout rates in 123I-MIBG SPECT images. 
Average SPECT heart counts calculated by CNN significantly correlated with those determined by conventional quantitation of planar images in patients with cardiac and neurological diseases. Washout rates also significantly correlated between SPECT with CNN segmentation and planar parameters. Automatic quantitation with CNN might have excellent potential and provide a foundation for the development of an absolute quantitative method. The image datasets generated and/or analyzed during the current study are not publicly distributed, which is not approved by the Ethics Committees at Kanazawa University, but can be available from the corresponding author on reasonable request. HBC : Average heart counts with background correction HCNN : Average heart counts using CNN-based segmentation HMR: Heart-to-mediastinum ratio MIBG: Metaiodobenzylguanidine Single-photon emission computed tomography WR: Washout rates WRBC : Washout rates with background correction WRCNN : Washout rates using CNN-based segmentation WRNC : Washout rates without background correction Nakajima K, Nakata T, Doi T, Kadokami T, Matsuo S, Konno T, et al. Validation of 2-year 123I-meta-iodobenzylguanidine-based cardiac mortality risk model in chronic heart failure. Eur Heart J Cardiovasc Imaging. 2018;19:749–56. Nakata T, Nakajima K, Yamashina S, Yamada T, Momose M, Kasama S, et al. A pooled analysis of multicenter cohort studies of 123I-mIBG imaging of sympathetic innervation for assessment of long-term prognosis in heart failure. JACC Cardiovasc Imaging. 2013;6:772–84. Travin MI, Henzlova MJ, van Eck-Smit BLF, Jain D, Carrio I, Folks RD, et al. Assessment of 123I-mIBG and 99mTc-tetrofosmin single-photon emission computed tomographic images for the prediction of arrhythmic events in patients with ischemic heart failure: Intermediate severity innervation defects are associated with higher arrhythmic risk. J Nucl Cardiol. 2017;24:377–91. Nakajima K, Nakata T, Yamada T, Yamashina S, Momose M, Kasama S, et al. A prediction model for 5-year cardiac mortality in patients with chronic heart failure using 123I-metaiodobenzylguanidine imaging. Eur J Nucl Med Mol Imaging. 2014;41:1673–82. Orimo S, Suzuki M, Inaba A, Mizusawa H. 123I-MIBG myocardial scintigraphy for differentiating Parkinson's disease from other neurodegenerative parkinsonism: a systematic review and meta-analysis. Parkinsonism Relat Disord. 2012;18:494–500. Nakajima K, Nakata T, Doi T, Tada H, Maruyama K. Machine learning-based risk model using 123I-metaiodobenzylguanidine to differentially predict modes of cardiac death in heart failure. J Nucl Cardiol. 2020. https://doi.org/10.1007/s12350-020-02173-6. McKeith IG, Boeve BF, Dickson DW, Halliday G, Taylor JP, Weintraub D, et al. Diagnosis and management of dementia with Lewy bodies: fourth consensus report of the DLB Consortium. Neurology. 2017;89:88–100. Yamada M, Komatsu J, Nakamura K, Sakai K, Samuraki-Yokohama M, Nakajima K, et al. Diagnostic criteria for dementia with lewy bodies: updates and future directions. J Mov Disord. 2020;13:1–10. Okuda K, Nakajima K, Hosoya T, Ishikawa T, Konishi T, Matsubara K, et al. Semi-automated algorithm for calculating heart-to-mediastinum ratio in cardiac Iodine-123 MIBG imaging. J Nucl Cardiol. 2011;18:82–9. Owenius R, Zanette M, Cella P. Variability in heart-to-mediastinum ratio from planar 123I-MIBG images of a thorax phantom for 6 common gamma-camera models. J Nucl Med Technol. 2017;45:297–303. Bateman TM, Ananthasubramaniam K, Berman DS, Gerson M, Gropler R, Henzlova M, et al. 
Reliability of the 123I-mIBG heart/mediastinum ratio: results of a multicenter test-retest reproducibility study. J Nucl Cardiol. 2019;26:1555–65. Klene C, Jungen C, Okuda K, Kobayashi Y, Helberg A, Mester J, et al. Influence of ROI definition on the heart-to-mediastinum ratio in planar 123I-MIBG imaging. J Nucl Cardiol. 2018;25:208–16. Flotats A, Carrio I, Agostini D, Le Guludec D, Marcassa C, Schafers M, et al. Proposal for standardization of 123I-metaiodobenzylguanidine (MIBG) cardiac sympathetic imaging by the EANM Cardiovascular Committee and the European Council of Nuclear Cardiology. Eur J Nucl Med Mol Imaging. 2010;37:1802–12. Soman P, Travin MI, Gerson M, Cullom SJ, Thompson R. I-123 MIBG cardiac imaging. J Nucl Cardiol. 2015;22:677–85. Tilkemeier PL, Bourque J, Doukky R, Sanghani R, Weinberg RL. ASNC imaging guidelines for nuclear cardiology procedures: standardized reporting of nuclear cardiology procedures. J Nucl Cardiol. 2017;24:2064–128. Momose M, Tyndale-Hines L, Bengel FM, Schwaiger M. How heterogeneous is the cardiac autonomic innervation? Basic Res Cardiol. 2001;96:539–46. Nakajima K. Normal values for nuclear cardiology: Japanese databases for myocardial perfusion, fatty acid and sympathetic imaging and left ventricular function. Ann Nucl Med. 2010;24:125–35. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. Al'Aref SJ, Anchouche K, Singh G, Slomka PJ, Kolli KK, Kumar A, et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur Heart J. 2019;40:1975–86. Polymeri E, Sadik M, Kaboteh R, Borrelli P, Enqvist O, Ulen J, et al. Deep learning-based quantification of PET/CT prostate gland uptake: association with overall survival. Clin Physiol Funct Imaging. 2020;40:106–13. Kingma DP, Ba J. Adam: a method for stochastic optimization. The 3rd international conference for learning representations. 2015. https://arxiv.org/abs/1412.6980. Klein S, Staring M, Murphy K, Viergever MA, Pluim JP. elastix: a toolbox for intensity-based medical image registration. IEEE Trans Med Imaging. 2010;29:196–205. Tragardh E, Borrelli P, Kaboteh R, Gillberg T, Ulen J, Enqvist O, et al. RECOMIA-a cloud-based platform for artificial intelligence research in nuclear medicine and radiology. EJNMMI Phys. 2020;7:51. Nakajima K, Okuda K, Matsuo S, Wakabayashi H, Kinuya S. Is 123I-metaiodobenzylguanidine heart-to-mediastinum ratio dependent on age? From Japanese Society of Nuclear Medicine normal database. Ann Nucl Med. 2018;32:175–81. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307–10. Chen J, Folks RD, Verdes L, Manatunga DN, Jacobson AF, Garcia EV. Quantitative I-123 mIBG SPECT in differentiating abnormal and normal mIBG myocardial uptake. J Nucl Cardiol. 2012;19:92–9. We appreciate Hiroto Yoneyama and Takahiro Konishi, Department of Radiological Technology, and Takayuki Shibutani, Department of Quantum Medical Technology, for technical assistance. The authors thank Norma Foster for editorial assistance. This study was partly funded by JSPS Grants-in-Aid for Scientific Research (C) in Japan (PI: K. Nakajima, No. 20K07990), the Fund for Basic Research 2020–2021 from Kanazawa University Hospital, Kanazawa, Japan, and Takeda Japan Medical Office Funded Research Grant 2020. 
Department of Nuclear Medicine, Kanazawa University Graduate School of Medicine, 13-1 Takara-machi, Kanazawa, 920-8640, Japan: Shintaro Saito
Department of Functional Imaging and Artificial Intelligence, Kanazawa University Graduate School of Medicine, 13-1 Takara-machi, Kanazawa, 920-8640, Japan: Kenichi Nakajima
Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden: Lars Edenbrandt
Chalmers University of Technology, Gothenburg, Sweden: Olof Enqvist
Eigenvision, Malmö, Sweden: Olof Enqvist & Johannes Ulén
Department of Nuclear Medicine, Kanazawa University, Kanazawa, Japan: Seigo Kinuya

SS and KN designed and summarized the study. LE, OE, and JU created the CNN architecture for automatic heart segmentation. SS statistically analyzed the data, and KN confirmed the statistical outcomes. SS drafted, and KN, LE, OE, and JU revised the manuscript. SK supervised the study. All authors read and approved the final manuscript.

Correspondence to Shintaro Saito or Kenichi Nakajima.

The Ethics Committees at Kanazawa University approved the present study. The need for written informed consent from each patient was waived because of the retrospective design. Informed consent, including consent for publication, was obtained from all patients in the form of opt-out. KN is in a research collaboration with FUJIFILM Toyama Chemical Co. Ltd., Tokyo, Japan, which supplied the 123I-MIBG in Japan. No other authors have any competing interests to declare.

Additional file 1: Figure S1. Relationship of heart counts in patients with reduced uptake between CNN and conventional methods.

Saito, S., Nakajima, K., Edenbrandt, L. et al. Convolutional neural network-based automatic heart segmentation and quantitation in 123I-metaiodobenzylguanidine SPECT imaging. EJNMMI Res 11, 105 (2021). https://doi.org/10.1186/s13550-021-00847-x

Keywords: Myocardial sympathetic imaging; Innervation; Washout rate
When people say "an algebra" do they always mean "an algebra over a field"? I don't have much experience with abstract algebra. I'm only versed in linear algebra and vector spaces, and have had a tiny introduction to algebras over fields. However, this question is a purely terminological one: When people say "an algebra" do they always mean "an algebra over a field"? If not, what other things can it refer to? abstract-algebra terminology $\begingroup$ No, "algebra" is one of the most overloaded terms in mathematics. Hell, even for "algebras over a field", people will ask you if you mean unital and associative algebras or not. $\endgroup$ – Derek Elkins Jan 15 '18 at 23:23 $\begingroup$ Sometimes people even mean "an algebra for a functor/monad". $\endgroup$ – lisyarus Jan 16 '18 at 7:58 $\begingroup$ Um boolean algebras in general have nothing to do with fields... $\endgroup$ – user21820 Jan 16 '18 at 10:07 $\begingroup$ @user21820 Well... they sort of do. Any boolean algebra (with its boolean ring operations) is a subalgebra of $\prod_{i\in I} F_2$ for some index set $I$. I'm not sure it has its own name, but it certainly seems fair to call it the ring version of Stone's representation theorem $\endgroup$ – rschwieb Jan 16 '18 at 15:14 $\begingroup$ Any question of the form "When people say X, do they always mean Y?" is likely to have the answer "no"... (Humans aren't very consistent.) $\endgroup$ – Hans Lundmark Jan 16 '18 at 18:03 Quite often that is what is meant, yes: algebras over fields. Often, but not always, associative. However, in commutative algebra it is also common to talk about (associative, with identity) algebras over commutative rings. In this case, a ring $A$ (commutative or not) is called an $R$ algebra over a commutative ring $R$ if there is a unital ring homomorphism from $R$ into the center of $A$. In my experience, the latter one is the largest scope that is in common use, and is not unusual. "Over a field" probably is used more frequently, though. In the field of universal algebra, "algebra" can refer to a set with operations of various -arity, but this use is fairly isolated to the field. There are some folks in the wings who think I really ought to say something about nonassociative algebras and algebras without identity. The description using homomorphisms does not suit for defining such algebras, but the usual "describe-the-action-with-axioms" definition works. Again, without context, it is highly unlikely that someone would call these simply "algebras," but they would probably instead add more adjectives. For example Lie algebras and Jordan algebras are important nonassociatve algebras, but they would probably never be referred to simply as an "algebra" where they are found. Boolean algebras are another interesting case. Again, you'll probably never find these called simply "an algebra." What makes the case interesting is that they have more than one identity as an algebra. First and foremost, it probably fits the category of "type described by universal algebra" mentioned above, using meet and join, a lattice-theoretic description. However, it also has a natural boolean ring structure, and this ring is actually a subalgebra of $\prod_{i\in I}F_2$ for some index set $I$. $\begingroup$ I mean that, if heard out of context, the most general thing it would possibly mean is "commutative ring." But the probability is very large that the speaker is talking about fields only. $\endgroup$ – rschwieb Jan 15 '18 at 17:55 $\begingroup$ Interesting. 
Coming from (theoretical) computer science, I have only seen the last one ("algebra" in universal algebra). $\endgroup$ – chi Jan 15 '18 at 19:07 $\begingroup$ @rschwieb Out of context, "algebra" could mean a huge variety of things that have little to do with $R$-algebras. Besides the universal algebra case which is already more general than $R$-algebras, there's also algebras of an endofunctor and algebras of a monad in category theory (further generalizing from the universal algebra scenario and also generalizing $R$-algebras). There's also the field "algebra" as a whole, of course. $\endgroup$ – Derek Elkins Jan 15 '18 at 23:21 $\begingroup$ @DerekElkins The title sets the context of when "an algebra" is used, which I don't think many people confuse with "algebra the entire field." As for the other things you mention, I think they are rather specialized compared to the usages I mentioned. Not that I don't think they are not important, it just doesn't seem terribly helpful for this user. $\endgroup$ – rschwieb Jan 15 '18 at 23:31 $\begingroup$ You're right, I missed the "an algebra" aspect. Clearly, no one would refer to the field of algebra as "an algebra" (or would they...?) That said, it is very easy to stumble into a field where "an algebra" means something other than an $R$-algebra. Algebras of endofunctors, combinatory algebras, and universal algebra are usually far more relevant for computer scientists than $R$-algebras, for example. I have definitely seen people struggle to find how some "algebra" was an "algebra over a field" when there was no connection other than the use of the word "algebra". $\endgroup$ – Derek Elkins Jan 16 '18 at 0:04 Fields are very special commutative rings with unit (that I'll just call rings). A general definition is: if $A$ is a ring, an $A-$algebra is a ring $B$ together with a ring homomorphism $f:A \to B$ (this is, for instance, the definition of the classic "Commutative Algebra" by Atiyah and Macdonald). Note that then we can define an "action" of $A$ on $B$ via $a\cdot b:=f(a)b$, so there are actually more explicit definitions, but this is the most succinct I know. Note that if $A$ is a field, $f$ is injective, so an $A-$algebra (for $A$ a field) is just a ring that contains $A$ as a subring. Note that there are many common and important cases of (in general) non-commutative algebras as well! Lie algebras, Hopf algebras and so on. So they require a different definition. But they are usually considered in a different setting. 57Jimmy57Jimmy $\begingroup$ "Note that if A is a field, f is injective, so an A−algebra (for A a field) is just a ring that contains A as a subring." Why is the fact that f is injective implies that B is a ring that contains A ? $\endgroup$ – dafnahaktana Sep 27 '18 at 21:34 $\begingroup$ @dafnahaktana $B$ is a ring by definition. If $A$ is a field then $\ker(f)$ is an ideal of $A$, hence $A$ or $0$, but since $f(1)=1$ ($f$ is a ring homomorphism), if $B\neq 0$ we have $\ker(f)=0$. Then $im(f)\cong A$ is a subring of $B$. $B$ does not literally contain $A$, but an isomorphic copy. $\endgroup$ – 57Jimmy Sep 29 '18 at 6:41 This is a list of the most common descriptions of an "algebra," but is not exhaustive. 1 . An algebra of sets. A collection of subsets of a given set closed under unions and complements. 2 . An associative algebra over a commutative ring. Let $R$ be a commutative, not necessarily unital ring. 
An algebra over $R$ is a ring $A$, not necessarily commutative or unital, together with an $R$-module structure on $A$, such that scalar multiplication by $R$ is compatible with the ring multiplication in $A$: $$r \cdot (a_1a_2) = (r \cdot a_1)a_2 = a_1(r \cdot a_2)$$ If $A$ has an identity, then to give the structure of an $R$-algebra on $A$ is the same as giving a ring homomorphism $R \rightarrow A$ whose image lies in the center of $A$: given the $R$-module structure on $A$, one defines the homomorphism $R \rightarrow A$ by sending $r \in R$ to $r \cdot 1_A$. If $R$ has an identity, then $A$ is usually assumed to be unitary as an $R$-module, which is to say $1_R \cdot a = a$ for all $a \in A$. If both $R$ and $A$ have an identity, then saying $A$ is unitary as an $R$-algebra is the same as saying that the homomorphism $R \rightarrow A$ sends $1_R$ to $1_A$. If $A$ and $R$ are both commutative rings with identity, then a unitary $R$-algebra structure on $A$ is the same thing as a ring homomorphism $R \rightarrow A$ which sends $1_R$ to $1_A$. This is how algebras are typically understood in commutative algebra.

3. Lie algebra over a field. Let $k$ be a field, and let $\mathfrak g$ be a set with two operations $+$ and $\cdot$ satisfying all the axioms of a (not necessarily commutative or unital) ring, except $\cdot$ is not assumed to be associative. Write $[X,Y]$ for $X \cdot Y$. Assume that $[X,X] = 0$ for all $X \in \mathfrak g$ and that the Jacobi identity holds for all $X, Y, Z \in \mathfrak g$: $$[X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0$$ The structure $\mathfrak g$, together with a unitary $k$-module structure on $\mathfrak g$, such that scalar multiplication from $k$ is compatible with the multiplication $[-,-]$ in $\mathfrak g$: $$c \cdot [X,Y] = [c \cdot X,Y] = [X, c \cdot Y]$$ is called a Lie algebra over $k$.

Examples of each:

1. The Borel sets of a topological space $X$. These are subsets of $X$ obtained by taking countable unions and complements of open sets in all possible combinations.

2. Let $R = \mathbb C$, and let $G$ be a Hausdorff topological group with the property that every neighborhood of the identity contains a compact open subgroup of $G$. Then $G$ is locally compact and has a Haar measure $\mu$. The $\mathbb C$-vector space $C_c^{\infty}(G)$ of locally constant and compactly supported functions $G \rightarrow \mathbb C$ can be made into a unital $\mathbb C$-algebra by defining multiplication as convolution: $$f \ast g(x) = \int_G f(y)g(y^{-1}x)\, d\mu(y)$$ This is called the Hecke algebra of $G$. It is usually not commutative. If $G$ is compact, it is unital.

3. Let $\mathfrak g$ be the $k$-vector space of linear transformations of a vector space $V$ to itself. Then $\mathfrak g$ is a Lie algebra over $k$ if we define $[\phi,\psi] = \phi \circ \psi - \psi \circ \phi$.

– D_S

An algebra of sets is a true algebra over $\Bbb Z/2\Bbb Z$. – user223391 Jan 20 '18 at 15:40

It really depends on the context: most of the time it's used for algebras over rings and fields. Sometimes it is used in the most general context of universal algebra as a generic word to talk about a model for an algebraic theory: for example, groups are algebras for the (syntactic) theory of groups. – TheMadcapLaughs

Even more general than universal algebra, sometimes an algebra is just an "algebra over a monad", and this encompasses much more (compact Hausdorff spaces are algebras of this kind, for instance). – Max Jan 15 '18 at 18:34
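As a concrete, editorially added instance of definition 2 above (an associative algebra over a commutative ring, here $R = \mathbb Z$ and $A = M_2(\mathbb Z)$), the compatibility conditions and the centrality of the scalar embedding can be spot-checked numerically; this is only an illustration, not part of any answer above:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, (2, 2))
B = rng.integers(-5, 5, (2, 2))
r = 3  # a scalar from the base ring R (here the integers)

# The map r -> r*I sends R into the centre of M_2(Z) ...
scalar = r * np.eye(2, dtype=int)
assert np.array_equal(scalar @ A, A @ scalar)       # r*I commutes with every matrix

# ... and scalar multiplication is compatible with the ring multiplication
assert np.array_equal(r * (A @ B), (r * A) @ B)
assert np.array_equal(r * (A @ B), A @ (r * B))
print("M_2(Z) behaves as a unital Z-algebra on these samples")
```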
involutive_residuated_lattices [2012/07/18 23:23] jipsen
involutive_residuated_lattices [2012/07/18 23:24] (current)

====Definition====
- An \emph{involutive residuated lattice} is a structure $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \tilde, -\rangle$ of type $\langle 2, 2, 2, 0, 1, 1\rangle$ such that
+ An \emph{involutive residuated lattice} is a structure $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \sim, -\rangle$ of type $\langle 2, 2, 2, 0, 1, 1\rangle$ such that
  $\langle A, \vee, \wedge, \neg\rangle$ is an [[involutive lattice]]

==Morphisms==
  Let $\mathbf{A}$ and $\mathbf{B}$ be involutive residuated lattices. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism:
- $h(x \vee y)=h(x) \vee h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$, $h({\tilde}x)={\tilde}h(x)$ and $h(1)=1$.
+ $h(x \vee y)=h(x) \vee h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$, $h({\sim}x)={\sim}h(x)$ and $h(1)=1$.

involutive_residuated_lattices.txt · Last modified: 2012/07/18 23:24 by jipsen
Technical advance A statistical shape modelling framework to extract 3D shape biomarkers from medical imaging data: assessing arch morphology of repaired coarctation of the aorta Jan L. Bruse1, Kristin McLeod2,3, Giovanni Biglino1,4, Hopewell N. Ntsinjana1, Claudio Capelli1, Tain-Yen Hsia1, Maxime Sermesant3, Xavier Pennec3, Andrew M. Taylor1, Silvia Schievano1 & for the Modeling of Congenital Hearts Alliance (MOCHA) Collaborative Group BMC Medical Imaging volume 16, Article number: 40 (2016) Cite this article Medical image analysis in clinical practice is commonly carried out on 2D image data, without fully exploiting the detailed 3D anatomical information that is provided by modern non-invasive medical imaging techniques. In this paper, a statistical shape analysis method is presented, which enables the extraction of 3D anatomical shape features from cardiovascular magnetic resonance (CMR) image data, with no need for manual landmarking. The method was applied to repaired aortic coarctation arches that present complex shapes, with the aim of capturing shape features as biomarkers of potential functional relevance. The method is presented from the user-perspective and is evaluated by comparing results with traditional morphometric measurements. Steps required to set up the statistical shape modelling analyses, from pre-processing of the CMR images to parameter setting and strategies to account for size differences and outliers, are described in detail. The anatomical mean shape of 20 aortic arches post-aortic coarctation repair (CoA) was computed based on surface models reconstructed from CMR data. By analysing transformations that deform the mean shape towards each of the individual patient's anatomy, shape patterns related to differences in body surface area (BSA) and ejection fraction (EF) were extracted. The resulting shape vectors, describing shape features in 3D, were compared with traditionally measured 2D and 3D morphometric parameters. The computed 3D mean shape was close to population mean values of geometric shape descriptors and visually integrated characteristic shape features associated with our population of CoA shapes. After removing size effects due to differences in body surface area (BSA) between patients, distinct 3D shape features of the aortic arch correlated significantly with EF (r = 0.521, p = .022) and were well in agreement with trends as shown by traditional shape descriptors. The suggested method has the potential to discover previously unknown 3D shape biomarkers from medical imaging data. Thus, it could contribute to improving diagnosis and risk stratification in complex cardiac disease. Diagnosis and risk stratification of cardiac disease using medical imaging techniques are primarily based on the analysis of anatomy and structure of the heart and vessels. This is particularly true for complex conditions such as congenital heart disease (CHD), where the morphology defines the cardiac defect in the first instance and is subsequently altered by surgical and catheter intervention to improve functionality. In clinical practice, however, anatomical analysis of shape and structure is often carried out via simple morphometry, using parameters measured in 2D (e.g. vessel diameter, area, angulation). This does not fully exploit the abundance of information that current imaging techniques such as cardiovascular magnetic resonance (CMR) or computed tomography (CT) offer [1, 2]. 
Furthermore, using simple shape descriptors, the relationship between complex global and regional 3D shape features, such as the combination of stenoses, dilations or tortuosity and cardiac function has not been fully explored. Conversely, statistical shape models (SSM) allow visualisation and analysis of global and regional shape patterns simultaneously and in 3D [3] as they are constituted by a computational atlas or template, which integrates all anatomical shape information intuitively as a visual and numerical mean shape and its variations in 3D. The template is essentially an anatomical model of the average geometry of a shape population. Based on this template, descriptive or predictive statistical shape models can be built [1, 4], to explore how changes in shape are associated with functional changes. SSMs have been applied in cardiac research for around two decades [5] in order to describe 3D morphological characteristics and, more recently, for diagnostic or prognostic purposes [4, 6, 7]. However, these studies are based on parametric methods, in which: i) Shapes are parameterised by landmarks, and ii) Point-to-point correspondence between input shapes is a requirement. These pre-requisites prove particularly challenging to fulfil when dealing with complex, amorphous structures and, therefore, limit the use of such methods in CHD (Fig. 1). In addition, manual landmarking is laborious, limited to expert users [8] and proves to be challenging in the absence of distinct anatomical landmarks. Point-to-point correspondence problem in complex cardiac morphologies. Widely used parametric methods to build statistical shape models are based on the so called Point Distribution Model (PDM) [5], in which shapes are parameterised by landmarks. Bookstein et al. [40] define landmarks as points on the structure's surface for which "objectively meaningful and reproducible […] counterparts […]" exist in all other structures present in the dataset. In complex cardiac structures however, those point correspondences are difficult to establish, as illustrated here for two aortic arch models from the CoA cohort More recently, a novel non-parametric SSM framework that does not rely on any prior landmarking [9, 10], has been introduced to the cardiac field by Mansi et al. [11–13]. The method is based on a complex mathematical framework, which analyses how a representative template shape deforms into each of the shapes present in the population. In a simplified way, for example, an "ideal" template aorta can be deformed into any possible patient aorta shape by applying the correct transformations. Instead of the shapes themselves, these transformations are analysed [14] and subsequent shape analysis is carried out robustly within this transformation framework. A key advantage, in addition to neither requiring landmarking nor point-to-point correspondence between input shapes, is that the method is able to handle large variability between shapes, making it an even more attractive tool for investigating 3D cardiovascular anatomical structures in CHD. The aim of this paper is to present this shape analysis method to the larger clinical and engineering community by describing a step-by-step approach to set up such a SSM and by demonstrating its validity using conventional morphometric parameters. 
As an example, the study focuses on aortic arch shapes of patients post coarctation repair [15, 16], as they typically present highly variable, complex shapes, which have been extensively described in terms of traditional morphometric analyses [17–19]. To demonstrate the capabilities of the proposed method, we have derived global and regional shape features potentially associated with ejection fraction (EF) as novel 3D shape biomarkers. We hypothesised that low EF, which characterises poor ventricular function, could be associated with distinct shape patterns of the aortic arch that affect cardiac afterload.
Statistical shape modelling framework (SSM)
The shape analysis method used here makes use of a framework proposed by Durrleman et al. [12, 14]. To compute a template (i.e. an "anatomical mean shape") and describe shape variability around this template, the framework is based on a forward approach [14], which essentially describes each subject as a deformation of the template plus some residual information (Fig. 2) [12]. The template is deformed into each subject shape by applying an appropriate transformation. Thus, the transformation function is the crucial component for shape analysis as it "maps" (i.e. describes how to transform one geometry into the shape of another geometry) the template towards each individual subject shape (Appendix 1). Forward approach: Transformations of the template characterise individual subject shapes. The statistical shape analysis method is based on analysing subject-specific transformations that deform a computed template shape towards each patient shape rather than considering the actual 3D shapes. The transformations are unique for each subject and comprise all relevant shape features that characterise the subject shape. To represent shapes non-parametrically without involving landmarking, the framework relies on mathematical currents, introduced to anatomical analysis by Glaunès and Vaillant [9]. Currents act as surrogate representations of shapes by characterising a shape as a distribution of shape features [14]. Shapes can then be compared by computing how distributions of features differ, rather than by computing differences between individual points. This removes the parameterisation required by other methods. Currents can be seen as an indirect measure of shape as they model geometric objects via their action on a probing test vector field [20, 21]. An analogy to currents representing shapes could be probing an object with a 3D laser scanner (the "test vector field") with a certain direction from all possible angles or positions around the object (Fig. 3) [20]. Mathematically, currents are linear mappings, which allows the computation of the mean, standard deviation and other descriptive statistics on 3D shapes. Transferring surface shapes into the space of currents: Analogy to 3D laser scanning of objects. Landmarking of the input shapes is avoided by using mathematical currents as non-parametric shape descriptors that model a specific patient shape as a distribution of shape features. Obtaining a currents representation as a surrogate for the actual 3D shape can be compared to probing a surface with a laser beam from different angles and positions. Input shapes are typically given as computational surface meshes (Fig. 4a), which provide point coordinates in space and describe how those points are connected. Here, surface meshes define the wall of the aorta, for example.
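To make the currents idea concrete: two triangulated surfaces can be compared through a Gaussian-kernel inner product over their face centres and area-weighted face normals, which is the standard construction behind this framework [9, 14]. The numpy sketch below is illustrative only (it is not the exoshape implementation); the kernel width lam_w plays the role of the currents resolution introduced in the next paragraph.

```python
import numpy as np

def currents_inner_product(centres_a, normals_a, centres_b, normals_b, lam_w):
    """Approximate <A, B>_W for two triangulated surfaces given as
    (n, 3) triangle barycentres and (n, 3) area-weighted triangle normals.
    Small lam_w keeps fine shape features; large lam_w blurs them away."""
    d2 = ((centres_a[:, None, :] - centres_b[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / lam_w ** 2)                        # Gaussian kernel matrix
    return float(np.einsum("ij,ik,jk->", k, normals_a, normals_b))

def currents_distance(ca, na, cb, nb, lam_w):
    """Mismatch ||A - B||_W between two shapes, as seen at resolution lam_w."""
    return np.sqrt(currents_inner_product(ca, na, ca, na, lam_w)
                   - 2.0 * currents_inner_product(ca, na, cb, nb, lam_w)
                   + currents_inner_product(cb, nb, cb, nb, lam_w))
```

With a large kernel width, two arches that differ only in small local bulges become nearly indistinguishable under this metric, which is exactly the behaviour exploited when the resolution parameter is set below.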
As a first step, the meshes need to be transferred into their currents representation. The resolution of the currents, λW, controls to which degree small shape features of the input shapes are included – large λW result in neglecting small features (Fig. 4b). This becomes particularly useful when it is not desirable to retain small features extracted from the segmentation, which may be caused by image artefacts or suboptimal registration [21]. Once the resolution λW is set, the template is computed as the average of all currents (Appendix 1). Influence of the resolution parameter λW. One parameter to be set by the user is the currents resolution λW, which controls to which degree shape features of the input 3D shape given as a computational mesh (a) are included in the shape's currents representation. High λW values neglect small shape features (b) Unique shape features of each subject are captured by computing the transformations that deform the template towards each subject shape. In order to calculate these transformations, a second parameter λV, which controls the stiffness of those deformations, is set: large λV result in "stiffer" (i.e. less elastic) deformations that capture more global shape features (Fig. 5) [12]. This parameter can be considered as changing the elasticity of the material of the surface models; the more elastic the material, the more the surface models can be manipulated. For example, stretching or deforming a lycra cloth (small λV) will have a different result compared to stretching a leather cloth (large λV). Influence of the stiffness parameter λV. The second parameter to be set by the user, λV, controls the stiffness or elasticity of the deformation of the template towards each subject shape. Low deformation stiffness values result in too local, unrealistic deformations After computing the transformations of the template toward each shape present in the population, each subject shape is uniquely characterised by a multitude of deformation vectors rather than its actual 3D surface. To describe the deformation data with the minimum number of required parameters, a statistical method called partial least square regression (PLS) [12, 22], is employed (Fig. 6a). PLS allows the extraction of shape modes [5], which represent the dominant, most common shape features observed in the population that are most correlated with a specific parameter of interest (such as a clinical parameter measured on the individual patient). Here, shape modes most related to body surface area (BSA) and the functional parameter ejection fraction (EF) were extracted. Analysing the output using dimensionality reduction techniques and correlation analyses. PLS regression is used to extract shape patterns most related to a selected response variable as shape modes. Subject-specific deformation vectors, derived from the template computation, constitute the input. Resulting shape modes can be visualised as 3D shape deformations (a). By projecting shape modes onto each subject shape, subject-specific shape vectors XS can be derived that constitute a numerical representation of the 3D shape features captured by the shape mode (b). XS is correlated with the selected response parameter as measured on the subjects in order to determine how strongly shape and response are associated (c). 
Analysis techniques are marked with dashed lines Extracted shape patterns described by PLS shape modes are visualised by deforming the computed template shape with the transformations along the direction of the mode (Fig. 6a). To determine whether the obtained shape patterns are correlated with a response parameter, shape modes need to be broken down to numbers that allow statistical analysis. This is achieved by mathematically projecting each subject-specific patient transformation onto the found shape mode [12], which yields the so called shape vector XS (Fig. 6b). Shape vectors are essentially numerical representations of a specific shape mode. Each shape vector entry describes in one subject-specific number how much the template has to be deformed along the derived shape mode in order to match the specific subject shape as well as possible. The shape vector thus represents 3D global and regional shape features associated with a certain subject and response parameter. Further standard correlation analysis between the shape vector and the response parameter reveals how well subject shape features are represented by the derived shape mode (Fig. 6c). A perfect correlation of shape vector and response would imply that the derived shape mode showed exactly those shape patterns associated with low or high response values (such as high or low EF) when moving along the shape mode from low to high shape vector values. For mathematical details about the shape modelling process as outlined above, we refer to Appendix 1 and the referenced literature. The entire mathematical framework has been published under the name "exoshape" and is publicly available as a Matlab (The MathWorks, Natick, MA) code [12, 22], (https://team.inria.fr/asclepios/software/exoshape/). A similar, open-source code has been recently published in C as "Deformetrica" by Durrleman et al. [23] (http://www.deformetrica.org/). The described SSM framework was applied to the CoA patients following the steps as explained in detail in the next sections (Fig. 7): i) Segmentation of patient CMR images to reconstruct the 3D surface models of the structures of interest; the models and CMR data were also used to compute traditional 2D or 3D morphometric parameters (Fig. 7a); ii) meshing and smoothing of the segmented models to create the computational input for the template calculation (Fig. 7b); iii) registration of the input shapes; (Fig. 7c) and iv) setting of resolution λW and stiffness λV, which are the crucial parameters the user needs to provide along with the input shapes prior to calculating the template. Overview of pre-processing steps prior to shape analysis. Cardiac structures of interest are segmented manually or automatically from 3D imaging data (a). Segmented models then are cut, appropriately meshed and smoothed in order to remove irrelevant shape variability (b). Before running the shape analysis, the resulting surface models are aligned i.e. rigidly registered in order to reduce bias due to differences in scaling, transformation and rotation (c). 
User interaction is marked with dashed lines After the template is computed, the following post-processing analyses are carried out: i) removing confounders such as size differences between subjects prior to extracting shape features related to the functional parameter EF as they can hide potentially important shape features; ii) accounting for outliers and influential subjects that are common in clinical data of pathological shapes; iii) validating the template as representing the mean shape of the population and as being not substantially affected if any of the shapes that were used to compute it is removed or if a new patient is added iv) analysing associations between extracted shape features (represented by shape vectors as well as by traditional 2D and 3D measured geometric parameters) and demographic (BSA) and functionally relevant parameters (EF) via standard bivariate correlation analysis (Fig. 6c). Patient population, image data and 2D morphometry CMR imaging data from 20 CoA patients post-repair (16.5 ± 3.1 years, range 11.1 to 20.1 years; CoA repair performed at 4 days to 5 years of age) were included in the study. Conventional morphological descriptors for this population were previously reported by our group [24]. Three-dimensional volumes of the left ventricle (LV) and the aorta during mid-diastolic rest period were obtained from CMR using a 1.5 T Avanto MR scanner (Siemens Medical Solutions, Erlangen, Germany) with a 3D balanced, steady-state free precession (bSSFP), whole-heart, free breathing isotropic data acquisition method (iso-volumetric voxel size 1.5 × 1.5 × 1.5 mm) [24]. Ejection fraction (EF) was measured from the CMR data [24]. Images were segmented using thresholding and region-growing techniques combined with manual editing in commercial software (Mimics, Leuven, Belgium) [24]. A previous study comparing physical objects and their respective 3D segmented and reconstructed computer models found an average operator induced error in the order of 0.75 mm, which equals about half the voxel size in our study [25]. In order to reduce irrelevant shape variability, aortas were cut such that only the root, the arch and the descending aorta up to the diaphragm were kept. As the focus of this analysis lies on the arch shape, coronary arteries and head and neck vessels were cut as close as possible to the arch. This is a common pre-processing step in shape analysis of aortic arches [26–28]. The final segmented surface models of the aortas were stored as computational surface meshes. Conventional 2D morphometry was carried out manually on CMR imaging data to measure the ratio of aortic arch height (A) and width (T) as well as the ascending and descending aortic arch diameters (Dasc and Ddesc, respectively) at the level of the pulmonary artery as proposed by Ou et al. [17] (Fig. 8a). Diameters at the transverse arch level (Dtrans) and at the isthmus level (Disth) were measured manually as described previously [24]. Geometric parameters measured in 2D (a) and 3D (b). Geometric parameters such as diameters D and aortic arch height A and width T were measured manually on 2D CMR image slices according to [17] and [24] (a). 3D parameters were computed semi-automatically using VMTK for all input shapes (b) 3D shape parameters were computed semi-automatically from the segmented arch surface models using The Vascular Modeling Toolkit [29] (VMTK, Orobix, Bergamo, Italy; www.vmtk.org ) in combination with Matlab. 
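As a rough illustration of how the centreline-based measures listed in the next paragraph can be obtained once a centreline and the maximal inscribed sphere radii have been extracted (for example with VMTK's centreline tools), consider the following sketch. The tortuosity definition used here (centreline length over end-to-end distance, minus one) is one common convention and is only assumed to match that of [30, 31]; variable names are illustrative.

```python
import numpy as np

def centreline_metrics(points, radii):
    """Gross 3D parameters from an ordered (n, 3) centreline polyline and the
    maximal inscribed sphere radius at each centreline point (both assumed to
    come from an upstream centreline extraction, e.g. VMTK)."""
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    length = seg_len.sum()                               # centreline length L_CL
    chord = np.linalg.norm(points[-1] - points[0])       # end-to-end distance
    tortuosity = length / chord - 1.0                    # 0 for a straight vessel
    diameters = 2.0 * np.asarray(radii)
    return {"L_CL": length, "tortuosity": tortuosity,
            "D_min": diameters.min(), "D_max": diameters.max(),
            "D_med": float(np.median(diameters))}
```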
Extracted geometric parameters included volume V and surface area Asurf, as well as parameters associated with the vessel centreline such as length, curvature and tortuosity [30, 31], and inner vessel diameters along the centreline (minimum Dmin, maximum Dmax and median diameters Dmed) (Fig. 8b). Table 1 provides an overview of all geometric parameters that were assessed via correlation analyses. Note that all measured geometric parameters were indexed to patient body surface area (BSA), where applicable. Table 1 Morphometric parameters measured on the 3D surface models of the arches Meshing and smoothing Preliminary sensitivity analyses were carried out in order to assess the influence of the meshing, and of the resolution and stiffness parameters (λW, λV) on computational time and on the final template shape (Appendix 2). Results showed that template calculation time can be reduced by up to 85 % if an appropriately low mesh resolution is chosen - without substantially affecting the final template shape. To determine an optimally low, yet sufficient, mesh resolution, we focussed on the smallest subject present in the population of shapes as it defines a lower limit for mesh resolution. Starting from the original surface model of the smallest subject obtained from segmentation (in this case, subject CoA3), re-meshed surface models were created from low (~0.3 cells/mm2) to high (~1.5 cells/mm2) mesh resolution in VMTK. To quantify deviations from the original segmented shape, the surface area Asurf of each re-meshed model was measured and compared to the respective values of the original mesh (Asurf,orig = 8825 mm2). Surface area deviations were calculated. A cut-off value for tolerable surface errors was chosen to be 0.5 % compared to the original subject mesh, which was reached for a surface mesh resolution of 0.75 cells/mm2. All CoA arch surface models were meshed with this resolution, using an additional passband smoothing filter to further reduce unnecessary shape variability (Fig. 9a). Input surface models of the studied patients post-aortic coarctation repair (a) and computed template (b). Computational surface meshes of 20 aortic arches constituted the input for the shape analysis (a). Coronary arteries and head and neck vessels were removed prior to analysis (3D rotatable models of the arches can be found under www.ucl.ac.uk/cardiac-engineering/research/library-of-3d-anatomies/congenital_defects/coarctations ). The final template (mean shape, blue) computed on the entire population (N = 20 subjects) shows characteristic shape features associated with CoA such as a narrowing in the transverse and isthmus arch section as well as a slightly dilated aortic root and an overall slightly gothic and tortuous arch shape (b) Alignment of input meshes To reduce possible bias introduced by misaligned surface models, a two-step approach is proposed. First, each input shape was aligned (i.e. rigidly registered using translation and rotation only) to an initial reference shape using registration functions based on the iterative closest point (ICP) algorithm available in VMTK [32]. The initial reference shape was determined as the closest shape to the centre or "mean" of the population (in this case subject CoA4; Fig. 9a) according to gross geometric parameters (volume V, surface area Asurf, centreline length LCL and median diameter along the centreline Dmed). 
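One simple way to pick such an initial reference is sketched below, under the assumption that "closest to the centre of the population" means smallest distance to the population mean after putting the gross parameters on a common scale; the paper does not specify the exact distance used, so this is an illustration with made-up values.

```python
import numpy as np

# Rows: subjects; columns: V [ml], A_surf [mm^2], L_CL [mm], D_med [mm]
# (illustrative numbers only, not the study data).
gross = np.array([
    [95.0, 8900.0, 210.0, 17.5],
    [88.0, 8300.0, 198.0, 16.2],
    [70.0, 7600.0, 185.0, 15.0],
    [90.0, 8700.0, 205.0, 16.9],
])

z = (gross - gross.mean(axis=0)) / gross.std(axis=0)   # common scale per parameter
reference_idx = int(np.argmin(np.linalg.norm(z, axis=1)))
print(f"initial reference shape: subject {reference_idx}")
```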
Point-to-point correspondence between the reference mesh and respective subject meshes is not necessary as the correspondence will be updated at each iteration by finding the closest point. After computing an initial template shape based on the shapes aligned to the initial reference shape (subject CoA4), the final alignment of all input meshes was obtained in a second step by adopting a Generalised Procrustes Analysis (GPA) [33] approach in the following manner:
1. The input meshes were re-aligned, with the reference shape this time being the computed template.
2. A new template based on the newly aligned meshes was computed.
3. The model compactness was computed as proposed by Styner et al. [8].
4. If the compactness was decreased, the reference shape was set to the new template and the procedure continued with step 1; otherwise the meshes were considered sufficiently aligned.
Here, sufficient alignment was obtained after one iteration of the outlined procedure.
A priori setting of the resolution and stiffness λ parameters
Generally, it is recommended to set the resolution parameter λW in the order of magnitude of the shape features to be captured [12]; however, clear guidance for parameter setting is missing, in particular for the stiffness λV, which cannot be intuitively estimated. Following sensitivity analysis (Appendix 2), λW needs to be small enough to capture all the features of interest, while being large enough to discard noise and to allow efficient template computation. The following approach is proposed to obtain an a priori estimate of a suitable set of λ parameters. Essentially, the shape analysis algorithm deforms a template shape towards each individual subject shape present in the population. The quality of the matching of source and target shape depends on the setting of the λ parameters. The suggested approach is based on the idea that the subject with the most challenging shape features to be captured defines a lower limit in terms of transformation resolution (λW) and stiffness (λV) to obtain an appropriate matching. Here we assume this to be the smallest subject within our shape population. We therefore transformed an initial template towards the smallest subject shape present in the set of shapes, starting from coarse initial values and decreasing both resolution λW and stiffness λV incrementally until a sufficient matching was achieved. Note that incorrectly chosen λ parameters will ultimately result in high matching errors and in unrealistic shape deformations, which can be examined by the user, visually and numerically. To determine starting values for λW and λV for computing the initial template, we suggest a "rule of thumb" method, based on the fact that the λ parameters are inherently associated with probing (λW) or deforming surfaces (λV). As both parameters are given as a length in millimetres, they can be squared to define a plane quadratic surface. With this definition, they are interpreted as a percentage of the surface area to be probed or deformed. Based on the smallest surface area Asurf,min within the population, λW and λV can be initialised using (Eq. 1) for a given percentage pW or pV, respectively:
$$ {\lambda}_W=\sqrt{p_W\cdot {A}_{surf,min}};\qquad {\lambda}_V=\sqrt{p_V\cdot {A}_{surf,min}} $$
For the resolution λW, our approach can be interpreted as probing pW % of the smallest aorta surface area if it was cut open and laid out flat.
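Eq. 1 is simple enough to verify directly; the snippet below reproduces the starting values quoted in the next paragraph (percentages and minimal surface area as reported there).

```python
from math import sqrt

def initial_lambdas(a_surf_min, p_w, p_v):
    """Rule-of-thumb starting values (Eq. 1): probe (lambda_W) or deform
    (lambda_V) a chosen fraction of the smallest surface area in the cohort."""
    return sqrt(p_w * a_surf_min), sqrt(p_v * a_surf_min)

# A_surf,min = 8825 mm^2, p_W = 2.5 %, p_V = 25 % (values reported below).
lam_w, lam_v = initial_lambdas(8825.0, 0.025, 0.25)
print(f"lambda_W ~ {lam_w:.0f} mm, lambda_V ~ {lam_v:.0f} mm")   # ~15 mm, ~47 mm
```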
Note that for large aortas the percentage drops below the chosen percentage values as the same parameters are applied to all shape models. Here, we set pW to 2.5 % and pV to 25 %, which yielded an initial λW of 15 mm and a λV of 47 mm, with the minimal surface area present in the set of shapes being Asurf,min = 8825 mm2. Those values were used to compute an initial template based on all 20 subjects. The initial template was then transformed towards the smallest subject (CoA3) while incrementally decreasing λW and λV in 1 mm steps until the matching error between source (initial template) and target (CoA3) was reduced by ≥80 %. A perfect (100 % error reduction) matching is not desired, as for example local shape differences due to segmentation errors or highly localised bulges are not of interest and thus do not need to be modelled. Note that the range of values for λV was fixed from 47 mm down to 40 mm in order to avoid too local deformations (Fig. 5). Starting from λW = 15 mm, transformations were computed in parallel for the range of λV values (47 to 40 mm). If the matching error was not reduced sufficiently by decreasing λV, then λW was decreased by 1 mm. In this way, we prioritised high λW values in order to ensure low runtimes for the final template calculation (Appendix 2). The matching error was determined by calculating the maximum surface distance between the target shape (subject CoA3) and the registered, deformed source shape (the initial template). Following this procedure, a resolution of λW = 11 mm and a deformation stiffness of λV = 44 mm were found to sufficiently reduce the matching error and were used for all further template computations. The template, shape modes and shape vectors were then computed in Matlab based on the 20 arch surface models on a 32 GB workstation using 10 cores (runtime for simultaneous template computation and transformation estimation approximately 15 h).
Controlling for confounders and influential observations
Size differences between patients were assumed to be reflected in differences in patient body surface area (BSA). To "normalise" the extraction of functionally relevant shape features, we aimed to remove dominantly size-related shape features first. For that, the shape features most related to a change in BSA were computed using PLS based on the original predictors Xorig (the moment vectors deforming the template towards each subject). In previous publications this approach has been used to build a statistical growth model [12, 22]. Here, on the contrary, we aimed to remove shape patterns related to size differences between subjects prior to further analyses. A second PLS was then performed on the predictor residuals Xresid, which were obtained by subtracting the result of the first PLS (the product of the PLS predictor scores XSBSA and predictor loadings XLBSA) from the original predictors Xorig as shown in (Eq. 2):
$$ {X}_{resid}={X}_{orig}-X{S}_{BSA}\times XL{'}_{BSA} $$
In this way, 3D shape features most related to size differences could be removed prior to analysing correlations of PLS shape vectors with geometric and clinical parameters normalised to BSA [34].
Detecting outliers or influential subjects
In preliminary studies, PLS regression proved prone to the influence of outliers. Outliers in terms of shape are common in clinical data of pathological shapes, particularly in the field of CHD, where inter-subject shape variability is typically large.
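Before turning to the outlier screening, the two-step PLS of Eq. 2 can be sketched with scikit-learn. This is only a schematic stand-in for the exoshape Matlab code: the data are random, a single PLS component is used, and PLSRegression centres the predictors internally, so the residuals are formed on centred data. The resulting scores are taken here as playing the role of the shape vector.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(19, 3000))           # moment vectors per subject (illustrative)
bsa = rng.uniform(1.2, 2.0, size=19)      # body surface area [m^2]
ef = rng.uniform(40.0, 70.0, size=19)     # ejection fraction [%]

# Step 1: extract the BSA shape mode and remove it from the predictors (Eq. 2).
pls_bsa = PLSRegression(n_components=1, scale=False).fit(X, bsa)
Xc = X - X.mean(axis=0)                                     # centred predictors
X_resid = Xc - pls_bsa.x_scores_ @ pls_bsa.x_loadings_.T    # X_resid of Eq. 2

# Step 2: PLS on the residuals against EF gives the size-normalised EF mode;
# its scores stand in for the EF shape vector XS.
pls_ef = PLSRegression(n_components=1, scale=False).fit(X_resid, ef)
ef_shape_vector = pls_ef.x_scores_[:, 0]
r, p = pearsonr(ef_shape_vector, ef)      # correlation of shape vector and response
```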
In order to detect influential observations in the PLS regression, the Cook's distance was measured. The Cook's distance measures how much a specific subject influences the final regression result by leaving out that subject and comparing all remaining fitted values to the original, full-data fitted values. It is defined as (Eq. 3) [35]:
$$ {D}_i=\frac{\sum_{j=1}^{n}{\left({y}_j-{y}_{j(i)}\right)}^2}{p\cdot MSE};\qquad MSE=\frac{1}{n}\sum_{i=1}^{n}{\left({y}_i-{\widehat{y}}_i\right)}^2 $$
with yj being the jth fitted response variable and yj(i) being the jth fitted response variable if the fit does not include observation i; p is the number of coefficients in the regression model and MSE is the mean square error of the full-data fit, with ŷi denoting the fitted value for observation i. The Cook's distance was computed for each subject by leaving out the subject and performing PLS regression on the remaining subjects. PLS regression was thus repeated N times, with N being the number of subjects. Here, observations with Cook's distances exceeding four times the mean Cook's distance were discarded from the analysis as potentially influential observations.
Validation of the template - geometric approach
Standard geometric parameters such as volume V, surface area Asurf, centreline length LCL and median diameter Dmed along the centreline of the vessel were computed for each patient shape, averaged over the entire population and compared with the respective parameter measured on the final template shape. The deviation ∆Dev from the mean population value was calculated for x being one of the parameters (V, Asurf, LCL, Dmed) calculated on the template and \( \overline{x} \) being the respective population mean as (Eq. 4):
$$ \Delta Dev=\frac{x-\overline{x}}{\overline{x}}\cdot 100\% $$
The overall deviation ∆Devtotal of the template from population means was calculated as the average of the deviations of each of the above-mentioned parameters. A template shape yielding a low overall deviation ∆Devtotal from population mean values of below 5 % was considered to represent a good approximation of the mean shape.
K-fold cross-validation
In order to assure that the final template shape is not overly influenced by adding or leaving out a specific subject shape, k-fold cross-validation was performed [11]. The entire dataset was divided into k = 10 randomly assigned subsets. The template calculation was run k times, each time leaving out one of the subsets until each patient had been left out once. As the entire set consists of N = 20 datasets in total, in each of the k runs N/k = 2 patients were left out. The 10 resulting templates should all be close to the template calculated on the full dataset of N = 20 patients. This was assessed visually by overlaying the final template meshes and numerically by measuring the surface distances between each of the 10 templates and the original template. To back up the findings of the SSM, correlations between the parameters of interest, BSA and EF, and the traditionally measured geometric parameters and demographic parameters (patient age and height) were computed using bivariate correlation analysis. For correlations with BSA, non-indexed geometric shape descriptors were used. In a second step, shape vectors most related to BSA and EF (after removing size effects) were extracted via PLS and were correlated with the response variables BSA, EF, demographic parameters and the 2D and 3D geometric shape descriptors (Table 1).
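The influence screening of Eq. 3 can be sketched as a leave-one-out loop around the PLS fit. The choice p = number of components + 1 below is one possible reading of "number of coefficients"; data and names are illustrative, not the study's implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_cooks_distance(X, y, n_components=1):
    """Cook's-distance-style influence (Eq. 3): refit without subject i and
    compare the fitted values over all subjects with the full-data fit."""
    n = len(y)
    full = PLSRegression(n_components=n_components, scale=False).fit(X, y)
    y_full = full.predict(X).ravel()
    mse = np.mean((y - y_full) ** 2)
    p = n_components + 1                       # assumed count of model coefficients
    d = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        loo = PLSRegression(n_components=n_components, scale=False).fit(X[keep], y[keep])
        d[i] = np.sum((y_full - loo.predict(X).ravel()) ** 2) / (p * mse)
    return d

# Flag subjects exceeding four times the mean distance, as in the paper, e.g.:
# d = pls_cooks_distance(X, bsa); influential = np.flatnonzero(d > 4 * d.mean())
```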
For parameters that were normally distributed, the standard parametric Pearson correlation coefficient r is reported. For non-normally distributed parameters, non-parametric Kendall's τ is given. Non-normality was assumed if the Shapiro-Wilk test was significant. Parameters were considered significant (2-tailed) for p-values < .05. All statistical tests were performed in SPSS (IBM SPSS Statistics v.22, SPSS Inc., Chicago, IL). Computed template and validation The template shape showed distinct narrowed sections in the transverse arch and isthmus region. The root was slightly dilated and the overall arch shape could be described as rather "gothic" with a narrow arch width T and large height A (Fig. 9b, Additional File 1). Key geometric parameters of the template such as surface area Asurf, volume V, centreline length LCL and median diameter along the centreline Dmed were all close to their respective means as measured on the entire population of shapes (Table 2). Overall average deviation from those mean geometric population values was 3.1 %. Thus, the template was considered to be a good representation of the "mean shape" of the CoA population. The cross-validation templates matched the original template well on visual assessment. Using gross geometric parameters (Asurf, V, LCL and Dmed), cross-validation templates showed average total deviations from the original template ranging from 2.8 to 6.6 %. Average surface distances between the template shapes ranged from 0.21 mm to 1.07 mm. Hence, the computed template was considered to be minimally influenced by adding or removing another patient shape. Table 2 Mean geometric parameters of the population compared to geometric parameters of the template Shape patterns associated with differences in BSA Associations of geometric shape descriptors with changes in BSA Correlations of the traditionally measured 2D and 3D geometric parameters (Table 1) and demographic parameters with BSA were analysed using non-indexed geometric descriptors. BSA was significantly positively correlated with age (r = 0.705; p = .001) and height (r = 0.838; p < .001) and thus accounted for overall size differences between subjects. Further significant positive correlations of BSA were found with volume V (Kendall's τ = 0.385; p = .019) and surface area Asurf (r = 0.537; p = .015) of the arch models, the maximum and minimum diameter along the centreline, Dmax (τ = 0.460; p = .005) and Dmin (r = 0.628; p = .003), ascending aortic diameter Dasc (r = 0.550; p = .012), transverse diameter Dtrans (r = 0.453; p = .045) and isthmus diameter Disth (r = 0.523; p = .018) as well as the arch width T (r = 0.555; p = .011) (Table 3). Significant negative correlations were found with the ratio of arch surface area and volume Asurf/V (r = −0.641; p = .002) and the median curvature along the centreline Cmed (r = −0.603; p = .005). Table 3 Correlations between BSA and BSA Shape Vector and traditionally measured parameters Associations of shape modes and shape vectors with changes in BSA derived from SSM A first PLS regression of shape features with BSA revealed subject CoA20 to be influential to the regression as CoA20 exceeded the computed Cook's distance threshold of 0.77. We considered CoA20 as an outlier in terms of its overall shape as it presented with a highly gothic (A/T ratio = 0.94) arch with a bended descending aorta (Fig. 9a) that is considerably larger than other subjects. 
Thus, CoA20 is likely to skew the subsequent shape feature extraction and was therefore removed from the following analyses. Subsequent PLS regression with BSA on the remaining 19 subjects extracted a BSA shape mode, which accounted for 24 % of the shape variability present in the population. Visually, the BSA shape mode showed an overall enlargement of the deformed template arch shape with an increase in ascending, transverse, isthmus and descending aorta diameter while moving from low towards higher BSA values (Fig. 10a, Additional File 2). The overall arch shape for low BSA was slim and rather straight, with a rounded arch; whereas for high BSA values the arch shape was more gothic and more tortuous with a slightly dilated root and descending aorta. Visualisation of the BSA shape mode (a) and correlation with BSA shape vector (b). Shape features associated with deforming the template along the BSA Shape Mode from low (a, top) to high BSA values (a, bottom) from different views as indicated. Low BSA values were associated with a slim, straight and rounded arch shape, whereas moving towards higher BSA values resulted in an overall size increase along with shape deformation towards a more tortuous gothic arch with a slightly dilated root. The measured BSA of the subjects and the shape features as described by the BSA Shape Mode correlated strongly (b) The correlation between the associated BSA shape vector and BSA was significant with r = 0.707 (p = .001), implying that the BSA shape mode captured shape features associated with differences in BSA well (Fig. 10b). Furthermore, the computed BSA shape vector correlated positively and significantly with age (r = 0.696; p = .001) and height (r = 0.872; p < .001), volume V (τ = 0.743; p < .001) and surface area Asurf (r = 0.902; p < .001), centreline length LCL (r = 0.853; p < .001), diameters Dmax (r = 0.602; p < .001), Dmin (r = 0.763; p < .001), Dmed (r = 0.709; p = .001), Dasc (r = 0.708; p = .001), Dtrans (r = 0.646; p = .003), Disth (r = 0.746; p < .001), Ddesc (r = 0.740; p < .001) and arch height A (r = 0.632; p = .004) and width T (r = 0.626; p = .004) (Table 3). Significant negative correlations were found for the surface volume ratio Asurf/V (r = −0.787; p < .001) and the median curvature Cmed (r = −0.718; p = .001). Those associations were correctly depicted by the BSA shape mode (Fig. 10a). Shape patterns associated with differences in EF Associations of indexed geometric shape descriptors with changes in EF Significant positive correlations were found between EF and the ratio of transverse and descending arch diameter Dtrans/Ddesc (r = 0.456; p = .050). EF correlated negatively and significantly with the indexed arch surface area iAsurf (r = −0.571; p = .011). Associations of shape modes and shape vectors with changes in EF derived from SSM A second PLS regression based on the residuals of the first PLS regression with BSA was performed with EF as response. This two-step approach allowed removing shape features due to size differences between subjects prior to extracting shape modes related to EF. This second "normalised" PLS regression yielded the EF shape mode, which accounted for 19 % of the remaining shape variability. The EF shape mode deformed the template from a large, overall rather straight but slightly gothic arch shape with a slim ascending aorta and a dilated descending aorta for low EF values towards a rather compact but rounded arch shape with a dilated aortic root and a slim descending aorta for high EF (Fig. 
11, Additional File 3). Visualisation of the EF Shape Mode. Shape features associated with deforming the template along the EF Shape Mode from low (top) to high EF from different views as indicated. Lower EF was associated with a slim, rather gothic arch shape with a long dilated descending aorta, whereas higher EF was associated with a more rounded arch along with a dilated root and tapering towards a slim descending aorta The associated EF shape vector correlated significantly with EF (r = 0.521; p = .022) (Fig. 12). By analysing correlations of the EF shape vector with measured geometric parameters, further significant positive correlations with the ratio of ascending to descending aorta diameter Dasc/Ddesc (r = 0.753; p < .001) and the ratio of transverse and descending aorta diameter Dtrans/Ddesc (r = 0.457; p < .049) were found; corroborating the visual results. Negative significant correlations were found with the indexed descending aorta diameter iDdesc (r = −0.527; p < .020). All further correlations are given in Table 4. Correlation between EF and EF Shape Vector and visual assessment of results. Measured EF and shape features as described by the EF Shape Mode correlated well. Shape change of the template from a larger arch shape with a slim ascending and a slightly dilated descending aorta was associated with low, negative shape vector values. A smaller arch shape with dilated root and slim descending aorta was associated with high, positive shape vector values (bottom). Compared with the shape of two subjects (CoA1 and CoA12) with low EF at the left, lower spectrum of shape vector values, key shape features supposedly associated with low EF values such as a long, slightly dilated descending and a slim ascending aorta, are depicted correctly by the EF shape mode. On the other side of the shape spectrum, subjects CoA6 and CoA17 presented with a high EF and showed shape features in agreement with the shape mode derived for high EF values. Both shapes were compact, with a shorter, slim descending aorta compared to the ascending aorta, along with a dilated aortic root. Two subjects, who most likely contributed to the relatively weak correlation between EF and the EF shape vector, were subjects CoA5 and CoA15 as marked in red (dashed). Although they presented with similar shapes as CoA6 and CoA17 and thus do show shape features that should be associated with high EF values, their EF values were in the mid-spectrum for CoA5 and even lower than CoA12 for CoA15 Table 4 Correlations between EF and EF Shape Vector and traditionally measured parameters This study describes and verifies a non-parametric statistical shape analysis method in detail and demonstrates its potential for discovering previously unknown 3D shape biomarkers in a complex anatomical shape population. The methodology is comprehensively explained from the user-perspective, with the aim of making the process more accessible to the broader research community. The shape analysis method was applied to CMR images of the aorta from patients post coarctation repair. The method computes a mean shape for this population of patients – the template – that we have shown to have good agreement with the conventional 2D and 3D measurements when averaged across the population (e.g. centreline length of the template = the average of the centreline length measured from each patient). Biomarker information – the shape features – for each individual were then extracted by transforming the mean aorta to each patient's aorta. 
These extracted shape features, unique to each individual, were shown to: i) Accurately represent individual characteristics of the arch, as measured by patient-specific 2D/3D morphometric parameters, and ii) Have correlations with body surface area and left ventricular ejection fraction, offering the potential that they may be important biomarkers of biological processes. The found associations of aortic arch shape with ejection fraction were not known previously, which is why we consider the extracted 3D shape features as potential novel shape biomarkers that need to be confirmed by future studies. These results constitute the first statistical shape model of the aorta affected by coarctation. A description of the statistical shape modelling framework adopted in this study is reported elsewhere in mathematically rather complex terms. Yet, in this paper we present the method from the user's perspective. Here, we aimed to raise the awareness of the importance of necessary modelling parameters such as the meshing, smoothing and λ parameters for 3D shape analysis of complex anatomical structures. The mesh resolution for the input surfaces mainly affects the computational time needed to compute the template, but does not affect the final template shape substantially. Conversely, the analysis parameters (resolution λW and stiffness λV) affect both computational time and the final template shape considerably, requiring careful setting according to the shape population to be analysed. We provide tips on how to mesh input models and propose a new way of determining the λ parameters, which ensures robust and efficient template computation, even with an increased number of subjects for future studies. Furthermore, a modified PLS regression technique is described, which enables extraction of shape features independent of size differences between subjects. By measuring the Cook's Distance during PLS regression, we were able to account for outliers such as one subject with an overly large, "abnormal" aortic shape and indeed a highly impaired cardiac function (EF = 17 %) that had to be excluded in order not to affect the shape feature extraction (subject CoA20). This suggests that the methodology could potentially be used to detect outlying shapes in a complex shape population – which, in turn, might be associated with outlying functional behaviour. The calculated template based on the 20 CoA cohort showed characteristic shape features associated with CoA such as a slightly gothic arch shape, a dilated root, and a distinct narrowing in the transverse and isthmus arch section. The template shape was validated by comparing its geometry with the population average geometric parameters and by applying cross-validation techniques in order to ensure that removing or adding shapes had no influence. Therefore, new patients can be added easily, which involves performing the described pre-processing steps (segmenting, meshing, cutting, registration) and re-computing the template. Such a template could serve as a representative of the "normal of the abnormal"; a reference mean shape that might facilitate the diagnosis of highly abnormal cases within a pathologic shape population. Three-dimensional global and regional shape features associated with differences in size (represented by BSA) and function (represented by EF) were extracted and found to be well in agreement with trends confirmed by traditional morphometrics. BSA correlated strongly and significantly with conventional geometric parameters, as expected. 
Those results confirmed the visual results shown by the SSM, whereby an increase in BSA was associated with an overall increase in aorta length and vessel diameters as well as with a shape development towards a slightly dilated root and a more gothic arch shape. For the first time, high EF was associated with a more compact, rounded arch shape with a slightly dilated aortic root and a slim descending aorta, whereas low EF was associated with a more gothic arch shape, a slim ascending aorta and a slightly dilated descending aorta, which may increase flow resistance across the arch and therefore left ventricular afterload. Note however, that in order not to inflate Type II error of not detecting actual effects, computed correlation significances were not adjusted for multiple comparisons. Therefore, all results have to be considered as exploratory. Analysing the found correlations in detail Correlations with traditionally measured geometric parameters Whereas BSA correlated strongly with multiple measured 2D and 3D shape descriptors, EF correlated significantly only with two geometrical parameters (the ratio of transverse to descending aortic diameter and the indexed surface area). One reason for this may be that the shape of the aortic arch marginally affects EF. However, these discrepancies could also emphasise that complex 3D shapes cannot always be sufficiently described by traditional individual morphometric measurements. Shape features associated with differences in body size between subjects are typically dominant and contribute to the largest portion of shape variability in natural pathologic shape populations [36]. An increase in body size usually results in an overall size increase of the structure of interest, reflected in increased diameters and vessel length in the case of the aorta. This is why shape features associated with size differences are likely to be picked up by traditional 2D and 3D measurements. For the functional parameter EF though, we were interested in shape features independent of size effects, which, however, may be less prominent and may only be captured by a complex combination and collection of different morphometric parameters. Herein lies the power of 3D statistical shape modelling: results such as the mean shape and its variability are derived as visual, intuitively comprehensible and less biased 3D shape representations taking into account the entire 3D shape, instead of an unhandy collection of multiple measured parameters that might miss out crucial shape features. Correlations with shape vectors describing shape features most related to a specific parameter in 3D We found a strong significant correlation between the BSA shape vector and BSA, whereas EF correlated less with its EF shape vector. Overall, these results imply that shape features shown by the respective shape modes accounted well for differences in both BSA and EF in our shape population. In a strong correlation between functional parameter and shape vector, all subjects with low EF values would show those shape features given by the EF shape mode for low shape vector values, and vice versa for all subjects with high EF values. Nevertheless, those trends visually confirmed that our method was able to correctly extract 3D shape features from a population of shapes, which are potentially associated with a functional parameter of clinical relevance (Fig. 12). 
Therefore, the presented method can be used as a research tool to explore a population of 3D shapes, in order to detect where crucial shape changes occur and whether specific geometric parameters are likely to be of functional relevance. Limitations and future work The biggest limitation of our study is the small sample size of 20 subjects, with rather inhomogeneous characteristics in terms of age (range 11.1 to 20.1 years), age at arch intervention (4 days to 5 years after birth) and type of surgery [24]. Thus, results presented in this work are primarily meant to demonstrate the potential of the proposed statistical shape modelling method by studying the association of complex 3D shape features with external, functional parameters such as EF. This could improve the derivation of novel shape biomarkers in future studies. In CoA patients, our method applied to a larger cohort of patients could help answer whether specific arch morphologies such as the gothic arch shape are associated with hypertension post-aortic coarctation repair [15, 37]. In this paper, we presented a non-parametric shape analysis method based on CMR data from the user-perspective and applied it to a population of aortic arch shapes of patients post-aortic coarctation repair. The process was described in detail in order to make it more accessible to researchers from both clinical and engineering background. The method has the potential of discovering previously unknown shape biomarkers from medical image databases and could thus provide novel insight into the relation between shape and function. Application to larger cohorts could contribute to a better understanding of complex structural disease, improving diagnosis and risk stratification, and could ultimately assist in the development of new surgical approaches. 2D, two-dimension(al); 3D, three-dimension(al); BSA, body surface area [m2]; CHD, congenital heart disease; CMR, cardiovascular magnetic resonance; CoA, coarctation of the aorta; EDV, end-diastolic volume [ml]; EF, ejection fraction [%]; ICP, iterative closest point algorithm; PDM, point distribution model; SSM, statistical shape model (ling); VMTK, the vascular modelling toolkit. Lamata P, Casero R, Carapella V, Niederer SA, Bishop MJ, Schneider JE, et al. Images as drivers of progress in cardiac computational modelling. Prog Biophys Mol Biol. 2014;115:198–212. Craiem D, Chironi G, Redheuil A, Casciaro M, Mousseaux E, Simon A, et al. Aging Impact on Thoracic Aorta 3D Morphometry in Intermediate-Risk Subjects: Looking Beyond Coronary Arteries with Non-Contrast Cardiac CT. Ann Biomed Eng. 2012;40:1028–38. Young AA, Frangi AF. Computational cardiac atlases: from patient to population and back. Exp Physiol. 2009;94:578–96. Lamata P, Lazdam M, Ashcroft A, Lewandowski AJ, Leeson P, Smith N. Computational mesh as a descriptor of left ventricular shape for clinical diagnosis. Computing in Cardiology Conference. 2013;2013:571–4. Cootes T, Hill A, Taylor C, Haslam J. Use of active shape models for locating structures in medical images. Image Vision Computing. 1994;12:355–65. Remme E, Young AA, Augenstein KF, Cowan B, Hunter PJ. Extraction and quantification of left ventricular deformation modes. IEEE Trans Biomed Eng. 2004;51:1923–31. Lewandowski AJ, Augustine D, Lamata P, Davis EF, Lazdam M, Francis J, et al. Preterm Heart in Adult Life Cardiovascular Magnetic Resonance Reveals Distinct Differences in Left Ventricular Mass, Geometry, and Function. Circulation. 2013;127:197–206. 
Styner MA, Rajamani KT, Nolte L-P, Zsemlye G, Székely G, Taylor CJ, et al. Evaluation of 3D Correspondence Methods for Model Building. In: Taylor C, Noble JA, editors. Information Processing in Medical Imaging. Berlin: Springer; 2003. p. 63–75. Available from: http://link.springer.com/chapter/10.1007/978-3-540-45087-0_6. Vaillant M, Glaunès J. Surface Matching via Currents. In: Christensen GE, Sonka M, editors. Information Processing in Medical Imaging. Berlin: Springer; 2005. p. 381–92. Available from: http://link.springer.com/chapter/10.1007/11505730_32. Durrleman S, Pennec X, Trouvé A, Ayache N. Measuring brain variability via sulcal lines registration: a diffeomorphic approach. Med Image Comput Comput Assist Interv. 2007;10:675–82. Mansi T, Durrleman S, Bernhardt B, Sermesant M, Delingette H, Voigt I, et al. A Statistical Model of Right Ventricle in Tetralogy of Fallot for Prediction of Remodelling and Therapy Planning. In: Yang G-Z, Hawkes D, Rueckert D, Noble A, Taylor C, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009. Berlin: Springer; 2009. p. 214–21. Available from: http://link.springer.com/chapter/10.1007/978-3-642-04268-3_27. Mansi T, Voigt I, Leonardi B, Pennec X, Durrleman S, Sermesant M, et al. A Statistical Model for Quantification and Prediction of Cardiac Remodelling: Application to Tetralogy of Fallot. IEEE Trans Med Imaging. 2011;30:1605–16. Leonardi B, Taylor AM, Mansi T, Voigt I, Sermesant M, Pennec X, et al. Computational modelling of the right ventricle in repaired tetralogy of Fallot: can it provide insight into patient treatment? Eur Heart J Cardiovasc Imaging. 2013;14:381–6. Durrleman S, Pennec X, Trouvé A, Ayache N. Statistical models of sets of curves and surfaces based on currents. Med Image Anal. 2009;13:793–808. O'Sullivan J. Late Hypertension in Patients with Repaired Aortic Coarctation. Curr Hypertens Rep. 2014;16:1–6. Vergales JE, Gangemi JJ, Rhueban KS, Lim DS. Coarctation of the Aorta - The Current State of Surgical and Transcatheter Therapies. Curr Cardiol Rev. 2013;9:211–9. Ou P, Bonnet D, Auriacombe L, Pedroni E, Balleux F, Sidi D, et al. Late systemic hypertension and aortic arch geometry after successful repair of coarctation of the aorta. Eur Heart J. 2004;25:1853–9. De Caro E, Trocchio G, Smeraldi A, Calevo MG, Pongiglione G. Aortic Arch Geometry and Exercise-Induced Hypertension in Aortic Coarctation. Am J Cardiol. 2007;99:1284–7. Lee MGY, Kowalski R, Galati JC, Cheung MMH, Jones B, Koleff J, et al. Twenty-four-hour ambulatory blood pressure monitoring detects a high prevalence of hypertension late after coarctation repair in patients with hypoplastic arches. J Thorac Cardiovasc Surg. 2012;144:1110–8. Durrleman S. Statistical models of currents for measuring the variability of anatomical curves, surfaces and their evolution. University of Nice-Sophia Antipolis; 2010. Available from: https://tel.archives-ouvertes.fr/tel-00631382/. Mansi T. Modèles physiologiques et statistiques du cœur guidés par imagerie médicale: application à la tétralogie de Fallot [Internet]. École Nationale Supérieure des Mines de Paris; 2010 [cited 2013 Aug 21]. Available from: http://tel.archives-ouvertes.fr/tel-00530956 McLeod K, Mansi T, Sermesant M, Pongiglione G, Pennec X. Statistical Shape Analysis of Surfaces in Medical Images Applied to the Tetralogy of Fallot Heart. In: Cazals F, Kornprobst P, editors. Modeling in Computational Biology and Biomedicine. Berlin: Springer; 2013. 
Available from: http://link.springer.com/content/pdf/10.1007%2F978-3-642-31208-3_5.pdf#page-1. Durrleman S, Prastawa M, Charon N, Korenberg JR, Joshi S, Gerig G, et al. Morphometry of anatomical shape complexes with dense deformations and sparse parameters. Neuroimage. 2014;101:35–49. Ntsinjana HN, Biglino G, Capelli C, Tann O, Giardini A, Derrick G, et al. Aortic arch shape is not associated with hypertensive response to exercise in patients with repaired congenital heart diseases. J Cardiovasc Magn Reson. 2013;15:101. Schievano S, Migliavacca F, Coats L, Khambadkone S, Carminati M, Wilson N, et al. Percutaneous Pulmonary Valve Implantation Based on Rapid Prototyping of Right Ventricular Outflow Tract and Pulmonary Trunk from MR Data. Radiology. 2007;242:490–7. Casciaro ME, Craiem D, Chironi G, Graf S, Macron L, Mousseaux E, et al. Identifying the principal modes of variation in human thoracic aorta morphology. J Thorac Imaging. 2014;29:224–32. Bosmans B, Huysmans T, Wirix-Speetjens R, Verschueren P, Sijbers J, Bosmans J, et al. Statistical Shape Modeling and Population Analysis of the Aortic Root of TAVI Patients. J Med Devices. 2013;7:040925. Zhao F, Zhang H, Wahle A, Thomas MT, Stolpen AH, Scholz TD, et al. Congenital Aortic Disease: 4D Magnetic Resonance Segmentation and Quantitative Analysis. Med Image Anal. 2009;13:483–93. Antiga L, Piccinelli M, Botti L, Ene-Iordache B, Remuzzi A, Steinman DA. An image-based modeling framework for patient-specific computational hemodynamics. Med Biol Eng Comput. 2008;46:1097–112. Piccinelli M, Veneziani A, Steinman DA, Remuzzi A, Antiga L. A framework for geometric analysis of vascular structures: application to cerebral aneurysms. IEEE Trans Med Imaging. 2009;28:1141–55. Antiga L, Steinman DA. Robust and objective decomposition and mapping of bifurcating vessels. IEEE Trans Med Imaging. 2004;23:704–13. Besl PJ, McKay ND. A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell. 1992;14:239–56. Heimann T, Meinzer H-P. Statistical shape models for 3D medical image segmentation: A review. Med Image Anal. 2009;13:543–63. Singh N, Thomas Fletcher P, Samuel Preston J, King RD, Marron JS, Weiner MW, et al. Quantifying anatomical shape variations in neurological disorders. Med Image Anal. 2014;18:616–33. Mathworks. MATLAB v2014 Documentation - Cook's Distance. Natick, MA. 2014; Joliffe IT. Principal Component Analysis. 2nd ed. Inc.: Springer-Verlag New York; 2002. Canniffe C, Ou P, Walsh K, Bonnet D, Celermajer D. Hypertension after repair of aortic coarctation — A systematic review. Int J Cardiol. 2013;167:2456–61. Lützen J. De Rham's Currents. In The Prehistory of the Theory of Distributions. [Studies in the History of Mathematics and Physical Sciences, vol. 7]. Springer New York; 1982:144–7. Beg MF, Miller MI, Trouvé A, Younes L. Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms. Int J Comput Vision. 2005;61:139–57. Bookstein FL. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11:567–85. The authors would like to thank Marc-Michel Rohé from Inria Sophia Antipolis-Méditeranée for his help and assistance regarding the non-parametric shape analysis method. MOCHA Collaborative Group: Andrew Taylor, Alessandro Giardini, Sachin Khambadkone, Silvia Schievano, Marc de Leval and T. -Y. 
Hsia (Institute of Cardiovascular Science, UCL, London, UK); Edward Bove and Adam Dorfman (University of Michigan, Ann Arbor, MI, USA); G. Hamilton Baker and Anthony Hlavacek (Medical University of South Carolina, Charleston, SC, USA); Francesco Migliavacca, Giancarlo Pennati and Gabriele Dubini (Politecnico di Milano, Milan, Italy); Alison Marsden (University of California, San Diego, CA, USA); Jeffrey Feinstein (Stanford University, Stanford, CA, USA); Irene Vignon-Clementel (INRIA, Paris, France); Richard Figliola and John McGregor (Clemson University, Clemson, SC, USA). The authors gratefully acknowledge support from Fondation Leducq, FP7 integrated project MD-Paedigree (partially funded by the European Commission), Commonwealth Scholarships, Heart Research UK and National Institute of Health Research (NIHR). All relevant data supporting our findings are anonymised and stored at Great Ormond Street Hospital for Children. JLB, KM and SS designed the study, contributed to the data analysis and interpretation and drafted the manuscript. GB and CC contributed to the data acquisition and revised critically the manuscript. HNN and AMT contributed to the data acquisition, analysis and interpretation and revised critically the manuscript. TYH, MS and XP contributed to the data analysis and interpretation and revised critically the manuscript. All authors read and approved the final manuscript. All authors declare no relationships or activities that could appear to have influenced the submitted work and declare no competing interests. This report incorporates independent research from the National Institute for Health Research Biomedical Research Centre Funding Scheme. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health. Ethical approval was obtained by the Institute of Child Health/Great Ormond Street Hospital for Children Research Ethics Committee, and all patients or legal parent or guardian gave informed consent for research use of the data.
Author affiliations: Centre for Cardiovascular Imaging, University College London, Institute of Cardiovascular Science & Cardiorespiratory Unit, Great Ormond Street Hospital for Children, London, UK (Jan L. Bruse, Giovanni Biglino, Hopewell N. Ntsinjana, Claudio Capelli, Tain-Yen Hsia, Andrew M. Taylor & Silvia Schievano); Cardiac Modelling Department, Simula Research Laboratory, Oslo, Norway (Kristin McLeod); Inria Sophia Antipolis-Méditerranée, ASCLEPIOS Project, Sophia Antipolis, France (Kristin McLeod, Maxime Sermesant & Xavier Pennec); Bristol Heart Institute, School of Clinical Sciences, University of Bristol, Bristol, UK (Giovanni Biglino). Correspondence to Jan L. Bruse.
Computed template (rotating). 360° rotating view of the computed CoA template. (AVI 23282 kb) BSA Shape Mode deformations of the template.
Side view of the derived BSA Shape Mode deforming the template; thereby showing shape patterns associated with low and high BSA, respectively. (AVI 3968 kb) EF Shape Mode deformations of the template. Side view of the derived EF Shape Mode deforming the template; thereby showing shape patterns associated with low and high EF, respectively. (AVI 5021 kb) Additional file 1: Video 1. Appendix 1: Detailed description of the statistical shape modelling framework Forward approach The shape modelling framework is based on the forward approach [14], which starts from an initial average template shape \( \overline{T} \), that is deformed into each subject shape Ti by applying an appropriate transformation function φi (Eq. 5). The subject-specific transformation function φi deforms the template towards each individual subject shape and thus contains most of the shape information. The residuals εi correspond to irrelevant shape features such as image artefacts [12]. $$ {T}^i={\varphi}^i\cdot \overline{T}+{\varepsilon}^i $$ "Subject = Deformed Template + Residuals" Mathematical currents as non-parametric shape descriptors The current of a surface S is defined as the flux of a test vector field across that surface. The resulting shape T (a surrogate representation of S) is uniquely characterised by the variations of the flux as the test vector field varies. The definition of currents related to a flux actually stems from Faraday's law in electromagnetism, where a varying magnetic field induces a current in a wire [20, 38]. Input parameters: resolution λW and stiffness λV Using mathematical currents allows modelling shape as a distribution of specific shape features. Shapes or surface models of shapes (given as computational meshes) are transferred into a vector space W generated by a Gaussian kernel, called the space of currents (Fig. 13a) [9]. Similar to histograms, kernels indicate how likely a certain parameter value occurs within a population, i.e. how shape features are distributed. The crucial parameter is the standard deviation of the kernel λW, which allows to control how coarsely or finely a surface is resolved by currents [14]. If λW is chosen too small, too many shape features are captured and noise dominates, whereas if λW is chosen too large, important characteristics may be lost [12]. Thus, the parameter λW is referred to as the resolution of the currents representation. It is defined in millimetres and is one of the parameters to be set by the user. Overview of the template computation using currents. All surface shape models are transferred into their currents representation (a). The user has to set the resolution parameter λW to determine which shape features are to be captured. The template is then computed as the mean shape using an alternate algorithm, minimising the distances between template and each subject (b). Thereby, the template shape is initialised as the mean of the currents and then matched with each subject shape. Crucial is the deformation function φi, which is defined by the moment vectors ß that drive the subject-specific deformation of the template. The user has to set the stiffness of those deformations, λV. User input is marked with dashed lines To encode all 3D shape information present in the dataset, the template is transformed, i.e. deformed into each of the shapes used to compute it via transformation functions φi (Eq. 5). 
Similar to the definition of the space of currents, subject-specific transformation functions φi are defined within another vector space V as a Gaussian kernel with standard deviation λV. The parameter λV controls the stiffness of the transformations φi. Intuitively, λV affects the size of the region that is consistently deformed – the larger λV, the more global ("stiffer") the deformation; the smaller λV, the more local ("less stiff") the deformation [12]. λV is also defined in millimetres and is the second parameter to be set by the user. Computing the template in the space of currents, based on transformations After defining all the shapes present in the population in the space of currents, the template is initially computed as the empirical mean shape \( \overline{T} \). To be able to deform the template towards the shape of each individual patient, a suitable transformation function φi is required. Here, φi is defined using the large deformation diffeomorphic metric mapping (LDDMM) approach [39]. The transformation φi is a function of moment vectors ß, which contain the initial kinetic energy necessary to cover the path of a transformation φi (i.e. deformation) from one current to the other [21]. Moments ß thus "drive" the transformation. The template \( \overline{T} \) and the associated transformations φi (Eq. 5) are computed simultaneously using an alternate two-step minimisation algorithm (Fig. 13b) [14]. The aim is to minimise the distance in the space of currents between the deformed template \( \overline{T} \) and its respective target shape object Ti. Once the initial template is computed as the empirical mean shape, the distance between template and target is first minimised with respect to the transformations φi, registering the initial template to each target shape independently ("first step") [14]. The new, updated template is then calculated based on those transformations and the initial template, thus minimising the equation with respect to the template ("second step"). The second step reduces the overall registration error and yields a template that is more centred with respect to the target shapes. This process is iterated until convergence [21]. The template and its deformations towards each individual shape constitute the SSM. Note that both template \( \overline{T} \) and transformations φi are calculated based on currents i.e. based on surrogate representations of surfaces, not on the actual computational meshes. Therefore, results from the space of currents have to be mapped back to the original space of the computational meshes. The mesh surface model of the final template is obtained by deforming the mesh of the patient closest to the template towards the template currents representation. Analysing the output – the concept of shape vectors describing the entire 3D shape As each patient shape is associated with a large number of moment vectors, output data is difficult to analyse and interpret. Therefore, analysis of shape variability requires a dimensionality reduction in order to discard any redundant shape information while retaining principal contributors to shape variability [12], which can be achieved by applying partial least square regression (PLS) [12, 22]. PLS combines dimensionality reduction in the fashion of a principal component analysis (PCA) [5, 25], with linear regression. Given two sets of variables X (predictors) and Y (response), PLS computes the shape modes which best explain the variance of X, Y and the covariance of X and Y. 
As predictors X, the characteristic moment vectors β that parameterise the deformations of the template towards each patient were used. Analysed response variables Y were body surface area (BSA) and the functional parameter ejection fraction (EF). The resulting shape modes were ordered according to their correlation with the response variable Y, with the first one being most correlated and accounting for a certain percentage of variability in the response Y and in the predictors X, which encode all shape information. Here, we retained the first PLS shape mode only in order to capture only shape features most related to either BSA or EF and thus to avoid overfitting. The shape modes describe the main components of shape information present in the population, so that each subject shape in the dataset can be approximated by linearly combining shape modes. Thus, the shape of subject i is characterised by a unique linear combination of m weights for m shape modes as exemplified in (Eq. 6).
$$ \mathrm{Shape}^i = \sum_{k=1}^{m} \mathrm{weight}_k^i \cdot \mathrm{Mode}_k $$
Those subject-specific weights constitute the shape vector XS, which is obtained by projecting each patient transformation onto the shape modes [12]. Correlating the shape vector and response parameter Y shows how well each subject's shape features (supposedly related to Y) are represented by the derived shape mode. Mathematical details of how our framework allows deriving PLS shape modes and shape vectors based on deformations can be found in [21] and [12]. PLS shape modes were computed using the plsregress function in Matlab.
Appendix 2: Preliminary sensitivity analysis of meshing and λ parameters
A sensitivity analysis was performed to investigate how the mesh resolution of the input surface models and the setting of the λ parameters affect the template shape and the computational time needed to compute a template. Segmented 3D surface models of 5 randomly chosen aortic arches were meshed from high (5 cells/mm2), to medium (2.5 cells/mm2) to low (0.5 cells/mm2) mesh resolution using VMTK. A template was calculated for each of the differently meshed test sets, first with λW and λV being constant at 15 mm. All computations were performed on a workstation with 32 GB memory using 10 cores. Computational time was recorded. Results showed that the computed template shape was not substantially affected by changing the mesh resolution, whereas computational time increased dramatically for high input mesh resolutions. Using the low-resolution meshes, templates were computed changing λW from 10 to 15 and 20 mm, and λV from 10 to 20 and 30 mm respectively, to investigate the effect of the λ parameters on the template shape. Lower λW values, i.e. higher resolution, increased the computational time needed for the template calculation. The final template shape was substantially influenced by changing both λW and λV (Table 5).
Table 5 Influence of mesh resolution and shape model parameters λW and λV on computational time and template shape
Bruse, J.L., McLeod, K., Biglino, G. et al. A statistical shape modelling framework to extract 3D shape biomarkers from medical imaging data: assessing arch morphology of repaired coarctation of the aorta. BMC Med Imaging 16, 40 (2016) doi:10.1186/s12880-016-0142-z
Received: 12 November 2015
Keywords: Statistical shape model (SSM), 3D Shape analysis, Coarctation of the aorta, Computational modelling, Thoracic imaging
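As a practical footnote to Appendix 1: the PLS step described there (performed with Matlab's plsregress) can be sketched in Python using scikit-learn's PLSRegression. The array shapes and variable names below are hypothetical stand-ins for the moment vectors and clinical response, not the authors' data or code; it is a minimal sketch of the technique, not the published pipeline.

```python
# Illustrative sketch (not the authors' pipeline): PLS of deformation
# parameters against a clinical response, as outlined in Appendix 1.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical inputs, one row per patient:
# X: flattened moment vectors beta that parameterise each template-to-patient
#    deformation (n_patients x n_moment_components)
# y: response variable, e.g. body surface area (BSA) or ejection fraction (EF)
n_patients, n_components = 20, 300
X = rng.normal(size=(n_patients, n_components))
y = rng.normal(size=n_patients)

# Keep only the first PLS mode, mirroring the single-mode choice in the paper.
pls = PLSRegression(n_components=1)
pls.fit(X, y)

shape_mode = pls.x_loadings_[:, 0]    # dominant deformation pattern ("shape mode")
shape_vector = pls.x_scores_[:, 0]    # per-patient weights ("shape vector" XS)

# Correlating the shape vector with the response indicates how well the mode
# captures response-related shape features.
r = np.corrcoef(shape_vector, y)[0, 1]
print(f"correlation between shape vector and response: {r:.2f}")
```

With real data, X would hold the moment vectors obtained from the template registration and y the measured BSA or EF; the printed correlation then plays the role of the shape-vector/response correlation discussed above.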
inverse gumbel distribution

Gumbel distribution. In probability theory and statistics, the Gumbel distribution (Generalized Extreme Value distribution Type-I) is used to model the distribution of the maximum (or the minimum) of a number of samples of various distributions. It is a particular case of the generalized extreme value distribution (also known as the Fisher-Tippett distribution) and is named after Emil Julius Gumbel (1891–1966), based on his original papers describing the distribution. This distribution might be used to represent the distribution of the maximum level of a river in a particular year if there was a list of maximum values for the past ten years. Gumbel has shown that the maximum value (or last order statistic) in a sample of a random variable following an exponential distribution, minus the natural logarithm of the sample size [6], approaches the Gumbel distribution closer with increasing sample size. Extremes from Pareto (power law) and Cauchy distributions instead converge to the Frechet distribution, which, like the Gumbel distribution, is unbounded on the right tail but is much fatter.

For location μ and scale β, the mean of a Gumbel distribution is μ + γβ, where γ ≈ 0.5772 is the Euler-Mascheroni constant; the median is μ − β ln(ln 2) ≈ μ + 0.3665β; the mode is μ [3]; and the standard deviation is βπ/√6 ≈ 1.2825β (hence exactly π/√6 when β = 1). The quantile function (inverse CDF) when the random variate is a maximum is Q(p) = μ − β ln(−ln p). The difference of two Gumbel-distributed random variables has a logistic distribution, which is useful; in machine learning, the Gumbel distribution is sometimes employed to generate samples from the categorical distribution [10, 11]. In pre-software times probability paper was used to picture the Gumbel distribution (see illustration): by plotting the appropriate reduced variate on the vertical axis (the vertical axis is linear), the distribution is represented by a straight line. Therefore, this estimator is often used as a plotting position. When distribution fitting software like CumFreq became available, the task of plotting the distribution was made easier. Alzaatreh, Lee and Famoye (2013) proposed a method for generating new distributions, namely, the T-X family; the Weibull component of the resulting Gumbel-Weibull distribution is useful in quality control because its hazard rate is decreasing when the shape parameter a < 1, constant when a = 1, and increasing when a > 1.

Notes on related distributions. The normal distribution (also called Gaussian distribution) is the most used statistical distribution because of the many physical, biological, and social processes that it can model. A variable x has a lognormal distribution if log(x − λ) has a normal distribution. The exponential distribution is a special case of the Weibull distribution and the gamma distribution; for more information on the Weibull distribution, see Johnson et al. The Poisson distribution is a discrete distribution that models the number of events based on a constant rate of occurrence, and can be used as an approximation to the binomial when the number of independent trials is large and the probability of success is small (possible values are integers from zero to n). If X has a standard normal distribution, X2 has a chi-square distribution with one degree of freedom, allowing it to be a commonly used sampling distribution; the shape of the chi-square distribution depends on the number of degrees of freedom. In an integer distribution, each integer has equal probability of occurring; a discrete distribution can also be specified directly, for example a distribution made up of three values −1, 0, 1, with probabilities of 0.2, 0.5, and 0.3, respectively. When the probability density function (PDF) is positive for the entire real number line (for example, the normal PDF), the inverse cumulative distribution function (ICDF) is not defined for either p = 0 or p = 1; when the PDF is positive only on an interval (for example, the uniform PDF), the ICDF is defined for p = 0 and p = 1. When the ICDF is not defined, Minitab returns a missing value (*) for the result.

h-functions of bivariate copulas (VineCopula documentation). The h-function is defined as the conditional distribution function of a bivariate copula:
$$h_2(u_1|u_2;\boldsymbol{\theta}) := P(U_1 \le u_1 | U_2 = u_2) = \frac{\partial C(u_1, u_2; \boldsymbol{\theta})}{\partial u_2},$$
where \((U_1, U_2) \sim C\), and \(C\) is a bivariate copula distribution function. The corresponding inverse h-function \(h_2^{-1}(u_1|u_2;\boldsymbol{\theta})\) (inverse conditional distribution function) of a given parametric bivariate copula can be evaluated analogously. Arguments: family — integer; single number or vector of size length(u1); the bivariate copula family, e.g. 2 = Student t copula (t-copula), 6 = Joe copula, 7 = BB1 copula, 9 = BB7 copula, 13 = rotated Clayton copula (180 degrees; "survival Clayton"), 14 = rotated Gumbel copula (180 degrees; "survival Gumbel"), 16 = rotated Joe copula (180 degrees; "survival Joe"), 17 = rotated BB1 copula (180 degrees; "survival BB1"), 18 = rotated BB6 copula (180 degrees; "survival BB6"), 19 = rotated BB7 copula (180 degrees; "survival BB7"), 24 = rotated Gumbel copula (90 degrees), 30 = rotated BB8 copula (90 degrees), 33 = rotated Clayton copula (270 degrees), 34 = rotated Gumbel copula (270 degrees), 38 = rotated BB6 copula (270 degrees), 40 = rotated BB8 copula (270 degrees), 104 = Tawn type 1 copula, 114 = rotated Tawn type 1 copula (180 degrees), 124 = rotated Tawn type 1 copula (90 degrees), 204 = Tawn type 2 copula. par — numeric; single number or vector of size length(u1); copula parameter. par2 — second copula parameter; should be a positive integer for the Student's t copula (family = 2). check.pars — logical; default is TRUE; if FALSE, checks of family/parameter consistency are omitted. If the family and parameter specification is stored in a BiCop() object obj, an alternative version of the call can be used. See also BiCopHfunc(), BiCopPDF(), BiCopCDF(). Developed by Thomas Nagler, Ulf Schepsmeier, Jakob Stoeber, Eike Christian Brechmann, Benedikt Graeler, Tobias Erhardt. Example output:
# inverse h-functions of the Gaussian copula
#> $hinv2
#> [1] 0.06292588
Related: the Gumbel Hougaard copula — density function, distribution function, random generation, generator and inverse generator function for the Gumbel copula with parameter alpha, together with the 4 classic estimation methods for copulas.

References: Gumbel, E. J. (1941). The Annals of Mathematical Statistics, 12, 163–190. "Les valeurs extrêmes des distributions statistiques". "Chapter 6 Frequency and Regression Analysis". "Rational reconstruction of frailty-based mortality models by a generalisation of Gompertz' law of mortality". CumFreq, software for probability distribution fitting. "The Gumbel-Max Trick for Discrete Distributions". https://math.stackexchange.com/questions/3527556/gumbel-distribution-and-exponential-distribution?noredirect=1#comment7669633_3527556. https://en.wikipedia.org/w/index.php?title=Gumbel_distribution&oldid=990718796. See also: generalized multivariate log-gamma distribution.
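For readers who prefer a worked example to the R/VineCopula interface quoted above, the following Python sketch evaluates the Gumbel quantile function and its summary statistics, and the closed-form inverse h-function of a Gaussian copula. The parameter values are arbitrary illustrations, and this is not the VineCopula implementation.

```python
# Illustrative Python sketch (parameter values are arbitrary): Gumbel quantile
# function and the inverse h-function of a bivariate Gaussian copula.
import numpy as np
from scipy import stats

# --- Gumbel (maximum) distribution with location mu and scale beta ---
mu, beta = 0.0, 1.0

def gumbel_icdf(p, mu=mu, beta=beta):
    """Quantile function Q(p) = mu - beta * ln(-ln p), for 0 < p < 1."""
    return mu - beta * np.log(-np.log(p))

# Cross-check the closed-form summaries quoted above against scipy.
g = stats.gumbel_r(loc=mu, scale=beta)
print(gumbel_icdf(0.5), g.ppf(0.5))            # median = mu - beta*ln(ln 2)
print(mu + np.euler_gamma * beta, g.mean())     # mean   = mu + gamma*beta
print(beta * np.pi / np.sqrt(6), g.std())       # std    = beta*pi/sqrt(6) ~ 1.2825*beta

# --- Inverse h-function of the Gaussian copula with correlation rho ---
def gaussian_hinv2(p, u2, rho):
    """Return u1 such that h_2(u1 | u2) = p for the Gaussian copula."""
    z = stats.norm.ppf(p) * np.sqrt(1.0 - rho**2) + rho * stats.norm.ppf(u2)
    return stats.norm.cdf(z)

print(gaussian_hinv2(0.3, 0.7, rho=0.5))  # conditional quantile of U1 given U2 = 0.7
```

The Gaussian copula admits this closed-form inverse; most other families in the code list above (Gumbel, Clayton, BB, Tawn) typically require numerical inversion, which is what library routines such as those in VineCopula provide.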
npj 2D Materials and Applications
An in situ and ex situ TEM study into the oxidation of titanium (IV) sulphide
Edmund Long1,2, Sean O'Brien3, Edward A. Lewis4, Eric Prestat4 (ORCID: orcid.org/0000-0003-1340-0970), Clive Downing1, Clotilde S. Cucinotta1,2, Stefano Sanvito1,2, Sarah J. Haigh4 & Valeria Nicolosi1,5
npj 2D Materials and Applications volume 1, Article number: 22 (2017)
Subjects: Scanning electron microscopy; Two-dimensional materials
Titanium (IV) sulphide (TiS2) is a layered transition metal dichalcogenide, which we exfoliate using liquid phase exfoliation. TiS2 is a candidate for being part of a range of future technologies. These applications are varied, and include supercapacitor and battery energy storage devices, catalytic substrates and the splitting of water. The driving force behind our interest was as a material for energy storage devices. Here we investigate a potential failure mechanism for such devices, namely oxidation and subsequent loss of sulphur. This degradation is important to understand, since these applications are highly property-dependent, and changes to the chemistry will result in changes in desired properties. Two approaches to study oxidation were taken: ex situ oxidation by water and oxygen at room temperature and in situ oxidation by a 5% O2/Ar gas at elevated temperatures. Both sources of oxygen resulted in oxidation of the starting TiS2 flakes, with differing morphologies. Water produced amorphous oxide slowly growing in from the edge of the flakes. Oxygen gas at ≥375 °C produced crystalline oxide, with a range of structures due to oxidation initiating from various regions of the observed flakes.
Titanium (IV) sulphide, TiS2, is a candidate for applications in a range of future technologies, including catalytic substrates and the splitting of water, supercapacitors and energy storage devices. TiS2 is a layered transition metal dichalcogenide (TMD) with hexagonal crystal symmetry, takes the 1T phase and is relatively under explored in two-dimensional form.
Each layer consists of three sheets of atoms: the two outer planes are sulphur, coordinated to three titanium atoms with a trigonal pyramidal geometry; the middle plane is titanium and each atom takes octahedral coordination to six sulphur atoms. These atomic sheets are stacked ABC (Supplemental Information Fig. 1), while the layers themselves stack AA in the 1T phase.
Initial status of TiS2. a TEM overview of TiS2 flake, b HAADF STEM of region from A, from which maps C-E were acquired, c Ti L3 EELS map, d O K EELS map, e S K EDX map
The excellent capacitive properties of bulk TiS2 (high energy-density and power-density1, 2 and excellent electronic conductivity3) have attracted interest since the 1970s, with bulk TiS2 used as a cathode material in lithium-metal/alloy anode batteries, where lithium metal ions were intercalated, transferring charge to reduce titanium 3d orbitals and causing reduction from Ti4+ to Ti3+ (refs. 4, 5). High rates of charging and discharging were demonstrated with near-perfect reversibility. In addition, the material forms a single phase with lithium (LixTiS2) over the entirety of the range 0 ≤ x ≤ 1, avoiding the need for phase changes upon (de-)lithiation, which removes much of the strain induced in cycling, prolonging battery life. However, TMD/lithium-metal batteries were withdrawn from consumers due to accidents in 1989 where the metallic lithium caught fire as a consequence of short-circuiting due to lithium-dendrite growth.6 Other battery applications of bulk TiS2 include its use as an encapsulation material for Li2S cathodes and as part of nanocomposite cathodes in solid-state lithium batteries.7, 8 It has also been used as a hybrid cathode in Li-S batteries, where sulphur is intercalated into a TiS2 foam, and found to have a very high capacity.9 Furthermore, TiS2 has also been used as a substrate for the growth of water-splitting Pt and Au catalysts.10 It has also been proposed that alloying with TiO2 would reduce the bandgap to optimise photon absorption for photocatalytic water splitting.11 The exact bandgap of TiS2 is hard to specify, as a range of values have been put forward across the literature, ranging from −1.5 to 2.5 eV, but it is usually considered to be either a semi-metal (negative bandgap arising from overlapping conduction and valence bands) or a small-gap semiconductor.12,13,14 One possible explanation for this range of values is that the samples of TiS2 being measured had already undergone some small degree of oxidation, or that samples differed stoichiometrically.
Here our interest in the degradation of this material was initiated when, upon opening a dispersion of TiS2, a sulphurous smell was noted. In the presence of water TiS2 could oxidise according to the following reactions:
$$\mathrm{TiS_2} + x\,\mathrm{H_2O} \to \mathrm{TiS}_{2-x}\mathrm{O}_{x} + x\,\mathrm{H_2S}.$$
While in the presence of oxygen gas, the following is proposed:
$$\mathrm{TiS_2} + \mathrm{O_2} \to \mathrm{TiS}_{2-2x}\mathrm{O}_{2x} + \frac{x}{4}\,\mathrm{S_8}.$$
Cucinotta et al.15 performed density functional theory (DFT) calculations on the above reactions, predicting that in an aqueous environment water would preferentially oxidise a monolayer from the edge or from a point defect in the case that the edge was 50% terminated in sulphur. It is worth noting that other possible mechanisms could result in the formation of different species including SO2, but this was not included in these calculations.
This correlated well with work by Han et al.16 who observed similar behaviour in TiS2 nanocrystals in a water/toluene mixture. Here we expand this work to study the oxidation of material produced through liquid phase exfoliation methods. We combine ex situ studies with advanced electron microscopy techniques including in situ oxidation studies to reveal the mechanisms underlying the oxidation of TiS2 nanosheets. Oxidation by water We studied the oxidation of TiS2 by two common oxygen-containing species: deionised water (H2O) and gaseous oxygen (O2). We used ex situ ageing experiments initially to study oxidation over the course of many weeks within aqueous and atmospheric environments. We subsequently conducted in situ gas-heating and vacuum-heating experiments to try and observe oxidation in action, but decoupled from purely thermal effects. Oxidation occurs on the nanoscale, so we used transmission electron microscopy (TEM) to acquire both structural and spectroscopic information with high spatial resolution, in combination with first principles calculations to determine likely reaction mechanisms. Flakes exfoliated by ultra-sonication had thicknesses in the approximate range 10–60 nm as measured by energy filtered transmission electron microscope thickness mapping, taking the electron inelastic mean free path at 300 kV to be 127.4 nm (further information can be found in the supplementary information, Supplementary Fig. 2).17 This range of thickness is typical of flakes exfoliated in this manner without undergoing size-selection, and includes changes in thickness within a single flake.18 Lateral flake dimensions range from several hundred nanometres up to several micrometres. Figure 1 shows the status of a flake of TiS2 immediately after exfoliation. Figure 1a shows an overview of a flake acquired with TEM, the boxed region was imaged with scanning transmission electron microscopy (STEM) in Fig. 1b. Figure 1c–e shows elemental maps. We mapped sulphur with energy-dispersive X-ray spectroscopy (EDX) due to the weak EELS L2,3 edge (Fig. 1e), which is fairly uniform across the entirety of the mapped region. Titanium L2,3 (Fig. 1c) and oxygen K (Fig. 1d) edges were mapped using electron energy loss spectroscopy (EELS). EDX maps of the Ti K, L and O K peaks are provided in Supplementary Fig. 3, but the overlap between the Ti L and O K peaks due to poorer energy resolution means they are not as useful to map as the associated EELS edges and the low counts makes EDX quantification meaningless. As expected, titanium was found uniformly across the flake, and a narrow boarder (~2 nm) of oxygen was observed at flake/step edges, attributed to oxidation of the starting powder prior to exfoliation. The small bright feature present in Fig. 1c, d is attributed to a particle of titanium oxide on top of the sulphide flake, whereas no feature is present at that location in the S map (Fig. 1e). Figure 2 maps across a flake stored in H2O for 21 days, during which time the oxidation region grew ~80 nm into the flake. A band of material was mapped across the flake to identify changes over the flake which are not localised to the edge, which is narrow to minimise the effects of sample-drift. Figure 2a is an HAADF (high-angle annular dark-field) STEM image of part of the TiS2 flake, across which EELS and EDX spectra were acquired. Two regions were identified based on contrast levels and morphology—an outer and an inner region. 
Speckled contrast was observed in the outer region and to a lesser extent across the centre. This is indicative of fragmented pieces, suggesting the flakes are breaking down during the reaction to form a shell. A region of darker HAADF intensity was observed along the border between these regions – implying reduced thickness or mass. It is expected that the thickness of these flakes resulted in the retention of this inner region, due to difficulties in getting water molecules into the middle where the flakes were thickest. Thinner flakes would likely fully oxidise faster due to the relative increase in the number of layers near to the surface. Figure 2b–d shows elemental maps across these regions, consisting of a Ti L2,3 EELS map (2b), an O K EELS map (2c) and a S K EDX map (2d). Figure 2b, c shows Ti and O are present across all areas although both seem reduced in the inner region, while S was confined to only the inner region. Mapping the shift in energy of the maximum of the Ti L3 edge (Fig. 2e) shows a shift of 0.8 eV from the sulphide to the oxide region, an expected chemical shift.19 This chemical shift will be expanded upon later in comparison to other samples. There is a ~5 nm wide intermediary region (green in Fig. 2e) which matches with the decay in sulphur intensity in Fig. 2d, in which a mixture of sulphide and oxide is expected.
State of TiS2 after 21 days oxidation in water. a HAADF STEM of a partially oxidised flake of TiS2 from which B-G were acquired in the location indicated by the orange rectangle, b Ti L3 EELS map, c O K EELS map, d S K EDX map, e map of the peak shift on the Ti L3 edge, f Summed nanoprobe diffraction patterns from sulphide region, g Summed nanoprobe diffraction patterns from oxide region. f and g have been contrast inverted for clarity
A 'parallel nanoprobe' beam was employed to compare the crystallinity between these regions. The nanoprobe had a diameter of ~15 nm, and was used to obtain local diffraction patterns from across the flake, providing a sampling region over 130 times smaller than that obtainable with our smallest selected area electron diffraction aperture. The sulphide region (Fig. 2f) was crystalline, matching TiS2 down (001), while the oxide region (Fig. 2g) was amorphous. Figure 2f, g was obtained by summing the diffraction patterns from their respective regions, and contrast was inverted for clarity. This oxidation process in water matched the way flakes were observed to degrade within the dispersion, shown in Supplementary Fig. 4.
Oxidation by air
An additional oxidation process explored was the reaction with oxygen gas. Figure 3 shows the result of ageing in normal atmospheric conditions for 46 days. This sample showed more oxidation than the as-exfoliated flake shown in Fig. 1. Figure 3a is a STEM image of the edge-region, and fast Fourier transforms (FFTs) (green and red insets to Fig. 3a) demonstrate the loss of crystallinity at the edge. Variations in contrast in STEM are due to both differences in average atomic number and sample thickness. Along the right hand edge, the flake has fully oxidised and appears dark relative to the rest of this area due to titanium oxide being of a lower mean atomic number than titanium disulphide. However, the bright region towards the bottom of the probed region arises from a TiOx particle on top of the TiS2. The oxide region looks very different from that in Fig. 3a, with uniform contrast suggesting a different structure was produced without the small nanoflakes observed in the water-oxidised sample.
We observed strong enhancement of the oxygen signal around the flake edges (Fig. 3c), similar to the 'as-exfoliated' sample in Fig. 1, which had only grown ~5 nm into all layers after 46 days. A line of oxide extending into the flake was observed, suggesting that the oxide can form along boundary defects, but that growth into the bulk of the flake is limited. A shift in the peak of the Ti L3 edge (Fig. 3e) was again observed in the oxide region as in the pristine sample, but now with a larger shift of about 1.5 eV compared to the sulphide. Again, a 5-nm wide intermediary region was observed.
State of TiS2 after 46 days oxidation in air. a HAADF of TiS2 flake from which b–e were acquired. Inserted are FFTs from the middle (green) and edge (red) showing loss in crystallinity in the oxide, b Ti L3 EELS map, c O K EELS map, d S K EDX map, e Map of the shift of the maximum of the Ti L3 edge. Scale bars on b–e are 20 nm
Overall, ageing in water produced significantly more oxidation despite exposure for only half as long as a sample exposed only to air. A possible explanation for this difference is that water molecules dissociate into H+ and OH− ions, creating a more reactive environment. This suggests that to prevent oxidation of TiS2 a desiccated environment may be sufficient, and a good vacuum or an inert gas environment may not be necessary. An aqueous electrolyte, however, would likely lead to premature failure of battery devices.
Oxidation by oxygen + heating
We studied the response of TiS2 flakes to exposure to a 5% oxygen/argon gas mixture at temperatures between 150 and 500 °C. Thermogravimetric analysis showed rapid conversion of TiS2 to TiO2 at 325 °C.14 This heating-induced transformation is of potential importance to alternate battery systems. For example, Na-ion batteries often operate at elevated temperature to compensate for the sluggish movement of the larger ions versus Li ions. In order to better understand this behaviour we used EDX and EELS spectrum imaging and nanoprobe line profiles, in conjunction with a heating/gas in situ TEM holder, to study the oxidation of these materials at high spatial resolution. To minimise initial oxidation, the flakes were dispersed in anhydrous isopropyl alcohol (IPA) in the inert atmosphere of a nitrogen glove-box. The sample was put under vacuum within a few hours of being drop-cast, and a flake was initially inspected at room temperature in vacuum (Supplementary Fig. 5); however, the flakes bore a strong resemblance to those in Fig. 1, with a thin oxide border around all flake steps. The sample was heated up to 150 °C and allowed to settle before the gas mixture was introduced, which produced no obvious changes to the flakes (Fig. 4a). Gas was then introduced at a pressure of 330 mbar, and the temperature increased to 250 °C and then to 375 °C. At 375 °C a rapid and dramatic morphological change occurred in many flakes, at which point the temperature was dropped to 325 °C to prevent further reaction while analysis was performed.
Oxidation at 375 °C. a Initial HAADF of material at 150 °C in vacuum, b HAADF of region from which b–h were acquired after heating to 375 °C in 330 mbar gas, c Ti K EDX map, d O K EDX map, e S K EDX map, f Summation of nanoprobe diffraction patterns from line across edge of flake, contrast inverted for clarity, g Ti L3 EELS map, h O pre-K EELS map, i O K EELS map
Figure 4 demonstrates that the flake undergoes both chemical and morphological changes when heated to 375 °C in this environment. The HAADF contrast (Fig. 4b), formerly uniform across the flake, showed striations running through it parallel to the edges of the flake, and the flake appears to have shrivelled up and possibly delaminated, very different to the pristine starting material.
We saw near-complete loss of sulphur throughout (Fig. 4e), apart from a small region to the top left underneath a much thicker flake. Comparison of the Ti and O EDX maps (Fig. 4c and d) suggests that there is an oxide layer which does not contain Ti around the edge of the flake. This is confirmed by EELS analysis, in which we can distinguish different local environments for oxygen from the shape of the O K edge. By producing a chemical map using the O pre-K edge (π* and σ*) (Fig. 4h) we find this coincides well with the Ti L2,3 map (Fig. 4g). In contrast, the main O K-edge map (Fig. 4i) extends outside the Ti-rich region but agrees with the O EDX map, suggesting the presence of an additional surface oxide layer. A combination of the surface layer and the thickness likely protected the top region from oxidation. Nanoprobe electron diffraction patterns along the line indicated in Fig. 4b were acquired to study the crystallinity. Figure 4f is the sum of diffraction patterns from 12 positions over a distance of 120 nm from the surface into the flake. The apparent superposition of single-crystal patterns shows that the oxide is polycrystalline, while the persistence of the dominant {110} reflections across several steps suggests one grain to be at least 50 nm wide. Of the three main TiO2 polytypes (rutile, anatase and brookite), the reflections observed most closely match rutile observed along the [001] direction, with traces of anatase. At this temperature, anatase would be the expected phase of TiO2, as the anatase-rutile transformation is not expected until 620 °C.20 Both anatase and rutile nanosheets have been observed,21,22,23,24 and 4-layer thick rutile nanosheets are predicted to be stable.25 In situ heating in vacuum showed little change to the flake (expanded upon in Supplementary Fig. 6), showing that it is the presence of O2 that is critical to this transformation.
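As a worked illustration of the kind of d-spacing comparison used to assign the polycrystalline oxide to rutile (with traces of anatase), the short Python sketch below computes tetragonal d-spacings from commonly tabulated lattice parameters. The lattice constants and the example "measured" spacing are standard literature/illustrative values and are not taken from this work.

```python
# Illustrative indexing aid (not the authors' analysis): d-spacings of
# tetragonal TiO2 polymorphs from 1/d^2 = (h^2 + k^2)/a^2 + l^2/c^2.
# Lattice parameters are standard literature values, in angstroms.
import math

PHASES = {
    "rutile": (4.593, 2.959),
    "anatase": (3.785, 9.514),
}

def d_spacing(phase, hkl):
    a, c = PHASES[phase]
    h, k, l = hkl
    inv_d2 = (h**2 + k**2) / a**2 + l**2 / c**2
    return 1.0 / math.sqrt(inv_d2)

# A hypothetical measured ring/spot spacing (in angstroms) is compared against
# a few low-index reflections of each phase; structure-factor extinctions and
# multiplicities are ignored in this rough comparison.
d_measured = 3.25
for phase in PHASES:
    for hkl in [(1, 1, 0), (1, 0, 1), (2, 0, 0)]:
        d = d_spacing(phase, hkl)
        print(f"{phase:7s} {hkl}: d = {d:.3f} A, |diff| = {abs(d - d_measured):.3f} A")
```

For the example spacing above, the rutile (110) reflection (d ≈ 3.25 Å) is the closest match, which is the kind of comparison underlying the phase assignment discussed in the text.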
Analysis of elemental environment from electron energy-loss spectra
Maps of the Ti L3 peak shift relative to its energy in the TiS2 are shown in Figs. 2e and 3e. A shift up in energy is expected upon replacing a Ti–S bond with a Ti–O bond. Oxygen is more electronegative than sulphur, so oxygen will form a more ionic bond, increasing the bond energy. Mapping the Ti L3 peak offers a useful tool to distinguish between regions of the sample that have actually oxidised versus those with a residual surface layer containing oxygen. It also offers some insight into the degree to which a sulphurous region has been oxidised, based on the magnitude of the peak shift. The maximum peak shift observed was 1.5 eV in Fig. 3e; however, this peak shift is reduced in Fig. 2e as a result of some surface oxidation also seen in the O EELS map (Fig. 2c). Figure 5 compares EELS spectra from flakes oxidised in water, air and the in situ heating, showing the Ti L2,3, L1 (minor) and O K edges. These spectra have been normalised to the same integrated intensity for ease of comparison, and split into the Ti L2,3 and O K/Ti L1 regions. It can be seen that while the intensity is relatively flat after the Ti L2,3 edge in spectra A–D post background removal, it increases in spectra E from the in situ experiments. This is due to increased plural scattering from the SiN windows encasing the gas in the in situ holder, adding another 80 nm to the overall specimen thickness. Adding this to the flake thickness approaches the inelastic mean free path of TiS2—128 nm at 300 kV. This worsens the energy resolution, spreading the zero-loss peak to a full width half-max of 1.3 eV, so we were not able to resolve most of the fine detail around the various edges.
Comparison of Ti and O EELS edges from across different samples of oxidised TiS2. a From the sulphide region of the water oxidised flake in Fig. 2, b from the oxide region of the water oxidised flake in Fig. 2, c from the sulphide region of the air oxidised flake in Fig. 3, d from the oxide region of the air oxidised flake in Fig. 3, e from the oxide region of the in situ oxidised flake in Fig. 4. Inset: concentration of Ti4+ within oxide spectra
The energy loss near edge structure can act as a fingerprint for specific polymorphs of a given structure, such as the various phases of TiO2—anatase, rutile and brookite—and can also differentiate other oxides of varying stoichiometry.26,27,28 Stoyanov et al.28 proposed the following empirical equation to determine the concentration of Ti4+ within a titanium oxide:
$$x = \left[\ln\left(\frac{a - 0.87953}{0.21992}\right)\right] \times 0.21767.$$
In this equation, x is the Ti4+ concentration (Ti4+/ΣTi4+) and a is the white line intensity ratio I(L2)/I(L3), based on the intensity within two 1 eV energy windows—an L3 window centred around the first white line in the L3 edge in a pure Ti3+ sample (455.8 to 456.8 eV), and an L2 window centred around the second white line in the L2 edge in a pure Ti4+ sample (465.25–466.25 eV). This analysis was performed on the three oxides in Fig. 5 (extracted from the corresponding oxide regions in Figs. 2–4), produced after exposure to water (Fig. 5b), atmosphere (Fig. 5d), and oxygen at 375 °C (Fig. 5e). It showed that in all cases we have a mixture of Ti3+ and Ti4+ ions present within the oxides. This suggests that oxidation of TiS2 does not form pure TiO2; instead, we appear to have the formation of an intermediary or non-stoichiometric oxide. This could explain the disordered structures observed in both imaging and diffraction (either amorphous or nanocrystalline). Standard EELS quantification within Digital Micrograph of the oxides in Fig. 5b (water) and Fig. 5d (air) returned Ti:O ratios of 47.2:52.8 and 43.3:56.7, implying formation of an oxide with a stoichiometry close to Ti3O4. While there is no known oxide with this formula,29 two close oxides are TiO and Ti2O3.28 The spectra show agreement with Ti2O3, with a reasonable match in the position of the L3,2 maxima, while the lack of crystallinity would likely cause broadening of the peaks, making it hard to resolve the expected splitting.28 However, this stoichiometry would suggest the valence of Ti to be a mixture of Ti3+ and Ti2+; TiO would not be expected to be found at room temperature, typically requiring temperatures of nearly 1000 °C to form, and would oxidise almost immediately upon exposure to air to TiO2.30, 31 Figure 5e was not considered for quantification due to the rising background resulting from the plural scattering.
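To make the two EELS analyses above concrete — per-pixel mapping of the Ti L3 peak position, and conversion of the white-line intensity ratio a = I(L2)/I(L3) into a Ti4+ fraction via the Stoyanov et al. relation — a minimal Python sketch follows. The spectrum image, energy axis, search window and example ratio are synthetic, illustrative assumptions, not the acquisition settings or data used in this study.

```python
# Illustrative sketch of the EELS analyses discussed above; the spectrum image
# `si` and the energy axis are synthetic stand-ins, not experimental data.
import numpy as np

def ti_l3_peak_shift_map(si, energies, window=(455.0, 462.0), e_ref=None):
    """Map the Ti L3 peak position (eV) for each pixel of a spectrum image.

    si       : array (ny, nx, n_energy) of background-subtracted EELS intensities
    energies : array (n_energy,) of energy-loss values in eV
    window   : assumed energy range searched for the L3 maximum
    e_ref    : reference peak energy (e.g. from the sulphide region); if given,
               the returned map is the shift relative to it.
    """
    mask = (energies >= window[0]) & (energies <= window[1])
    idx = np.argmax(si[..., mask], axis=-1)
    peak_e = energies[mask][idx]
    return peak_e if e_ref is None else peak_e - e_ref

def ti4_fraction(white_line_ratio):
    """Ti4+/(total Ti) from the white-line intensity ratio a = I(L2)/I(L3),
    using the empirical relation of Stoyanov et al. (Am. Mineral. 92, 577, 2007)
    quoted in the text: x = ln((a - 0.87953)/0.21992) * 0.21767."""
    a = np.asarray(white_line_ratio, dtype=float)
    return np.log((a - 0.87953) / 0.21992) * 0.21767

# Tiny synthetic example: a 2x2 spectrum image with a Gaussian L3 peak that
# shifts from 457.0 eV ("sulphide-like" pixels) to 458.5 eV ("oxide-like").
energies = np.arange(450.0, 470.0, 0.1)
centres = np.array([[457.0, 457.0], [458.5, 458.5]])
si = np.exp(-((energies - centres[..., None]) ** 2) / (2 * 0.5**2))

print(ti_l3_peak_shift_map(si, energies, e_ref=457.0))  # shifts of ~0 and ~1.5 eV
print(ti4_fraction(3.0))  # an example window ratio of ~3 gives x close to 0.5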
Thermodynamics of reaction with oxygen gas
To investigate the difference in reactivity between O2 and H2O, we performed calculations to determine the thermodynamics of oxidation of TiS2 by oxygen gas. In the first step of this reaction, the oxygen molecule must physisorb on the TiS2 edge, which is endothermic (−0.28 eV). By contrast, physisorption of a water molecule on a TiS2 edge is quite exothermic (+0.76 eV). Oxygen could then either substitute for sulphur (+2.00 eV at the edge and +1.98 eV at the centre of a nanoflake) or fill a pre-existing sulphur vacancy (the most exothermic at +3.39 eV), both significantly more exothermic than the corresponding reaction with water (0.12, 0.11 and 0.21 eV, respectively). Generation of a sulphur vacancy is strongly endothermic however (−1.4 eV), and under the proposed mechanism, filling one would not result in the formation of a new vacancy (in contrast to the chaining reaction with water).15 Filling a vacancy would also result in the formation of a lone oxygen atom that would then either need a second vacancy to enter the structure, or combination with another lone oxygen atom to form a new O2 molecule (we did not investigate this step). The low reactivity of O2 at room temperature is therefore believed to be due to the relatively large barrier in bringing O2 to the surface and then removing a sulphur atom, which is easier to overcome as thermal energy becomes more plentiful upon heating. It might be possible to have some degree of control over the electronic properties of a sulphide-oxide hybrid material by engineering the band gap. Calculations have shown the band gap can be fine-tuned between the extremes of pure TiS2 and TiO2 via oxygen substitution,15 which might also explain why such a range of values exist for the measured band-gap. Ageing in water would provide one method of controlling the formation of oxide, although it would be isolated to the edges. Flash heating with oxygen might provide an alternative, if a flake could be controllably damaged such that oxygen vacancies pocket the surface, possibly through ion- or electron-irradiation. Small regions of oxide dispersed among the sulphide could therefore potentially be formed.
Possible oxidation mechanism
The mechanism by which water oxidises TiS2 could be envisaged as having two stages. During stage 1, sulphur in the outermost surface regions is exchanged with oxygen to produce the hydrogen sulphide smelled upon opening vials of the dispersion. This has previously been demonstrated theoretically,15 and the mechanism is reproduced in Supplementary Fig. 7. Later, titanium in non-surface layers seems to diffuse out to react with oxygen, leading to titanium depletion within the core. The depletion region appears dark in both the HAADF intensity (Fig. 2a) and Ti EDX maps (Fig. 2b), and is rich in sulphur relative to titanium. This would be consistent with the nanoscale Kirkendall effect, whereby atoms diffuse via vacancy exchange, with preferential diffusion of cations outwards over anions inwards.32,33,34 Reaction with water at the flake edge disturbs the crystallinity, generating Ti4+ vacancies. These vacancies diffuse away from the edge as fresh Ti4+ moves towards the edge to react with O2−, leaving sulphur to recombine and leave afterwards. The formation of the oxide layer hinders both the removal of sulphur and inward diffusion of oxygen, leading to the formation of the depletion region observed. The apparent increase in thickness, observed from the increase in intensity in Fig. 2b, would likely be due to formation of a less dense amorphous structure.
This swelling of layers as they oxidise would serve to inhibit transport to the interface resulting in the slow rate of degradation of layers away from the surface, which would have the greatest impact on thicker flakes. A schematic representing this mechanism is shown in Supplementary Fig. 8. We observed edge oxidation to be strongly preferential versus bulk suggesting that diffusion between layers is much faster than through layers. The fragmented nature of the oxidised region could arise during the removal of sulphur, if the sulphurous species act to isolate nucleation of tiny oxide domains. This contrasts with the oxide morphology seen in the oxygen-gas-heating experiments. There oxidation seems to result in the shrinking and crinkling up of the flake, albeit still with a polycrystalline structure. This could be due to large amounts of strain being rapidly induced in the flakes by the reaction. The elevated temperature makes all species more mobile, which is likely the reason behind both the higher rate of oxidation and the higher rate of sulphur recombination and removal. An alternative explanation for the striped appearance of the oxide is that oxide nucleates and starts to grow, dragging titanium towards it while expelling sulphur away. The sulphur-rich regions eventually combine into gaseous species lost into the atmosphere in the holder, leaving behind regions with much less material than those around the original nucleation sites. In conclusion, we studied the oxidation of TiS2 in a range of environments using a range of advanced electron microscopy techniques. The kinetic barrier for oxygen gas oxidation appears to be higher than that for oxidation via water. Water was observed to slowly oxidise TiS2 flakes, with oxidation starting at the flake edge and moving inwards, forming an amorphous oxide of Ti3+ and Ti4+. In contrast, the flakes oxidise slowly in atmosphere or vacuum conditions at high temperatures, suggesting moisture to be the critical element to remove from their environment in order to ensure long term stability. TiS2 was fully oxidised when heated above 375 °C in oxygen gas, using an in situ set-up, whereupon flakes oxidised immediately to form poly-crystalline material. This work demonstrates the potential capabilities of cutting-edge in situ TEM holders for detailed characterisation of two-dimensional nanomaterials degradation behaviour within relevant environmental conditions not commonly available within a transmission electron microscope. In particular simultaneous EDX and EELS spectral imaging has been demonstrated in a gas-cell operating at a pressure of 330 mbar, despite the challenges presented by a combination of 80 nm of electron-transparent window material and a gaseous environment. Mapping of the shift in the Ti L3 in particular offers a valuable method of distinguishing regions of oxide, sulphide, or a mixture of the two species. This is particularly useful when surface oxygen species would otherwise skew elemental mapping. DFT calculations found the oxidation of TiS2 by both water and oxygen gas to be thermodynamically favourable, with an edge site being favourable to a vacancy site due to the lack of chain reaction to progress the reaction at the vacancy. Commercial TiS2 powder was acquired from Sigma Aldrich (99.9% Lot#333492) and stored in an argon filled glove box with both oxygen and moisture readings <1 ppm. Five milligrams of TiS2 were added to 50 ml anhydrous IPA (99.5% Sigma Aldrich Lot#278475). 
The unexfoliated dispersion was then exfoliated using a Fisherbrand ultrasonic disintegrator operating at 20 kHz and 34 W. The temperature was kept at a constant 10 °C using a circular IsoUK 4100 R20 refrigeration bath. The dispersion was sonicated for 3 h and quickly placed inside a centrifuge tube. The dispersion was centrifuged at 1000 RPM for 1 h and the supernatant was decanted and stored in the Argon-filled glove box. For the ex situ ageing experiments, the dispersion was drop-cast onto gold TEM grids, while for the in situ experiments it was drop-casted onto specialised Si-Si3N4 windowed environmental cell chips. The later are used with the in situ gas TEM holder and allow the sample to be viewed at high resolution while being exposed to elevated temperatures (up to 900 °C) and pressures up to 1 bar. The upper environmental cell chips consist of a heating element surrounding 30-nm thick Si3N4 windows through which the electron beam can pass and the sample can be viewed. Material was drop-casted onto these windows for heating and imaging. The microscope used for Figs. 1 and 2 was an FEI Titan 80–300 S/TEM, operated mainly in STEM mode at 300 kV, with an EDAX EDX detector and a Gatan imaging filter (GIF) for acquiring EEL spectra. The convergence angle was 10 mrad, and HAADF collection angles were 39–200 mrad while EEL spectra were acquired with a collection angle of 21 mrad and an energy dispersion of 0.1 eV. Figure 3 was acquired on a NION UltraSTEM operated at 200 kV with a Brucker EDX detector and Enfinia EELS spectrometer. The convergence angle was 27 mrad, collection angle of the HAADF detector 99–200 mrad, and EEL spectra were acquired with a collection angle of 15 mrad and an energy dispersion of 0.1 eV. The procedure for ex situ hydration of TiS2 was to image flakes immediately after dispersion, then store the grid in deionised water between imaging sessions. The grid was dried in an Across International drying oven (V0-16020C3-Prong) at ~5 × 10−2 mbar for an hour prior to imaging to remove as much excess water from the grid as possible, and then returned to DI water immediately afterwards. Attempts were made to re-image the same flakes, but build-up of carbonaceous contamination would coat imaged areas, retarding or fully preventing further oxidation. However, the range of flakes studied was broadly similar in both size and thickness, so reasonable comparisons could be drawn between them. In situ oxidation experiments have been performed using a Protochips Atmosphere system and a FEI Titan 80–200 ChemiSTEM equipped with probe-side aberration correction, anX-FEG electron source and four windowless SDD detectors. The experiment was performed using an acceleration voltage of 200 kV, a beam current of 600 pA and a convergence angle of 21 mrad. EDX spectrum imaging was performed with the specimen tilted at 25° to prevent shadowing of the X-rays emitted from the specimen by the specimen holder. In this configuration, two of the four (Super-X) EDX detectors were used for the acquisition, providing a solid angle of about 0.4 sr. EEL spectra were acquired using a GIF Quantum with an energy dispersion of 0.1 eV and a collection angle of 62 mrad, providing an effective energy resolution of 1.3 eV. In standard imaging mode, the HAADF collection angles were 62–142 mrad, while in spectroscopy imaging mode the HAADF collection angles were 60–190 mrad. The gas environmental cell holder was run with an oxygen/argon mix at 290–350 mbar operated between 150–500 °C. 
The holder had been modified previously, similar to the modifications reported for the liquid-cell holder,35, 36 in order to significantly reduce the shadowing of the holder for two of the four available EDX detectors. The internal environment of the holder was shielded from the vacuum of the microscope by two silicon nitride windows, 30 and 50-nm thick. Selecting an appropriate sample size is a crucial step in designing a successful study. Electron microscopy studies were hereby designed to be representative of the sample. We based this study on multiple samples prepared exactly in the same way. Samples were individually screened at the microscope to make sure our findings were representative. Sheets thickness dependence is discussed in the manuscript. Computational approach DFT37, 38 calculations were performed with the CP2K package using the PBE (Perdew–Becke–Ernzerhof) exchange and correlation (XC) functional.39 CP2K uses Goedecker-type pseudopotentials and expands Kohn–Sham orbitals using a combination of a mixed Gaussian-type and plane-wave basis set.40 The atomic orbitals of the atoms involved (O, H, S, and Ti) were expanded in DZVP and DZV (double-zeta valence polarised and double-zeta valence) Gaussian-type basis sets, while charge density was expanded in plane-waves with a density cut-off of 800 Ry. An ideal TiS2 monolayer was modelled using a hexagonal TiS2 (1,1,1) 8 × 8 supercell containing 192 atoms, in which layers were periodically separated in the z-direction by a vacuum region 10.7 Å wide. A 4 × 8 supercell of TiS2 was created to model a nanoflake, continuous in the y-direction and truncated in the x-direction, such that periodically repeated flakes in the x- (z-direction) direction were separated by 15.37 (10.73) Å. This supercell was created to be orthorhombic, replicating a (√3,1) rectangular unit cell over the primitive hexagonal unit cell. This nanoflake was created to study structural and oxidation properties at the edges, which were modelled with 50% S-passivation.15 The structure of this edge was a zig-zag structure, with S atoms lying across the Ti terminal row such that the edge appears tilted. This has the effect of leaving the Ti atoms accessible to the oxidising molecules—gaseous oxygen and water. We determined the cell to be sufficiently large to be a good representation of the Brillouin zone at Γ, by comparison of edge geometry and oxidation energies between a 4 × 8 and a 4 × 20 supercell. Substitution of an O atom into the middle of the nanoflake was also performed to check that the same reaction energy was obtained for an infinite surface. Winter, M., Besenhard, J. O., Spahr, M. E. & Novák, P. Insertion electrode materials for rechargeable lithium batteries. Adv. Mater. 10, 725–763 (1998). Whittingham, M. S. & Jacobson, A. J. High energy density plural chalcogenide cathode-containing cell. US Patent 4,233,375 (1979). Whittingham, M. & Jacobson, A. J. A mixed rate cathode for lithium batteries. J. Electrochem. Soc. 128, 485–486 (1981). Whittingham, M. Lithium batteries and cathode materials. Chem. Rev. 104, 4271–4301 (2004). Kanno, R., Takeda, Y., Imura, M. & Yamamoto, O. Rechargeable solid electrolyte cells with a copper ion conductor, Rb4Cu16I7−δCl13+δ, and a titanium disulphide cathode. J. Appl. Electrochem. 12, 681–685 (1982). Brandt, K. Historical development of secondary lithium batteries. Solid State Ion. 69, 173–183 (1994). Seh, Z. W. et al. 
Two-dimensional layered transition metal disulphides for effective encapsulation of high-capacity lithium sulphide cathodes. Nat. Commun. 5, 5017 (2014). Trevey, J. E., Stoldt, C. R. & Lee, S.-H. High power nanocomposite TiS2 cathodes for all-solid-state lithium batteries. J. Electrochem. Soc. 158, A1282–A1289 (2011). Ma, L. et al. Hybrid cathode architectures for lithium batteries based on TiS2 and sulfur. J. Mater. Chem. A 3, 19857–19866 (2015). Zeng, Z., Tan, C., Huang, X., Bao, S. & Zhang, H. Growth of noble metal nanoparticles on single-layer TiS2 and TaS2 nanosheets for hydrogen evolution reaction. Energy Environ. Sci. 7, 797–803 (2014). Yang, C., Hirose, Y., Nakao, S., Hoang, N. L. H. & Hasegawa, T. Metal-induced solid-phase crystallization of amorphous TiO2 thin films. Appl. Phys. Lett. 101, 52101 (2012). Myron, H. & Freeman, A. Electronic structure and optical properties of layered dichalcogenides: TiS2 and TiSe2. Phys. Rev. B 9, 481–486 (1974). Greenaway, D. L. & Nitsche, R. Preparation and optical properties of group IV–VI2 chalcogenides having the CdI2 structure. J. Phys. Chem. Solids 26, 1445–1458 (1965). McKelvy, M. J. & Glaunsinger, W. S. Synthesis and characterization of nearly stoichiometric titanium disulfide. J. Solid State Chem. 66, 181–188 (1987). Cucinotta, C. S. et al. Electronic properties and chemical reactivity of TiS2 nanoflakes. J. Phys. Chem. C 119, 15707–15715 (2015). Han, J. H. et al. Unveiling chemical reactivity and structural transformation of Two-dimensional layered nanocrystals. J. Am. Chem. Soc. 135, 3736–3739 (2013). Malis, T., Cheng, S. C. & Egerton, R. F. EELS log-ratio technique for specimen-thickness measurement in the TEM. J. Electron. Microsc. Tech. 8, 193–200 (1988). Lotya, M. et al. Liquid phase production of graphene by exfoliation of graphite in surfactant/water solutions. J. Am. Chem. Soc. 131, 3611–3620 (2009). Egerton, R. F. Electron energy-loss spectroscopy in the TEM. Rep. Prog. Phys. 72, 16502 (2009). Mckelvy, M. J., Claunsinger, W. S. & Ouvrard, G. Titanium Disulfide 6 (Wiley, 2007). Yu, J., Fan, J. & Lv, K. Anatase TiO2 nanosheets with exposed (001) facets: improved photoelectric conversion efficiency in dye-sensitized solar cells. Nanoscale 2, 2144–2149 (2010). Sheng, L., Liao, T., Kou, L. & Sun, Z. Single-crystalline ultrathin 2D TiO2 nanosheets: a bridge towards superior photovoltaic devices. Mater. Today Energy 3, 32–39 (2017). Peng, C. W. et al. Interconversion of rutile TiO2 and layered ranisdellite-like titanates: New route to elongated mesoporous rutile nanoplates. Cryst. Growth Des. 8, 3555–3559 (2008). Zhang, Y. et al. Raman study of 2D anatase TiO2 nanosheets. Phys. Chem. Chem. Phys. 18, 32178–32184 (2016). He, T. et al. Layered titanium oxide nanosheet and ultrathin nanotubes: A first-principles prediction. J. Phys. Chem. C 113, 13610–13615 (2009). Brydson, R. et al. Electron energy loss and X-ray absorption spectroscopy of rutile and anatase: a test of structural sensitivity. J. Phys. Condens. Matter. 1, 797–812 (1989). Brydson, R., Sauer, H., Engel, W. & Hofer, F. Electron energy-loss near-edge structures at the oxygen Kedges of titanium (IV) oxygen compounds. J. Phys. Condens. Matter. 4, 3429–3427 (1992). Stoyanov, E., Langenhorst, F. & Steinle-Neumann, G. The effect of valence state and site geometry on Ti L3,2 and O K electron energy-loss spectra of TixOy phases. Am. Minerol. 92, 577–586 (2007). Pearson, A. D. Studies on the lower oxides of titanium. J. Phys. Chem. Solids 5, 316–327 (1958). Watanabé, D., Castles, J. 
R., Jostsons, A. & Malin, A. S. The ordered structure of TiO. Acta Crystallogr. 23, 307–313 (1967). Bartkowski, S. et al. Electronic structure of titanium monoxide. Phys. Rev. B 56, 10656–10667 (1997). Yin, Y. et al. Formation of hollow nanocrystals through the nanoscale Kirkendall effect. Science 304, 711–714 (2004). Li, X., Li, M., Liang, J., Wang, X. & Yu, K. Growth mechanism of hollow TiO2(B) nanocrystals as powerful application in lithium-ion batteries. J. Alloys Compd. 681, 471–476 (2016). Liang, J. et al. Fabrication of TiO2 hollow nanocrystals through the nanoscale Kirkendall effect for lithium-ion batteries and photocatalysis. New J. Chem. 39, 3145–3149 (2015). Lewis, E. A. et al. Real-time imaging and local elemental analysis of nanostructures in liquids. Chem. Commun. 50, 10019–10022 (2014). Zaluzec, N. J., Burke, M. G., Haigh, S. J. & Kulzick, M. A. X-ray energy-dispersive spectrometry during in situ liquid cell studies using an analytical electron microscope. Microsc. Microanal. 20, 323–329 (2014). Hohenberg, P. & Kohn, W. Inhomgenous electron gas. Phys. Rev. B 136, B864–B871 (1964). Kohn, W. & Sham, L. J. Self consistent equations including exchange and correlation effects. Phys. Rev. 385, A1133–A1138 (1965). Vandevondele, J. et al. Quickstep: Fast and accurate density functional calculations using a mixed Gaussian and plane waves approach. Comput. Phys. Commun. 167, 103–128 (2005). Goedecker, S., Teter, M. & Hutter, J. Separable dual-space Gaussian pseudopotentials. Phys. Rev. B 54, 1703–1710 (1996). E.L., S.O.′B. and V.N. wish to thank the support of the SFI PIYRA and AMBER grants, the European Research Council (2DNanoCaps and 3D2D Print projects) and the EU ITN MoWSeS project. The Advanced Microscopy Laboratory (AML) and its staff (in particular CD) are thanked for their assistance in electron microscopy in Dublin. S.S. and C.S.C. have been supported by the European Research Council (Quest-project). All calculations were performed on the Parsons cluster maintained by the Trinity Centre for High Performance Computing, under project id: HPC_12_0722. This cluster was funded through grants from Science Foundation Ireland. SJH, EP and EAL thank the Defence Threat Reduction Agency under grant HDTRA1-12-1-0013. The Titan at Manchester was funded with support from HM Government (UK) and is associated with research capability of the Nuclear Advanced Manufacturing Research Centre. CRANN and AMBER Research Centres, Dublin, Dublin 2, Ireland Edmund Long, Clive Downing, Clotilde S. Cucinotta, Stefano Sanvito & Valeria Nicolosi School of Physics, Trinity College Dublin, Dublin 2, Ireland Edmund Long, Clotilde S. Cucinotta & Stefano Sanvito Centre for Bionano Interactions, University College Dublin, Belfield, Dublin 4, Ireland School of Materials, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK Edward A. Lewis, Eric Prestat & Sarah J. Haigh School of Chemistry, Trinity College Dublin, Dublin 2, Ireland Valeria Nicolosi Edmund Long Edward A. Lewis Eric Prestat Clive Downing Clotilde S. Cucinotta Stefano Sanvito Sarah J. Haigh E.L. wrote the first draft of the paper, and performed the ex situ experiments. E.A.L., E.P., S.J.H., C.D. and E.L. performed the in situ experiments. C.S.C. and S.S. performed simulations of the oxidation. S.O.′B prepared the dispersions. V.N. and E.L. planned the experiments. All authors contributed to the final version. Correspondence to Valeria Nicolosi. The authors declare that they have no competing financial interests. 
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Long, E., O'Brien, S., Lewis, E.A. et al. An in situ and ex situ TEM study into the oxidation of titanium (IV) sulphide. npj 2D Mater Appl 1, 22 (2017). https://doi.org/10.1038/s41699-017-0024-4 Revised: 08 June 2017 Accepted: 16 June 2017 An Innovative Process for Production of Ti Metal Powder via TiSx from TiN Eltefat Ahmadi Ryosuke O. Suzuki Metallurgical and Materials Transactions B (2020) Editorial Summary TiS2 oxidation: moisture is the critical element preventing environmental stability The degradation dynamics of TiS2 reveal that the flakes oxidise in water and atmosphere, pointing towards moisture as the key driving force. A team led by Valeria Nicolosi at Trinity College Dublin used advanced electron microscopy techniques to investigate the influence of different environments on the deterioration pathways of TiS2, a promising candidate for future energy storage applications. By comparing the effect of ex-situ oxidation by water and oxygen at room temperature, and in-situ oxidation at high temperatures, water was proven to effectively oxidise the TiS2 flakes from the edges thereby forming an amorphous oxide phase. Conversely, the degradation was found to proceed more slowly in atmosphere or vacuum conditions. These results suggest that TiS2 oxidation could be avoided in a desiccated environment that would prevent water molecules from dissociating in reactive ionic species. npj 2D Materials and Applications (npj 2D Mater Appl) ISSN 2397-7132 (online)
Risk Factors for Mortality and Progression to Severe COVID-19 Disease in the Southeast United States (US): A Report from the SEUS Study Group Athena L. V. Hobbs, Nicholas Turner, Imad Omer, Morgan K. Walker, Ronald M. Beaulieu, Muhammad Sheikh, S. Shaefer Spires, Christina T. Fiske, Ryan Dare, Salil Goorha, Priyenka Thapa, John Gnann, Jeffrey Wright, George E. Nelson Journal: Infection Control & Hospital Epidemiology / Accepted manuscript Published online by Cambridge University Press: 11 January 2021, pp. 1-33 Identify risk factors that could increase progression to severe disease and mortality in hospitalized SARS-CoV-2 patients in the Southeast US. Multicenter, retrospective cohort including 502 adults hospitalized with laboratory-confirmed COVID-19 between March 1, 2020 and May 8, 2020 within one of 15 participating hospitals in 5 health systems across 5 states in the Southeast US. The study objectives were to identify risk factors that could increase progression to hospital mortality and severe disease (defined as a composite of intensive care unit admission or requirement of mechanical ventilation) in hospitalized SARS-CoV-2 patients in the Southeast US. 
A total of 502 patients were included, and the majority (476/502, 95%) had clinically evaluable outcomes. Hospital mortality was 16% (76/476), while 35% (177/502) required ICU admission, and 18% (91/502) required mechanical ventilation. By both univariate and adjusted multivariate analysis, hospital mortality was independently associated with age (adjusted odds ratio [aOR] 2.03 for each decade increase, 95% CI 1.56-2.69), male sex (aOR 2.44, 95% CI: 1.34-4.59), and cardiovascular disease (aOR 2.16, 95% CI: 1.15-4.09). As with mortality, risk of severe disease was independently associated with age (aOR 1.17 for each decade increase, 95% CI: 1.00-1.37), male sex (aOR 2.34, 95% CI 1.54-3.60), and cardiovascular disease (aOR 1.77, 95% CI 1.09-2.85). In an adjusted multivariate analysis, advanced age, male sex, and cardiovascular disease increased risk of severe disease and mortality in patients with COVID-19 in the Southeast US. In-hospital mortality risk doubled with each subsequent decade of life. Safety at Sea during the Industrial Revolution Morgan Kelly, Cormac Ó Gráda, Peter M. Solar Journal: The Journal of Economic History , First View Shipping, central to the rise of the Atlantic economies, was an extremely hazardous activity. Between the 1780s and 1820s, a safety revolution occurred that saw shipping losses and insurance rates on oceanic routes almost halved thanks to steady improvements in shipbuilding and navigation. Copper sheathing, iron reinforcing, and flush decks were the major innovations in shipbuilding. Navigation improved, not through chronometers, which remained too expensive and unreliable for general use, but through radically improved charts, accessible manuals of basic navigational techniques, and improved shore-based navigational aids. "Curse thee, thou quadrant!" dashing it to the deck, "no longer will I guide my earthly way by thee; the level ship's compass, and the level dead-reckoning, by log and by line; these shall conduct me, and show me my place on the sea." Captain Ahab in Moby Dick, Ch. CXIII The Qualitative Transparency Deliberations: Insights and Implications Alan M. Jacobs, Tim Büthe, Ana Arjona, Leonardo R. Arriola, Eva Bellin, Andrew Bennett, Lisa Björkman, Erik Bleich, Zachary Elkins, Tasha Fairfield, Nikhar Gaikwad, Sheena Chestnut Greitens, Mary Hawkesworth, Veronica Herrera, Yoshiko M. Herrera, Kimberley S. Johnson, Ekrem Karakoç, Kendra Koivu, Marcus Kreuzer, Milli Lake, Timothy W. Luke, Lauren M. MacLean, Samantha Majic, Rahsaan Maxwell, Zachariah Mampilly, Robert Mickey, Kimberly J. Morgan, Sarah E. Parkinson, Craig Parsons, Wendy Pearlman, Mark A. Pollack, Elliot Posner, Rachel Beatty Riedl, Edward Schatz, Carsten Q. Schneider, Jillian Schwedler, Anastasia Shesterinina, Erica S. Simmons, Diane Singerman, Hillel David Soifer, Nicholas Rush Smith, Scott Spitzer, Jonas Tallberg, Susan Thomson, Antonio Y. Vázquez-Arroyo, Barbara Vis, Lisa Wedeen, Juliet A. Williams, Elisabeth Jean Wood, Deborah J. Yashar Journal: Perspectives on Politics , First View In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. 
While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups' final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency's promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.1 Reliability of nonlocalizing signs and symptoms as indicators of the presence of infection in nursing-home residents Theresa A. Rowe, Robin L.P. Jump, Bjørg Marit Andersen, David B. Banach, Kristina A. Bryant, Sarah B. Doernberg, Mark Loeb, Daniel J. Morgan, Andrew M. Morris, Rekha K. Murthy, David A. Nace, Christopher J. Crnich Journal: Infection Control & Hospital Epidemiology , First View Published online by Cambridge University Press: 09 December 2020, pp. 1-10 View extract Antibiotics are among the most common medications prescribed in nursing homes. The annual prevalence of antibiotic use in residents of nursing homes ranges from 47% to 79%, and more than half of antibiotic courses initiated in nursing-home settings are unnecessary or prescribed inappropriately (wrong drug, dose, or duration). Inappropriate antibiotic use is associated with a variety of negative consequences including Clostridioides difficile infection (CDI), adverse drug effects, drug–drug interactions, and antimicrobial resistance. In response to this problem, public health authorities have called for efforts to improve the quality of antibiotic prescribing in nursing homes. Recreating the OSIRIS-REx slingshot manoeuvre from a network of ground-based sensors Trent Jansen-Sturgeon, Benjamin A. D. Hartig, Gregory J. Madsen, Philip A. Bland, Eleanor K. Sansom, Hadrien A. R. Devillepoix, Robert M. Howie, Martin Cupák, Martin C. Towner, Morgan A. Cox, Nicole D. Nevill, Zacchary N. P. Hoskins, Geoffrey P. Bonning, Josh Calcino, Jake T. Clark, Bryce M. Henson, Andrew Langendam, Samuel J. Matthews, Terence P. McClafferty, Jennifer T. Mitchell, Craig J. O'Neill, Luke T. Smith, Alastair W. Tait Journal: Publications of the Astronomical Society of Australia / Volume 37 / 2020 Published online by Cambridge University Press: 27 November 2020, e049 Optical tracking systems typically trade off between astrometric precision and field of view. 
In this work, we showcase a networked approach to optical tracking using very wide field-of-view imagers that have relatively low astrometric precision on the scheduled OSIRIS-REx slingshot manoeuvre around Earth on 22 Sep 2017. As part of a trajectory designed to get OSIRIS-REx to NEO 101955 Bennu, this flyby event was viewed from 13 remote sensors spread across Australia and New Zealand to promote triangulatable observations. Each observatory in this portable network was constructed to be as lightweight and portable as possible, with hardware based off the successful design of the Desert Fireball Network. Over a 4-h collection window, we gathered 15 439 images of the night sky in the predicted direction of the OSIRIS-REx spacecraft. Using a specially developed streak detection and orbit determination data pipeline, we detected 2 090 line-of-sight observations. Our fitted orbit was determined to be within about 10 km of orbital telemetry along the observed 109 262 km length of OSIRIS-REx trajectory, and thus demonstrating the impressive capability of a networked approach to Space Surveillance and Tracking. Frontiers in hybrid and interfacial materials chemistry research Beth S. Guiton, Morgan Stefik, Veronica Augustyn, Sarbajit Banerjee, Christopher J. Bardeen, Bart M. Bartlett, Jun Li, Vilmalí López-Mejías, Leonard R. MacGillivray, Amanda Morris, Efrain E. Rodriguez, Anna Cristina S. Samia, Haoran Sun, Peter Sutter, Daniel R. Talham Journal: MRS Bulletin / Volume 45 / Issue 11 / November 2020 Print publication: November 2020 Through diversity of composition, sequence, and interfacial structure, hybrid materials greatly expand the palette of materials available to access novel functionality. The NSF Division of Materials Research recently supported a workshop (October 17–18, 2019) aiming to (1) identify fundamental questions and potential solutions common to multiple disciplines within the hybrid materials community; (2) initiate interfield collaborations between hybrid materials researchers; and (3) raise awareness in the wider community about experimental toolsets, simulation capabilities, and shared facilities that can accelerate this research. This article reports on the outcomes of the workshop as a basis for cross-community discussion. The interdisciplinary challenges and opportunities are presented, and followed with a discussion of current areas of progress in subdisciplines including hybrid synthesis, functional surfaces, and functional interfaces. The roles of disgust and harm perception in political attitude moralization Daniel C. Wisneski, Brittany E. Hanson, G. Scott Morgan Journal: Politics and the Life Sciences / Volume 39 / Issue 2 / Fall 2020 Print publication: Fall 2020 What causes people to see their political attitudes in a moral light? One answer is that attitude moralization results from associating one's attitude stance with feelings of disgust. To test the possibility that disgust moralizes, the current study used a high-powered preregistered design looking at within-person change in moral conviction paired with an experimental manipulation of disgust or anger (versus control). Results from the preregistered analyses found that we successfully induced anger but not disgust; however, our manipulation had no effect on moral conviction. 
Additional exploratory analyses investigating whether emotion and harm predicted increases in moral conviction over time found that neither disgust, anger, nor sadness had an effect on moralization, whereas perceptions of harm did predict moralization. Our findings are discussed in terms of their implications for current theory and research into attitude moralization. The incidence of psychotic disorders among migrants and minority ethnic groups in Europe: findings from the multinational EU-GEI study Fabian Termorshuizen, Els van der Ven, Ilaria Tarricone, Hannah E. Jongsma, Charlotte Gayer-Anderson, Antonio Lasalvia, Sarah Tosato, Diego Quattrone, Caterina La Cascia, Andrei Szöke, Domenico Berardi, Pierre-Michel Llorca, Lieuwe de Haan, Eva Velthorst, Miguel Bernardo, Julio Sanjuán, Manuel Arrojo, Robin M. Murray, Bart P. Rutten, Peter B. Jones, Jim van Os, James B. Kirkbride, Craig Morgan, Jean-Paul Selten Published online by Cambridge University Press: 22 September 2020, pp. 1-10 In Europe, the incidence of psychotic disorder is high in certain migrant and minority ethnic groups (hence: 'minorities'). However, it is unknown how the incidence pattern for these groups varies within this continent. Our objective was to compare, across sites in France, Italy, Spain, the UK and the Netherlands, the incidence rates for minorities and the incidence rate ratios (IRRs, minorities v. the local reference population). The European Network of National Schizophrenia Networks Studying Gene–Environment Interactions (EU-GEI) study was conducted between 2010 and 2015. We analyzed data on incident cases of non-organic psychosis (International Classification of Diseases, 10th edition, codes F20–F33) from 13 sites. The standardized incidence rates for minorities, combined into one category, varied from 12.2 in Valencia to 82.5 per 100 000 in Paris. These rates were generally high at sites with high rates for the reference population, and low at sites with low rates for the reference population. IRRs for minorities (combined into one category) varied from 0.70 (95% CI 0.32–1.53) in Valencia to 2.47 (95% CI 1.66–3.69) in Paris (test for interaction: p = 0.031). At most sites, IRRs were higher for persons from non-Western countries than for those from Western countries, with the highest IRRs for individuals from sub-Saharan Africa (adjusted IRR = 3.23, 95% CI 2.66–3.93). Incidence rates vary by region of origin, region of destination and their combination. This suggests that they are strongly influenced by the social context. Balancing processing ease with combustion performance in aluminum/PVDF energetic filaments Matthew C. Knott, Ashton W. Craig, Rahul Shankar, Sarah E. Morgan, Scott T. Iacono, Joseph E. Mates, Jena M. McCollum Journal: Journal of Materials Research , First View Published online by Cambridge University Press: 01 September 2020, pp. 1-8 Molecular weight (Mw) effects in poly(vinylidene fluoride) (PVDF) influence both processability and combustion behavior in energetic Al–PVDF filaments. Results show decreased viscosity in unloaded and fuel-lean (i.e., 15 wt% Al) filaments. In highly loaded filaments (i.e., 30 wt% Al), reduced viscosity is minimal due to higher electrostatic interaction between Al particles and low Mw chains as confirmed by Fourier-transform infrared spectroscopy. Thermal and combustion analysis further corroborates this story as exothermic activity decreases in PVDF with smaller Mw chains. 
Differential scanning calorimetry and Thermogravimetric analysis show reduced reaction enthalpy and lower char yield in low Mw PVDF. Enthalpy reduction trends continued in nonequilibrium burn rate studies, which confirm that burn rate decreases in the presence of low Mw PVDF. Furthermore, powder X-ray patterns of post-burn products suggest that low Mw PVDF decomposition creates a diffusion barrier near the Al particle surface resulting in negligible AlF3 formation in fuel-rich filaments. Threat, hostility and violence in childhood and later psychotic disorder: population-based case–control study Craig Morgan, Charlotte Gayer-Anderson, Stephanie Beards, Kathryn Hubbard, Valeria Mondelli, Marta Di Forti, Robin M. Murray, Carmine Pariante, Paola Dazzan, Thomas J. Craig, Ulrich Reininghaus, Helen L. Fisher Journal: The British Journal of Psychiatry / Volume 217 / Issue 4 / October 2020 A growing body of research suggests that childhood adversities are associated with later psychosis, broadly defined. However, there remain several gaps and unanswered questions. Most studies are of low-level psychotic experiences and findings cannot necessarily be extrapolated to psychotic disorders. Further, few studies have examined the effects of more fine-grained dimensions of adversity such as type, timing and severity. Using detailed data from the Childhood Adversity and Psychosis (CAPsy) study, we sought to address these gaps and examine in detail associations between a range of childhood adversities and psychotic disorder. CAPsy is population-based first-episode psychosis case–control study in the UK. In a sample of 374 cases and 301 controls, we collected extensive data on childhood adversities, in particular household discord, various forms of abuse and bullying, and putative confounders, including family history of psychotic disorder, using validated, semi-structured instruments. We found strong evidence that all forms of childhood adversity were associated with around a two- to fourfold increased odds of psychotic disorder and that exposure to multiple adversities was associated with a linear increase in odds. We further found that severe forms of adversity, i.e. involving threat, hostility and violence, were most strongly associated with increased odds of disorder. More tentatively, we found that some adversities (e.g. bullying, sexual abuse) were more strongly associated with psychotic disorder if first occurrence was in adolescence. Our findings extend previous research on childhood adversity and suggest a degree of specificity for severe adversities involving threat, hostility and violence. A systematic review on mediators between adversity and psychosis: potential targets for treatment Luis Alameda, Victoria Rodriguez, Ewan Carr, Monica Aas, Giulia Trotta, Paolo Marino, Natasha Vorontsova, Andrés Herane-Vives, Romayne Gadelrab, Edoardo Spinazzola, Marta Di Forti, Craig Morgan, Robin M Murray Journal: Psychological Medicine / Volume 50 / Issue 12 / September 2020 Various psychological and biological pathways have been proposed as mediators between childhood adversity (CA) and psychosis. A systematic review of the evidence in this domain is needed. Our aim is to systematically review the evidence on psychological and biological mediators between CA and psychosis across the psychosis spectrum. This review followed PRISMA guidelines. Articles published between 1979 and July 2019 were identified through a literature search in OVID (PsychINFO, Medline and Embase) and Cochrane Libraries. 
The evidence by each analysis and each study is presented by group of mediator categories found. The percentage of total effect mediated was calculated. Forty-eight studies were included, 21 in clinical samples and 27 in the general population (GP) with a total of 82 352 subjects from GP and 3189 from clinical studies. The quality of studies was judged as 'fair'. Our results showed (i) solid evidence of mediation between CA and psychosis by negative cognitive schemas about the self, the world and others (NS); by dissociation and other post-traumatic stress disorder symptoms; and through an affective pathway in GP but not in subjects with disorder; (iii) lack of studies exploring biological mediators. We found evidence suggesting that various overlapping and not competing pathways involving post-traumatic and mood symptoms, as well as negative cognitions contribute partially to the link between CA and psychosis. Experiences of CA, along with relevant mediators should be routinely assessed in patients with psychosis. Evidence testing efficacy of interventions targeting such mediators through cognitive behavioural approaches and/or pharmacological means is needed in future. The GLEAM 4-Jy (G4Jy) Sample: I. Definition and the catalogue Sarah V. White, Thomas M. O Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, Bi-Qing For, B. M. Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith Published online by Cambridge University Press: 01 June 2020, e018 The Murchison Widefield Array (MWA) has observed the entire southern sky (Declination, $\delta< 30^{\circ}$ ) at low radio frequencies, over the range 72–231MHz. These observations constitute the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we use the extragalactic catalogue (EGC) (Galactic latitude, $|b| >10^{\circ}$ ) to define the GLEAM 4-Jy (G4Jy) Sample. This is a complete sample of the 'brightest' radio sources ( $S_{\textrm{151\,MHz}}>4\,\text{Jy}$ ), the majority of which are active galactic nuclei with powerful radio jets. Crucially, low-frequency observations allow the selection of such sources in an orientation-independent way (i.e. minimising the bias caused by Doppler boosting, inherent in high-frequency surveys). We then use higher-resolution radio images, and information at other wavelengths, to morphologically classify the brightest components in GLEAM. We also conduct cross-checks against the literature and perform internal matching, in order to improve sample completeness (which is estimated to be $>95.5$ %). This results in a catalogue of 1863 sources, making the G4Jy Sample over 10 times larger than that of the revised Third Cambridge Catalogue of Radio Sources (3CRR; $S_{\textrm{178\,MHz}}>10.9\,\text{Jy}$ ). Of these G4Jy sources, 78 are resolved by the MWA (Phase-I) synthesised beam ( $\sim2$ arcmin at 200MHz), and we label 67% of the sample as 'single', 26% as 'double', 4% as 'triple', and 3% as having 'complex' morphology at $\sim1\,\text{GHz}$ (45 arcsec resolution). 
We characterise the spectral behaviour of these objects in the radio and find that the median spectral index is $\alpha=-0.740 \pm 0.012$ between 151 and 843MHz, and $\alpha=-0.786 \pm 0.006$ between 151MHz and 1400MHz (assuming a power-law description, $S_{\nu} \propto \nu^{\alpha}$ ), compared to $\alpha=-0.829 \pm 0.006$ within the GLEAM band. Alongside this, our value-added catalogue provides mid-infrared source associations (subject to 6" resolution at 3.4 $\mu$ m) for the radio emission, as identified through visual inspection and thorough checks against the literature. As such, the G4Jy Sample can be used as a reliable training set for cross-identification via machine-learning algorithms. We also estimate the angular size of the sources, based on their associated components at $\sim1\,\text{GHz}$ , and perform a flux density comparison for 67 G4Jy sources that overlap with 3CRR. Analysis of multi-wavelength data, and spectral curvature between 72MHz and 20GHz, will be presented in subsequent papers, and details for accessing all G4Jy overlays are provided at https://github.com/svw26/G4Jy. The GLEAM 4-Jy (G4Jy) Sample: II. Host galaxy identification for individual sources Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, B. M. Gaensler, Melanie Johnston–Hollitt, André Offringa, Lister Staveley–Smith The entire southern sky (Declination, $\delta< 30^{\circ}$ ) has been observed using the Murchison Widefield Array (MWA), which provides radio imaging of $\sim$ 2 arcmin resolution at low frequencies (72–231 MHz). This is the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we have previously used a combination of visual inspection, cross-checks against the literature, and internal matching to identify the 'brightest' radio-sources ( $S_{\mathrm{151\,MHz}}>4$ Jy) in the extragalactic catalogue (Galactic latitude, $|b| >10^{\circ}$ ). We refer to these 1 863 sources as the GLEAM 4-Jy (G4Jy) Sample, and use radio images (of ${\leq}45$ arcsec resolution), and multi-wavelength information, to assess their morphology and identify the galaxy that is hosting the radio emission (where appropriate). Details of how to access all of the overlays used for this work are available at https://github.com/svw26/G4Jy. Alongside this we conduct further checks against the literature, which we document here for individual sources. Whilst the vast majority of the G4Jy Sample are active galactic nuclei with powerful radio-jets, we highlight that it also contains a nebula, two nearby, star-forming galaxies, a cluster relic, and a cluster halo. There are also three extended sources for which we are unable to infer the mechanism that gives rise to the low-frequency emission. In the G4Jy catalogue we provide mid-infrared identifications for 86% of the sources, and flag the remainder as: having an uncertain identification (129 sources), having a faint/uncharacterised mid-infrared host (126 sources), or it being inappropriate to specify a host (2 sources). For the subset of 129 sources, there is ambiguity concerning candidate host-galaxies, and this includes four sources (B0424–728, B0703–451, 3C 198, and 3C 403.1) where we question the existing identification. 
Elucidating negative symptoms in the daily life of individuals in the early stages of psychosis Karlijn S. F. M. Hermans, Inez Myin-Germeys, Charlotte Gayer-Anderson, Matthew J. Kempton, Lucia Valmaggia, Philip McGuire, Robin M. Murray, Philippa Garety, Til Wykes, Craig Morgan, Zuzana Kasanova, Ulrich Reininghaus Published online by Cambridge University Press: 22 May 2020, pp. 1-11 It remains poorly understood how negative symptoms are experienced in the daily lives of individuals in the early stages of psychosis. We aimed to investigate whether altered affective experience, anhedonia, social anhedonia, and asociality were more pronounced in individuals with an at-risk mental state for psychosis (ARMS) and individuals with first-episode psychosis (FEP) than in controls. We used the experience sampling methodology (ESM) to assess negative symptoms, as they occurred in the daily life of 51 individuals with FEP and 46 ARMS, compared with 53 controls. Multilevel linear regression analyses showed no overall evidence for a blunting of affective experience. There was some evidence for anhedonia in FEP but not in ARMS, as shown by a smaller increase of positive affect (BΔat−risk v. FEP = 0.08, p = 0.006) as the pleasantness of activities increased. Against our expectations, no evidence was found for greater social anhedonia in any group. FEP were more often alone (57%) than ARMS (38%) and controls (35%) but appraisals of the social situation did not point to asociality. Overall, altered affective experience, anhedonia, social anhedonia and asociality seem to play less of a role in the daily life of individuals in the early stages of psychosis than previously assumed. With the experience of affect and pleasure in daily life being largely intact, changing social situations and appraisals thereof should be further investigated to prevent development or deterioration of negative symptoms. Association of extent of cannabis use and psychotic like intoxication experiences in a multi-national sample of first episode psychosis patients and controls Musa Sami, Diego Quattrone, Laura Ferraro, Giada Tripoli, Erika La Cascia, Charlotte Gayer-Anderson, Jean-Paul Selten, Celso Arango, Miguel Bernardo, Ilaria Tarricone, Andrea Tortelli, Giusy Gatto, Simona del Peschio, Cristina Marta Del-Ben, Bart P. Rutten, Peter B. Jones, Jim van Os, Lieuwe de Haan, Craig Morgan, Cathryn Lewis, Sagnik Bhattacharyya, Tom P. Freeman, Michael Lynskey, Robin M. Murray, Marta Di Forti Published online by Cambridge University Press: 28 April 2020, pp. 1-9 First episode psychosis (FEP) patients who use cannabis experience more frequent psychotic and euphoric intoxication experiences compared to controls. It is not clear whether this is consequent to patients being more vulnerable to the effects of cannabis use or to their heavier pattern of use. We aimed to determine whether extent of use predicted psychotic-like and euphoric intoxication experiences in patients and controls and whether this differs between groups. We analysed data on patients who had ever used cannabis (n = 655) and controls who had ever used cannabis (n = 654) across 15 sites from six countries in the EU-GEI study (2010–2015). We used multiple regression to model predictors of cannabis-induced experiences and to determine if there was an interaction between caseness and extent of use. Caseness, frequency of cannabis use and money spent on cannabis predicted psychotic-like and euphoric experiences (p ⩽ 0.001). 
For psychotic-like experiences (PEs) there was a significant interaction for caseness × frequency of use (p < 0.001) and caseness × money spent on cannabis (p = 0.001) such that FEP patients had increased experiences at increased levels of use compared to controls. There was no significant interaction for euphoric experiences (p > 0.5). FEP patients are particularly sensitive to increased psychotic-like, but not euphoric experiences, at higher levels of cannabis use compared to controls. This suggests a specific psychotomimetic response in FEP patients related to heavy cannabis use. Clinicians should enquire regarding cannabis related PEs and advise that lower levels of cannabis use are associated with less frequent PEs. Jumping to conclusions, general intelligence, and psychosis liability: findings from the multi-centre EU-GEI case-control study Giada Tripoli, Diego Quattrone, Laura Ferraro, Charlotte Gayer-Anderson, Victoria Rodriguez, Caterina La Cascia, Daniele La Barbera, Crocettarachele Sartorio, Fabio Seminerio, Ilaria Tarricone, Domenico Berardi, Andrei Szöke, Celso Arango, Andrea Tortelli, Pierre-Michel Llorca, Lieuwe de Haan, Eva Velthorst, Julio Bobes, Miguel Bernardo, Julio Sanjuán, Jose Luis Santos, Manuel Arrojo, Cristina Marta Del-Ben, Paulo Rossi Menezes, Jean-Paul Selten, EU-GEI WP2 Group, Peter B. Jones, Hannah E Jongsma, James B Kirkbride, Antonio Lasalvia, Sarah Tosato, Alex Richards, Michael O'Donovan, Bart PF Rutten, Jim van Os, Craig Morgan, Pak C Sham, Robin M. Murray, Graham K. Murray, Marta Di Forti Published online by Cambridge University Press: 24 April 2020, pp. 1-11 The 'jumping to conclusions' (JTC) bias is associated with both psychosis and general cognition but their relationship is unclear. In this study, we set out to clarify the relationship between the JTC bias, IQ, psychosis and polygenic liability to schizophrenia and IQ. A total of 817 first episode psychosis patients and 1294 population-based controls completed assessments of general intelligence (IQ), and JTC, and provided blood or saliva samples from which we extracted DNA and computed polygenic risk scores for IQ and schizophrenia. The estimated proportion of the total effect of case/control differences on JTC mediated by IQ was 79%. Schizophrenia polygenic risk score was non-significantly associated with a higher number of beads drawn (B = 0.47, 95% CI −0.21 to 1.16, p = 0.17); whereas IQ PRS (B = 0.51, 95% CI 0.25–0.76, p < 0.001) significantly predicted the number of beads drawn, and was thus associated with reduced JTC bias. The JTC was more strongly associated with the higher level of psychotic-like experiences (PLEs) in controls, including after controlling for IQ (B = −1.7, 95% CI −2.8 to −0.5, p = 0.006), but did not relate to delusions in patients. Our findings suggest that the JTC reasoning bias in psychosis might not be a specific cognitive deficit but rather a manifestation or consequence, of general cognitive impairment. Whereas, in the general population, the JTC bias is related to PLEs, independent of IQ. The work has the potential to inform interventions targeting cognitive biases in early psychosis. A case study of the coconut crab Birgus latro on Zanzibar highlights global threats and conservation solutions Tim Caro, Haji Hamad, Rashid Suleiman Rashid, Ulrike Kloiber, Victoria M. 
Morgan, Ossi Nokelainen, Barnabas Caro, Ilaria Pretelli, Neil Cumberlidge, Monique Borgerhoff Mulder Journal: Oryx , First View The coconut crab Birgus latro, the largest terrestrial decapod, is under threat in most parts of its geographical range. Its life cycle involves two biomes (restricted terrestrial habitats near the coast, and salt water currents of the tropical Indian and Pacific Oceans). Its dependence on coastal habitat means it is highly vulnerable to the habitat destruction that typically accompanies human population expansion along coastlines. Additionally, it has a slow reproductive rate and can reach large adult body sizes that, together with its slow movement when on land, make it highly susceptible to overharvesting. We studied the distribution and population changes of coconut crabs at 15 island sites in coastal Tanzania on the western edge of the species' geographical range. Our aim was to provide the data required for reassessment of the extinction risk status of this species, which, despite indications of sharp declines in many places, is currently categorized on the IUCN Red List as Data Deficient. Pemba Island, Zanzibar, in Tanzania, is an important refuge for B. latro but subpopulations are fragmented and exploited by children and fishers. We discovered that larger subpopulations are found in the presence of crops and farther away from people, whereas the largest adult coconut crabs are found on more remote island reserves and where crabs are not exploited. Remoteness and protection still offer hope for this species but there are also opportunities for protection through local communities capitalizing on tourist revenue, a conservation solution that could be applied more generally across the species' range. The comparative efficacy of second-generation antidepressants for the accompanying symptoms of depression: a systematic review K. Thaler, G. Gartlehner, R.A. Hansen, L.C. Morgan, L.J. Lux, M. Van Noord, U. Mager, B.N. Gaynes, P. Thieda, M. Strobelberger, S. Lloyd, U. Reichenpfader, K.N. Lohr Journal: European Psychiatry / Volume 26 / Issue S2 / March 2011 Published online by Cambridge University Press: 16 April 2020, p. 697 Clinicians treating patients with Major Depressive Disorder (MDD) might favor one second-generation antidepressant (SGA) because of perceived benefits for the accompanying symptoms of MDD. To compare the efficacy of bupropion, citalopram, desvenlafaxine, duloxetine, escitalopram, fluoxetine, fluvoxamine, mirtazapine, nefazodone, paroxetine, sertraline, trazodone, and venlafaxine for the treatment of the accompanying symptoms of MDD. This review is part of a larger review on the comparative effectiveness of SGAs for MDD. We searched MEDLINE, Embase, The Cochrane Library, and the International Pharmaceutical Abstracts up to May 2010. Two persons independently reviewed the literature, abstracted data, and rated the risk of bias. We located 26 head-to-head and 7 placebo-controlled trials that provided evidence for this review. We did not locate any studies on treating accompanying appetite change, low energy, melancholia, or psychomotor change. There was no evidence for many comparisons and we were unable to conduct quantitative analysis for any comparisons. For the comparisons that were studied, we concluded that the SGAs are similarly efficacious for treating anxiety, insomnia, pain, and somatization. 
The strength of the evidence for these conclusions is low (meaning further research is very likely to have an important impact on our confidence in the estimate of the effect and is likely to change the estimate). Our findings indicate that the existing evidence does not warrant the choice of one second-generation antidepressant over another based on greater efficacy for the accompanying symptoms of depression. EMDR training for mental health therapists in postwar Bosnia-Herzegovina who work with the psycho-traumatized population, for increasing their psychotherapy capacities M. Hasanovic, I. Pajevic, S. Morgan, N. Kravic Published online by Cambridge University Press: 16 April 2020, p. 1309 After the 1992–1995 war in Bosnia and Herzegovina (BH), the whole population was highly psycho-traumatized. Mental health therapists did not have enough capacity to meet the needs of the population, and are permanently in need of increasing their psychotherapy capacities. EMDR is a powerful, state-of-the-art treatment whose effectiveness and efficacy have been validated by extensive research. The National Institute for Clinical Excellence (NICE) recommended it as one of two trauma treatments of choice. To describe a non-profit, humanitarian approach to sharing the skills of Eye Movement Desensitization and Reprocessing (EMDR) with mental health therapists in BH, through the Humanitarian Assistance Program (HAP) of UK & Ireland. The authors describe the educational process, covering the history of the idea and its realization through training levels and the process of supervision. Highly skilled and internationally approved trainers from HAP UK & Ireland came four times to the Psychiatry Department of University Clinical Center Tuzla in BH, where they provided complete EMDR training for 24 trainees: neuro-psychiatrists, residents of neuro-psychiatry and psychologists from eight different health institutions in six different cities in BH. After finishing the training process, trainees are obliged to practise EMDR therapy in daily work with real clients, under the supervision of HAP UK & Ireland trainers, in order to become certified EMDR therapists. Given the large physical distance between supervisors and trainees, supervision will be realized via Skype Internet technology. The psychotherapy capacities of mental health psychotherapists in postwar BH could be increased with the enthusiastic help of EMDR trainers from HAP UK & Ireland. Skunk and psychosis in South East London M. Di Forti, C. Morgan, V. Mondelli, L. Gittens, R. Handley, N. Hepgul, S. Luzi, T. Marques, M. Aas, S. Masson, C. Prescott, M. Russo, P. Sood, B. Wiffen, P. Papili, P. Dazzan, C. Pariante, K. Aitchison, J. Powell, R. Murray Journal: European Psychiatry / Volume 24 / Issue S1 / January 2009 Published online by Cambridge University Press: 16 April 2020, p. 1 Epidemiological studies have reported that the increased risk of developing psychosis in cannabis users is dose related. In addition, experimental research has shown that the active constituent of cannabis responsible for its psychotogenic effect is Delta-9-Tetrahydrocannabinol (THC) (Murray et al, 2007). Recent evidence has suggested an increase in potency (% THC) in the cannabis seized in the UK (Potter et al, 2007). Hypothesis: We predicted that first episode psychosis patients are more likely to use higher potency cannabis and more frequently than controls. 
We collected information concerning socio-demographic and clinical characteristics and cannabis use (age at first use, frequency, length of use, type of cannabis used) from a sample of 191 first-episode psychosis patients and 120 matched healthy volunteers. All were recruited as part of the Genetic and Psychosis (GAP) study, which studied all patients who presented to the South London and Maudsley Trust. There was no significant difference in the lifetime prevalence of cannabis use or age at first use between cases and controls. However, cases were more likely to be regular users (p=0.05), to be current users (p=0.04) and to have smoked cannabis for longer (p=0.01). Among cannabis users, 86.8% of first-episode psychosis patients preferentially used Skunk/Sinsemilla compared to 27.7% of controls. Only 13.2% of first-episode psychosis patients chose to use Resin/Hash compared to 76.3% of controls. The concentration of THC in these varieties in South East London ranges between 8.5 and 14% (Potter et al, 2007). Controls (47%) were more likely to use Hash (Resin), whose average THC concentration is 3.4% (Potter et al, 2007). Patients with first episode psychosis have smoked higher potency cannabis, for longer and with greater frequency, than healthy controls.
Edward F Hughes
Category Archives: String Theory

Advanced, Math, Physics, String Theory
A Second Course in String Theory
July 4, 2016 edwardfhughes

I've been lucky enough to have funding from SEPnet to create a new lecture course recently, following on from David Tong's famous Part III course on string theory. The notes are intended for the beginning PhD student, bridging the gap between Masters level and the daunting initial encounters with academic seminars. If you're a more experienced journeyman, I hope they'll provide a useful reminder of certain terminology. Remember what twist is? What does a D9-brane couple to? Curious about the scattering equations? Now you can satisfy your inner quizmaster! Here's a link to the notes. Comments, questions and corrections are more than welcome. Thanks are particularly due to Andy O'Bannon for his advice and support throughout the project.

Tags: d-branes, scattering equations, SEPnet, string theory, twist

Algebraic Geometry, Amplitudes, Geometry, Math, Physics, Quantum Field Theory, String Theory
Three Ways with Totally Positive Grassmannians
January 7, 2016 edwardfhughes

This week I'm down in Canterbury for a conference focussing on the positive Grassmannian. "What's that?", I hear you ask. Roughly speaking, it's a mysterious geometrical object that seems to crop up all over mathematical physics, from scattering amplitudes to solitons, not to mention quantum groups. More formally we define the Grassmannian $\mathrm{Gr}(k,n)$ as the space of $k$-dimensional planes in an $n$-dimensional vector space. We can view this as the space of full-rank $k \times n$ matrices modulo a $GL(k)$ action, which has homogeneous "Plücker" coordinates given by the $k \times k$ minors. Of course, these are not coordinates in the true sense, for they are overcomplete. In particular there exist quadratic Plücker relations between the minors (a tiny $\mathrm{Gr}(2,4)$ example is sketched below). In principle then, you only need a subset of the homogeneous coordinates to cover the whole Grassmannian.

To get to the positive Grassmannian is easy, you simply enforce that every minor is positive. Of course, you only need to check this for some subset of the Plücker coordinates, but it's tricky to determine which ones. In the first talk of the day Lauren Williams showed how you can elegantly extract this information from paths on a graph! In fact, this graph encodes much more information than that. In particular, it turns out that the positive Grassmannian naturally decomposes into cells (i.e. things homeomorphic to a closed ball). The graph can be used to exactly determine this cell decomposition.

And that's not all! The same structure crops up in the study of quantum groups. Very loosely, these are algebraic structures that result from introducing non-commutativity in a controlled way. More formally, if you want to quantise a given integrable system, you'll typically want to promote the coordinate ring of a Poisson-Lie group to a non-commutative algebra. This is exactly the sort of problem that Drinfeld et al. started studying 30 years ago, and the field is very much active today.

The link with the positive Grassmannian comes from defining a quantity called the quantum Grassmannian. The first step is to invoke a quantum plane, that is a $2$-dimensional algebra generated by $x$ and $y$ with the relation that $xy = q\,yx$ for some parameter $q$ different from $1$. The matrices that linearly transform this plane are then constrained in their entries for consistency. There's a natural way to build these up into higher dimensional quantum matrices. The quantum Grassmannian is constructed exactly as above, but with these new-fangled quantum matrices!
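As a brief aside before the main theorem, here is a minimal numerical sketch of the classical $\mathrm{Gr}(2,4)$ story referred to above. The six $2 \times 2$ minors of a $2 \times 4$ matrix are its Plücker coordinates; total positivity simply means they are all positive, and they obey a single quadratic Plücker relation. The particular matrix is an illustrative choice, not anything taken from the talks.

```python
import numpy as np
from itertools import combinations

# A 2x4 matrix chosen so that all 2x2 minors (the Plücker coordinates of Gr(2,4))
# come out positive. The specific entries are an arbitrary illustrative choice.
M = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0, 4.0]])

# Plücker coordinate p_ij = determinant of the submatrix built from columns i < j.
plucker = {cols: np.linalg.det(M[:, list(cols)]) for cols in combinations(range(4), 2)}
print(plucker)
print("totally positive:", all(p > 0 for p in plucker.values()))

# The single quadratic Plücker relation for Gr(2,4): p12*p34 - p13*p24 + p14*p23 = 0.
residual = (plucker[(0, 1)] * plucker[(2, 3)]
            - plucker[(0, 2)] * plucker[(1, 3)]
            + plucker[(0, 3)] * plucker[(1, 2)])
print("Plücker relation residual:", residual)
```

Running this prints six positive minors and a residual of zero (up to floating point), illustrating both the redundancy among the Plücker coordinates and the positivity condition.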
The theorem goes that the torus action invariant irreducible varieties in the quantum Grassmannian exactly correspond to the cells of the positive Grassmannian. The proof is fairly involved, but the ideas are rather elegant. I think you'll agree that the final result is mysterious and intriguing!

And we're not done there. As I've mentioned before, positive Grassmannia and their generalizations turn out to compute scattering amplitudes. Alright, at present this only works for planar super-Yang-Mills. Stop press! Maybe it works for non-planar theories as well. In any case, it's further evidence that Grassmannia are the future.

From a historical point of view, it's not surprising that Grassmannia are cropping up right now. In fact, you can chronicle revolutions in theoretical physics according to changes in the variables we use. The calculus revolution of Newton and Leibniz is arguably about understanding properly the limiting behaviour of real numbers. With quantum mechanics came the entry of complex numbers into the game. By the 1970s it had become clear that projectivity was important, and twistor theory was born. And the natural step beyond projective space is the Grassmannian. Viva la revolución!

Tags: canterbury, cell decomposition, grassmannian, lauren williams, positive, quantum groups, variables

Advanced, Amplitudes, Basic, Math, Music, Physics, Quantum Field Theory, String Theory
Conference Amplitudes 2015 – Air on the Superstring
July 11, 2015 edwardfhughes

One of the first pieces of Bach ever recorded was August Wilhelmj's arrangement of the Orchestral Suite in D major. Today the transcription for violin and piano goes by the moniker Air on the G String. It's an inspirational and popular work in all its many incarnations, not least this one featuring my favourite cellist Yo-Yo Ma.

This morning we heard the physics version of Bach's masterpiece. Superstrings are nothing new, of course. But recently they've received a reboot courtesy of Dr. David Skinner among others. The ambitwistor string is an infinite tension version which only admits right-moving vibrations! At first the formalism looks a little daunting, until you realise that many calculations follow the well-trodden path of the superstring.

Now superstring amplitudes are quite difficult to compute. So hard, in fact, that Dr. Oliver Schlotterer devoted an entire talk to understanding particular functions that emerge when scattering strings at next-to-leading order. Mercifully, the ambitwistor string is far more well-behaved. The resulting amplitudes are rather beautiful and simple. To some extent, you trade off the geometrical aesthetics of the superstring for the algebraic compactness emerging from the ambitwistor approach.

This isn't the first time that twistors and strings have been combined to produce quantum field theory. The first attempt dates back to 2003 and work of Edward Witten (of course). Although hugely influential, Witten's theory was esoteric to say the least! In particular nobody knows how to encode quantum corrections in Witten's language. Ambitwistor strings have no such issues! Adding a quantum correction is easy – just put your theory on a donut. But this conceptually simple step threatened a roadblock for the research. Trouble was, nobody actually knew how to evaluate the resulting formulae. Nobody, that was, until last week! Talented folk at Oxford and Cambridge managed to reduce the donutty problem to the original spherical case. 
This is an impressive feat – nobody much suspected that quantum corrections would be as easy as a classical computation! There's a great deal of hope that this idea can be rigorously extended to higher loops and perhaps even break the deadlock on maximal supergravity calculations at high loop order. The resulting concept of off-shell scattering equations piqued my interest – I've set myself a challenge to use them in the next 12 months!

Scattering equations, you say? What are these beasts? For that we need to take a closer look at the form of the ambitwistor string amplitude. It turns out to be a sum over the solutions of the following equations:

$$\sum_{j \neq i} \frac{s_{ij}}{\sigma_i - \sigma_j} = 0 \quad \text{for each } i.$$

The $s_{ij}$ are just two particle invariants – encoding things you can measure about the speed and angle of particle scattering. And the $\sigma_i$ are just some bonus variables. You'd never dream of introducing them unless somebody told you to! But yet they're exactly what's required for a truly elegant description.

And these scattering equations don't just crop up in one special theory. Like spies in a Cold War era film, they seem to be everywhere! Dr. Freddy Cachazo alerted us to this surprising fact in a wonderfully engaging talk. We all had a chance to play detective and identify bits of physics from telltale clues! By the end we'd built up an impressive spider's web of connections, held together by the scattering equations.

Freddy's talk put me in mind of an interesting leadership concept espoused by the conductor Itay Talgam. Away from his musical responsibilities he's carved out a niche as a business consultant, teaching politicians, researchers, generals and managers how to elicit maximal productivity and creativity from their colleagues and subordinates. Critical to his philosophy is the concept of keynote listening – sharing ideas in a way that maximises the response of your audience. This elusive quality pervaded Freddy's presentation.

Following this masterclass was no mean feat, but one amply performed by my colleague Brenda Penante. We were transported to the world of on-shell diagrams – a modern alternative to Feynman's ubiquitous approach. These diagrams are known to produce the integrand in planar $\mathcal{N}=4$ super-Yang-Mills theory to all orders! What's more, the answer comes out in an attractive form, ripe for integration to multiple polylogarithms.

Cunningly, I snuck the word planar into the paragraph above. This approximation means that the diagrams can be drawn on a sheet of paper rather than requiring three dimensions. For technical reasons this is equivalent to working in the theory with an infinite number of color charges, not just the usual three we find for the strong force. Obviously, it would be helpful to move beyond this limit. Brenda explained a decisive step in this direction, providing a mechanism for computing all leading singularities of non-planar amplitudes. By examining specific examples the collaboration uncovered new structure invisible in the planar case. Technically, they observed that the boundary operation on a reduced graph identified non-trivial singularities which can't be understood as the vanishing of minors. At present, there's no proven geometrical picture of these new relations. Amazingly they might emerge from a 1,700-year-old theorem of Pappus!

Bootstraps were back on the agenda to close the session. Dr. Agnese Bissi is a world-expert on conformal field theories. These models have no sense of distance and only know about angles. Not particularly useful, you might think! 
Freddy's talk put me in mind of an interesting leadership concept espoused by the conductor Itay Talgam. Away from his musical responsibilities he's carved out a niche as a business consultant, teaching politicians, researchers, generals and managers how to elicit maximal productivity and creativity from their colleagues and subordinates. Critical to his philosophy is the concept of keynote listening – sharing ideas in a way that maximises the response of your audience. This elusive quality pervaded Freddy's presentation.

Following this masterclass was no mean feat, but one amply performed by my colleague Brenda Penante. We were transported to the world of on-shell diagrams – a modern alternative to Feynman's ubiquitous approach. These diagrams are known to produce the integrand in planar $\mathcal{N}=4$ super-Yang-Mills theory to all orders! What's more, the answer comes out in an attractive form, ripe for integration to multiple polylogarithms. Cunningly, I snuck the word planar into the paragraph above. This approximation means that the diagrams can be drawn on a sheet of paper rather than requiring three dimensions. For technical reasons this is equivalent to working in the theory with an infinite number of colour charges, not just the usual three we find for the strong force. Obviously, it would be helpful to move beyond this limit. Brenda explained a decisive step in this direction, providing a mechanism for computing all leading singularities of non-planar amplitudes. By examining specific examples the collaboration uncovered new structure invisible in the planar case. Technically, they observed that the boundary operation on a reduced graph identified non-trivial singularities which can't be understood as the vanishing of minors. At present there's no proven geometrical picture of these new relations. Amazingly, they might emerge from a 1,700-year-old theorem of Pappus!

Bootstraps were back on the agenda to close the session. Dr. Agnese Bissi is a world expert on conformal field theories. These models have no sense of distance and only know about angles. Not particularly useful, you might think! But they crop up surprisingly often as approximations to realistic physics, both in particle smashing and in modelling materials. Agnese took a refreshingly rigorous approach, walking us through her proof of the reciprocity principle. Until recently this vital tool was little more than an ad hoc assumption, albeit backed up by considerable evidence. Now Agnese has placed it on firmer ground. From here she was able to "soup up" the method. The supercharged variant can compute OPE coefficients as well as dimensions.

Alas, it's already time for the conference dinner and I haven't mentioned Dr. Christian Bogner's excellent work on the sunrise integral. This charmingly named function is the simplest case where hyperlogarithms are not enough to write down the answer. But don't just take it from me! You can now hear him deliver his talk by visiting the conference website. I'm very pleased to have chatted with Professor Rutger Boels (on the Lagrangian origin of Yang-Mills soft theorems and the universality of subleading collinear behaviour) and Tim Olson (about determining the relative sign between on-shell diagrams to ensure cancellation of spurious poles).

Note: this post was originally written on Thursday 9th July but remained unpublished. I blame the magnificent food, wine and bonhomie at the conference dinner!

T-duality and Isometries of Spacetime (April 24, 2015)

I've just been to an excellent seminar on Double Field Theory by its co-creator, Chris Hull. You may know that string theory exhibits a meta-symmetry called T-duality. More precisely, it's equivalent to put closed strings on circles of radius $R$ and $\alpha'/R$. This is the simplest version of T-duality, when spacetime has no background fields. Now suppose we turn on the Kalb-Ramond field $B$. This is just an excitation of the string which generalizes the electromagnetic potential. This has the effect of making T-duality more complicated. In fact it promotes the symmetry to $O(d,d;\mathbb{Z})$, where $d$ is the dimension of your torus. Importantly, for this to work we must choose a $B$-field which is constant in the compact directions, otherwise we lose the isometries that gave us T-duality in the first place.

Under this T-duality, the $B$-field and metric get mixed up. This can have dramatic consequences for the underlying geometry! In particular, our new metric may not patch together by diffeomorphisms on our spacetime. Similarly, our new Kalb-Ramond field may not patch together via diffeomorphisms and gauge transformations. We call such strange backgrounds non-geometric. To express this more succinctly, let's package diffeomorphisms and gauge transformations together under the name generalized diffeomorphisms. We can now say that T-duality does not respect the patching conditions of generalized diffeomorphisms. Put another way, the group $O(d,d;\mathbb{Z})$ does not embed within the group of generalized diffeomorphisms of our spacetime!

This lack of geometry is rather irritating. We physicists tend to like to picture things, and T-duality has just ruined our intuition! But here's where Double Field Theory comes in. The idea is to double the coordinates of your compact space, so that $O(d,d)$ transformations just become rotations!
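A concrete one-circle example of the doubling (my own aside, in standard conventions with $\alpha' = 1$, not anything specific to the seminar): the metric and $B$-field can be packaged into a generalized metric

\[ \mathcal{H} = \begin{pmatrix} g - B g^{-1} B & B g^{-1} \\ - g^{-1} B & g^{-1} \end{pmatrix}, \]

on which T-duality acts linearly, $\mathcal{H} \to O^{T} \mathcal{H}\, O$ with $O \in O(d,d;\mathbb{Z})$. For a single circle with $B = 0$ and $g = R^2$,

\[ \mathcal{H} = \begin{pmatrix} R^{2} & 0 \\ 0 & R^{-2} \end{pmatrix}, \qquad O = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad\Longrightarrow\quad O^{T} \mathcal{H}\, O = \begin{pmatrix} R^{-2} & 0 \\ 0 & R^{2} \end{pmatrix}, \]

which is precisely the $R \to 1/R$ duality from the start of the post. (This is the same generalized metric that reappears below.)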
Now T-duality clearly embeds within generalized diffeomorphisms, and geometry has returned. All this complexity got me thinking about an easier problem: what do we mean by an isometry in a theory with background fields? In vacuum, isometries are defined as diffeomorphisms which preserve the metric. Infinitesimally these are generated by Killing vector fields, defined to obey the equation $\nabla_{\mu}\xi_{\nu} + \nabla_{\nu}\xi_{\mu} = 0$. Now suppose you add in background fields, in the form of an energy-momentum tensor $T_{\mu\nu}$. If we want a Killing vector to generate an overall symmetry then we'd better have $\mathcal{L}_{\xi} T_{\mu\nu} = 0$. In fact this equation follows from the last one through Einstein's equations. If your metric solves gravity with background fields, then any isometry of the metric automatically preserves the energy-momentum tensor. This is known as the matter collineation theorem.

But hang on, the energy-momentum tensor doesn't capture all the dynamics of a background field. Working with a Kalb-Ramond field, for instance, it's the potential $B$ which is the important quantity. So if we want our Killing vector field to be a symmetry of the full system we must also have $\mathcal{L}_{\xi} B = 0$, at least up to a gauge transformation of $B$. Visually, if we have a magnetic field pointing upwards everywhere then our symmetry diffeomorphism had better not twist it round!

So from a physical perspective, we should really view background fields as an integral part of spacetime geometry. It's then natural to combine the fields with the metric to create a generalized metric. A cute observation perhaps, but it's not immediately useful! Here's where T-duality joins the party. The extended objects of string theory (and their low-energy descriptions in supergravity) possess duality symmetries which exchange pieces of the generalized metric. So in a stringy world it's simplest to work with the generalized metric as a whole. And that brings us full circle. Double Field Theory exactly manifests the duality symmetries of the generalized metric! Not only is this mathematically helpful, it's also an important conceptual step on the road to unification via strings. If that road exists.

Anomaly Cancellation (March 5, 2014)

Back in the early 80s, nobody was much interested in string theory. Some wrote it off as inconsistent nonsense. How wrong they were! With a stroke of genius Michael Green and John Schwarz confounded the critics. But how was it done? First off we'll need to understand the problem. Our best theory of nature at small scales is provided by the Standard Model. This describes forces as fields, possessing certain symmetries. In particular, the mathematical description endows the force fields with an extra redundant symmetry. The concept of adding redundancy appears absurd at first glance. But it actually makes it much easier to write down the theory. Plus you can eliminate the redundancy later to simplify your calculations. This principle is known as adding gauge symmetry. When we write down theories, it's easiest to start at large scales and then probe down to smaller ones. As we look at smaller things, quantum effects come into play. That means we have to make our force fields quantum. As we move into the quantum domain, it's important that we don't lose the gauge symmetry. Remember that the gauge symmetry was just a mathematical tool, not a physical effect.
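To make "redundant symmetry" concrete (my own illustrative aside, using the most familiar example rather than anything from the original post): in electromagnetism the potential $A_{\mu}$ can be shifted,

\[ A_{\mu} \to A_{\mu} + \partial_{\mu}\lambda , \qquad F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu} \to F_{\mu\nu} , \]

so the measurable field strength (and hence the electric and magnetic fields) is untouched even though the variable we compute with changes. "Losing" such a symmetry at the quantum level would mean the unphysical part of $A_{\mu}$ starts to affect predictions.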
If our procedure of "going quantum" destroyed this symmetry, the fields would have more freedom than they should. Our theory would cease to describe reality as we see it. Thankfully this problem doesn't occur in the Standard Model. But what of string theory? Well, it turns out (miraculously) that strings do reproduce the whole array of possible force fields, with appropriate gauge symmetries. But when you look closely at the small-scale behaviour, bad things happen. More precisely, the fields described by propagating quantum strings seem to lose their gauge symmetry! Suddenly things aren't looking so miraculous. In fact, string theory seems to have too much freedom to describe the real world. We call this issue a gauge anomaly.

So what's the get-out clause? Thankfully for string theorists, it turned out that the naive calculation misses some terms. These terms are exactly right to cancel out those that kill the symmetry. In other words, when you include all the information correctly the anomaly cancels! The essence of the calculation goes as follows. Any potential gauge anomaly would come from the interaction of particles. For concreteness we'll focus on open strings in Type I string theory. The anomalous contribution would be given by a 1-loop effect. Visually that corresponds to an open string worldsheet with an annulus. We'd like to sum up the contributions from all (conformally) inequivalent diagrams. Roughly speaking, this is a sum over the radius of the annulus. It turns out that the contributions from finite radius exactly cancel the contribution from the zero-radius limit.

But why wasn't that spotted immediately? For a start, the mathematics behind these pictures is fairly intricate. In fact, things are only manageable if you look at the zero-radius term correctly. Rather than viewing it as a 1-loop diagram, you can equivalently see it as a tree-level contribution. Shrinking the annulus down to zero size makes it look like a point. The information contained in the loop can be viewed as inserting a closed string state at this point. (If you join the two ends of an open string, you get a closed one!) The relevant closed string state is a two-form gauge field. Historically, it was this "tree level" contribution that was accidentally ignored. As far as I'm aware, Green and Schwarz spotted the cancellation after adding the appropriate term as a lucky guess. Only later did this full story emerge. My thanks to Sam Playle for an informative discussion on these matters.
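Very schematically (my own summary of the standard textbook story, not from the original post), the cancellation works because the would-be anomaly factorizes and a two-form counterterm can soak it up:

\[ I_{12} \;\propto\; X_{4} \wedge X_{8} , \qquad S_{\mathrm{GS}} \;\propto\; \int B \wedge X_{8} , \]

where $X_4$ and $X_8$ are gauge-invariant polynomials in the gauge and gravitational field strengths, and $B$ is assigned an anomalous gauge variation tuned so that the variation of $S_{\mathrm{GS}}$ cancels the anomalous variation of the one-loop effective action. The factorization only happens for very special gauge groups, which is how $SO(32)$ (and, for the heterotic string, $E_8 \times E_8$) get singled out.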
Why String Theory? (September 18, 2012)

Sorry, long time no post. But here's why. I've just finished working on a very exciting website explaining string theory to the layman. We hope it's informative and approachable. Any comments or feedback would be gladly received. Next week it'll be back to the algebraic geometry!
Discrete & Continuous Dynamical Systems - B, March 2014, 19(2): 523-541. doi: 10.3934/dcdsb.2014.19.523

Expanding Baker Maps as models for the dynamics emerging from 3D-homoclinic bifurcations

Antonio Pumariño 1, José Ángel Rodríguez 2, Joan Carles Tatjer 3 and Enrique Vigil 2

1. Departamento de Matemáticas, Universidad de Oviedo, Calvo Sotelo s/n, 33007 Oviedo, Spain
2. Departamento de Matemáticas, Universidad de Oviedo, Calvo Sotelo s/n, 33007 Oviedo, Spain
3. Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, Gran Via 585, 08080 Barcelona, Spain

Received: March 2013. Revised: December 2013. Published: February 2014.

For certain 3D-homoclinic tangencies where the unstable manifold of the saddle point involved in the homoclinic tangency has dimension two, many different strange attractors have been numerically observed for the corresponding family of limit return maps. Moreover, for some special value of the parameter, the respective limit return map is conjugate to what was called the bidimensional tent map. This piecewise affine map is an example of what we now call an Expanding Baker Map, and the main objective of this paper is to show how many of the different attractors exhibited by the limit return maps resemble the ones observed for Expanding Baker Maps.

Keywords: Expanding Baker Maps, strange attractors, limit return maps.

Mathematics Subject Classification: Primary: 37C70, 37D45; Secondary: 37G3.

Citation: Antonio Pumariño, José Ángel Rodríguez, Joan Carles Tatjer, Enrique Vigil. Expanding Baker Maps as models for the dynamics emerging from 3D-homoclinic bifurcations. Discrete & Continuous Dynamical Systems - B, 2014, 19 (2): 523-541. doi: 10.3934/dcdsb.2014.19.523
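For readers who want a numerical feel for this kind of dynamics, here is a minimal Python sketch that iterates a toy piecewise-affine "fold and expand" map of the unit square and collects the resulting orbit. It is purely illustrative: the map, the parameter name t and the function names are my own choices, not the specific limit return maps or Expanding Baker Maps analysed in the paper.

import numpy as np
import matplotlib.pyplot as plt

def tent(u):
    """Fold a real number back into [0, 1] by repeated reflection."""
    u = u % 2.0
    return u if u <= 1.0 else 2.0 - u

def fold_expand(point, t):
    """One step of a toy piecewise-affine map: fold the square onto the
    triangle y <= x, scale both coordinates by t, then reflect back into
    the unit square. Locally expanding (away from fold lines) when t > 1."""
    x, y = point
    hi, lo = (x, y) if x >= y else (y, x)
    return tent(t * hi), tent(t * lo)

def orbit(t, n_iter=200_000, burn_in=1_000, seed=0):
    """Iterate from a random initial point and return the post-transient orbit."""
    rng = np.random.default_rng(seed)
    p = tuple(rng.random(2))
    points = []
    for i in range(n_iter):
        p = fold_expand(p, t)
        if i >= burn_in:
            points.append(p)
    return np.array(points)

if __name__ == "__main__":
    pts = orbit(t=1.8)
    plt.plot(pts[:, 0], pts[:, 1], ",")
    plt.title("Toy fold-and-expand map, t = 1.8")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.show()

Varying t between 1 and 2 gives a crude sense of how the attracting invariant sets of such piecewise-affine families can change with a parameter.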
http://scigraph.springernature.com/pub.10.1140/epjc/s10052-020-08677-2 Measurements of WH and ZH production in the H→bb¯ decay channel in pp collisions at 13Te with the ATLAS detector View Full Text , G. Aad, B. Abbott, D. C. Abbott, A. Abed Abud, K. Abeling, D.K. Abhayasinghe, S.H. Abidi, O.S. AbouZeid, N. L. Abraham, H. Abramowicz, H. Abreu, Y. Abulaiti, B.S. Acharya, B. Achkar, L. Adam, C. Adam Bourdarios, L. Adamczyk, L. Adamek, J. Adelman, M. Adersberger, A. Adiguzel, S. Adorni, T. Adye, A.A. Affolder, Y. Afik, C. Agapopoulou, M.N. Agaras, A. Aggarwal, C. Agheorghiesei, J.A. Aguilar-Saavedra, A. Ahmad, F. Ahmadov, W.S. Ahmed, X. Ai, G. Aielli, S. Akatsuka, M. Akbiyik, T.P.A. Åkesson, E. Akilli, A.V. Akimov, K. Al Khoury, G.L. Alberghi, J. Albert, M.J. Alconada Verzini, S. Alderweireldt, M. Aleksa, I.N. Aleksandrov, C. Alexa, T. Alexopoulos, A. Alfonsi, F. Alfonsi, M. Alhroob, B. Ali, S. Ali, M. Aliev, G. Alimonti, C. Allaire, B.M.M. Allbrooke, B.W. Allen, P.P. Allport, A. Aloisio, F. Alonso, C. Alpigiani, E. Alunno Camelia, M. Alvarez Estevez, M.G. Alviggi, Y. Amaral Coutinho, A. Ambler, L. Ambroz, C. Amelung, D. Amidei, S.P. Amor Dos Santos, S. Amoroso, C. S. Amrouche, F. An, C. Anastopoulos, N. Andari, T. Andeen, J.K. Anders, S.Y. Andrean, A. Andreazza, V. Andrei, C. R. Anelli, S. Angelidakis, A. Angerami, A.V. Anisenkov, A. Annovi, C. Antel, M.T. Anthony, E. Antipov, M. Antonelli, D.J.A. Antrim, F. Anulli, M. Aoki, J. A. Aparisi Pozo, M.A. Aparo, L. Aperio Bella, N. Aranzabal, V. Araujo Ferraz, R. Araujo Pereira, C. Arcangeletti, A.T.H. Arce, F. A. Arduh, J.-F. Arguin, S. Argyropoulos, J.-H. Arling, A.J. Armbruster, A. Armstrong, O. Arnaez, H. Arnold, Z. P. Arrubarrena Tame, G. Artoni, K. Asai, S. Asai, T. Asawatavonvanich, N. Asbah, E.M. Asimakopoulou, L. Asquith, J. Assahsah, K. Assamagan, R. Astalos, R.J. Atkin, M. Atkinson, N.B. Atlay, H. Atmani, K. Augsten, V.A. Austrup, G. Avolio, M.K. Ayoub, G. Azuelos, H. Bachacou, K. Bachas, M. Backes, F. Backman, P. Bagnaia, M. Bahmani, H. Bahrasemani, A.J. Bailey, V.R. Bailey, J.T. Baines, C. Bakalis, O.K. Baker, P.J. Bakker, E. Bakos, D. Bakshi Gupta, S. Balaji, E.M. Baldin, P. Balek, F. Balli, W.K. Balunas, J. Balz, E. Banas, M. Bandieramonte, A. Bandyopadhyay, Sw. Banerjee, L. Barak, W.M. Barbe, E.L. Barberio, D. Barberis, M. Barbero, G. Barbour, T. Barillari, M.-S. Barisits, J. Barkeloo, T. Barklow, R. Barnea, B.M. Barnett, R.M. Barnett, Z. Barnovska-Blenessy, A. Baroncelli, G. Barone, A.J. Barr, L. Barranco Navarro, F. Barreiro, J. Barreiro Guimarães da Costa, U. Barron, S. Barsov, F. Bartels, R. Bartoldus, G. Bartolini, A.E. Barton, P. Bartos, A. Basalaev, A. Basan, A. Bassalat, M.J. Basso, R.L. Bates, S. Batlamous, J.R. Batley, B. Batool, M. Battaglia, M. Bauce, F. Bauer, K. T. Bauer, P. Bauer, H. S. Bawa, A. Bayirli, J.B. Beacham, T. Beau, P.H. Beauchemin, F. Becherer, P. Bechtle, H. C. Beck, H.P. Beck, K. Becker, C. Becot, A. Beddall, A.J. Beddall, V.A. Bednyakov, M. Bedognetti, C.P. Bee, T.A. Beermann, M. Begalli, M. Begel, A. Behera, J.K. Behr, F. Beisiegel, M. Belfkir, A.S. Bell, G. Bella, L. Bellagamba, A. Bellerive, P. Bellos, K. Beloborodov, K. Belotskiy, N.L. Belyaev, D. Benchekroun, N. Benekos, Y. Benhammou, D.P. Benjamin, M. Benoit, J.R. Bensinger, S. Bentvelsen, L. Beresford, M. Beretta, D. Berge, E. Bergeaas Kuutmann, N. Berger, B. Bergmann, L.J. Bergsten, J. Beringer, S. Berlendis, G. Bernardi, C. Bernius, F.U. Bernlochner, T. Berry, P. Berta, C. Bertella, A. Berthold, I.A. Bertram, O. Bessidskaia Bylund, N. 
Besson, A. Bethani, S. Bethke, A. Betti, A.J. Bevan, J. Beyer, D.S. Bhattacharya, P. Bhattarai, V.S. Bhopatkar, R. Bi, R.M. Bianchi, O. Biebel, D. Biedermann, R. Bielski, K. Bierwagen, N.V. Biesuz, M. Biglietti, T.R.V. Billoud, M. Bindi, A. Bingul, C. Bini, S. Biondi, C.J. Birch-sykes, M. Birman, T. Bisanz, J.P. Biswal, D. Biswas, A. Bitadze, C. Bittrich, K. Bjørke, T. Blazek, I. Bloch, C. Blocker, A. Blue, U. Blumenschein, G.J. Bobbink, V.S. Bobrovnikov, S. S. Bocchetta, D. Bogavac, A.G. Bogdanchikov, C. Bohm, V. Boisvert, P. Bokan, T. Bold, A.E. Bolz, M. Bomben, M. Bona, J.S. Bonilla, M. Boonekamp, C. D. Booth, A.G. Borbély, H.M. Borecka-Bielska, L. S. Borgna, A. Borisov, G. Borissov, D. Bortoletto, D. Boscherini, M. Bosman, J.D. Bossio Sola, K. Bouaouda, J. Boudreau, E.V. Bouhova-Thacker, D. Boumediene, S.K. Boutle, A. Boveia, J. Boyd, D. Boye, I.R. Boyko, A.J. Bozson, J. Bracinik, N. Brahimi, G. Brandt, O. Brandt, F. Braren, B. Brau, J.E. Brau, W. D. Breaden Madden, K. Brendlinger, R. Brener, L. Brenner, R. Brenner, S. Bressler, B. Brickwedde, D.L. Briglin, D. Britton, D. Britzger, I. Brock, R. Brock, G. Brooijmans, W.K. Brooks, E. Brost, P.A. Bruckman de Renstrom, B. Brüers, D. Bruncko, A. Bruni, G. Bruni, L.S. Bruni, S. Bruno, M. Bruschi, N. Bruscino, L. Bryngemark, T. Buanes, Q. Buat, P. Buchholz, A.G. Buckley, I.A. Budagov, M.K. Bugge, F. Bührer, O. Bulekov, B.A. Bullard, T.J. Burch, S. Burdin, C.D. Burgard, A.M. Burger, B. Burghgrave, J.T.P. Burr, C.D. Burton, J. C. Burzynski, V. Büscher, E. Buschmann, P.J. Bussey, J.M. Butler, C.M. Buttar, J.M. Butterworth, P. Butti, W. Buttinger, C. J. Buxo Vazquez, A. Buzatu, A.R. Buzykaev, G. Cabras, S. Cabrera Urbán, D. Caforio, H. Cai, V.M.M. Cairo, O. Cakir, N. Calace, P. Calafiura, G. Calderini, P. Calfayan, G. Callea, L. P. Caloba, A. Caltabiano, S. Calvente Lopez, D. Calvet, S. Calvet, T.P. Calvet, M. Calvetti, R. Camacho Toro, S. Camarda, D. Camarero Munoz, P. Camarri, M.T. Camerlingo, D. Cameron, C. Camincher, S. Campana, M. Campanelli, A. Camplani, V. Canale, A. Canesse, M. Cano Bret, J. Cantero, T. Cao, Y. Cao, M.D.M. Capeans Garrido, M. Capua, R. Cardarelli, F. Cardillo, G. Carducci, I. Carli, T. Carli, G. Carlino, B.T. Carlson, E.M. Carlson, L. Carminati, R.M.D. Carney, S. Caron, E. Carquin, S. Carrá, G. Carratta, J.W.S. Carter, T.M. Carter, M.P. Casado, A. F. Casha, F.L. Castillo, L. Castillo Garcia, V. Castillo Gimenez, N.F. Castro, A. Catinaccio, J. R. Catmore, A. Cattai, V. Cavaliere, V. Cavasinni, E. Celebi, F. Celli, K. Cerny, A.S. Cerqueira, A. Cerri, L. Cerrito, F. Cerutti, A. Cervelli, S.A. Cetin, Z. Chadi, D. Chakraborty, J. Chan, W.S. Chan, W.Y. Chan, J.D. Chapman, B. Chargeishvili, D.G. Charlton, T.P. Charman, C.C. Chau, S. Che, S. Chekanov, S.V. Chekulaev, G.A. Chelkov, B. Chen, C. Chen, C.H. Chen, H. Chen, J. Chen, J. Chen, J. Chen, S. Chen, S.J. Chen, X. Chen, Y. Chen, Y.-H. Chen, H.C. Cheng, H.J. Cheng, A. Cheplakov, E. Cheremushkina, R. Cherkaoui El Moursli, E. Cheu, K. Cheung, T.J.A. Chevalérias, L. Chevalier, V. Chiarella, G. Chiarelli, G. Chiodini, A.S. Chisholm, A. Chitan, I. Chiu, Y.H. Chiu, M.V. Chizhov, K. Choi, A.R. Chomont, Y. S. Chow, L.D. Christopher, M.C. Chu, X. Chu, J. Chudoba, J.J. Chwastowski, L. Chytka, D. Cieri, K.M. Ciesla, D. Cinca, V. Cindro, I.A. Cioară, A. Ciocio, F. Cirotto, Z.H. Citron, M. Citterio, D. A. Ciubotaru, B.M. Ciungu, A. Clark, M.R. Clark, P.J. Clark, S.E. Clawson, C. Clement, Y. Coadou, M. Cobal, A. Coccaro, J. Cochran, R. Coelho Lopes De Sa, H. Cohen, A.E.C. Coimbra, B. 
Cole, A. P. Colijn, J. Collot, P. Conde Muiño, S.H. Connell, I.A. Connelly, S. Constantinescu, F. Conventi, A.M. Cooper-Sarkar, F. Cormier, K. J. R. Cormier, L.D. Corpe, M. Corradi, E.E. Corrigan, F. Corriveau, M.J. Costa, F. Costanza, D. Costanzo, G. Cowan, J.W. Cowley, J. Crane, K. Cranmer, R.A. Creager, S. Crépé-Renaudin, F. Crescioli, M. Cristinziani, V. Croft, G. Crosetti, A. Cueto, T. Cuhadar Donszelmann, H. Cui, A.R. Cukierman, W.R. Cunningham, S. Czekierda, P. Czodrowski, M.M. Czurylo, M.J. Da Cunha Sargedas De Sousa, J.V. Da Fonseca Pinto, C. Da Via, W. Dabrowski, F. Dachs, T. Dado, S. Dahbi, T. Dai, C. Dallapiccola, M. Dam, G. D'amen, V. D'Amico, J. Damp, J.R. Dandoy, M.F. Daneri, M. Danninger, V. Dao, G. Darbo, O. Dartsi, A. Dattagupta, T. Daubney, S. D'Auria, C. David, T. Davidek, D.R. Davis, I. Dawson, K. De, R. De Asmundis, M. De Beurs, S. De Castro, N. De Groot, P. de Jong, H. De la Torre, A. De Maria, D. De Pedis, A. De Salvo, U. De Sanctis, M. De Santis, A. De Santo, J.B. De Vivie De Regie, C. Debenedetti, D. V. Dedovich, A.M. Deiana, J. Del Peso, Y. Delabat Diaz, D. Delgove, F. Deliot, C.M. Delitzsch, M. Della Pietra, D. Della Volpe, A. Dell'Acqua, L. Dell'Asta, M. Delmastro, C. Delporte, P.A. Delsart, D.A. DeMarco, S. Demers, M. Demichev, G. Demontigny, S.P. Denisov, L. D'Eramo, D. Derendarz, J.E. Derkaoui, F. Derue, P. Dervan, K. Desch, K. Dette, C. Deutsch, M. R. Devesa, P.O. Deviveiros, F.A. Di Bello, A. Di Ciaccio, L. Di Ciaccio, W.K. Di Clemente, C. Di Donato, A. Di Girolamo, G. Di Gregorio, B. Di Micco, R. Di Nardo, K.F. Di Petrillo, R. Di Sipio, C. Diaconu, F.A. Dias, T. Dias Do Vale, M. A. Diaz, F.G. Diaz Capriles, J. Dickinson, M. Didenko, E.B. Diehl, J. Dietrich, S. Díez Cornell, C. Diez Pardos, A. Dimitrievska, W. Ding, J. Dingfelder, S.J. Dittmeier, F. Dittus, F. Djama, T. Djobava, J.I. Djuvsland, M.A.B. Do Vale, M. Dobre, D. Dodsworth, C. Doglioni, J. Dolejsi, Z. Dolezal, M. Donadelli, B. Dong, J. Donini, A. D'onofrio, M. D'Onofrio, J. Dopke, A. Doria, M.T. Dova, A.T. Doyle, E. Drechsler, E. Dreyer, T. Dreyer, A.S. Drobac, D. Du, T.A. du Pree, Y. Duan, F. Dubinin, M. Dubovsky, A. Dubreuil, E. Duchovni, G. Duckeck, O.A. Ducu, D. Duda, A. Dudarev, A.C. Dudder, E. M. Duffield, M. D'uffizi, L. Duflot, M. Dührssen, C. Dülsen, M. Dumancic, A.E. Dumitriu, M. Dunford, A. Duperrin, H. Duran Yildiz, M. Düren, A. Durglishvili, D. Duschinger, B. Dutta, D. Duvnjak, G.I. Dyckes, M. Dyndal, S. Dysch, B.S. Dziedzic, M. G. Eggleston, T. Eifert, G. Eigen, K. Einsweiler, T. Ekelof, H. El Jarrari, V. Ellajosyula, M. Ellert, F. Ellinghaus, A.A. Elliot, N. Ellis, J. Elmsheuser, M. Elsing, D. Emeliyanov, A. Emerman, Y. Enari, M.B. Epland, J. Erdmann, A. Ereditato, P.A. Erland, M. Errenst, M. Escalier, C. Escobar, O. Estrada Pastor, E. Etzion, H. Evans, M.O. Evans, A. Ezhilov, F. Fabbri, L. Fabbri, V. Fabiani, G. Facini, R. M. Fakhrutdinov, S. Falciano, P.J. Falke, S. Falke, J. Faltova, Y. Fang, Y. Fang, G. Fanourakis, M. Fanti, M. Faraj, A. Farbin, A. Farilla, E.M. Farina, T. Farooque, S.M. Farrington, P. Farthouat, F. Fassi, P. Fassnacht, D. Fassouliotis, M. Faucci Giannelli, W.J. Fawcett, L. Fayard, O.L. Fedin, W. Fedorko, A. Fehr, M. Feickert, L. Feligioni, A. Fell, C. Feng, M. Feng, M.J. Fenton, A. B. Fenyuk, S.W. Ferguson, J. Ferrando, A. Ferrante, A. Ferrari, P. Ferrari, R. Ferrari, D.E. Ferreira de Lima, A. Ferrer, D. Ferrere, C. Ferretti, F. Fiedler, A. Filipčič, F. Filthaut, K.D. Finelli, M.C.N. Fiolhais, L. Fiorini, F. Fischer, J. Fischer, W.C. Fisher, T. Fitschen, I. 
Fleck, P. Fleischmann, T. Flick, B.M. Flierl, L. Flores, L.R. Flores Castillo, F.M. Follega, N. Fomin, J.H. Foo, G.T. Forcolin, B. C. Forland, A. Formica, F.A. Förster, A.C. Forti, E. Fortin, M.G. Foti, D. Fournier, H. Fox, P. Francavilla, S. Francescato, M. Franchini, S. Franchino, D. Francis, L. Franco, L. Franconi, M. Franklin, G. Frattari, A.N. Fray, P. M. Freeman, B. Freund, W.S. Freund, E.M. Freundlich, D.C. Frizzell, D. Froidevaux, J.A. Frost, M. Fujimoto, C. Fukunaga, E. Fullana Torregrosa, T. Fusayasu, J. Fuster, A. Gabrielli, A. Gabrielli, S. Gadatsch, P. Gadow, G. Gagliardi, L.G. Gagnon, G.E. Gallardo, E.J. Gallas, B.J. Gallop, R. Gamboa Goni, K.K. Gan, S. Ganguly, J. Gao, Y. Gao, Y.S. Gao, F.M. Garay Walls, C. García, J.E. García Navarro, J.A. García Pascual, C. Garcia-Argos, M. Garcia-Sciveres, R.W. Gardner, N. Garelli, S. Gargiulo, C. A. Garner, V. Garonne, S.J. Gasiorowski, P. Gaspar, A. Gaudiello, G. Gaudio, I.L. Gavrilenko, A. Gavrilyuk, C. Gay, G. Gaycken, E.N. Gazis, A.A. Geanta, C.M. Gee, C.N.P. Gee, J. Geisen, M. Geisen, C. Gemme, M.H. Genest, C. Geng, S. Gentile, S. George, T. Geralis, L. O. Gerlach, P. Gessinger-Befurt, G. Gessner, S. Ghasemi, M. Ghasemi Bostanabad, M. Ghneimat, A. Ghosh, A. Ghosh, B. Giacobbe, S. Giagu, N. Giangiacomi, P. Giannetti, A. Giannini, G. Giannini, S.M. Gibson, M. Gignac, D.T. Gil, B.J. Gilbert, D. Gillberg, G. Gilles, D.M. Gingrich, M.P. Giordani, P.F. Giraud, G. Giugliarelli, D. Giugni, F. Giuli, S. Gkaitatzis, I. Gkialas, E.L. Gkougkousis, P. Gkountoumis, L.K. Gladilin, C. Glasman, J. Glatzer, P.C.F. Glaysher, A. Glazov, G.R. Gledhill, I. Gnesi, M. Goblirsch-Kolb, D. Godin, S. Goldfarb, T. Golling, D. Golubkov, A. Gomes, R. Goncalves Gama, R. Gonçalo, G. Gonella, L. Gonella, A. Gongadze, F. Gonnella, J.L. Gonski, S. González de la Hoz, S. Gonzalez Fernandez, R. Gonzalez Lopez, C. Gonzalez Renteria, R. Gonzalez Suarez, S. Gonzalez-Sevilla, G.R. Gonzalvo Rodriguez, L. Goossens, N. A. Gorasia, P. A. Gorbounov, H.A. Gordon, B. Gorini, E. Gorini, A. Gorišek, A.T. Goshaw, M.I. Gostkin, C.A. Gottardo, M. Gouighri, A.G. Goussiou, N. Govender, C. Goy, I. Grabowska-Bold, E.C. Graham, J. Gramling, E. Gramstad, S. Grancagnolo, M. Grandi, V. Gratchev, P.M. Gravila, F.G. Gravili, C. Gray, H.M. Gray, C. Grefe, K. Gregersen, I.M. Gregor, P. Grenier, K. Grevtsov, C. Grieco, N. A. Grieser, A. A. Grillo, K. Grimm, S. Grinstein, J.-F. Grivaz, S. Groh, E. Gross, J. Grosse-Knetter, Z.J. Grout, C. Grud, A. Grummer, J.C. Grundy, L. Guan, W. Guan, C. Gubbels, J. Guenther, A. Guerguichon, J.G.R. Guerrero Rojas, F. Guescini, D. Guest, R. Gugel, A. Guida, T. Guillemin, S. Guindon, U. Gul, J. Guo, W. Guo, Y. Guo, Z. Guo, R. Gupta, S. Gurbuz, G. Gustavino, M. Guth, P. Gutierrez, C. Gutschow, C. Guyot, C. Gwenlan, C.B. Gwilliam, E.S. Haaland, A. Haas, C. Haber, H.K. Hadavand, A. Hadef, M. Haleem, J. Haley, J.J. Hall, G. Halladjian, G.D. Hallewell, K. Hamano, H. Hamdaoui, M. Hamer, G.N. Hamity, K. Han, L. Han, S. Han, Y.F. Han, K. Hanagaki, M. Hance, D.M. Handl, M.D. Hank, R. Hankache, E. Hansen, J.B. Hansen, J.D. Hansen, M.C. Hansen, P.H. Hansen, E.C. Hanson, K. Hara, T. Harenberg, S. Harkusha, P. F. Harrison, N.M. Hartman, N.M. Hartmann, Y. Hasegawa, A. Hasib, S. Hassani, S. Haug, R. Hauser, L.B. Havener, M. Havranek, C.M. Hawkes, R.J. Hawkings, S. Hayashida, D. Hayden, C. Hayes, R.L. Hayes, C.P. Hays, J.M. Hays, H.S. Hayward, S.J. Haywood, F. He, Y. He, M.P. Heath, V. Hedberg, S. Heer, A.L. Heggelund, C. Heidegger, K.K. Heidegger, W.D. Heidorn, J. Heilman, S. 
Heim, T. Heim, B. Heinemann, J.G. Heinlein, J.J. Heinrich, L. Heinrich, J. Hejbal, L. Helary, A. Held, S. Hellesund, C.M. Helling, S. Hellman, C. Helsens, R. C. W. Henderson, Y. Heng, L. Henkelmann, A. M. Henriques Correia, H. Herde, Y. Hernández Jiménez, H. Herr, M.G. Herrmann, T. Herrmann, G. Herten, R. Hertenberger, L. Hervas, T.C. Herwig, G.G. Hesketh, N.P. Hessey, H. Hibi, A. Higashida, S. Higashino, E. Higón-Rodriguez, K. Hildebrand, J.C. Hill, K.K. Hill, K. H. Hiller, S.J. Hillier, M. Hils, I. Hinchliffe, F. Hinterkeuser, M. Hirose, S. Hirose, D. Hirschbuehl, B. Hiti, O. Hladik, D.R. Hlaluku, J. Hobbs, N. Hod, M.C. Hodgkinson, A. Hoecker, D. Hohn, D. Hohov, T. Holm, T.R. Holmes, M. Holzbock, L.B.A.H. Hommels, T.M. Hong, J.C. Honig, A. Hönle, B.H. Hooberman, W.H. Hopkins, Y. Horii, P. Horn, L.A. Horyn, S. Hou, A. Hoummada, J. Howarth, J. Hoya, M. Hrabovsky, J. Hrdinka, J. Hrivnac, A. Hrynevich, T. Hryn'ova, P.J. Hsu, S.-C. Hsu, Q. Hu, S. Hu, Y.F. Hu, D.P. Huang, Y. Huang, Y. Huang, Z. Hubacek, F. Hubaut, M. Huebner, F. Huegging, T.B. Huffman, M. Huhtinen, R. Hulsken, R.F.H. Hunter, P. Huo, N. Huseynov, J. Huston, J. Huth, R. Hyneman, S. Hyrych, G. Iacobucci, G. Iakovidis, I. Ibragimov, L. Iconomidou-Fayard, P. Iengo, R. Ignazzi, O. Igonkina, R. Iguchi, T. Iizawa, Y. Ikegami, M. Ikeno, N. Ilic, F. Iltzsche, H. Imam, G. Introzzi, M. Iodice, K. Iordanidou, V. Ippolito, M.F. Isacson, M. Ishino, W. Islam, C. Issever, S. Istin, F. Ito, J.M. Iturbe Ponce, R. Iuppa, A. Ivina, H. Iwasaki, J.M. Izen, V. Izzo, P. Jacka, P. Jackson, R.M. Jacobs, B.P. Jaeger, V. Jain, G. Jäkel, K. B. Jakobi, K. Jakobs, T. Jakoubek, J. Jamieson, K.W. Janas, R. Jansky, M. Janus, P.A. Janus, G. Jarlskog, A.E. Jaspan, N. Javadov, T. Javůrek, M. Javurkova, F. Jeanneau, L. Jeanty, J. Jejelava, P. Jenni, N. Jeong, S. Jézéquel, H. Ji, J. Jia, H. Jiang, Y. Jiang, Z. Jiang, S. Jiggins, F. A. Jimenez Morales, J. Jimenez Pena, S. Jin, A. Jinaru, O. Jinnouchi, H. Jivan, P. Johansson, K.A. Johns, C.A. Johnson, R.W.L. Jones, S.D. Jones, T.J. Jones, J. Jongmanns, J. Jovicevic, X. Ju, J.J. Junggeburth, A. Juste Rozas, A. Kaczmarska, M. Kado, H. Kagan, M. Kagan, A. Kahn, C. Kahra, T. Kaji, E. Kajomovitz, C.W. Kalderon, A. Kaluza, A. Kamenshchikov, M. Kaneda, N.J. Kang, S. Kang, Y. Kano, J. Kanzaki, L.S. Kaplan, D. Kar, K. Karava, M.J. Kareem, I. Karkanias, S.N. Karpov, Z.M. Karpova, V. Kartvelishvili, A.N. Karyukhin, E. Kasimi, A. Kastanas, C. Kato, J. Katzy, K. Kawade, K. Kawagoe, T. Kawaguchi, T. Kawamoto, G. Kawamura, E.F. Kay, S. Kazakos, V. F. Kazanin, R. Keeler, R. Kehoe, J.S. Keller, E. Kellermann, D. Kelsey, J.J. Kempster, J. Kendrick, K.E. Kennedy, O. Kepka, S. Kersten, B.P. Kerševan, S. Ketabchi Haghighat, M. Khader, F. Khalil-Zada, M. Khandoga, A. Khanov, A.G. Kharlamov, T. Kharlamova, E.E. Khoda, A. Khodinov, T.J. Khoo, G. Khoriauli, E. Khramov, J. Khubua, S. Kido, M. Kiehn, C.R. Kilby, E. Kim, Y.K. Kim, N. Kimura, A. Kirchhoff, D. Kirchmeier, J. Kirk, A.E. Kiryunin, T. Kishimoto, D. P. Kisliuk, V. Kitali, C. Kitsaki, O. Kivernyk, T. Klapdor-Kleingrothaus, M. Klassen, C. Klein, M.H. Klein, M. Klein, U. Klein, K. Kleinknecht, P. Klimek, A. Klimentov, T. Klingl, T. Klioutchnikova, F.F. Klitzner, P. Kluit, S. Kluth, E. Kneringer, E.B.F.G. Knoops, A. Knue, D. Kobayashi, M. Kobel, M. Kocian, T. Kodama, P. Kodys, D.M. Koeck, P.T. Koenig, T. Koffas, N.M. Köhler, M. Kolb, I. Koletsou, T. Komarek, T. Kondo, K. Köneke, A.X.Y. Kong, A.C. König, T. Kono, V. Konstantinides, N. Konstantinidis, B. Konya, R. Kopeliansky, S. 
Koperny, K. Korcyl, K. Kordas, G. Koren, A. Korn, I. Korolkov, E. V. Korolkova, N. Korotkova, O. Kortner, S. Kortner, V.V. Kostyukhin, A. Kotsokechagia, A. Kotwal, A. Koulouris, A. Kourkoumeli-Charalampidi, C. Kourkoumelis, E. Kourlitis, V. Kouskoura, R. Kowalewski, W. Kozanecki, A.S. Kozhin, V.A. Kramarenko, G. Kramberger, D. Krasnopevtsev, M.W. Krasny, A. Krasznahorkay, D. Krauss, J.A. Kremer, J. Kretzschmar, P. Krieger, F. Krieter, A. Krishnan, M. Krivos, K. Krizka, K. Kroeninger, H. Kroha, J. Kroll, J. Kroll, K.S. Krowpman, U. Kruchonak, H. Krüger, N. Krumnack, M.C. Kruse, J.A. Krzysiak, A. Kubota, O. Kuchinskaia, S. Kuday, D. Kuechler, J.T. Kuechler, S. Kuehn, T. Kuhl, V. Kukhtin, Y. Kulchitsky, S. Kuleshov, Y. P. Kulinich, M. Kuna, T. Kunigo, A. Kupco, T. Kupfer, O. Kuprash, H. Kurashige, L.L. Kurchaninov, Y.A. Kurochkin, A. Kurova, M. G. Kurth, E.S. Kuwertz, M. Kuze, A.K. Kvam, J. Kvita, T. Kwan, F. La Ruffa, C. Lacasta, F. Lacava, D.P.J. Lack, H. Lacker, D. Lacour, E. Ladygin, R. Lafaye, B. Laforge, T. Lagouri, S. Lai, I.K. Lakomiec, J.E. Lambert, S. Lammers, W. Lampl, C. Lampoudis, E. Lançon, U. Landgraf, M.P.J. Landon, M.C. Lanfermann, V.S. Lang, J.C. Lange, R.J. Langenberg, A.J. Lankford, F. Lanni, K. Lantzsch, A. Lanza, A. Lapertosa, J.F. Laporte, T. Lari, F. Lasagni Manghi, M. Lassnig, T.S. Lau, A. Laudrain, A. Laurier, M. Lavorgna, S.D. Lawlor, M. Lazzaroni, B. Le, E. Le Guirriec, A. Lebedev, M. LeBlanc, T. LeCompte, F. Ledroit-Guillon, A. C. A. Lee, C.A. Lee, G.R. Lee, L. Lee, S.C. Lee, S. Lee, B. Lefebvre, H.P. Lefebvre, M. Lefebvre, C. Leggett, K. Lehmann, N. Lehmann, G. Lehmann Miotto, W.A. Leight, A. Leisos, M.A.L. Leite, C.E. Leitgeb, R. Leitner, D. Lellouch, K.J.C. Leney, T. Lenz, S. Leone, C. Leonidopoulos, A. Leopold, C. Leroy, R. Les, C.G. Lester, M. Levchenko, J. Levêque, D. Levin, L.J. Levinson, D.J. Lewis, B. Li, B. Li, C.-Q. Li, F. Li, H. Li, H. Li, J. Li, K. Li, L. Li, M. Li, Q. Li, Q.Y. Li, S. Li, X. Li, Y. Li, Z. Li, Z. Li, Z. Li, Z. Liang, M. Liberatore, B. Liberti, A. Liblong, K. Lie, S. Lim, C.Y. Lin, K. Lin, R.A. Linck, R. E. Lindley, J. H. Lindon, A. Linss, A.L. Lionti, E. Lipeles, A. Lipniacka, T.M. Liss, A. Lister, J.D. Little, B. Liu, B.X. Liu, H. B. Liu, J.B. Liu, J.K.K. Liu, K. Liu, M. Liu, P. Liu, X. Liu, Y. Liu, Y. Liu, Y.L. Liu, Y.W. Liu, M. Livan, A. Lleres, J. Llorente Merino, S.L. Lloyd, C.Y. Lo, E.M. Lobodzinska, P. Loch, S. Loffredo, T. Lohse, K. Lohwasser, M. Lokajicek, J.D. Long, R.E. Long, I. Longarini, L. Longo, K.A. Looper, I. Lopez Paz, A. Lopez Solis, J. Lorenz, N. Lorenzo Martinez, A.M. Lory, P. J. Lösel, A. Lösle, X. Lou, X. Lou, A. Lounis, J. Love, P.A. Love, J.J. Lozano Bahilo, M. Lu, Y.J. Lu, H.J. Lubatti, C. Luci, F.L. Lucio Alves, A. Lucotte, F. Luehring, I. Luise, L. Luminari, B. Lund-Jensen, M.S. Lutz, D. Lynn, H. Lyons, R. Lysak, E. Lytken, F. Lyu, V. Lyubushkin, T. Lyubushkina, H. Ma, L.L. Ma, Y. Ma, D.M. Mac Donell, G. Maccarrone, A. Macchiolo, C.M. Macdonald, J.C. MacDonald, J. Machado Miguens, D. Madaffari, R. Madar, W.F. Mader, M. Madugoda Ralalage Don, N. Madysa, J. Maeda, T. Maeno, M. Maerker, V. Magerl, N. Magini, J. Magro, D.J. Mahon, C. Maidantchik, T. Maier, A. Maio, K. Maj, O. Majersky, S. Majewski, Y. Makida, N. Makovec, B. Malaescu, Pa. Malecki, V.P. Maleev, F. Malek, D. Malito, U. Mallik, D. Malon, C. Malone, S. Maltezos, S. Malyukov, J. Mamuzic, G. Mancini, I. Mandić, L. Manhaes de Andrade Filho, I.M. Maniatis, J. Manjarres Ramos, K.H. Mankinen, A. Mann, A. Manousos, B. Mansoulie, I. Manthos, S. Manzoni, A. 
Marantis, G. Marceca, L. Marchese, G. Marchiori, M. Marcisovsky, L. Marcoccia, C. Marcon, C.A. Marin Tobon, M. Marjanovic, Z. Marshall, M.U.F. Martensson, S. Marti-Garcia, C.B. Martin, T.A. Martin, V.J. Martin, B. Martin dit Latour, L. Martinelli, M. Martinez, P. Martinez Agullo, V.I. Martinez Outschoorn, S. Martin-Haugh, V.S. Martoiu, A.C. Martyniuk, A. Marzin, S.R. Maschek, L. Masetti, T. Mashimo, R. Mashinistov, J. Masik, A.L. Maslennikov, L. Massa, P. Massarotti, P. Mastrandrea, A. Mastroberardino, T. Masubuchi, D. Matakias, A. Matic, N. Matsuzawa, P. Mättig, J. Maurer, B. Maček, D.A. Maximov, R. Mazini, I. Maznas, S.M. Mazza, J.P. Mc Gowan, S.P. Mc Kee, T.G. McCarthy, W.P. McCormack, E.F. McDonald, J.A. Mcfayden, G. Mchedlidze, M. A. McKay, K.D. McLean, S. J. McMahon, P.C. McNamara, C.J. McNicol, R.A. McPherson, J.E. Mdhluli, Z.A. Meadows, S. Meehan, T. Megy, S. Mehlhase, A. Mehta, B. Meirose, D. Melini, B.R. Mellado Garcia, J.D. Mellenthin, M. Melo, F. Meloni, A. Melzer, E.D. Mendes Gouveia, L. Meng, X.T. Meng, S. Menke, E. Meoni, S. Mergelmeyer, S. A. M. Merkt, C. Merlassino, P. Mermod, L. Merola, C. Meroni, G. Merz, O. Meshkov, J.K.R. Meshreki, J. Metcalfe, A.S. Mete, C. Meyer, J.-P. Meyer, M. Michetti, R.P. Middleton, L. Mijović, G. Mikenberg, M. Mikestikova, M. Mikuž, H. Mildner, A. Milic, C.D. Milke, D.W. Miller, A. Milov, D. A. Milstead, R.A. Mina, A.A. Minaenko, I.A. Minashvili, A.I. Mincer, B. Mindur, M. Mineev, Y. Minegishi, L.M. Mir, M. Mironova, A. Mirto, K.P. Mistry, T. Mitani, J. Mitrevski, V.A. Mitsou, M. Mittal, O. Miu, A. Miucci, P.S. Miyagawa, A. Mizukami, J. U. Mjörnmark, T. Mkrtchyan, M. Mlynarikova, T. Moa, S. Mobius, K. Mochizuki, P. Mogg, S. Mohapatra, R. Moles-Valls, K. Mönig, E. Monnier, A. Montalbano, J. Montejo Berlingen, M. Montella, F. Monticelli, S. Monzani, N. Morange, A.L. Moreira De Carvalho, D. Moreno, M. Moreno Llácer, C. Moreno Martinez, P. Morettini, M. Morgenstern, S. Morgenstern, D. Mori, M. Morii, M. Morinaga, V. Morisbak, A.K. Morley, G. Mornacchi, A.P. Morris, L. Morvaj, P. Moschovakos, B. Moser, M. Mosidze, T. Moskalets, J. Moss, E.J.W. Moyse, S. Muanza, J. Mueller, R. S. P. Mueller, D. Muenstermann, G.A. Mullier, D.P. Mungo, J.L. Munoz Martinez, F.J. Munoz Sanchez, P. Murin, W.J. Murray, A. Murrone, J.M. Muse, M. Muškinja, C. Mwewa, A.G. Myagkov, A. A. Myers, G. Myers, J. Myers, M. Myska, B.P. Nachman, O. Nackenhorst, A.Nag Nag, K. Nagai, K. Nagano, Y. Nagasaka, J.L. Nagle, E. Nagy, A.M. Nairz, Y. Nakahama, K. Nakamura, T. Nakamura, H. Nanjo, F. Napolitano, R.F. Naranjo Garcia, R. Narayan, I. Naryshkin, T. Naumann, G. Navarro, P.Y. Nechaeva, F. Nechansky, T.J. Neep, A. Negri, M. Negrini, C. Nellist, C. Nelson, M.E. Nelson, S. Nemecek, M. Nessi, M.S. Neubauer, F. Neuhaus, M. Neumann, R. Newhouse, P.R. Newman, C.W. Ng, Y. S. Ng, Y.W.Y. Ng, B. Ngair, H.D.N. Nguyen, T. Nguyen Manh, E. Nibigira, R.B. Nickerson, R. Nicolaidou, D.S. Nielsen, J. Nielsen, M. Niemeyer, N. Nikiforou, V. Nikolaenko, I. Nikolic-Audit, K. Nikolopoulos, P. Nilsson, H.R. Nindhito, Y. Ninomiya, A. Nisati, N. Nishu, R. Nisius, I. Nitsche, T. Nitta, T. Nobe, D.L. Noel, Y. Noguchi, I. Nomidis, M. A. Nomura, M. Nordberg, J. Novak, T. Novak, O. Novgorodova, R. Novotny, L. Nozka, K. Ntekas, E. Nurse, F.G. Oakham, H. Oberlack, J. Ocariz, A. Ochi, I. Ochoa, J.P. Ochoa-Ricoux, K. O'Connor, S. Oda, S. Odaka, S. Oerdek, A. Ogrodnik, A. Oh, C.C. Ohm, H. Oide, M.L. Ojeda, H. Okawa, Y. Okazaki, M. W. O'Keefe, Y. Okumura, T. Okuyama, A. Olariu, L.F. Oleiro Seabra, S. A. Olivares Pino, D. 
Oliveira Damazio, J.L. Oliver, M.J.R. Olsson, A. Olszewski, J. Olszowska, Ö.O. Öncel, D.C. O'Neil, A.P. O'neill, A. Onofre, P.U.E. Onyisi, H. Oppen, R. G. Oreamuno Madriz, M.J. Oreglia, G.E. Orellana, D. Orestano, N. Orlando, R.S. Orr, V. O'Shea, R. Ospanov, G. Otero y Garzon, H. Otono, P.S. Ott, G. J. Ottino, M. Ouchrif, J. Ouellette, F. Ould-Saada, A. Ouraou, Q. Ouyang, M. Owen, R.E. Owen, V.E. Ozcan, N. Ozturk, J. Pacalt, H.A. Pacey, K. Pachal, A. Pacheco Pages, C. Padilla Aranda, S. Pagan Griso, G. Palacino, S. Palazzo, S. Palestini, M. Palka, P. Palni, C.E. Pandini, J.G. Panduro Vazquez, P. Pani, G. Panizzo, L. Paolozzi, C. Papadatos, K. Papageorgiou, S. Parajuli, A. Paramonov, C. Paraskevopoulos, D. Paredes Hernandez, S.R. Paredes Saenz, B. Parida, T.H. Park, A.J. Parker, M.A. Parker, F. Parodi, E.W. Parrish, J.A. Parsons, U. Parzefall, L. Pascual Dominguez, V.R. Pascuzzi, J.M.P. Pasner, F. Pasquali, E. Pasqualucci, S. Passaggio, F. Pastore, P. Pasuwan, S. Pataraia, J.R. Pater, A. Pathak, J. Patton, T. Pauly, J. Pearkes, B. Pearson, M. Pedersen, L. Pedraza Diaz, R. Pedro, T. Peiffer, S.V. Peleganchuk, O. Penc, H. Peng, B.S. Peralva, M.M. Perego, A.P. Pereira Peixoto, L. Pereira Sanchez, D.V. Perepelitsa, E. Perez Codina, F. Peri, L. Perini, H. Pernegger, S. Perrella, A. Perrevoort, K. Peters, R.F.Y. Peters, B.A. Petersen, T.C. Petersen, E. Petit, V. Petousis, A. Petridis, C. Petridou, F. Petrucci, M. Pettee, N.E. Pettersson, K. Petukhova, A. Peyaud, R. Pezoa, L. Pezzotti, T. Pham, F.H. Phillips, P.W. Phillips, M.W. Phipps, G. Piacquadio, E. Pianori, A. Picazio, R. H. Pickles, R. Piegaia, D. Pietreanu, J.E. Pilcher, A.D. Pilkington, M. Pinamonti, J.L. Pinfold, C. Pitman Donaldson, M. Pitt, L. Pizzimento, A. Pizzini, M.-A. Pleier, V. Plesanovs, V. Pleskot, E. Plotnikova, P. Podberezko, R. Poettgen, R. Poggi, L. Poggioli, I. Pogrebnyak, D. Pohl, I. Pokharel, G. Polesello, A. Poley, A. Policicchio, R. Polifka, A. Polini, C.S. Pollard, V. Polychronakos, D. Ponomarenko, L. Pontecorvo, S. Popa, G.A. Popeneciu, L. Portales, D.M. Portillo Quintero, S. Pospisil, K. Potamianos, I.N. Potrap, C.J. Potter, H. Potti, T. Poulsen, J. Poveda, T.D. Powell, G. Pownall, M.E. Pozo Astigarraga, P. Pralavorio, S. Prell, D. Price, M. Primavera, M.L. Proffitt, N. Proklova, K. Prokofiev, F. Prokoshin, S. Protopopescu, J. Proudfoot, M. Przybycien, D. Pudzha, A. Puri, P. Puzo, D. Pyatiizbyantseva, J. Qian, Y. Qin, A. Quadt, M. Queitsch-Maitland, M. Racko, F. Ragusa, G. Rahal, J.A. Raine, S. Rajagopalan, A. Ramirez Morales, K. Ran, D.M. Rauch, F. Rauscher, S. Rave, B. Ravina, I. Ravinovich, J.H. Rawling, M. Raymond, A.L. Read, N.P. Readioff, M. Reale, D.M. Rebuzzi, G. Redlinger, K. Reeves, J. Reichert, D. Reikher, A. Reiss, A. Rej, C. Rembser, A. Renardi, M. Renda, M. B. Rendel, A.G. Rennie, S. Resconi, E.D. Resseguie, S. Rettie, B. Reynolds, E. Reynolds, O.L. Rezanova, P. Reznicek, E. Ricci, R. Richter, S. Richter, E. Richter-Was, M. Ridel, P. Rieck, O. Rifki, M. Rijssenbeek, A. Rimoldi, M. Rimoldi, L. Rinaldi, T.T. Rinn, G. Ripellino, I. Riu, P. Rivadeneira, J.C. Rivera Vergara, F. Rizatdinova, E. Rizvi, C. Rizzi, S.H. Robertson, M. Robin, D. Robinson, C. M. Robles Gajardo, M. Robles Manzano, A. Robson, A. Rocchi, E. Rocco, C. Roda, S. Rodriguez Bosca, A.M. Rodríguez Vera, S. Roe, J. Roggel, O. Røhne, R. Röhrig, R.A. Rojas, B. Roland, C.P.A. Roland, J. Roloff, A. Romaniouk, M. Romano, N. Rompotis, M. Ronzani, L. Roos, S. Rosati, G. Rosin, B.J. Rosser, E. Rossi, E. Rossi, E. Rossi, L.P. Rossi, L. Rossini, R. 
Rosten, M. Rotaru, B. Rottler, D. Rousseau, G. Rovelli, A. Roy, D. Roy, A. Rozanov, Y. Rozen, X. Ruan, T.A. Ruggeri, F. Rühr, A. Ruiz-Martinez, A. Rummler, Z. Rurikova, N.A. Rusakovich, H.L. Russell, L. Rustige, J.P. Rutherfoord, E.M. Rüttinger, M. Rybar, G. Rybkin, E.B. Rye, A. Ryzhov, J.A. Sabater Iglesias, P. Sabatini, L. Sabetta, S. Sacerdoti, H.F-W. Sadrozinski, R. Sadykov, F. Safai Tehrani, B. Safarzadeh Samani, M. Safdari, P. Saha, S. Saha, M. Sahinsoy, A. Sahu, M. Saimpert, M. Saito, T. Saito, H. Sakamoto, D. Salamani, G. Salamanna, A. Salnikov, J. Salt, A. Salvador Salas, D. Salvatore, F. Salvatore, A. Salvucci, A. Salzburger, J. Samarati, D. Sammel, D. Sampsonidis, D. Sampsonidou, J. Sánchez, A. Sanchez Pineda, H. Sandaker, C.O. Sander, I.G. Sanderswood, M. Sandhoff, C. Sandoval, D.P.C. Sankey, M. Sannino, Y. Sano, A. Sansoni, C. Santoni, H. Santos, S.N. Santpur, A. Santra, K.A. Saoucha, A. Sapronov, J.G. Saraiva, O. Sasaki, K. Sato, F. Sauerburger, E. Sauvan, P. Savard, R. Sawada, C. Sawyer, L. Sawyer, I. Sayago Galvan, C. Sbarra, A. Sbrizzi, T. Scanlon, J. Schaarschmidt, P. Schacht, D. Schaefer, L. Schaefer, S. Schaepe, U. Schäfer, A.C. Schaffer, D. Schaile, R.D. Schamberger, E. Schanet, C. Scharf, N. Scharmberg, V.A. Schegelsky, D. Scheirich, F. Schenck, M. Schernau, C. Schiavi, L.K. Schildgen, Z.M. Schillaci, E.J. Schioppa, M. Schioppa, K.E. Schleicher, S. Schlenker, K.R. Schmidt-Sommerfeld, K. Schmieden, C. Schmitt, S. Schmitt, J.C. Schmoeckel, L. Schoeffel, A. Schoening, P.G. Scholer, E. Schopf, M. Schott, J.F.P. Schouwenberg, J. Schovancova, S. Schramm, F. Schroeder, A. Schulte, H.-C. Schultz-Coulon, M. Schumacher, B.A. Schumm, Ph. Schune, A. Schwartzman, T.A. Schwarz, Ph. Schwemling, R. Schwienhorst, A. Sciandra, G. Sciolla, M. Scornajenghi, F. Scuri, F. Scutti, L.M. Scyboz, C.D. Sebastiani, P. Seema, S.C. Seidel, A. Seiden, B.D. Seidlitz, T. Seiss, C. Seitz, J.M. Seixas, G. Sekhniaidze, S.J. Sekula, N. Semprini-Cesari, S. Sen, C. Serfon, L. Serin, L. Serkin, M. Sessa, H. Severini, S. Sevova, F. Sforza, A. Sfyrla, E. Shabalina, J.D. Shahinian, N.W. Shaikh, D. Shaked Renous, L.Y. Shan, M. Shapiro, A. Sharma, A.S. Sharma, P.B. Shatalov, K. Shaw, S.M. Shaw, M. Shehade, Y. Shen, A. D. Sherman, P. Sherwood, L. Shi, S. Shimizu, C.O. Shimmin, Y. Shimogama, M. Shimojima, I.P.J. Shipsey, S. Shirabe, M. Shiyakova, J. Shlomi, A. Shmeleva, M.J. Shochet, J. Shojaii, D. R. Shope, S. Shrestha, E.M. Shrif, E. Shulga, P. Sicho, A.M. Sickles, E. Sideras Haddad, O. Sidiropoulou, A. Sidoti, F. Siegert, Dj. Sijacki, M.Jr. Silva, M.V. Silva Oliveira, S.B. Silverstein, S. Simion, R. Simoniello, C. J. Simpson-allsop, S. Simsek, P. Sinervo, V. Sinetckii, S. Singh, M. Sioli, I. Siral, S.Yu. Sivoklokov, J. Sjölin, A. Skaf, E. Skorda, P. Skubic, M. Slawinska, K. Sliwa, R. Slovak, V. Smakhtin, B.H. Smart, J. Smiesko, N. Smirnov, S.Yu. Smirnov, Y. Smirnov, L.N. Smirnova, O. Smirnova, E.A. Smith, H.A. Smith, M. Smizanska, K. Smolek, A. Smykiewicz, A.A. Snesarev, H.L. Snoek, I.M. Snyder, S. Snyder, R. Sobie, A. Soffer, A. Søgaard, F. Sohns, C.A. Solans Sanchez, E.Yu. Soldatov, U. Soldevila, A.A. Solodkov, A. Soloshenko, O.V. Solovyanov, V. Solovyev, P. Sommer, H. Son, W. Song, W.Y. Song, A. Sopczak, A. L. Sopio, F. Sopkova, S. Sottocornola, R. Soualah, A.M. Soukharev, D. South, S. Spagnolo, M. Spalla, M. Spangenberg, F. Spanò, D. Sperlich, T.M. Spieker, G. Spigo, M. Spina, D.P. Spiteri, M. Spousta, A. Stabile, B.L. Stamas, R. Stamen, M. Stamenkovic, E. Stanecka, B. Stanislaus, M.M. Stanitzki, M. 
Stankaityte, B. Stapf, E.A. Starchenko, G.H. Stark, J. Stark, P. Staroba, P. Starovoitov, S. Stärz, R. Staszewski, G. Stavropoulos, M. Stegler, P. Steinberg, A.L. Steinhebel, B. Stelzer, H.J. Stelzer, O. Stelzer-Chilton, H. Stenzel, T.J. Stevenson, G.A. Stewart, M.C. Stockton, G. Stoicea, M. Stolarski, S. Stonjek, A. Straessner, J. Strandberg, S. Strandberg, M. Strauss, T. Strebler, P. Strizenec, R. Ströhmer, D.M. Strom, R. Stroynowski, A. Strubig, S.A. Stucci, B. Stugu, J. Stupak, N.A. Styles, D. Su, W. Su, X. Su, V.V. Sulin, M.J. Sullivan, D.M.S. Sultan, S. Sultansoy, T. Sumida, S. Sun, X. Sun, K. Suruliz, C.J.E. Suster, M.R. Sutton, S. Suzuki, M. Svatos, M. Swiatlowski, S. P. Swift, T. Swirski, A. Sydorenko, I. Sykora, M. Sykora, T. Sykora, D. Ta, K. Tackmann, J. Taenzer, A. Taffard, R. Tafirout, E. Tagiev, R. Takashima, K. Takeda, T. Takeshita, E.P. Takeva, Y. Takubo, M. Talby, A. A. Talyshev, K. C. Tam, N. M. Tamir, J. Tanaka, R. Tanaka, S. Tapia Araya, S. Tapprogge, A. Tarek Abouelfadl Mohamed, S. Tarem, K. Tariq, G. Tarna, G.F. Tartarelli, P. Tas, M. Tasevsky, T. Tashiro, E. Tassi, A. Tavares Delgado, Y. Tayalati, A.J. Taylor, G.N. Taylor, W. Taylor, H. Teagle, A. S. Tee, R. Teixeira De Lima, P. Teixeira-Dias, H. Ten Kate, J.J. Teoh, S. Terada, K. Terashi, J. Terron, S. Terzo, M. Testa, R.J. Teuscher, S.J. Thais, N. Themistokleous, T. Theveneaux-Pelzer, F. Thiele, D. W. Thomas, J. O. Thomas, J.P. Thomas, E.A. Thompson, P.D. Thompson, E. Thomson, E.J. Thorpe, R.E. Ticse Torres, V.O. Tikhomirov, Yu.A. Tikhonov, S. Timoshenko, P. Tipton, S. Tisserant, K. Todome, S. Todorova-Nova, S. Todt, J. Tojo, S. Tokár, K. Tokushuku, E. Tolley, R. Tombs, K.G. Tomiwa, M. Tomoto, L. Tompkins, P. Tornambe, E. Torrence, H. Torres, E. Torró Pastor, C. Tosciri, J. Toth, D.R. Tovey, A. Traeet, C.J. Treado, T. Trefzger, F. Tresoldi, A. Tricoli, I.M. Trigger, S. Trincaz-Duvoid, D.A. Trischuk, W. Trischuk, B. Trocmé, A. Trofymov, C. Troncon, F. Trovato, L. Truong, M. Trzebinski, A. Trzupek, F. Tsai, J.C.-L. Tseng, P. V. Tsiareshka, A. Tsirigotis, V. Tsiskaridze, E. G. Tskhadadze, M. Tsopoulou, I.I. Tsukerman, V. Tsulaia, S. Tsuno, D. Tsybychev, Y. Tu, A. Tudorache, V. Tudorache, T. T. Tulbure, A.N. Tuna, S. Turchikhin, D. Turgeman, I. Turk Cakir, R. J. Turner, R. Turra, P.M. Tuts, S. Tzamarias, E. Tzovara, K. Uchida, F. Ukegawa, G. Unal, M. Unal, A. Undrus, G. Unel, F.C. Ungaro, Y. Unno, K. Uno, J. Urban, P. Urquijo, G. Usai, Z. Uysal, V. Vacek, B. Vachon, K.O.H. Vadla, T. Vafeiadis, A. Vaidya, C. Valderanis, E. Valdes Santurio, M. Valente, S. Valentinetti, A. Valero, L. Valéry, R.A. Vallance, A. Vallier, J. A. Valls Ferrer, T.R. Van Daalen, P. Van Gemmeren, S. Van Stroud, I. Van Vulpen, M. Vanadia, W. Vandelli, M. Vandenbroucke, E.R. Vandewall, A. Vaniachine, D. Vannicola, R. Vari, E.W. Varnes, C. Varni, T. Varol, D. Varouchas, K.E. Varvell, M.E. Vasile, G.A. Vasquez, F. Vazeille, D. Vazquez Furelos, T. Vazquez Schroeder, J. Veatch, V. Vecchio, M.J. Veen, L.M. Veloce, F. Veloso, S. Veneziano, A. Ventura, A. Verbytskyi, V. Vercesi, M. Verducci, C. M. Vergel Infante, C. Vergis, W. Verkerke, A.T. Vermeulen, J.C. Vermeulen, C. Vernieri, M.C. Vetterli, N. Viaux Maira, T. Vickey, O.E. Vickey Boeriu, G.H.A. Viehhauser, L. Vigani, M. Villa, M. Villaplana Perez, E. M. Villhauer, E. Vilucchi, M.G. Vincter, G.S. Virdee, A. Vishwakarma, C. Vittori, I. Vivarelli, M. Vogel, P. Vokac, S.E. von Buddenbrock, E. Von Toerne, V. Vorobel, K. Vorobev, M. Vos, J.H. Vossebeld, M. Vozak, N. Vranjes, M. Vranjes Milosavljevic, V. 
Vrba, M. Vreeswijk, R. Vuillermet, I. Vukotic, S. Wada, P. Wagner, W. Wagner, J. Wagner-Kuhr, S. Wahdan, H. Wahlberg, R. Wakasa, V.M. Walbrecht, J. Walder, R. Walker, S. D. Walker, W. Walkowiak, V. Wallangen, A.M. Wang, A.Z. Wang, C. Wang, C. Wang, F. Wang, H. Wang, H. Wang, J. Wang, P. Wang, Q. Wang, R.-J. Wang, R. Wang, R. Wang, S.M. Wang, W.T. Wang, W. Wang, W.X. Wang, Y. Wang, Z. Wang, C. Wanotayaroj, A. Warburton, C.P. Ward, D.R. Wardrope, N. Warrack, A.T. Watson, M.F. Watson, G. Watts, B.M. Waugh, A.F. Webb, C. Weber, M.S. Weber, S.A. Weber, S.M. Weber, A.R. Weidberg, J. Weingarten, M. Weirich, C. Weiser, P.S. Wells, T. Wenaus, B. Wendland, T. Wengler, S. Wenig, N. Wermes, M. Wessels, T. D. Weston, K. Whalen, A. M. Wharton, A.S. White, A. White, M.J. White, D. Whiteson, B.W. Whitmore, W. Wiedenmann, C. Wiel, M. Wielers, N. Wieseotte, C. Wiglesworth, L.A.M. Wiik-Fuchs, H.G. Wilkens, L.J. Wilkins, H. H. Williams, S. Williams, S. Willocq, P.J. Windischhofer, I. Wingerter-Seez, E. Winkels, F. Winklmeier, B.T. Winter, M. Wittgen, M. Wobisch, A. Wolf, R. Wölker, J. Wollrath, M.W. Wolter, H. Wolters, V.W.S. Wong, N.L. Woods, S.D. Worm, B.K. Wosiek, K.W. Woźniak, K. Wraight, S.L. Wu, X. Wu, Y. Wu, J. Wuerzinger, T.R. Wyatt, B.M. Wynne, S. Xella, L. Xia, J. Xiang, X. Xiao, X. Xie, I. Xiotidis, D. Xu, H. Xu, H. Xu, L. Xu, T. Xu, W. Xu, Z. Xu, Z. Xu, B. Yabsley, S. Yacoob, K. Yajima, D.P. Yallup, N. Yamaguchi, Y. Yamaguchi, A. Yamamoto, M. Yamatani, T. Yamazaki, Y. Yamazaki, J. Yan, Z. Yan, H.J. Yang, H.T. Yang, S. Yang, T. Yang, X. Yang, Y. Yang, Z. Yang, W.-M. Yao, Y.C. Yap, Y. Yasu, E. Yatsenko, H. Ye, J. Ye, S. Ye, I. Yeletskikh, M.R. Yexley, E. Yigitbasi, P. Yin, K. Yorita, K. Yoshihara, C.J.S. Young, C. Young, J. Yu, R. Yuan, X. Yue, M. Zaazoua, B. Zabinski, G. Zacharis, E. Zaffaroni, J. Zahreddine, A.M. Zaitsev, T. Zakareishvili, N. Zakharchuk, S. Zambito, D. Zanzi, D.R. Zaripovas, S.V. Zeißner, C. Zeitnitz, G. Zemaityte, J.C. Zeng, O. Zenin, T. Ženiš, D. Zerwas, M. Zgubič, B. Zhang, D.F. Zhang, G. Zhang, J. Zhang, K. Zhang, L. Zhang, L. Zhang, M. Zhang, R. Zhang, S. Zhang, X. Zhang, X. Zhang, Y. Zhang, Z. Zhang, Z. Zhang, P. Zhao, Z. Zhao, A. Zhemchugov, Z. Zheng, D. Zhong, B. Zhou, C. Zhou, H. Zhou, M.S. Zhou, M. Zhou, N. Zhou, Y. Zhou, C.G. Zhu, C. Zhu, H.L. Zhu, H. Zhu, J. Zhu, Y. Zhu, X. Zhuang, K. Zhukov, V. Zhulanov, D. Zieminska, N.I. Zimine, S. Zimmermann, Z. Zinonos, M. Ziolkowski, L. Živković, G. Zobernig, A. Zoccoli, K. Zoch, T.G. Zorbas, R. Zou, L. Zwalinski Measurements of the Standard Model Higgs boson decaying into a bb¯\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$b\bar{b}$$\end{document} pair and produced in association with a W or Z boson decaying into leptons, using proton–proton collision data collected between 2015 and 2018 by the ATLAS detector, are presented. 
The measurements use collisions produced by the Large Hadron Collider at a centre-of-mass energy of $\sqrt{s} = 13\,\text{TeV}$, corresponding to an integrated luminosity of $139\,\mathrm{fb}^{-1}$. The production of a Higgs boson in association with a W or Z boson is established with observed (expected) significances of 4.0 (4.1) and 5.3 (5.1) standard deviations, respectively. Cross-sections of associated production of a Higgs boson decaying into bottom quark pairs with an electroweak gauge boson, W or Z, decaying into leptons are measured as a function of the gauge boson transverse momentum in kinematic fiducial volumes. The cross-section measurements are all consistent with the Standard Model expectations, and the total uncertainties vary from 30% in the high gauge boson transverse momentum regions to 85% in the low regions. Limits are subsequently set on the parameters of an effective Lagrangian sensitive to modifications of the WH and ZH processes as well as the Higgs boson decay into $b\bar{b}$.
European Physical Journal C, http://dx.doi.org/10.1140/epjc/s10052-020-08677-2
Adam", "alternateName": "Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow, Poland", "Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Krakow, Poland" "alternateName": "Department of Physics, Northern Illinois University, DeKalb, IL, USA", "Department of Physics, Northern Illinois University, DeKalb, IL, USA" "alternateName": "Fakult\u00e4t f\u00fcr Physik, Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, Munich, Germany", "Fakult\u00e4t f\u00fcr Physik, Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, Munich, Germany" "alternateName": "D\u00e9partement de Physique Nucl\u00e9aire et Corpusculaire, Universit\u00e9 de Gen\u00e8ve, Geneva, Switzerland", "D\u00e9partement de Physique Nucl\u00e9aire et Corpusculaire, Universit\u00e9 de Gen\u00e8ve, Geneva, Switzerland" "alternateName": "Particle Physics Department, Rutherford Appleton Laboratory, Didcot, UK", "Particle Physics Department, Rutherford Appleton Laboratory, Didcot, UK" "alternateName": "Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, USA", "Santa Cruz Institute for Particle Physics, University of California Santa Cruz, Santa Cruz, CA, USA" "givenName": "A.A.", "givenName": "M.N.", "alternateName": "Institute for Mathematics, Astrophysics and Particle Physics, Radboud University Nijmegen/Nikhef, Nijmegen, The Netherlands", "Institute for Mathematics, Astrophysics and Particle Physics, Radboud University Nijmegen/Nikhef, Nijmegen, The Netherlands" "Laborat\u00f3rio de Instrumenta\u00e7\u00e3o e F\u00edsica Experimental de Part\u00edculas-LIP, Lisbon, Portugal", "givenName": "J.A.", "givenName": "W.S.", "alternateName": "Physics Division, Lawrence Berkeley National Laboratory and University of California, Berkeley, CA, USA", "Physics Division, Lawrence Berkeley National Laboratory and University of California, Berkeley, CA, USA" "alternateName": "Dipartimento di Fisica, Universit\u00e0 di Roma Tor Vergata, Rome, Italy", "INFN Sezione di Roma Tor Vergata, Rome, Italy", "Dipartimento di Fisica, Universit\u00e0 di Roma Tor Vergata, Rome, Italy" "givenName": "T.P.A.", "givenName": "A.V.", "familyName": "Khoury", "givenName": "K. Al", "Dipartimento di Fisica, INFN Bologna and Universita\u2019 di Bologna, Bologna, Italy", "givenName": "G.L.", "familyName": "Verzini", "givenName": "M.J. 
Alconada", "givenName": "I.N.", "alternateName": "Nikhef National Institute for Subatomic Physics and University of Amsterdam, Amsterdam, The Netherlands", "Nikhef National Institute for Subatomic Physics and University of Amsterdam, Amsterdam, The Netherlands" "alternateName": "INFN Sezione di Milano, Milan, Italy", "INFN Sezione di Milano, Milan, Italy" "givenName": "B.M.M.", "alternateName": "Institute for Fundamental Science, University of Oregon, Eugene, OR, USA", "Institute for Fundamental Science, University of Oregon, Eugene, OR, USA" "givenName": "B.W.", "alternateName": "School of Physics and Astronomy, University of Birmingham, Birmingham, UK", "School of Physics and Astronomy, University of Birmingham, Birmingham, UK" "givenName": "P.P.", "alternateName": "Dipartimento di Fisica, Universit\u00e0 di Napoli, Naples, Italy", "INFN Sezione di Napoli, Naples, Italy", "Dipartimento di Fisica, Universit\u00e0 di Napoli, Naples, Italy" "alternateName": "Department of Physics, University of Washington, Seattle, WA, USA", "Department of Physics, University of Washington, Seattle, WA, USA" "familyName": "Camelia", "givenName": "E. Alunno", "familyName": "Estevez", "givenName": "M. Alvarez", "givenName": "M.G.", "familyName": "Coutinho", "givenName": "Y. Amaral", "alternateName": "Department of Physics, Oxford University, Oxford, UK", "Department of Physics, Oxford University, Oxford, UK" "alternateName": "Department of Physics, Brandeis University, Waltham, MA, USA", "Department of Physics, Brandeis University, Waltham, MA, USA" "alternateName": "Department of Physics, University of Michigan, Ann Arbor, MI, USA", "Department of Physics, University of Michigan, Ann Arbor, MI, USA" "alternateName": "Laborat\u00f3rio de Instrumenta\u00e7\u00e3o e F\u00edsica Experimental de Part\u00edculas-LIP, Lisbon, Portugal", "Laborat\u00f3rio de Instrumenta\u00e7\u00e3o e F\u00edsica Experimental de Part\u00edculas-LIP, Lisbon, Portugal" "familyName": "Santos", "givenName": "S.P. 
Amor Dos", "alternateName": "Department of Physics and Astronomy, Iowa State University, Ames, IA, USA", "Department of Physics and Astronomy, Iowa State University, Ames, IA, USA" "alternateName": "Department of Physics and Astronomy, University of Sheffield, Sheffield, UK", "Department of Physics and Astronomy, University of Sheffield, Sheffield, UK" "alternateName": "Department of Physics, University of Texas at Austin, Austin, TX, USA", "Department of Physics, University of Texas at Austin, Austin, TX, USA" "givenName": "J.K.", "givenName": "S.Y.", "alternateName": "Dipartimento di Fisica, Universit\u00e0 di Milano, Milan, Italy", "INFN Sezione di Milano, Milan, Italy", "Dipartimento di Fisica, Universit\u00e0 di Milano, Milan, Italy" "alternateName": "Nevis Laboratory, Columbia University, Irvington, NY, USA", "Nevis Laboratory, Columbia University, Irvington, NY, USA" "alternateName": "Novosibirsk State University, Novosibirsk, Russia", "Budker Institute of Nuclear Physics and NSU, SB RAS, Novosibirsk, Russia", "Novosibirsk State University, Novosibirsk, Russia" "givenName": "M.T.", "alternateName": "Department of Physics, Oklahoma State University, Stillwater, OK, USA", "Department of Physics, Oklahoma State University, Stillwater, OK, USA" "alternateName": "Department of Physics and Astronomy, University of California Irvine, Irvine, CA, USA", "Department of Physics and Astronomy, University of California Irvine, Irvine, CA, USA" "givenName": "D.J.A.", "alternateName": "INFN Sezione di Roma, Rome, Italy", "INFN Sezione di Roma, Rome, Italy" "alternateName": "Instituto de F\u00edsica Corpuscular (IFIC), Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain", "Instituto de F\u00edsica Corpuscular (IFIC), Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain" "familyName": "Pozo", "givenName": "J. A. Aparisi", "givenName": "M.A.", "givenName": "L. Aperio", "familyName": "Ferraz", "givenName": "V. Araujo", "familyName": "Pereira", "givenName": "R. Araujo", "alternateName": "Department of Physics, Duke University, Durham, NC, USA", "Department of Physics, Duke University, Durham, NC, USA" "givenName": "A.T.H.", "givenName": "J.-F.", "givenName": "A.J.", "familyName": "Tame", "givenName": "Z. P. 
Arrubarrena", "alternateName": "Laboratory for Particle Physics and Cosmology, Harvard University, Cambridge, MA, USA", "Laboratory for Particle Physics and Cosmology, Harvard University, Cambridge, MA, USA" "givenName": "E.M.", "alternateName": "Physics Department, Brookhaven National Laboratory, Upton, NY, USA", "Physics Department, Brookhaven National Laboratory, Upton, NY, USA" "alternateName": "Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia", "Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia" "givenName": "R.J.", "alternateName": "Department of Physics, University of Illinois, Urbana, IL, USA", "Department of Physics, University of Illinois, Urbana, IL, USA" "givenName": "N.B.", "givenName": "V.A.", "givenName": "M.K.", "alternateName": "Dipartimento di Fisica, Sapienza Universit\u00e0 di Roma, Rome, Italy", "INFN Sezione di Roma, Rome, Italy", "Dipartimento di Fisica, Sapienza Universit\u00e0 di Roma, Rome, Italy" "givenName": "V.R.", "givenName": "J.T.", "alternateName": "Department of Physics, Yale University, New Haven, CT, USA", "Department of Physics, Yale University, New Haven, CT, USA" "givenName": "O.K.", "givenName": "P.J.", "alternateName": "Department of Physics, University of Texas at Arlington, Arlington, TX, USA", "Department of Physics, University of Texas at Arlington, Arlington, TX, USA" "familyName": "Gupta", "givenName": "D. Bakshi", "givenName": "W.K.", "alternateName": "Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, USA", "Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA, USA" "alternateName": "Department of Physics and Astronomy, University of Louisville, Louisville, KY, USA", "Department of Physics, University of Wisconsin, Madison, WI, USA", "Department of Physics and Astronomy, University of Louisville, Louisville, KY, USA" "givenName": "W.M.", "alternateName": "School of Physics, University of Melbourne, Melbourne, VIC, Australia", "School of Physics, University of Melbourne, Melbourne, VIC, Australia" "givenName": "E.L.", "alternateName": "INFN Sezione di Genova, Genoa, Italy", "Dipartimento di Fisica, Universit\u00e0 di Genova, Genoa, Italy", "INFN Sezione di Genova, Genoa, Italy" "alternateName": "Department of Physics and Astronomy, University College London, London, UK", "Department of Physics and Astronomy, University College London, London, UK" "alternateName": "Max-Planck-Institut f\u00fcr Physik (Werner-Heisenberg-Institut), Munich, Germany", "Max-Planck-Institut f\u00fcr Physik (Werner-Heisenberg-Institut), Munich, Germany" "givenName": "M.-S.", "alternateName": "SLAC National Accelerator Laboratory, Stanford, CA, USA", "SLAC National Accelerator Laboratory, Stanford, CA, USA" "givenName": "B.M.", "givenName": "R.M.", "familyName": "Navarro", "givenName": "L. Barranco", "familyName": "da Costa", "givenName": "J. 
Barreiro Guimar\u00e3es", "alternateName": "Physics Department, Lancaster University, Lancaster, UK", "Physics Department, Lancaster University, Lancaster, UK" "givenName": "A.E.", "givenName": "M.J.", "alternateName": "SUPA - School of Physics and Astronomy, University of Glasgow, Glasgow, UK", "SUPA - School of Physics and Astronomy, University of Glasgow, Glasgow, UK" "givenName": "R.L.", "alternateName": "Cavendish Laboratory, University of Cambridge, Cambridge, UK", "Cavendish Laboratory, University of Cambridge, Cambridge, UK" "givenName": "J.R.", "alternateName": "California State University, Long Beach, CA, USA", "California State University, Long Beach, CA, USA" "givenName": "J.B.", "alternateName": "Department of Physics and Astronomy, Tufts University, Medford, MA, USA", "Department of Physics and Astronomy, Tufts University, Medford, MA, USA" "givenName": "P.H.", "givenName": "H.P.", "alternateName": "Department of Physics, University of Warwick, Coventry, UK", "Department of Physics, University of Warwick, Coventry, UK" "alternateName": "Faculty of Engineering and Natural Sciences, Bahcesehir University, Istanbul, Turkey", "Faculty of Engineering and Natural Sciences, Bahcesehir University, Istanbul, Turkey" "alternateName": "Departments of Physics and Astronomy, Stony Brook University, Stony Brook, NY, USA", "Departments of Physics and Astronomy, Stony Brook University, Stony Brook, NY, USA" "givenName": "C.P.", "givenName": "T.A.", "givenName": "A.S.", "givenName": "N.L.", "alternateName": "Facult\u00e9 des Sciences Ain Chock, R\u00e9seau Universitaire de Physique des Hautes Energies, Universit\u00e9 Hassan II, Casablanca, Morocco", "Facult\u00e9 des Sciences Ain Chock, R\u00e9seau Universitaire de Physique des Hautes Energies, Universit\u00e9 Hassan II, Casablanca, Morocco" "givenName": "D.P.", "familyName": "Kuutmann", "givenName": "E. Bergeaas", "givenName": "L.J.", "alternateName": "Department of Physics, University of Arizona, Tucson, AZ, USA", "Department of Physics, University of Arizona, Tucson, AZ, USA" "givenName": "F.U.", "alternateName": "Institut f\u00fcr Kern-\u00a0und Teilchenphysik, Technische Universit\u00e4t Dresden, Dresden, Germany", "Institut f\u00fcr Kern-\u00a0und Teilchenphysik, Technische Universit\u00e4t Dresden, Dresden, Germany" "givenName": "I.A.", "familyName": "Bylund", "givenName": "O. Bessidskaia", "alternateName": "School of Physics and Astronomy, University of Manchester, Manchester, UK", "School of Physics and Astronomy, University of Manchester, Manchester, UK" "alternateName": "Physics Department, Southern Methodist University, Dallas, TX, USA", "Physics Department, Southern Methodist University, Dallas, TX, USA" "alternateName": "School of Physics and Astronomy, Queen Mary University of London, London, UK", "School of Physics and Astronomy, Queen Mary University of London, London, UK" "givenName": "D.S.", "givenName": "V.S.", "givenName": "N.V.", "alternateName": "INFN Sezione di Roma Tre, Rome, Italy", "INFN Sezione di Roma Tre, Rome, Italy" "givenName": "T.R.V.", "givenName": "C.J.", "givenName": "J.P.", "givenName": "G.J.", "givenName": "A.G.", "givenName": "J.S.", "alternateName": "Oliver Lodge Laboratory, University of Liverpool, Liverpool, UK", "Oliver Lodge Laboratory, University of Liverpool, Liverpool, UK" "givenName": "H.M.", "familyName": "Sola", "givenName": "J.D. 
Bossio", "givenName": "E.V.", "givenName": "S.K.", "alternateName": "Ohio State University, Columbus, OH, USA", "Ohio State University, Columbus, OH, USA" "givenName": "I.R.", "givenName": "J.E.", "familyName": "Madden", "givenName": "W. D. Breaden", "givenName": "D.L.", "alternateName": "Department of Physics and Astronomy, Michigan State University, East Lansing, MI, USA", "Department of Physics and Astronomy, Michigan State University, East Lansing, MI, USA" "familyName": "de Renstrom", "givenName": "P.A. Bruckman", "givenName": "L.S.", "givenName": "B.A.", "givenName": "T.J.", "givenName": "C.D.", "givenName": "A.M.", "givenName": "J.T.P.", "alternateName": "Department of Physics, Boston University, Boston, MA, USA", "Department of Physics, Boston University, Boston, MA, USA" "givenName": "J.M.", "givenName": "C.M.", "familyName": "Vazquez", "givenName": "C. J. Buxo", "givenName": "A.R.", "familyName": "Urb\u00e1n", "givenName": "S. Cabrera", "givenName": "V.M.M.", "alternateName": "Department of Physics, Indiana University, Bloomington, IN, USA", "Department of Physics, Indiana University, Bloomington, IN, USA" "familyName": "Lopez", "givenName": "S. Calvente", "givenName": "T.P.", "familyName": "Toro", "givenName": "R. Camacho", "familyName": "Munoz", "givenName": "D. Camarero", "alternateName": "Dipartimento di Matematica e Fisica, Universit\u00e0 Roma Tre, Rome, Italy", "INFN Sezione di Roma Tre, Rome, Italy", "Dipartimento di Matematica e Fisica, Universit\u00e0 Roma Tre, Rome, Italy" "alternateName": "University of Iowa, Iowa City, IA, USA", "University of Iowa, Iowa City, IA, USA" "familyName": "Bret", "givenName": "M. Cano", "familyName": "Garrido", "givenName": "M.D.M. Capeans", "alternateName": "INFN Gruppo Collegato di Cosenza, Laboratori Nazionali di Frascati, Italy", "Dipartimento di Fisica, Universit\u00e0 della Calabria, Rende, Italy", "INFN Gruppo Collegato di Cosenza, Laboratori Nazionali di Frascati, Italy" "alternateName": "INFN Sezione di Roma Tor Vergata, Rome, Italy", "INFN Sezione di Roma Tor Vergata, Rome, Italy" "alternateName": "Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic", "Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic" "alternateName": "INFN Sezione di Napoli, Naples, Italy", "INFN Sezione di Napoli, Naples, Italy" "givenName": "B.T.", "TRIUMF, Vancouver, BC, Canada", "givenName": "R.M.D.", "givenName": "J.W.S.", "alternateName": "SUPA-School of Physics and Astronomy, University of Edinburgh, Edinburgh, UK", "SUPA-School of Physics and Astronomy, University of Edinburgh, Edinburgh, UK" "givenName": "T.M.", "givenName": "M.P.", "givenName": "F.L.", "familyName": "Garcia", "givenName": "L. Castillo", "familyName": "Gimenez", "givenName": "V. 
Castillo", "givenName": "N.F.", "alternateName": "Faculty of Engineering and Natural Sciences, Istanbul Bilgi University, Istanbul, Turkey", "Faculty of Engineering and Natural Sciences, Istanbul Bilgi University, Istanbul, Turkey" "givenName": "S.A.", "alternateName": "Department of Physics, University of Wisconsin, Madison, WI, USA", "Department of Physics, University of Wisconsin, Madison, WI, USA" "givenName": "W.Y.", "givenName": "J.D.", "givenName": "D.G.", "givenName": "C.C.", "givenName": "S.V.", "givenName": "G.A.", "givenName": "C.H.", "alternateName": "Department of Physics, University of Pennsylvania, Philadelphia, PA, USA", "Department of Physics, University of Pennsylvania, Philadelphia, PA, USA" "givenName": "S.J.", "givenName": "Y.-H.", "alternateName": "Department of Physics, Chinese University of Hong Kong, Shatin, N.T., Hong Kong", "Department of Physics, Chinese University of Hong Kong, Shatin, N.T., Hong Kong" "givenName": "H.C.", "givenName": "H.J.", "familyName": "Moursli", "givenName": "R. Cherkaoui El", "givenName": "T.J.A.", "givenName": "Y.H.", "givenName": "M.V.", "givenName": "L.D.", "givenName": "M.C.", "givenName": "J.J.", "givenName": "K.M.", "givenName": "Z.H.", "givenName": "M.R.", "givenName": "S.E.", "familyName": "De Sa", "givenName": "R. Coelho Lopes", "givenName": "A.E.C.", "alternateName": "Instituto Superior T\u00e9cnico, Universidade de Lisboa, Lisbon, Portugal", "Instituto Superior T\u00e9cnico, Universidade de Lisboa, Lisbon, Portugal" "familyName": "Mui\u00f1o", "givenName": "P. Conde", "alternateName": "Universita di Napoli Parthenope, Naples, Italy", "Universita di Napoli Parthenope, Naples, Italy" "givenName": "E.E.", "alternateName": "Institute of Particle Physics (IPP), Montreal, Canada", "Institute of Particle Physics (IPP), Montreal, Canada" "givenName": "J.W.", "alternateName": "Department of Physics, New York University, New York, NY, USA", "Department of Physics, New York University, New York, NY, USA" "givenName": "R.A.", "familyName": "Donszelmann", "givenName": "T. Cuhadar", "givenName": "W.R.", "givenName": "M.M.", "givenName": "J.V.", "givenName": "M.F.", "givenName": "D.R.", "familyName": "Peso", "givenName": "J. Del", "givenName": "Y. Delabat", "familyName": "Pietra", "givenName": "M. Della", "familyName": "Volpe", "givenName": "D. Della", "givenName": "P.A.", "givenName": "D.A.", "givenName": "S.P.", "givenName": "P.O.", "familyName": "Bello", "givenName": "F.A. Di", "familyName": "Ciaccio", "givenName": "A. Di", "givenName": "L. Di", "familyName": "Clemente", "givenName": "W.K. Di", "familyName": "Donato", "givenName": "C. Di", "familyName": "Girolamo", "familyName": "Gregorio", "givenName": "G. Di", "familyName": "Micco", "givenName": "B. Di", "familyName": "Nardo", "givenName": "R. Di", "familyName": "Petrillo", "givenName": "K.F. Di", "familyName": "Sipio", "givenName": "F.A.", "familyName": "Vale", "givenName": "T. Dias Do", "familyName": "Capriles", "givenName": "F.G. Diaz", "givenName": "E.B.", "familyName": "Cornell", "givenName": "S. D\u00edez", "familyName": "Pardos", "givenName": "C. Diez", "givenName": "J.I.", "givenName": "M.A.B. Do", "givenName": "A.T.", "familyName": "Pree", "givenName": "T.A. du", "givenName": "O.A.", "givenName": "A.C.", "familyName": "Yildiz", "givenName": "H. Duran", "givenName": "G.I.", "familyName": "Jarrari", "givenName": "H. El", "givenName": "M.B.", "familyName": "Pastor", "givenName": "O. Estrada", "givenName": "M.O.", "givenName": "S.M.", "familyName": "Giannelli", "givenName": "M. 
Faucci", "givenName": "W.J.", "givenName": "O.L.", "alternateName": "Physics Department, University of Texas at Dallas, Richardson, TX, USA", "Physics Department, University of Texas at Dallas, Richardson, TX, USA" "givenName": "S.W.", "familyName": "de Lima", "givenName": "D.E. Ferreira", "givenName": "K.D.", "alternateName": "Borough of Manhattan Community College, City University of New York, New York, NY, USA", "Borough of Manhattan Community College, City University of New York, New York, NY, USA" "givenName": "M.C.N.", "givenName": "W.C.", "givenName": "L.R. Flores", "givenName": "F.M.", "givenName": "J.H.", "givenName": "G.T.", "givenName": "A.N.", "givenName": "D.C.", "familyName": "Torregrosa", "givenName": "E. Fullana", "givenName": "L.G.", "givenName": "G.E.", "givenName": "E.J.", "givenName": "B.J.", "familyName": "Goni", "givenName": "R. Gamboa", "givenName": "K.K.", "alternateName": "Department of Physics, California State University, Fresno, USA", "California State University, Long Beach, CA, USA", "Department of Physics, California State University, Fresno, USA" "givenName": "Y.S.", "familyName": "Walls", "givenName": "F.M. Garay",
CommonCrawl
Quantum Computing Stack Exchange is a question and answer site for engineers, scientists, programmers, and computing professionals interested in quantum computing.

Deutsch's algorithm makes no sense

Here are the 4 classical functions over $1$ bit we're examining, $f(x) \in \{0,1\}$, $x \in \{0,1\}$:

identity (balanced) -> $f(x) = x$: \begin{bmatrix}1&0\\0&1\end{bmatrix}

negation (balanced) -> $f(x) = \neg x$: \begin{bmatrix}0&1\\1&0\end{bmatrix}

constant one (constant) -> $f(x) = 1$: \begin{bmatrix}0&0\\1&1\end{bmatrix}

constant zero (constant) -> $f(x) = 0$: \begin{bmatrix}1&1\\0&0\end{bmatrix}

The goal is to determine if a black-box is implementing a balanced or a constant function. Classically, this would require two queries. The proposition is that a quantum algorithm can solve the problem more efficiently by evaluating the black-box function over an equal superposition. Fine, let's examine the quantum versions of those $4$ functions. The balanced functions are reversible and one can use them in quantum algorithms; the constant functions, however, are not. To make them reversible, Deutsch adds an ancilla qubit like this: $U_f|x\rangle|y\rangle=|x\rangle|y\oplus f(x)\rangle$. Once I start converting the classical matrices to their quantum versions, I have no problem with the balanced ones. For example, negation becomes: \begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix} By presetting the ancilla qubit to $0$, I get the classical behavior: $|0\rangle|0\rangle\mapsto|0\rangle|1\rangle$, $|1\rangle|0\rangle\mapsto|1\rangle|0\rangle$. $0$ turns to $1$, $1$ turns to $0$, as I would expect. The identity matrix also satisfies me, since \begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix} maps $|0\rangle|0\rangle\mapsto|0\rangle|0\rangle$ and $|1\rangle|0\rangle\mapsto|1\rangle|1\rangle$. Now, let's look at the quantum matrix for constant one: \begin{bmatrix}0&1&0&0\\1&0&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix} I'm bothered that $01$ gets converted to $00$. I would expect the result to be $01$. It seems that this gate is not the quantum equivalent of the constant $1$, since it cannot be used over classical states to achieve the same result as the classical gate. It's a completely different gate. If that's really the case, how can I justify that the quantum and classical solutions of Deutsch's problem are even solving the same problem?

quantum-gate quantum-state superposition deutsch-jozsa-algorithm – Ognyan Tsvetkov

In your framing, the second qubit is the ancilla that's initialized to $\vert 0\rangle$. That is, the first qubit is the input qubit, which remains the same after application of the oracle, and the second qubit is the answer qubit, which stores the answer, but must be preset to $|0\rangle$. Asking about the behavior when the second qubit is initialized to $|1\rangle$ is outside the scope of the setup of Deutsch's algorithm. As long as the second qubit is initialized to $|0\rangle$, then after application of the oracle the second qubit will constantly be $|1\rangle$ (if the oracle is the constant-$1$ function). Thus when you ask why $01$ doesn't get mapped to $01$, it's because in that question the second qubit is initialized to $|1\rangle$ (which doesn't satisfy the setup). – Mark S

Your answer helped me get it. What was missing is that the input qubit remains the same and the ancilla qubit is the one that holds the result. I can now see that when applying the constant 1 with the ancilla preset to 0, I get: 00 -> 01 & 10 -> 11; the input (left) qubit remains the same to guarantee reversibility, the second (ancilla) qubit shows the result. Is that right?
– Ognyan Tsvetkov

Yeah, you got it. – Mark S
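The oracle construction above is easy to check numerically. Here is a minimal Python sketch (assuming the basis ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ used for the matrices in the question); it builds $U_f$ for each of the four one-bit functions and applies it to $|x\rangle|0\rangle$:

import numpy as np

def oracle(f):
    # Build the 4x4 permutation matrix U_f|x>|y> = |x>|y XOR f(x)>,
    # with basis states ordered |00>, |01>, |10>, |11>.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            col = 2 * x + y
            row = 2 * x + (y ^ f(x))
            U[row, col] = 1.0
    return U

functions = {
    "identity (balanced)": lambda x: x,
    "negation (balanced)": lambda x: 1 - x,
    "constant one":        lambda x: 1,
    "constant zero":       lambda x: 0,
}

for name, f in functions.items():
    U = oracle(f)
    assert np.allclose(U @ U.T, np.eye(4))  # every U_f is unitary (a permutation matrix)
    for x in (0, 1):
        out = U @ np.eye(4)[:, 2 * x]       # apply the oracle to |x>|0>
        row = int(np.argmax(out))
        print(f"{name}: |{x}>|0> -> |{row >> 1}>|{row & 1}>")

For the constant-one oracle this prints |0>|0> -> |0>|1> and |1>|0> -> |1>|1>: the input qubit is untouched and the ancilla, preset to |0>, ends up holding f(x), which is exactly the behaviour described in the answer and the follow-up comment.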
CommonCrawl
Custom Bandpass Filter using Shortpass and Longpass Filters

Bandpass filters are optical filters that allow transmission of a specific range of wavelengths, or band, while blocking other wavelengths. Many off-the-shelf bandpass filter options are available, but when an application has specific requirements for bandwidth or center wavelength that are not readily available, Longpass and Shortpass filters can be stacked to create a customized bandpass filter. Longpass filters are optical filters that reflect short wavelengths while transmitting, or passing, long wavelengths. Conversely, Shortpass filters transmit short wavelengths but reflect long ones. Examples of both are represented in Figure 1.

Figure 1: Transmission Comparison of Shortpass and Longpass Filters

The transmission graphs are very similar to Heaviside functions (denoted as H(λ)) in mathematics. As shown in Equation 1, Heaviside functions are specialized piecewise functions that have a value of one for a certain domain and zero for the rest. The domain is determined by the constant x.

(1) $$ H \! \left( \lambda - x \right) = \begin{cases} 0, & \lambda < x \\ 1, & \lambda \geq x \end{cases} $$

H(λ) can be taken as the transmission of the filter and λ as the wavelength. Figure 2 shows a Heaviside model of the Longpass (LWP) graph from Figure 1.

Figure 2: Heaviside model of the Longpass curve from Figure 1

A custom optical bandpass filter can be made by using at least two filters. This is conceptually very similar to multiplying Heaviside functions in mathematics. In order to obtain a rectangular function from 450 to 500, the two Heaviside functions must be multiplied together as shown in Equation 2 and Figure 3.

(2) $$ \text{Let: } H_1 = H \! \left( \lambda - 450 \right) = \begin{cases} 0, & \lambda < 450 \\ 1, & \lambda \geq 450 \end{cases} $$ $$ H_2 = H \! \left( -\lambda + 500 \right) = \begin{cases} 0, & \lambda > 500 \\ 1, & \lambda \leq 500 \end{cases} $$ $$ H_1 \times H_2 = \text{BP}_{1, 2} \! \left( \lambda \right) = \begin{cases} 0, & \lambda < 450 \\ 1, & 450 \leq \lambda \leq 500 \\ 0, & \lambda > 500 \end{cases} $$

Figure 3: This graph is of the rectangular function BP1,2

What is achieved through multiplication in mathematics is achieved physically by layering two or more optical filters. When layering multiple optical filters, light propagates through one filter and into the next. Because each filter transmits certain wavelengths, layering allows for the transmission of a customized band of wavelengths.

Figure 4: A custom bandpass filter where f1 and f2 represent Longpass and Shortpass filters

This results in a transmission curve represented in Figure 5.

Figure 5: The transmission curves of a custom bandpass filter produced by stacking a 450nm Longpass and 500nm Shortpass Filter

Stacking Longpass and Shortpass filters can quickly create a custom solution for selecting a specific bandpass and is an ideal alternative in scenarios where an off-the-shelf bandpass filter doesn't meet the application's requirements or a custom filter design is too expensive or has too long of a lead time.
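The effect of stacking can also be checked numerically. A minimal Python sketch of Equations 1 and 2, assuming idealized filters whose transmission is exactly 0 or 1 (real coatings have finite edge slopes and less than 100% transmission):

import numpy as np

wavelengths = np.arange(400, 551)                       # nm
longpass_450 = np.where(wavelengths >= 450, 1.0, 0.0)   # H(lambda - 450)
shortpass_500 = np.where(wavelengths <= 500, 1.0, 0.0)  # H(-lambda + 500)
bandpass = longpass_450 * shortpass_500                 # stacked filters multiply their transmissions

passband = wavelengths[bandpass > 0]
print(passband.min(), passband.max())                   # -> 450 500

The surviving band runs from 450nm to 500nm, matching the stacked-filter curve of Figure 5.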
CommonCrawl
MichalisN abstract-algebra Approve tag wiki edits $-1$ is a quadratic residue modulo $p$ if and only if $p\equiv 1\pmod{4}$ Are half of all numbers odd? Must an ideal contain the kernel for its image to be an ideal? Is the algebraic closure of a $p$-adic field complete Existence of a map $f: \mathbb{Z}\rightarrow \mathbb{Q}$ Prove that if two miles are run in 7:59 then one mile MUST be run under 4:00. Feb 25 '12 at 21:01 Functional equation $f(y/x)=xf(y)-yf(x)$ Applications of the wreath product? Showing that two rings are not isomorphic characterization of the operation on a finite or infinite group. Calculate the limit: $\lim_{n \to \infty}\int^{1}_{0}{\frac{e^x}{1+x^n}\mbox{dx } \mbox{?}}$ What does $K^{1/p}$ for a field $K$ mean? Computing $\lim\limits_{x \to \infty } \frac{x}{\ln\big(\! \large\frac{2^{x}}{x}\! \big)}$ How often is an irreducible polynomial irreducible? irreducible polynomials over the $p$ adic number Dec 20 '11 at 0:23 Does the functional equation $f(1/r) = rf(r)$ have any nontrivial solutions besides $f(r) = 1/\sqrt{r}$? Mar 24 '11 at 0:26 radius of convergence of the power series $\sum_{n=0}^{\infty} z^{n!}$ Computing $A^{50}$ for a given matrix $A$ How can we show that an abelian group of order <1024 has a set of generators of cardinality <10 How to express $(1+x+x^2+\cdots+x^m)^n$ as a power series? Set of numbers which can not be represented as $a_1^n+a_2^n+\cdots+a_n^n$ Oct 19 '14 at 18:30 How does one prove that the spectral norm is less than or equal to the Frobenius norm? Jan 22 '12 at 2:02 Is the infinite intersection of prime ideals also a prime ideal? Why do we only consider quadratic domains as Euclidean domains with squarefree integers? Why is not the ring $\mathbb{Z}[2\sqrt{2}]$ a unique factorization domain? Mar 5 '15 at 17:04 Complex Mean Value Theorem: Counterexamples Prove $C^1[a,b]$ is a Banach space. The trace map in a finite field. Nov 28 '13 at 22:59 How prove this linear algebra $|\alpha-\beta|\le|\alpha-\gamma|$ Is the number of irreducibles in any number field infinite?
CommonCrawl
Research | Open | Published: 10 July 2019

Potential role of N-acetyl glucosamine in Aspergillus fumigatus-assisted Chlorella pyrenoidosa harvesting

Arghya Bhattacharya1, Megha Mathur1, Pushpendar Kumar1 & Anushree Malik ORCID: orcid.org/0000-0002-2761-05681

Algal harvesting is a major cost driver in biofuel production. Algal biofuels are widely studied as third-generation biofuels; however, they are not yet viable because of their high production cost, to which energy-intensive biomass harvesting techniques are a major contributor. Biological harvesting methods such as fungal-assisted harvesting of microalgae are highly efficient, but pose a challenge due to their slow kinetics and poorly understood mechanism. In this study, we investigate Aspergillus fumigatus–Chlorella pyrenoidosa attachment resulting in a harvesting efficiency of 90% within 4 h. To pinpoint the role of extracellular metabolites, several experiments were performed by eliminating the C. pyrenoidosa or A. fumigatus spent medium from the C. pyrenoidosa–A. fumigatus mixture. In the absence of A. fumigatus spent medium, the harvesting efficiency dropped to 20% compared to > 90% in the control, and was regained after addition of A. fumigatus spent medium. Different treatments of A. fumigatus spent medium showed a drop in harvesting efficiency after periodate treatment (≤ 20%) and methanol–chloroform extraction (≤ 20%), indicating the role of a sugar-like moiety. HR-LC–MS (high-resolution liquid chromatography–mass spectrometry) results confirmed the presence of N-acetyl-d-glucosamine (GlcNAc) and glucose in the spent medium. When GlcNAc was used as a replacement for A. fumigatus spent medium in harvesting studies, the harvesting process was significantly faster (p < 0.05) up to 4 h compared to that with glucose. Further experiments indicated that metabolically active A. fumigatus produced GlcNAc from glucose. Concanavalin A staining and FTIR (Fourier transform infrared spectroscopy) analysis of A. fumigatus spent medium- as well as GlcNAc-incubated C. pyrenoidosa cells suggested the presence of GlcNAc on the cell surface, indicated by dark red dots and GlcNAc-specific peaks, while no such characteristic dots or peaks were observed in normal C. pyrenoidosa cells. HR-TEM (high-resolution transmission electron microscopy) showed the formation of serrated edges on the C. pyrenoidosa cell surface after treatment with A. fumigatus spent medium or GlcNAc, while atomic force microscopy (AFM) showed an increase in the roughness of the C. pyrenoidosa cell surface upon incubation with A. fumigatus spent medium. The results strongly suggest that GlcNAc present in A. fumigatus spent medium induces surface changes in C. pyrenoidosa cells that mediate the attachment to A. fumigatus hyphae. Thus, this study provides a better understanding of the A. fumigatus-assisted C. pyrenoidosa harvesting process.

Microalgae showcase diverse applications for wastewater treatment and subsequent bioenergy generation. A large quantity of microalgal biomass is the foremost requirement in the biofuel route, for which several high-end/advanced photobioreactors have already been designed by several researchers [1,2,3]. However, the commercialization of algal biofuels still lags behind due to the high cost of investment in energy-intensive biomass harvesting techniques like centrifugation, membrane filtration and chemical-based flocculation from photobioreactors [4,5,6].
Chemical processes are highly efficient, but the requirement of a lot of flocculant dosages renders the harvested biomass contaminated with undesirable chemicals [7, 8]. Biologically induced harvesting of algal cells is now being explored as a replacement for the conventional algal dewatering processes which contribute towards 3–15% of the algal biomass production cost [9, 10]. The harvested biomass can be processed for biofuel generation without any loss of quality [11, 12]. There have been quite a few studies on the algal–algal and algal–bacterial interactions for bio-harvesting of algal cells from bulk media [13,14,15,16,17,18,19,20]. A very recent approach is the use of filamentous fungi for harvesting algal cells [21,22,23]. In spite of having the potential for becoming a cost-effective process for algal harvesting, the lack of knowledge regarding the causative factors for the algal attachment to fungal pellets limits its application. When fungus is grown under submerged conditions with agitation, it forms a dense, compact hyphal structure termed as pellets [24]. When these pellet-forming filamentous fungi (PFF) were co-cultivated with algal cells, there is an orderly attachment of algal cells on the fungal pellets [22, 23, 25]. This phenomenon of algal–fungal attachment is of interest as it solves the problem of algal harvesting to a large extent. However, the studies with co-cultivation of fungi and algae have reported the interaction time to be between 24 and 72 h [22, 25,26,27,28,29,30,31]. In a more straight-forward approach, it has been observed that the attachment of algal cells (green algae or cyanobacteria) to fungal pellets could take place within 4–6 h using pre-cultivated fungal biomass [12, 32]. Such interaction of algae and PFF [12] is intriguing as it does not follow the simple kinetics of bio-harvesting [13, 33, 34]. Our recent study confirms that in addition to a specific set of physical conditions, the metabolically active fungus is mandatory for efficient attachment of the Chlorella pyrenoidosa cells to the Aspergillus fumigatus pellet [12]. This fact raises further queries on the biological context of C. pyrenoidosa–A. fumigatus attachment, which although seems far simpler than the complex algal–fungal associations in nature. A complex symbiotic association between fungus and cyanobacteria exists in the form of lichens in natural systems [35]. In artificial systems, such as co-culturing of fungus (Aspergillus nidulans) with microalgae (Chlamydomonas), a mutualistic association for nutrient exchange has been reported as reflected by thinning of microalgal cell wall by enzymatic action of fungus. However, this mutualism may sometimes lead to antagonism, when the algal death occurs due to over secretions of these enzymes [36]. Relatively simple phytoplankton–parasitic fungal (Chytrids) interactions controlled by cell-to-cell contact also exist in nature, which are driven by chemotaxis [37]. In our recent studies, somewhat similar action of fungus was observed in algal–fungal pellets as fungus used algal cells as nutrient source by enzymatically degrading it [32]. However, what drives the interaction between C. pyrenoidosa and A. fumigatus at the molecular level is not clear. There are also no specific molecules reported for PFFs so far, although enough evidence for extracellular polymeric substances (EPS)-mediated aggregation exists [38,39,40,41]. The present study aims to find out the causative mechanism behind this interesting phenomenon. 
It also seeks to explain why a specific set of biological conditions is necessary for this process to occur. Experiments were designed to address several questions: is there a chemical signaling/mediating molecule that mediates the attachment of C. pyrenoidosa and A. fumigatus; what is the origin of this mediating molecule (C. pyrenoidosa or A. fumigatus); what is its type/nature; and how does it mediate the attachment. The study also raises further questions regarding the nature and type of receptors that may be responsible for the attachment process, which need to be investigated further. Role of extracellular metabolites in C. pyrenoidosa–A. fumigatus harvesting When A. fumigatus pellets and C. pyrenoidosa cells, suspended in their respective spent media (Fig. 1a), were mixed at a 1:5 ratio, a C. pyrenoidosa harvesting efficiency of 90% was observed after 4 h (Fig. 1b). This A. fumigatus–C. pyrenoidosa ratio and its harvesting efficiency were optimized in our earlier study [12] and are referred to as the control for all the experiments performed in the present study. To pinpoint the specific mediating factor, i.e., whether the cell structure of C. pyrenoidosa, of A. fumigatus, or any of their extracellular metabolic by-products mediates the attachment, several experiments were performed by eliminating the C. pyrenoidosa or A. fumigatus spent medium from the C. pyrenoidosa–A. fumigatus mixture. When unwashed C. pyrenoidosa cells were mixed with washed A. fumigatus pellets (Set II), the harvesting efficiency dropped to 20% compared to > 90% in the control (Set I). Interestingly, when washed A. fumigatus pellets were re-suspended in A. fumigatus spent medium (Set III), harvesting efficiency was similar to the control (Fig. 1b). However, when washed A. fumigatus pellets were resuspended in fresh PDB (Set IV), the harvesting efficiency was similar to that of washed A. fumigatus pellets, i.e., ≤ 20%. On the other hand, when washed C. pyrenoidosa cells were subjected to a harvesting experiment with unwashed A. fumigatus pellets (Set V), > 90% harvesting efficiency was observed. C. pyrenoidosa culture without any A. fumigatus biomass showed ≤ 5% harvesting. Overall, a significant reduction in harvesting efficiency was seen (p < 0.05) when the A. fumigatus spent medium was washed off. Therefore, the impact of A. fumigatus spent medium on C. pyrenoidosa cells was explored further. a Schematic representations of experimental conditions in the different experimental sets. b Harvesting efficiency of BG11-grown Chlorella pyrenoidosa and A. fumigatus pellets under the different experimental conditions (defined in 1a) after 4 h (n = 3). A significant drop in harvesting efficiency is seen in Set II and Set IV (p < 0.05) Nature and characterization of the mediating molecule To identify the nature of this extracellular molecule, different physical (heat) and chemical treatments (periodate, solvent extraction) of A. fumigatus spent medium were performed before harvesting C. pyrenoidosa cells with untreated washed A. fumigatus pellets. The autoclaved A. fumigatus spent medium showed > 90% harvesting in 4 h. The aqueous phase of A. fumigatus spent medium after methanol:chloroform (1:1) extraction showed ≤ 20% harvesting efficiency. On the other hand, the aqueous phase after hexane extraction exhibited > 90% harvesting efficiency. Periodate treatment of A. fumigatus spent medium led to ≤ 20% harvesting efficiency. Harvesting efficiency of C. pyrenoidosa using A.
fumigatus pellets with untreated spent was ≥ 90% (positive control). Harvesting efficiency of C. pyrenoidosa with washed A. fumigatus pellets (negative control) was ≤ 20% (Fig. 2). Harvesting efficiency of C. pyrenoidosa cells by A. fumigatus pellets after different treatments of A. fumigatus spent medium. Significant drop in flocculation efficiency is seen after periodate and methanol–chloroform treatment (p < 0.05) which is similar to the negative control (washed A. fumigatus pellets) without any A. fumigatus spent medium The composition of A. fumigatus spent medium was studied using HR-LC–MS. The flow-through obtained using Strata-X CW column showed no loss of harvesting efficiency. Hence, the HR-LC–MS studies of this flow-through liquid were done. The results indicated the presence of two sugar-like molecules in the spent medium, i.e., N-acetyl-d-glucosamine (GlcNAc) and glucose (Table 1). HR-LC–MS of the supernatant of washed A. fumigatus pellets (without spent medium) did not show the presence of GlcNAc or glucose. Table 1 Category of compounds obtained after HR-LC–MS of A. fumigatus spent media Confirmation of the mediating molecule in C. pyrenoidosa–A. fumigatus interaction Various treatments of A. fumigatus spent medium indicated sugar-like molecule to be responsible for harvesting. Hence, experiments were conducted to study the role of glucose and GlcNAc as a replacement of A. fumigatus spent medium during C. pyrenoidosa–A. fumigatus harvesting. Glucose, being the simplest form of sugar, was tested first. C. pyrenoidosa cells were incubated with 100-mM glucose and then harvesting experiments were carried out using washed A. fumigatus pellets devoid of any A. fumigatus spent medium. Results showed > 75% harvesting after 5 h, suggesting that glucose assisted in the harvesting process. Negative control of C. pyrenoidosa incubated without glucose did not show any harvesting with washed A. fumigatus pellets (Additional file 1: Figure S1). The concentration of glucose in the supernatant at different stages of incubation and harvesting was analyzed using HPLC (Fig. 3). HPLC analysis showed the presence of glucose before and after incubation with C. pyrenoidosa cells. No other peak was observed in these spectra. However, after mixing with A. fumigatus pellets and harvesting for 5 h, no glucose was detected by HPLC, but 24 mM of GlcNAc was detected. This suggested that the conversion of glucose into GlcNAc took place during harvesting with A. fumigatus pellets. During HPLC, GlcNAc had a retention time of 8.7 min; while glucose had a retention time of 8.3 min. This was an interesting observation as after incubating C. pyrenoidosa with glucose, the end product after harvesting was not glucose but GlcNAc. HR-LC–MS analysis also confirmed the presence of GlcNAc after 5 h of harvesting the glucose-incubated C. pyrenoidosa cells (Fig. 3a–d). Therefore, the exact role of GlcNAc during harvesting was studied in further experiments. C. pyrenoidosa cells were incubated with 100-mM GlcNAc for 2 h and harvested with washed A. fumigatus pellets for 5 h. GlcNAc concentration was measured by HPLC during three stages of the experiments viz. initial, after 2-h incubation and after harvesting. The initial concentration of GlcNAc was 90.40 mM which decreased to 36 mM after incubation with C. pyrenoidosa. When the GlcNAc-incubated C. pyrenoidosa cells were subjected for harvesting with washed A. 
fumigatus pellets, harvesting efficiency was found to be > 85% and the GlcNAc concentration was measured as 28 mM after 5 h. On the contrary, no GlcNAc was detected in C. pyrenoidosa incubated without glucose/GlcNAc and subjected to harvesting with washed A. fumigatus pellets (Additional file 2: Table S1). No harvesting occurred in this case. However, when a t test was performed between the harvesting efficiencies of C. pyrenoidosa cells incubated with glucose and with GlcNAc, a significant difference in harvesting efficiency was observed up to 4 h. The difference was not significant after 4 h (Additional file 1: Figure S1). HPLC chromatograms of supernatant from glucose pre-incubated C. pyrenoidosa at different stages of incubation. a Initial glucose concentration (RT 8.3 min) of 94.36 mM. b Glucose concentration after 2 h of pre-incubation showing a reduced concentration of about 45 mM. c After harvesting, the peak shifted to RT 8.7 min, which corresponds to the peak of GlcNAc, depicting the formation of GlcNAc during the attachment process. d HR-LC–MS chromatogram of the supernatant after harvesting glucose pre-incubated C. pyrenoidosa with washed A. fumigatus pellets, confirming the presence of GlcNAc (indicated by arrow) Microscopic analysis and surface characterization of A. fumigatus spent medium-/GlcNAc-incubated C. pyrenoidosa cells during harvesting HR-LC–MS and HPLC analyses suggested that washed A. fumigatus pellets could harvest C. pyrenoidosa cells in the presence of GlcNAc alone. This indicates that GlcNAc present in the A. fumigatus spent medium is responsible for C. pyrenoidosa harvesting. To further investigate this observation, Con A staining of normal C. pyrenoidosa cells, A. fumigatus spent medium-incubated C. pyrenoidosa cells and GlcNAc-incubated C. pyrenoidosa cells was performed. The results showed the absence of GlcNAc on the cell surface of normal cells. However, when the C. pyrenoidosa cells were incubated with A. fumigatus spent medium or GlcNAc, the presence of GlcNAc on their cell surface was indicated by dark red dots (Fig. 4a). Further, FTIR analysis of the same samples was performed. The FTIR pattern of the A. fumigatus spent medium- and GlcNAc-incubated algae showed distinct peaks at 3430, 2922, 2131, 1657, 1564, 1380, 1319, 1161, 1079, and 894 cm−1. These peaks were found to be characteristic of GlcNAc when correlated with the FTIR spectrum of GlcNAc alone. No such peaks were observed in normal C. pyrenoidosa cells. The FTIR of C. pyrenoidosa alone showed the presence of phosphate (1048 cm−1), amide (1541 cm−1), carboxylic (1650 cm−1), alkene (2919 cm−1) and hydroxyl (3300 cm−1) groups on its cell surface (Fig. 4b) [12]. a Representative brightfield and fluorescent micrographs of Concanavalin A-stained (i and ii) normal C. pyrenoidosa, (iii and iv) C. pyrenoidosa cells incubated with A. fumigatus spent medium and (v and vi) C. pyrenoidosa cells incubated with GlcNAc. White arrows indicate the presence of GlcNAc on the C. pyrenoidosa cell surface giving bright red spots. b FTIR spectra of normal C. pyrenoidosa, A. fumigatus spent medium-incubated C. pyrenoidosa, GlcNAc-incubated C. pyrenoidosa and GlcNAc powder alone. Arrows indicate the characteristic peaks of GlcNAc in spent medium-incubated as well as GlcNAc-incubated C. pyrenoidosa biomass A distinct change in the surface character of C. pyrenoidosa cells incubated with A. fumigatus spent medium as well as GlcNAc for 2.5 h was observed through microscopic analysis (SEM and HR-TEM). The SEM image clearly showed more wrinkles on the C.
pyrenoidosa cells after incubation as compared to the normal un-incubated C. pyrenoidosa cells (Fig. 5). The spent medium-incubated C. pyrenoidosa cells were elongated compared to the normal C. pyrenoidosa cells and appeared to be embedded in a matrix. Similar changes were observed when C. pyrenoidosa cells were incubated with GlcNAc. On the other hand, the SEM of the C. pyrenoidosa cells incubated with washed A. fumigatus pellets (which did not induce attachment) was found to be similar to the normal C. pyrenoidosa cells (Fig. 5). Scanning electron micrographs of (a) normal C. pyrenoidosa (b) C. pyrenoidosa incubated with washed A. fumigatus pellets (2.5 h) (c) C. pyrenoidosa incubated with A. fumigatus spent medium (2.5 h) (d) C. pyrenoidosa incubated with GlcNAc (2.5 h). The images shown above are the representative images for each treatment selected out of multiple frames. The figure shows the change in surface morphology of C. pyrenoidosa cells after incubating with A. fumigatus spent medium and GlcNAc while no such change in C. pyrenoidosa cells incubated with washed A. fumigatus pellets (2.5 h) To study the ultrastructural changes, HR-TEM of the above samples was performed. The normal C. pyrenoidosa cells and cell incubated with washed A. fumigatus (without A. fumigatus spent medium/GlcNAc) showed intact and round cells having smooth cell wall and well-defined cell organelles (Fig. 6). TEM images of A. fumigatus spent medium- and GlcNAc-incubated C. pyrenoidosa cells showed identical morphological changes. Both the images showed the formation of small villi-like structures on the cell surface. However, the TEM images of C. pyrenoidosa cells incubated with washed A. fumigatus pellets did not show such morphological changes and were similar to that of normal C. pyrenoidosa cells. The changes in the C. pyrenoidosa surface after A. fumigatus spent medium/GlcNAc treatment or even after complete harvesting does not affect the integrity of the cell wall as depicted by SYTOX green staining. The red auto-fluorescence was observed in A. fumigatus spent medium-incubated C. pyrenoidosa cells indicating live cells. As SYTOX binds to nucleic acids, dead cells would have shown green fluorescent color (Additional file 3: Figure S2). Transmission electron micrographs of (a) normal C. pyrenoidosa cells (b) C. pyrenoidosa incubated with washed A. fumigatus pellets (2.5 h) (c) C. pyrenoidosa incubated with A. fumigatus spent medium (2.5 h) (d) C. pyrenoidosa incubated with GlcNAc (2.5 h); CW denotes the cell wall of algal cells. The arrows show the formation of villi-like structures on the cell wall after incubation with A. fumigatus spent medium/GlcNAc. The images shown above are the representative images for each treatment selected out of multiple frames To confirm the surface changes observed through SEM and HR-TEM, AFM analysis of normal C. pyrenoidosa cells and A. fumigatus spent medium-incubated C. pyrenoidosa cells was conducted. Increase in the roughness of the C. pyrenoidosa cells (RMS value of 91.0 nm) was seen when incubated with A. fumigatus spent medium as compared to the normal C. pyrenoidosa which showed RMS value of 76.5 nm (Fig. 7). The cell height also increased from 42.5 nm in normal C. pyrenoidosa to 53.8 nm in the A. fumigatus spent medium-incubated C. pyrenoidosa. AFM analysis of (a) normal C. pyrenoidosa (RMS 76.5 nm) (b) spent medium treated C. pyrenoidosa (91.0 nm) showing change in the roughness of the cells after incubation with A. 
fumigatus spent medium Chemical harvesting, although quick, contaminates the biomass with unwanted chemicals [42]. Previous reports have suggested that biological harvesting of micro-algae improves the quality of the biomass for biofuel purposes. Wrede et al. [30] reported a significant increase in lipid yield when microalgal cells were harvested with A. fumigatus. An increase in lipid content for algal–fungal pellets was also reported by Bhattacharya et al. [12]. In another study, Botryococcus braunii was harvested using A. fumigatus and the resultant biomass did not show any significant variation in composition. Apart from the biodiesel aspect, harvesting of microalgae using filamentous fungi also increases their biogas potential. Prajapati et al. [32] reported enhanced bio-methane production from algal–fungal pellets due to the enzymatic degradation of the microalgal cell wall by the fungi. All these studies show that harvesting algae with filamentous fungi is a favorable process for biofuel production. However, as the underlying mechanism of the process is not clearly understood, it is difficult to replicate the process at large scale. This study gives an insight into the communication mechanism between A. fumigatus and C. pyrenoidosa, which could be exploited for fungal-assisted microalgal harvesting processes at large. Chemical communication between microbial cells is an inherent property which dictates the attachment and aggregation of cells [41, 43]. Such communication, mediated by the secretion of extracellular chemical signaling molecules, can establish mutually beneficial partnerships between the species [44]. In natural systems, algae and fungi can exhibit a host–parasite relationship, as seen in the chytrid–phytoplankton association [37]. In artificial systems, the co-culturing of Chlamydomonas reinhardtii and A. nidulans strains has been reported to show obligate mutualism in terms of nitrogen and carbon exchange between the species. The fungus converts glucose into CO2, which is used by the alga for its growth. On the other hand, Chlamydomonas sp. reduces nitrite to ammonia, which the fungus can use as a nitrogen source [36, 45]. During this nutrient exchange, the cell wall of Chlamydomonas sp. attached to the fungus was reported to show thinning due to the action of fungal remodeling enzymes [45]. The present study can be closely related to the above-mentioned cases, as it also reports extracellular secretions from A. fumigatus mediating cell wall changes in the C. pyrenoidosa cells. These secretions might help A. fumigatus to signal to the C. pyrenoidosa cell and bring it into close vicinity for nutritional benefits or enzymatic degradation [32]. A similar phenomenon of microalgal–fungal attachment has been reported by our group with a wide range of microalgal species encompassing green algae (Chlorella sp.), blue–green algae (Chroococcus sp.) as well as mixed consortia [12, 32]. Fungi produce various exopolysaccharides during their growth [38], which are known to mediate cell–cell adhesion. The present study has identified an extracellular causative factor produced by the fungus A. fumigatus which is responsible for the C. pyrenoidosa–A. fumigatus attachment. Experiments performed with washed and unwashed A. fumigatus pellets have shown the presence of a mediating molecule in the A. fumigatus spent medium that is crucial for the attachment process. Washed A. fumigatus pellets could not induce attachment since they were devoid of any A.
fumigatus spent medium. Fresh PDB also did not contain the mediating molecule, and its harvesting efficiency was similar to that of the washed A. fumigatus pellets. This observation suggested that the causative factor is an extracellular metabolite produced by actively growing fungi. The result was also in agreement with our recent observations that relatively old (72-h grown) or autoclaved A. fumigatus pellets failed to harvest the C. pyrenoidosa cells due to their low metabolic activity and damaged hyphae [12]. A. fumigatus spent medium is a cocktail of extracellular metabolites including EPS, which is a complex mixture of polysaccharides, proteins, nucleic acids and amyloids [40]. EPS production by Aspergillus sp. has been reported in the context of bio-flocculation [46]. However, to pinpoint the causative factor, it is necessary to know the nature of the mediating molecule. Autoclaving of the A. fumigatus spent medium did not affect the harvesting efficiency of the fungus. This indicated that the mediating factor was not proteinaceous in nature, as autoclaving would denature a protein. The chloroform–methanol mixture extracted the polar organic components from the A. fumigatus spent medium. Loss of activity after this treatment indicated that the mediating factor was a polar organic compound. This was further confirmed using hexane as the extraction solvent. Treatment of the A. fumigatus spent medium with hexane did not show any loss in harvesting activity. As hexane extracts only non-polar organic compounds, this further confirmed that the molecule is a polar organic compound. Further insight into the nature of the molecule was obtained by treating the A. fumigatus spent medium with sodium periodate. Periodate targets saccharides/polysaccharides, leading to their oxidation. Loss of harvesting activity after periodate treatment of the A. fumigatus spent medium strongly suggested that the mediating factor was a sugar-like molecule. The HR-LC–MS analysis of the A. fumigatus spent medium also showed the presence of two sugars, glucose and GlcNAc, which might be the triggering factors for the attachment process. Previously, the bioflocculant produced by Achromobacter xylosoxidans was found to be composed mainly of a carbohydrate heteropolymer [47]. In another study, characterization of a bioflocculant produced by Aspergillus flavus showed that it contained 69.7% sugar, of which 1.8% was an amino sugar such as GlcNAc [46]. In a recent study, it was found that an amino sugar-based polysaccharide, galactosaminogalactan, mediates attachment of A. fumigatus to epithelial cells [48]. In the present study, we were able to pinpoint the particular molecule that could replicate the C. pyrenoidosa–A. fumigatus attachment process in the absence of A. fumigatus spent medium. The results of the various spent-medium treatments, corroborated by the HR-LC–MS results, suggested the target molecule to be glucose or GlcNAc. The presence of either GlcNAc or A. fumigatus spent medium is mandatory for the attachment process. A t test between the harvesting kinetics of the control and of GlcNAc-incubated C. pyrenoidosa cells did not show any significant difference. When GlcNAc was used for harvesting studies, the harvesting process was clearly and significantly faster than with glucose. HR-LC–MS and HPLC results confirmed that glucose is converted into GlcNAc by A. fumigatus pellets. Conversions of sugar-like molecules into GlcNAc have been reported for saprophytic fungi using degradative/hydrolytic enzymes [49, 50].
The kinetics of glucosamine formation from glucose and other sugars as substrates by fungi such as Aspergillus sp. have also been reported during submerged fermentation [51]. Similar harvesting experiments with glucose epimers (galactose and mannose) showed only 30% harvesting, which further suggests the role of GlcNAc in the harvesting process, since these sugars cannot be converted into GlcNAc (Additional file 4: Figure S3). The present result explains the previously observed prerequisites for C. pyrenoidosa–A. fumigatus attachment (the presence of metabolically active fungi, a temperature of 38 °C and neutral pH) [12], which seem to favor GlcNAc production. An earlier report showed that GlcNAc production by Aspergillus sp. was dependent on the pH of the system, with higher pH inhibiting its formation [51]. A comparative study of GlcNAc production among three wild-type fungi, viz. A. fumigatus, Rhizopus oligosporus and Monascus pilosus, showed that A. fumigatus had the highest production capacity of the three. We could demonstrate the presence of GlcNAc on the spent medium-incubated and GlcNAc-incubated C. pyrenoidosa using the lectin Concanavalin A (Con A) stain and FTIR. Although Con A gives a signal for glucose as well as glucosamine, C. pyrenoidosa suspended in BG11 alone did not show any signal, as seen in Fig. 4a. When C. pyrenoidosa was incubated with A. fumigatus spent medium or glucosamine, Con A showed signals on the algal cell surface (Fig. 4b, c). When C. pyrenoidosa cells were incubated in the presence of glucosamine alone, the concentration of glucosamine decreased over time. The FTIR pattern of glucosamine-incubated cells shows peaks similar to those of glucosamine. The Con A staining results thus complement the findings of the FTIR data. The mechanism of interaction could be further elaborated by examining the effect of A. fumigatus spent medium on the morphology, ultrastructure and cell surface roughness of C. pyrenoidosa cells. Roughness analysis is one of the parameters that indicate changes in the cell surface due to changes in the growth environment. The root mean square (rms) value corresponds to the roughness of the sample. AFM analysis showed that the roughness of A. fumigatus spent medium-incubated C. pyrenoidosa cells was increased compared to normal C. pyrenoidosa cells. An increase in roughness is an essential step in the cellular attachment process [52]. It has also been reported that, in the case of the green fouling alga Enteromorpha, increased cell roughness caused more fouling compared to smooth cells with low roughness [53]. SEM micrographs (Fig. 5c) of spent medium-incubated cells depicted elongated C. pyrenoidosa cells embedded in a sticky matrix. Extracellular release of mucilage for entrapment is well documented for parasitic Aspergillus fumigatus-type species [40]. The fact that both A. fumigatus spent medium-incubated C. pyrenoidosa and GlcNAc-incubated algae showed GlcNAc-specific peaks in the FTIR spectra suggests that similar cell surface changes may be induced by A. fumigatus spent medium and GlcNAc. In this connection, SEM micrographs of GlcNAc-incubated C. pyrenoidosa cells also showed elongation and wrinkles on the cell surface. TEM images clearly show that A. fumigatus spent medium induces the formation of villi-like structures on the C. pyrenoidosa surface. It was interesting to note that similar villi-like structures were induced by incubating C. pyrenoidosa cells with GlcNAc. On the other hand, washed A. fumigatus pellets (without A.
fumigatus spent medium) failed to induce such changes. The above results strongly suggest that GlcNAc present in A. fumigatus spent medium is responsible for inducing surface changes in C. pyrenoidosa cells that mediate the attachment to A. fumigatus hyphae. A role of GlcNAc in cell–cell adhesion has also been demonstrated in bacteria and yeast [54, 55]. Moreover, it has been established that GlcNAc can function as a signaling molecule or as an inducer of hyphal growth in Candida albicans without undergoing a metabolic reaction [56]. It is also known to induce the formation of curli fibers in bacteria [57]. In the present study, we demonstrate that GlcNAc plays a similar role in the adhesion of C. pyrenoidosa cells to A. fumigatus pellets by inducing morphological changes on the C. pyrenoidosa cell surface, which has not been reported until now. In summary, we show that the C. pyrenoidosa attachment to A. fumigatus pellets is mediated by GlcNAc, as demonstrated by biochemical and analytical methods. Based on the results, the probable mechanism of the process is depicted in Fig. 8. When C. pyrenoidosa cells are incubated with A. fumigatus spent medium or GlcNAc, the GlcNAc attaches to the C. pyrenoidosa surface. This induces surface modifications in the C. pyrenoidosa cells: the cells become elongated and develop projections on the surface. The GlcNAc on C. pyrenoidosa might act as a chemical signal for A. fumigatus receptors, thereby attaching the C. pyrenoidosa cells onto the A. fumigatus cell wall. Recent reports have shown a role of G protein-coupled receptors (GPCRs) present in ascomycete fungi in sensing sugars [58]. It has also been reported that the receptors on the A. fumigatus cell surface are dynamic and are made only in the presence of external stimuli [59]. Such receptors might play a role in the C. pyrenoidosa–A. fumigatus attachment, for which metabolically active fungi are required. Our results indicate that this attachment is triggered by GlcNAc, which may act like a quorum-sensing molecule. GlcNAc induces morphological changes on the C. pyrenoidosa cell surface, which are clearly visible in the TEM micrographs. However, our recent study of such microalgal–fungal attachment has shown that the fungus ultimately uses this interaction to utilize the microalgal cells as a source of food [32]. Hence, this microbial interaction is different from other interactions in which both the emitter and the receiver of the signal benefit [44]. The existence of A. fumigatus receptors and the factors triggering their expression need to be investigated further. The figure depicts the probable mechanism of the C. pyrenoidosa–A. fumigatus interaction mediated by GlcNAc (mediating molecule) Organism and culture conditions The microalgal species Chlorella pyrenoidosa was obtained from the National Collection of Industrial Microorganisms (NCIM), NCL Pune (India). C. pyrenoidosa was maintained on 2% algae culture agar (HiMedia M343-500G) slants supplemented with BG11 (HiMedia, M1541-500G) in a plant growth chamber (Daihan Labtech, LGC-5101). Liquid cultures were maintained in BG11 broth at 120 rpm without CO2 supplementation. For experimental purposes, C. pyrenoidosa culture was grown in 2.5-L flasks under continuous light in a greenhouse maintained at 25 ± 1 °C and a light intensity of ≈ 3500 lx [12]. The fungal strain Aspergillus fumigatus (Accession no. KY241789), previously isolated from wastewater [60], was used as the harvesting organism. A.
fumigatus was maintained on sterile potato dextrose agar (PDA) slants (HiMedia M096-500G) at 28 ± 1 °C. C. pyrenoidosa and A. fumigatus harvesting experiments A 3-day-old Aspergillus fumigatus slant was used to inoculate potato dextrose broth (PDB; HiMedia M403-500G) for cultivation. An A. fumigatus spore suspension prepared with 0.1% Tween-80 (≈ 10^8 spores ml−1) was inoculated into 100 ml of potato dextrose broth and incubated for 24 h at 28 ± 1 °C and 150 rpm in an orbital shaker. C. pyrenoidosa cultures exhibiting an optical density (OD680) ≈ 2.5 were used for harvesting experiments. Ten milliliters of overnight-grown A. fumigatus culture was added to 90 ml of C. pyrenoidosa culture having OD680 ≈ 2.5 (1:5 on a dry weight basis) and kept at 38 ± 1 °C and 100 rpm for 4 h in an incubator shaker. The harvesting efficiency, based on the optical density of C. pyrenoidosa, was measured every 30 min; the flask was allowed to stand for 3 min before a sample was drawn for absorbance measurement (OD680) using a microplate reader (Biotek EON®-C). Harvesting efficiency (HE) was calculated as: $$\mathrm{HE} = \left(1 - \frac{\mathrm{OD}_t}{\mathrm{OD}_0}\right) \times 100,$$ where ODt is the optical density at time t and OD0 is the initial optical density (a worked illustration of this calculation is given below). To check the viability of normal C. pyrenoidosa cells and of C. pyrenoidosa cells incubated with A. fumigatus spent medium during harvesting (2.5 h), the cells were stained with a nucleic acid stain, SYTOX green (Cat. No. S7020, Invitrogen), which specifically stains dead or damaged cells. Dual fluorescence was then used to show red autofluorescence for live C. pyrenoidosa cells and green fluorescence of SYTOX green (Excitation/Emission: 504/523 nm) for dead cells [61]. Role of extracellular factors To assess the role of an extracellular substance in the attachment process, A. fumigatus biomass was washed twice with distilled water to remove all the spent medium. The medium remaining after growth of the fungus is termed A. fumigatus spent medium throughout the text. The A. fumigatus biomass was subjected to five experimental harvesting sets: Set I—Control (unwashed C. pyrenoidosa and A. fumigatus pellets), Set II—Unwashed C. pyrenoidosa and washed A. fumigatus pellets, Set III—Unwashed C. pyrenoidosa and washed A. fumigatus pellets resuspended in their spent medium, Set IV—Unwashed C. pyrenoidosa and washed A. fumigatus pellets resuspended in fresh PDB, Set V—Washed C. pyrenoidosa and unwashed A. fumigatus pellets. Harvesting in all the above sets was performed for 4 h as described above. Experiments were performed in triplicate for every experimental set. A C. pyrenoidosa culture without any A. fumigatus biomass was run as a negative control to check the sedimentation rate of the C. pyrenoidosa culture. The experimental setup is shown graphically in Fig. 1a. To study the effect of an extracellular factor (present in A. fumigatus spent medium) on the C. pyrenoidosa cell surface, scanning electron microscopy (SEM) and high-resolution transmission electron microscopy (HR-TEM) were performed before harvesting (normal C. pyrenoidosa cells) and after a designated time during harvesting (2.5 h). For comparison, SEM and HR-TEM of C. pyrenoidosa cells incubated with washed A. fumigatus pellets (without A. fumigatus spent medium) from Set II were also performed. Atomic force microscopy (AFM) of normal C. pyrenoidosa cells and of cells during harvesting was performed to evaluate the change in cell height and roughness.
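As a worked illustration of the harvesting-efficiency formula defined above, the following minimal Python sketch computes HE from OD680 readings. The readings are hypothetical values chosen only to show the arithmetic; they are not data from this study.

```python
# Illustrative only: hypothetical OD680 readings, not data from this study.
def harvesting_efficiency(od_t: float, od_0: float) -> float:
    """HE (%) = (1 - OD_t / OD_0) * 100."""
    return (1.0 - od_t / od_0) * 100.0

od_0 = 2.5  # initial OD680 of the C. pyrenoidosa culture
readings = {0.5: 2.1, 1.0: 1.6, 2.0: 0.9, 3.0: 0.4, 4.0: 0.24}  # time (h) -> OD680

for t, od_t in sorted(readings.items()):
    print(f"t = {t:.1f} h: HE = {harvesting_efficiency(od_t, od_0):.1f}%")
# With these made-up readings, HE reaches ~90% at 4 h, i.e. the same order as the control reported here.
```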
Treatment and characterization of A. fumigatus spent medium The above preliminary experiments highlighted the role of an extracellular factor present in the A. fumigatus spent medium. Hence, various treatments of the A. fumigatus spent medium, namely autoclaving, sodium periodate treatment, methanol:chloroform extraction and hexane extraction, were performed, followed by testing of the harvesting efficiency when the pre-treated A. fumigatus spent medium was added to washed A. fumigatus pellets and C. pyrenoidosa cells. For every experiment, A. fumigatus pellets were first washed to remove any spent medium present. Harvesting experiments were performed with normal C. pyrenoidosa cells and A. fumigatus pellets resuspended in the fungal spent medium after each treatment (autoclaving, sodium periodate treatment, methanol:chloroform extraction and hexane extraction). A mixture of C. pyrenoidosa cells and A. fumigatus pellets with untreated spent medium was used as a positive control. A negative control comprising C. pyrenoidosa mixed with washed A. fumigatus pellets was run to ensure that the A. fumigatus pellets did not carry any fungal spent medium. Aspergillus fumigatus spent medium was autoclaved for 15 min at 121 °C and 15 psi before use in harvesting experiments. As autoclaving would denature any protein component in the spent A. fumigatus medium, the role of a protein molecule (if any) could thereby be assessed. Sodium periodate targets saccharides/polysaccharides, leading to the oxidation of these residues. The spent A. fumigatus medium (100 ml) was pre-treated with 20-mM sodium periodate (Cat No. S1147, Sigma) solution prepared in an oxidation buffer (50-mM sodium acetate and 50-mM acetic acid, pH 4.5). Since sodium periodate is light sensitive, the preparation of the solution and the pre-treatment of the A. fumigatus spent medium were performed in the dark at 28 °C and 150 rpm. To stop the activity of sodium periodate, the sample was exposed to light for 15 min to remove the remaining periodate from the system. The methanol:chloroform extraction was performed by mixing an equal volume of methanol:chloroform (1:1) with the A. fumigatus spent medium for 1 h at 28 °C and 150 rpm to identify any polar molecules responsible for the harvesting process. The aqueous fraction was separated from the mixture in a separating funnel and the organic fraction was discarded. Washed A. fumigatus pellets suspended in this aqueous fraction were then used to harvest C. pyrenoidosa cells. A similar experiment was done using the hexane-extracted aqueous fraction (extracted following the same protocol as for the methanol:chloroform aqueous extract) to identify any non-polar molecule aiding the harvesting process. The A. fumigatus spent medium was analyzed using high-resolution liquid chromatography–mass spectrometry (HR-LC–MS). Determination of mediating molecule Based on the HR-LC–MS analysis and the preceding A. fumigatus spent medium treatments, glucose and N-acetyl glucosamine (GlcNAc) were suspected to play the most important role in the harvesting process. C. pyrenoidosa biomass was exposed to 100-mM glucose and GlcNAc, respectively, by incubation for 2 h at 38 °C and 100 rpm, followed by removal of residual glucose and GlcNAc by centrifugation (4000g for 10 min) and resuspension of the biomass in fresh BG11. C.
pyrenoidosa after glucose/GlcNAc incubation was then subjected to harvesting with washed A. fumigatus pellets (without spent medium) for 5 h as described in "C. pyrenoidosa and A. fumigatus harvesting experiments" section. Another set of C. pyrenoidosa suspension was also incubated under the same conditions without supplementation of glucose/GlcNAc followed by its harvesting with washed A. fumigatus pellets. To analyze the concentration of glucose and GlcNAc at different stages of experiment, high-performance liquid chromatography (HPLC) was performed at three levels: (i) supernatant of C. pyrenoidosa suspension immediately after glucose or GlcNAc addition (initial); (ii) supernatant of C. pyrenoidosa suspension after 2 h of incubation with glucose or GlcNAc and (iii) supernatant of respective sets at the end of harvesting for 5 h. The results were compared with HPLC of glucose and GlcNAc standard as a reference. HR-LC–MS of the supernatants from all the set after harvesting was also done to confirm the result of HPLC. GlcNAc-incubated C. pyrenoidosa biomass was also subjected to Fourier Transform Infra-Red Spectroscopy (FTIR) and was compared to FTIR spectra of C. pyrenoidosa biomass during harvesting (after 2 h) as well as normal C. pyrenoidosa. The FTIR of GlcNAc powder was also performed to correlate the presence of similar peaks on the incubated C. pyrenoidosa biomass. To further confirm the role of GlcNAc and A. fumigatus spent medium in the harvesting process, C. pyrenoidosa cells incubated with GlcNAc (100 mM) and A. fumigatus spent medium was observed under a fluorescent microscope and high-resolution transmission electron microscope (HR-TEM, Sect. 2.7.5) and was compared with normal C. pyrenoidosa cells. For fluorescent microscopy, the cells were stained with Concanavalin A (Con A) conjugated with Alexa Fluor® 594 (Cat No. C11253, ThermoFisher Scientific) to detect the presence of GlcNAc on the C. pyrenoidosa cell surface [62], followed by viewing of cells under a fluorescent microscope (Nikon Eclipse Ti-U). Con A is a lectin which binds d-glucose, d-fructose, d-mannose, N-acetyl-d-glucosamine and related monosaccharides [63]. Since Con A does not have fluorescence, it is tagged with a fluorescent dye Alexa Fluor 594 (Excitation/Emission: 590/617 nm). To 100 µl of C. pyrenoidosa cells, 0.1 µl of the dye was added and kept in the dark for 5 min at room temperature. The stained cells were then observed under the microscope. Analytical techniques Sample preparation for high-performance liquid chromatography (HPLC) and high-resolution liquid chromatography–mass spectrometry (HR-LC–MS) Samples were made salt free by passing through polymeric cartridge Strata-X-CW (Cat No. 8B-S035-JEG, Phenomenex). The cartridge was first conditioned by adding 10-ml methanol followed by 10-ml distilled water. The flow-through was discarded, and then 10 ml of the sample was added to the cartridge. The flow-through from the cartridge was collected, tested for harvesting activity and then analyzed using HPLC and HR-LC–MS. Standards for glucose (Cat No. 47829) and GlcNAc (Cat No. PHR1432-1G) were purchased from Sigma. Standards were dissolved in HPLC grade water (Cat No. AS077-1L, Hi-Media). HPLC was performed according to manufacturer's protocol using Agilent 1260 series machine with Agilent Hi-Plex H column and RI detector. The mobile phase was 0.005-M H2SO4 (Cat No. 5438270100, Sigma) at a flow rate of 0.6 ml min−1. The column temperature was kept at 60 °C and the run time was 20 min. 
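The HPLC quantification described above rests on comparison with glucose and GlcNAc standards of known concentration. A generic external-standard calibration sketch of that step is given below; the standard concentrations and peak areas are illustrative assumptions, not values from this study, and this is not the authors' exact data-processing procedure.

```python
# Generic external-standard calibration for an RI-detector HPLC assay.
# All numbers are illustrative assumptions, not data from this study.
import numpy as np

# Peak areas measured for GlcNAc standards of known concentration (mM)
standard_conc = np.array([10.0, 25.0, 50.0, 100.0])
standard_area = np.array([1.1e5, 2.8e5, 5.5e5, 1.1e6])

# Fit a straight calibration line: area = slope * concentration + intercept
slope, intercept = np.polyfit(standard_conc, standard_area, 1)

def area_to_mM(peak_area: float) -> float:
    """Convert a sample peak area to concentration (mM) via the calibration line."""
    return (peak_area - intercept) / slope

# Hypothetical sample peak (e.g., supernatant drawn after harvesting)
print(f"Estimated GlcNAc: {area_to_mM(3.1e5):.1f} mM")
```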
All samples were degassed prior to analysis. HR-LC–MS HR-LC–MS was performed to identify the nature of the compounds present in the A. fumigatus spent medium. The A. fumigatus spent medium is a mixture of various types of compounds such as proteins, carbohydrates, organic acids, sugars and other A. fumigatus metabolites; hence, HR-LC–MS was used to screen for these compounds. The aliquot was subjected to HR-LC–MS analysis (ACCUCORE RP-MS); the analysis was outsourced to the Sophisticated Analytical Instrument Facility (SAIF), Indian Institute of Technology (IIT), Mumbai. The column used was a ZORBAX ECLIPSE C-18 (Agilent Technologies). The sample was run isocratically for 30 min using acetonitrile (95%) as the solvent. The compounds were analyzed using a quadrupole–time-of-flight mass spectrometer (Q-TOF MS; Agilent iFunnel G6550A), giving probable hits from the database library provided by the manufacturer. Scanning electron microscopy (SEM) SEM analysis of (i) normal C. pyrenoidosa cells, (ii) C. pyrenoidosa cells incubated with A. fumigatus spent medium, (iii) GlcNAc-incubated C. pyrenoidosa and (iv) washed A. fumigatus pellets was performed using a previously described protocol [61]. The samples for SEM analysis were first washed with PBS and then fixed with 1% glutaraldehyde for 4 h at room temperature. The samples were centrifuged and the fixative was discarded. Samples were then lyophilized (Allied Frost FD3) for SEM analysis using a ZEISS EVO 50 instrument under the following analytical conditions: EHT = 20.00 kV, WD = 9.5 mm, Signal A = SE1. High-resolution transmission electron microscopy (HR-TEM) The C. pyrenoidosa cells ((i) normal C. pyrenoidosa cells, (ii) C. pyrenoidosa cells incubated with A. fumigatus spent medium, (iii) GlcNAc-incubated C. pyrenoidosa and (iv) washed A. fumigatus pellets) were centrifuged at 4000g for 10 min and prepared for viewing as outlined by Gola et al. [64]. The samples were examined with a high-resolution transmission electron microscope (Tecnai G2 20) operated at 200 kV. To confirm the observations of the TEM analysis, AFM studies of the C. pyrenoidosa cells (normal cells and A. fumigatus spent medium-incubated C. pyrenoidosa cells) were performed. C. pyrenoidosa cells were centrifuged and suspended in 50-mM citrate–phosphate buffer (pH 3) for conditioning. The buffer was removed by centrifugation, and the pellets were kept on a glass cover slip for 30 min, followed by washing with de-ionized water and air drying. AFM micrographs were obtained using a Bruker INNOVAA2 Sys instrument in tapping mode. Fourier transform infrared spectroscopy (FTIR) For FTIR measurements, the samples were first washed with phosphate-buffered saline (PBS) and then lyophilized. The lyophilized powder was then used for FTIR analysis using a Nicolet Is50 (Thermo Scientific) instrument. All experiments were performed in triplicate and results are represented as mean ± S.D. wherever applicable. Graphs were drawn using Microsoft Excel® (part of the Microsoft Office 2013 package). Significance testing was done using one-way ANOVA (p < 0.05). A two-tailed t test was also performed to check the significance between two data sets. Data supporting the results of the article are included within this manuscript and its additional files.
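The statistical analysis described above (one-way ANOVA across the experimental sets and a two-tailed t test between pairs of data sets) can be carried out with standard tools. A minimal SciPy sketch is shown below; the triplicate harvesting efficiencies are made-up illustrative values, not the study's data.

```python
# Illustrative significance testing on hypothetical triplicate harvesting efficiencies (%).
from scipy import stats

set_I   = [91.0, 90.2, 92.1]   # e.g. control (unwashed pellets)
set_II  = [19.5, 21.0, 18.7]   # e.g. washed pellets, no spent medium
set_III = [89.4, 90.8, 91.5]   # e.g. washed pellets resuspended in spent medium

# One-way ANOVA across the experimental sets (p < 0.05 taken as significant)
f_stat, p_anova = stats.f_oneway(set_I, set_II, set_III)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Two-tailed t test between two data sets, e.g. Set I vs. Set II
t_stat, p_ttest = stats.ttest_ind(set_I, set_II)
print(f"t test (two-tailed): t = {t_stat:.2f}, p = {p_ttest:.4f}")
```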
AF: Aspergillus fumigatus AFM: atomic force microscopy Chlorella pyrenoidosa FTIR: GlcNAc: N-acetyl glucosamine HPLC: high-performance liquid chromatography HR-LC–MS: high-resolution liquid chromatography–mass spectrometry optical density PDA: TEM: transmission electron microscopy Sharma YC, Singh B, Korstad J. A critical review on recent methods used for economically viable and eco-friendly development of microalgae as a potential feedstock for synthesis of biodiesel. Green Chem. 2011;13:2993. Ngangkham M, Ratha SK, Prasanna R, Saxena AK, Dhar DW, Sarika C, et al. Biochemical modulation of growth, lipid quality and productivity in mixotrophic cultures of Chlorella sorokiniana. SpringerPlus. 2012;1:33. https://doi.org/10.1186/2193-1801-1-33. Tango MD, Calijuri ML, Assemany PP, Couto E. Microalgae cultivation in agro-industrial effluents for biodiesel application: effects of the availability of nutrients. Water Sci Technol. 2018;78:2018180. https://doi.org/10.2166/wst.2018.180. Molina Grima E, Belarbi EH, Acién Fernández FG, Robles Medina A, Chisti Y. Recovery of microalgal biomass and metabolites: process options and economics. Biotechnol Adv. 2003;20:491–515. Gomez JA, Höffner K, Barton PI. From sugars to biodiesel using microalgae and yeast. Green Chem. 2015;18:461–75. Rashid N, Rehman MSU, Han J-I. Use of chitosan acid solutions to improve separation efficiency for harvesting of the microalga Chlorella vulgaris. Chem Eng J. 2013;226:238–42. Şirin S, Trobajo R, Ibanez C, Salvadó J. Harvesting the microalgae Phaeodactylum tricornutum with polyaluminum chloride, aluminium sulphate, chitosan and alkalinity-induced flocculation. J Appl Phycol. 2012;24:1067–80. https://doi.org/10.1007/s10811-011-9736-6. Granados MR, Acién FG, Gómez C, Fernández-Sevilla JM, Molina Grima E. Evaluation of flocculants for the recovery of freshwater microalgae. Bioresour Technol. 2012;118:102–10. Fasaei F, Bitter JH, Slegers PM, van Boxtel AJB. Techno-economic evaluation of microalgae harvesting and dewatering systems. Algal Res. 2018;31:347–62. Vasconcelos Fernandes T, Shrestha R, Sui Y, Papini G, Zeeman G, Vet LEM, et al. Closing domestic nutrient cycles using microalgae. Environ Sci Technol. 2015;49:12450–6. https://doi.org/10.1021/acs.est.5b02858. Al-Hothaly KA, Adetutu EM, Taha M, Fabbri D, Lorenzetti C, Conti R, et al. Bio-harvesting and pyrolysis of the microalgae Botryococcus braunii. Bioresour Technol. 2015;191:117–23. https://doi.org/10.1016/j.biortech.2015.04.113. Bhattacharya A, Mathur M, Kumar P, Prajapati SK, Malik A. A rapid method for fungal assisted algal flocculation: critical parameters and mechanism insights. Algal Res. 2017;21:42–51. Salim S, Gilissen L, Rinzema A, Vermuë MH, Wijffels RH. Modeling microalgal flocculation and sedimentation. Bioresour Technol. 2013;144:602–7. Sukenik A, Shelef G. Algal autoflocculation—verification and proposed mechanism. Biotechnol Bioeng. 1984;26:142–7. Oh HM, Lee SJ, Park MH, Kim HS, Kim HC, Yoon JH, et al. Harvesting of Chlorella vulgaris using a bioflocculant from Paenibacillus sp. AM49. Biotechnol Lett. 2001;23:1229–34. https://doi.org/10.1023/a:1010577319771. Rodolfi L, Zittelli GC, Barsanti L, Rosati G, Tredici MR. Growth medium recycling in Nannochloropsis sp. mass cultivation. Biomol Eng. 2003;20:243–8. Vandamme D, Foubert I, Fraeye I, Muylaert K. Influence of organic matter generated by Chlorella vulgaris on five different modes of flocculation. Bioresour Technol. 2012;124:508–11. Cho K, Hur SP, Lee CH, Ko K, Lee YJ, Kim KN, et al. 
Bioflocculation of the oceanic microalga Dunaliella salina by the bloom-forming dinoflagellate Heterocapsa circularisquama, and its effect on biodiesel properties of the biomass. Bioresour Technol. 2016;202:257–61. Hu YR, Wang F, Wang SK, Liu CZ, Guo C. Efficient harvesting of marine microalgae Nannochloropsis maritima using magnetic nanoparticles. Bioresour Technol. 2013;138:387–90. Zhang B, Lens PNL, Shi W, Zhang R, Zhang Z, Guo Y, et al. Enhancement of aerobic granulation and nutrient removal by an algal–bacterial consortium in a lab-scale photobioreactor. Chem Eng J. 2018;334:2373–82. Xu L, Guo C, Wang F, Zheng S, Liu CZ. A simple and rapid harvesting method for microalgae by in situ magnetic separation. Bioresour Technol. 2011;102:10047–51. Xie S, Sun S, Dai SY, Yuan J. Efficient coagulation of microalgae in cultures with filamentous fungi. Algal Res. 2013;2:28–33. Zhang J, Hu B. A novel method to harvest microalgae via co-culture of filamentous fungi to form cell pellets. Bioresour Technol. 2012;114:529–35. Papagianni M. Fungal morphology and metabolite production in submerged mycelial processes. Biotechnol Adv. 2004;22:189–259. Zhou W, Cheng Y, Li Y, Wan Y, Liu Y, Lin X, et al. Novel fungal pelletization-assisted technology for algae harvesting and wastewater treatment. Appl Biochem Biotechnol. 2012;167:214–28. https://doi.org/10.1007/s12010-012-9667-y. Gultom SO, Zamalloa C, Hu B. Microalgae harvest through fungal pelletization—co-culture of Chlorella vulgaris and Aspergillus niger. Energies. 2014;7:4417–29. Miranda AF, Taha M, Wrede D, Morrison P, Ball AS, Stevenson T, et al. Lipid production in association of filamentous fungi with genetically modified cyanobacterial cells. Biotechnol Biofuels. 2015;8:179. Muradov N, Taha M, Miranda AF, Wrede D, Kadali K, Gujar A, et al. Fungal-assisted algal flocculation: application in wastewater treatment and biofuel production. Biotechnol Biofuels. 2015;8:24. Zhou W, Min M, Hu B, Ma X, Liu Y, Wang Q, et al. Filamentous fungi assisted bio-flocculation: a novel alternative technique for harvesting heterotrophic and autotrophic microalgal cells. Sep Purif Technol. 2013;107:158–65. Wrede D, Taha M, Miranda AF, Kadali K, Stevenson T, Ball AS, et al. Co-cultivation of fungal and microalgal cells as an efficient system for harvesting microalgal cells, lipid production and wastewater treatment. PLoS ONE. 2014;9:e113497. Prajapati SK, Kumar P, Malik A, Choudhary P. Exploring pellet forming filamentous fungi as tool for harvesting non-flocculating unicellular microalgae. Bioenergy Res. 2014;7:1430–40. https://doi.org/10.1007/s12155-014-9481-1. Prajapati SK, Bhattacharya A, Kumar P, Malik A, Vijay VK. A method for simultaneous bioflocculation and pretreatment of algal biomass targeting improved methane production. Green Chem. 2016;18:5230–8. Stratford M, Keenan MHJ. Yeast flocculation: kinetics and collision theory. Yeast. 1987;3:201–6. Pan G, Zhang MM, Chen H, Zou H, Yan H. Removal of cyanobacterial blooms in Taihu Lake using local soils. I. Equilibrium and kinetic screening on the flocculation of Microcystis aeruginosa using commercially available clays and minerals. Environ Pollut. 2006;141:195–200. Sanders WB. Lichens: the Interface between mycology and plant morphology. Source Biosci. 2001;51:1025–36. Hom EFY, Schaeme D, Mittag M, Sasso S. A chemical perspective on microalgal—microbial interactions. Trends Plant Sci. 2015;10:1–4. Frenken T, Alacid E, Berger SA, Bourne EC, Gerphagnon M, Grossart H-P, et al. 
Integrating chytrid fungal parasites into plankton ecology: research gaps and needs. Environ Microbiol. 2017;19:3802–22. https://doi.org/10.1111/1462-2920.13827. Debeaupuis JP, Sarfati J, Goris A, Stynen D, Diaquin M, Latgé JP. Exocellular Polysaccharides from Aspergillus Fumigatus and Related Taxa. Mod Concepts Penicillium Aspergillus Classif. 1990;209:23. https://doi.org/10.1007/978-1-4899-3579-3_18. Holder DJ, Keyhani NO. Adhesion of the entomopathogenic fungus Beauveria (Cordyceps) bassiana to Substrata. Appl Environ Microbiol. 2005;71:5260–6. Jones EBG. Fungal adhesion. Mycol Res. 1994;98:961–81. Mori JF, Ueberschaar N, Lu S, Cooper RE, Pohnert G, Küsel K. Sticking together: inter-species aggregation of bacteria isolated from iron snow is controlled by chemical signaling. ISME J. 2017;11:1075–86. Chen L, Wang C, Wang W, Wei J. Optimal conditions of different flocculation methods for harvesting Scenedesmus sp. Cultivated in an open-pond system. Bioresour Technol. 2013;133:9–15. Phuong K, Kakii K, Nikata T. Intergeneric coaggregation of non-flocculating Acinetobacter spp. isolates with other sludge-constituting bacteria. J Biosci Bioeng. 2009;107:394–400. Keller L, Surette MG. Communication in bacteria: an ecological and evolutionary perspective. Nat Rev Microbiol. 2006;4:249–58. Hom EFY, Murray AW. Niche engineering demonstrates a latent capacity for fungal-algal mutualism. Science (80 −). 2014;345:94–8. Aljuboori AHR, Idris A, Abdullah N, Mohamad R. Production and characterization of a bioflocculant produced by Aspergillus flavus. Bioresour Technol. 2013;127:489–93. Subudhi S, Bisht V, Batta N, Pathak M, Devi A, Lal B. Purification and characterization of exopolysaccharide bioflocculant produced by heavy metal resistant Achromobacter xylosoxidans. Carbohydr Polym. 2016;137:441–51. Gravelat FN, Beauvais A, Liu H, Lee MJ, Snarr BD, Chen D, et al. Aspergillus galactosaminogalactan mediates adherence to host constituents and conceals hyphal β-glucan from the immune system. PLoS Pathog. 2013;9:e1003575. https://doi.org/10.1371/journal.ppat.1003575. Contreras R, Maras M, Bruyn A, Vervecken W, Uusitalo J, Penttil M, et al. In vivo synthesis of complex N-glycans by expression of human N-acetylglucosaminyltransferase I in the filamentous fungus Trichoderma reesei. FEBS Lett. 1999;452:365–70. Steiger MG, Mach-Aigner AR, Gorsche R, Rosenberg EE, Mihovilovic MD, Mach RL. Synthesis of an antiviral drug precursor from chitin using a saprophyte as a whole-cell catalyst. Microb Cell Fact. 2011;10:1–9. Hsieh J-W, Wu H-S, Wei Y-H, Wang SS. Determination and kinetics of producing glucosamine using fungi. Biotechnol Prog. 2007;1:1. https://doi.org/10.1021/bp070037o. Hallab NJ, Bundy KJ, O'Connor K, Moses RL, Jacobs JJ. Evaluation of metallic and polymeric biomaterial surface energy and surface roughness characteristics for directed cell adhesion. Tissue Eng. 2001;7:55–71. https://doi.org/10.1089/107632700300003297. Granhag L, Finlay J, Jonsson P, Callow J, Callow M. Roughness-dependent removal of settled spores of the green alga Ulva (syn Enteromorpha) exposed to hydrodynamic forces from a water jet. Biofouling. 2004;20:117–22. https://doi.org/10.1080/08927010410001715482. Prasadarao NV, Wass CA, Kim KS. Endothelial cell GlcNAc beta 1-4GlcNAc epitopes for outer membrane protein A enhance traversal of Escherichia coli across the blood–brain barrier. Infect Immun. 1996;64:154–60. Cormack BP, Ghori N, Falkow S. An adhesin of the yeast pathogen Candida glabrata mediating adherence to human epithelial cells. 
Science. 1999;285:578–82. Naseem S, Gunasekera A, Araya E, Konopka JB. N-Acetylglucosamine (GlcNAc) induction of hyphal morphogenesis and transcriptional responses in Candida albicans are not dependent on its metabolism. J Biol Chem. 2011;286(33):28671–80. Barnhart MM, Lynem J, Chapman MR. GlcNAc-6P levels modulate the expression of curli fibers by Escherichia coli. J Bacteriol. 2006;188:5212–9. Xue C, Hsueh YP, Heitman J. Magnificent seven: roles of G protein-coupled receptors in extracellular sensing in fungi. FEMS Microbiol Rev. 2008;32:1010–32. Grice CM, Bertuzzi M, Bignell EM. Receptor-mediated signaling in Aspergillus fumigatus. Front Microbiol. 2013;4:26. Mathur M, Gola D, Panja R, Malik A, Ahammad SZ. Performance evaluation of two Aspergillus spp. for the decolourization of reactive dyes by bioaccumulation and biosorption. Environ Sci Pollut Res. 2018;25:345–52. https://doi.org/10.1007/s11356-017-0417-0. Prajapati SK, Bhattacharya A, Malik A, Vijay VK. Pretreatment of algal biomass using fungal crude enzymes. Algal Res. 2015;8:8–14. Nobile CJ, Fox EP, Hartooni N, Mitchell KF, Hnisz D, Andes DR, et al. A histone deacetylase complex mediates biofilm dispersal and drug resistance in Candida albicans. MBio. 2014;5:e01201. Gonçalves GRF, Gandolfi ORR, Santos CMS, Bonomo RCF, Veloso CM, Fontan R. Development of supermacroporous monolithic adsorbents for purifying lectins by affinity with sugars. J Chromatogr B. 2016;1033–1034:406–12. Gola D, Dey P, Bhattacharya A, Mishra A, Malik A, Namburath M, et al. Multiple heavy metal removal using an entomopathogenic fungi Beauveria bassiana. Bioresour Technol. 2016;218:388–96. https://doi.org/10.1016/j.biortech.2016.06.096. The authors are grateful to Nano Research Facility (NRF) and Central Research Facility (CRF), IIT Delhi, for help with HPLC, FTIR, AFM,SEM and HR-TEM analysis. Sophisticated Analytical Instrument Facility (SAIF), IIT Bombay, for HR-LCMS analysis and SAIF AIIMS, New Delhi for TEM section cutting. Applied Microbiology Laboratory, Centre for Rural Development and Technology, Indian Institute of Technology, Delhi, Hauz Khas, New Delhi, 110016, India Arghya Bhattacharya , Megha Mathur , Pushpendar Kumar & Anushree Malik Search for Arghya Bhattacharya in: Search for Megha Mathur in: Search for Pushpendar Kumar in: Search for Anushree Malik in: AB and AM designed the experiments. AB, MM and PK conducted the experiments. AB and MM have written the manuscript. AM has thoroughly scrutinized the manuscript. All authors read and approved the final manuscript. The present work was carried out with financial support from Science and Engineering Research Board (SERB), Department of Science and Technology, Government of India (SB/S3/CEE/0002/2014). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Correspondence to Anushree Malik. Not applicable for this study. Additional file 1: Figure S1. Harvesting kinetics of C. pyrenoidosa with A. fumigatus pellets after incubation with glucose and GlcNAc showing significant difference in process kinetics (p < 0.05) up to 4 h. Non-significant difference (p < 0.05) was observed between harvesting kinetics of control and GlcNAc. Additional file 2: Table S1. HR-LC–MS analysis of supernatants after harvesting of algal–fungal mixtures using algae with different pre-incubations. Additional file 3: Figure S2. Representative brightfield (left) and fluorescent (right) micrographs of SYTOX Green stained C. pyrenoidosa cells incubatedwith A. 
fumigatus spent medium. Additional file 4: Figure S3. Harvesting efficiency of C. pyrenoidosa cells with A. fumigatus when incubated with different sugars. Bioflocculation
CommonCrawl
פלדהיים נעמי http://u.math.biu.ac.il/~liflyand/AnalysisSeminar/AnalSem07-08 Analysis Seminar RSS 27/01/2020 - 14:00 - 27/01/2020 - 15:10 Functional Inequalities on sub-Riemannian manifolds via QCD Prof. Emanuel Milman, Technion Prof. Emanuel Milman, Technion Functional Inequalities on sub-Riemannian manifolds via QCD We are interested in obtaining Poincar\'e and log-Sobolev inequalities on domains in sub-Riemannian manifolds (equipped with their natural sub-Riemannian metric and volume measure). It is well-known that strictly sub-Riemannian manifolds do not satisfy any type of Curvature-Dimension condition CD(K,N), introduced by Lott-Sturm-Villani some 15 years ago, so we must follow a different path. We show that while ideal (strictly) sub-Riemannian manifolds do not satisfy any type of CD condition, they do satisfy a quasi-convex relaxation thereof, which we name QCD(Q,K,N). As a consequence, these spaces satisfy numerous functional inequalities with exactly the same quantitative dependence (up to a factor of Q) as their CD counterparts. We achieve this by extending the localization paradigm to completely general interpolation inequalities, and a one-dimensional comparison of QCD densities with their "CD upper envelope". We thus obtain the best known quantitative estimates for (say) the L^p-Poincar\'e and log-Sobolev inequalities on domains in the ideal sub-Riemannian setting, which in particular are independent of the topological dimension. For instance, the classical Li-Yau / Zhong-Yang spectral-gap estimate holds on all Heisenberg groups of arbitrary dimension up to a factor of 4. No prior knowledge will be assumed, and we will (hopefully) explain all of the above notions during the talk. 06/01/2020 - 14:00 - 06/01/2020 - 15:15 Bounding the Poisson bracket invariant on surfaces Prof. Lev Buhovsky, Tel-Aviv University Prof. Lev Buhovsky, Tel-Aviv University Bounding the Poisson bracket invariant on surfaces I will discuss the Poisson bracket invariant of a cover, which was introduced by L. Polterovich. Initially, this invariant was studied via Floer theory, and lower bounds for it were established in some situations. I will try to explain how one can obtain the conjectural lower bound in dimension 2, using only elementary arguments. This is a joint work with A. Logunov and S. Tanny, with a contribution of F. Nazarov. 30/12/2019 - 14:00 - 30/12/2019 - 15:45 Cocompact embeddings of function spaces Prof. Cyril Tintarev, Uppsala, visiting Technion Prof. Cyril Tintarev, Uppsala, visiting Technion Cocompact embeddings of function spaces A sequence in a Banach space $E$ is called $G$-weak convergent, relative to a set $G$ of homeomorphisms of $E$ if $\forall g_k \in G$, $g_k(u_k-u)\rightharpoonup 0$. An embedding of two Banach spaces is called $G$-cocompact if every $G$-weakly convergent sequence in $E$ is convergent in $F$. Cocompact embeddings allow to improve convergence of bounded sequences beyond weak convergence when there is no pertinent compactness. On a series of examples in Sobolev, Besov, Lorenz-Zygmund, and Stricharz spaces, we show how cocompactness follows from compactness via a suitable decomposition, for example the Littlewood-Paley decomposition or the wavelet expansion. 23/12/2019 - 14:00 - 23/12/2019 - 16:00 Matrix convexity, Choquet boundaries and Tsirelson problems Dr. Adam Dor On, University of Illinois, Urbana-Champaign, USA Dr. 
23/12/2019, 14:00-16:00. Matrix convexity, Choquet boundaries and Tsirelson problems. Dr. Adam Dor On, University of Illinois, Urbana-Champaign, USA.
Following work of Evert, Helton, Klep and McCullough on free LMI domains, we ask when a matrix convex set is the closed convex hull of its Choquet points. This is a finite-dimensional version of Arveson's non-commutative Krein-Milman theorem, and in fact some matrix convex sets can fail to have any Choquet points. The general problem of determining whether a given matrix convex set has this property turns out to be difficult, because for certain matrix convex sets this is equivalent to a weak version of Tsirelson's problem. This weak variant of Tsirelson's problem is known to be equivalent to Connes' embedding conjecture, and is considered a hard problem by many experts. Our approach provides new geometric variants of Tsirelson type problems for pairs of convex polytopes which may be easier to rule out than Tsirelson's original problems.

16/12/2019, 15:05-16:00. Multiple translational tiling in the plane. Dr. Yang Qi, Peking University.
In this talk, I will present some of my results on multiple translational tiling in the Euclidean plane. For example, besides parallelograms and centrally symmetric hexagons, there is no other convex domain which can form any two-, three- or four-fold translative tiling in the plane. However, there are two types of octagons and one type of decagons which can form nontrivial five-fold translative tilings. Furthermore, a convex domain can form a five-fold translative tiling of the plane if and only if it can form a five-fold lattice tiling; that is, a multiple translational tile in the plane is a multiple lattice tile. This talk is based on a joint work with Professor Chuanming Zong.

16/12/2019, 14:00-15:00. Optimal growth of frequently oscillating subharmonic functions. Dr. Adi Glucksam, University of Toronto, Canada.
In this talk I will present Nevanlinna-type tight bounds on the minimal possible growth of subharmonic functions with a large zero set. We use a technique inspired by a paper of Jones and Makarov.

02/12/2019, 14:00-15:45. Multiplicity of eigenvalues of the circular clamped plate. Prof. Dan Mangoubi, Hebrew University, Jerusalem.
A celebrated theorem of C.L. Siegel from 1929 shows that the multiplicity of eigenvalues for the Laplace eigenfunctions on the unit disk is at most two. More precisely, Siegel shows that positive zeros of Bessel functions are transcendental. We study the fourth order clamped plate problem, showing that the multiplicity of eigenvalues is uniformly bounded (by not more than six). Our method is based on new recursion formulas and Siegel-Shidlovskii theory. The talk is based on a joint work with Yuri Lvovsky.
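For orientation, a standard computation that is not taken from the abstract: separation of variables shows that the Dirichlet Laplace eigenfunctions on the unit disk are, up to normalization,
$$u_{m,k}(r,\theta)=J_m(j_{m,k}\,r)\,\{\cos m\theta,\ \sin m\theta\},\qquad -\Delta u_{m,k}=j_{m,k}^2\,u_{m,k},$$
where $j_{m,k}$ is the $k$-th positive zero of the Bessel function $J_m$. Each eigenvalue $j_{m,k}^2$ with $m\ge 1$ therefore has multiplicity two, and a higher multiplicity could only occur if two Bessel functions of distinct integer orders shared a common positive zero, which Siegel's 1929 work (settling Bourget's hypothesis) excludes.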
18/11/2019, 14:00-15:35. On single and paired shifted Funk transform. Prof. M. Agranovsky, Bar-Ilan University.
The classical result due to Funk is about the reconstruction of even functions on the unit sphere in $R^n$ from their integrals over the cross-sections by the hyperplanes (or k-planes) through the origin. In modern applications in tomography and imaging, this transform is involved in reconstruction methods in diffusion MRI. In recent years, the shifted Funk transform, with the common point (center) of the cross-sections different from the origin, has been studied by several authors. My talk will be devoted to new results in this area. I will touch upon the description of the kernel of the shifted Funk transform and its relation with the classical one, delivered by the action of the pseudo-orthogonal group on the unit real ball. In most cases, the kernel is nontrivial, so that inverting the transform is impossible. However, it appears that to completely recover a function on the unit sphere a pair of Funk data may be enough, and this possibility depends on the mutual location of the centers. The size of the common kernel of a paired transform appears to be related to the type of iteration dynamics of a certain billiard-like self-mapping of the unit sphere, and understanding this dynamics yields necessary and sufficient conditions for the injectivity of the paired Funk transform. In the cases of injectivity, we present a reconstruction procedure in terms of an $L^p$-convergent Neumann type series with p in a certain range. It is a joint work with Boris Rubin from Louisiana State University.

11/11/2019, 14:00-15:30. Salem conditions in the non-periodic case. Prof. E. Liflyand, Bar-Ilan University.
In the classical sources, Salem's necessary conditions for a trigonometric series to be the Fourier series of an integrable function are given in terms of "some" sums. Realizing that, in fact, they are given in terms of the discrete Hilbert transforms, we generalize these to the non-periodic case, for functions from the Wiener algebra. Other relations between the two objects are also discussed. The obtained necessary condition is used to construct a monotone function with non-integrable cosine Fourier transform in a much easier way than in the classical book by Titchmarsh. Certain open problems are posed.

04/11/2019, 14:00-15:20. Riesz bases of exponentials for convex polytopes. Prof. Nir Lev, Bar-Ilan University.
Which domains in Euclidean space admit a basis of exponential functions? The answer depends on what we mean by a "basis". The best one can hope for is to have an orthogonal basis of exponentials, but it is well-known that many reasonable domains do not have such a basis. In this case, a Riesz basis is the next best thing one can hope for. I will discuss a recent result, joint with Alberto Debernardi, which deals with the construction of Riesz bases of exponentials for convex polytopes in R^d.
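For orientation, the standard definition (not specific to the talk): a system of exponentials $\{e^{2\pi i \langle\lambda, x\rangle}\}_{\lambda\in\Lambda}$ is a Riesz basis for $L^2(\Omega)$ if it is complete and there are constants $0<A\le B$ such that
$$A\sum_{\lambda\in\Lambda}|c_\lambda|^2 \;\le\; \Big\|\sum_{\lambda\in\Lambda} c_\lambda\, e^{2\pi i \langle\lambda, x\rangle}\Big\|^2_{L^2(\Omega)} \;\le\; B\sum_{\lambda\in\Lambda}|c_\lambda|^2$$
for all finite coefficient sequences $(c_\lambda)$. An orthogonal basis is the special case $A=B$ after normalization, which is why a Riesz basis is the natural substitute when no orthogonal basis of exponentials exists.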
17/06/2019, 14:00-15:40. Generating functionals on quantum groups. Dr. Ami Viselter, University of Haifa.
We will discuss generating functionals on locally compact quantum groups. One type of example comes from probability: the family of distributions of a L\'evy process forms a convolution semigroup, which in turn admits a natural generating functional. Another type of example comes from (locally compact) group theory, involving semigroups of positive-definite functions and conditionally negative-definite functions, which provide important information about the group's geometry. We will explain how these notions are related and how all this extends to the quantum world; see how generating functionals may be (re)constructed and study their domains; and indicate how our results can be used to study cocycles. Based on joint work with Adam Skalski.

03/06/2019, 14:00-15:40. Weighted norm inequalities for integral transforms with splitting kernels. Dr. Alberto Debernardi, Bar-Ilan University.
Given an integral transform on the positive real line, we say that its kernel is splitting if it satisfies upper pointwise estimates given by products of two functions, each of them taken in a different variable. We discuss necessary and/or sufficient conditions for weighted norm inequalities involving these transforms to hold. Sharpness of the results is directly related to the sharpness of the upper estimates for the kernels that we find.

20/05/2019, 14:00-15:40. Shannon sampling on manifolds and graphs. Prof. Isaac Pesenson, Temple University, Philadelphia, USA.
One of the most interesting properties of the so-called bandlimited functions (= Paley-Wiener functions), i.e. functions whose Fourier transform has compact support, is that they are uniquely determined by their values on certain countable sets of points and can be reconstructed from such values in a stable way. The sampling problem for bandlimited functions has attracted the attention of many mathematicians. The mathematical theory of reconstruction of bandlimited functions from discrete sets of samples was introduced to the world of signal analysis and information theory by Shannon. Later the concept of bandlimitedness and the Sampling Theorem became the theoretical foundation of many branches of information theory. In the talk I will show how these ideas can be extended to the setting of Riemannian manifolds and combinatorial graphs. It is an active field of research which has found numerous applications in machine learning, astrophysics, and statistics.

29/04/2019, 14:00-15:20. On some inverse and nonlocal problems for operator differential equations and numerical methods for their solution. Prof. Yuli Eidelman, Tel-Aviv University.
We study some nonlocal problems for operator differential equations of the form
$$\frac{dv}{dt} = Av + f(t)$$
with an unbounded operator A in vector spaces. For a wide class of equations we obtain necessary and sufficient conditions for unique solvability and formulas for solutions of such problems. In the second part of the talk we present fast numerical algorithms for the solution of these problems for matrix equations with rank structured matrices.
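For orientation, and not part of the abstract: in the bounded-operator case the initial-value problem for this equation is solved by the classical variation-of-constants formula,
$$v(t) = e^{tA}v(0) + \int_0^t e^{(t-s)A} f(s)\,ds,$$
and the nonlocal problems discussed in the talk replace the initial condition $v(0)$ by other constraints on the solution; the unbounded case requires the appropriate semigroup framework.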
08/04/2019, 14:00-15:20. A general coefficient theorem for univalent functions. Prof. Samuel Krushkal, Bar-Ilan University.
Estimating holomorphic functionals on classes of univalent functions, depending on the Taylor coefficients $a_n$ of these functions, is important in various geometric and physical applications of complex analysis, because these coefficients reflect the fundamental intrinsic features of conformal maps. The goal of the talk is to outline the proof of a new general theorem on maximization of homogeneous polynomial (in fact, more general holomorphic) coefficient functionals
$$J(f) = J(a_{m_1}, a_{m_2},\dots, a_{m_n})$$
on some classes of univalent functions in the unit disk naturally connected with the canonical class $S$. The theorem states that under a natural assumption on the zero set of $J$ this functional is maximized only by the Koebe function $\kappa(z) = z/(1 - z)^2$ composed with pre- and post-rotations about the origin. The proof involves a deep result from Teichm\"{u}ller space theory given by the Bers isomorphism theorem for Teichm\"{u}ller spaces of punctured Riemann surfaces. The given functional $J$ is lifted to the Teichm\"{u}ller space $\mathbf T_1$ of the punctured disk $\mathbb D_{*} = \{0 < |z| < 1\}$, which is biholomorphically equivalent to the Bers fiber space over the universal Teichm\"{u}ller space. This generates a positive subharmonic function $u$ on the disk $\{|t| < 4\}$ with $\sup_{|t|<4} u(t) = \max_{\mathbf T_1} |J|$, attaining this maximal value only on the boundary circle, whose points correspond to rotations of the Koebe function. Our theorem implies new sharp distortion estimates for univalent functions, giving explicitly the extremal functions, and creates a new bridge between Teichm\"{u}ller space theory and geometric complex analysis. In particular, it provides an alternate and direct proof of the Bieberbach conjecture.

25/03/2019, 14:00-15:55. Chebyshev-type Quadratures for Doubling Weights. Dr. Shoni Gilboa, The Open University of Israel.
A Chebyshev-type quadrature for a given weight function is a quadrature formula with equal weights. We show that a method presented by Kane may be used to determine the order of magnitude of the minimal number of nodes required in Chebyshev-type quadratures for doubling weight functions, extending a long line of research on Chebyshev-type quadratures starting with the 1937 work of Bernstein. Joint work with Ron Peled.

11/03/2019, 14:30-16:00. New algorithms for convex interpolation. Prof. Jeremy Schiff, Bar-Ilan University.
In various settings, from computer graphics to financial mathematics, it is necessary to smoothly interpolate a convex curve from a set of data points. Standard interpolation schemes do not respect convexity, and existing special purpose methods require arbitrary choices and/or give interpolants that are very flat between data points. We consider a broad set of spline-type schemes and show that convexity preservation requires the basic spline to be infinitely differentiable but nonanalytic at its endpoints. Using such a scheme - which essentially corresponds to building in the possibility of "very flatness" ab initio, rather than, say, enforcing it through extreme parameter choices - gives far more satisfactory numerical results. Joint work with Eli Passov.
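A standard example of such a building block, included only for illustration and not taken from the paper: the function
$$\phi(x) = \begin{cases} e^{-1/x}, & x > 0,\\ 0, & x \le 0,\end{cases}$$
is infinitely differentiable on all of $\mathbb{R}$, with every derivative vanishing at $x=0$, yet it is not analytic there, since its Taylor series at $0$ is identically zero. Splines built from pieces of this kind can be made arbitrarily flat near the data points, which is the behaviour the convexity-preservation argument requires.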
04/03/2019, 14:00-15:10. Hardy type inequalities in the category of Hausdorff operators. Prof. E. Liflyand, Bar-Ilan University.
Classical Hardy's inequalities are concerned with the Hardy operator and its adjoint, the Bellman operator. Hausdorff operators in their various forms are natural generalizations of these two operators. We adjust the scheme used by Bradley for Hardy's inequalities with general weights to the Hausdorff setting. It is not surprising that the obtained necessary conditions differ from the sufficient conditions, or that both depend not only on the weights but also on the kernel that generates the Hausdorff operator. For the Hardy and Bellman operators, the obtained necessary and sufficient conditions coincide and reduce to the classical ones.

14/01/2019, 14:00-15:15. Hankel transforms and general monotonicity. Dr. A. Debernardi, Bar-Ilan University.
We will discuss some problems related to Hankel transforms of real-valued general monotone functions; some of them generalize previously known results, and some others are completely new. Among these, we give a criterion for uniform convergence of Hankel transforms, and we also give a solution to Boas' problem in this context. In particular, the latter implies a generalization of the well-known Hardy-Littlewood inequality for Fourier transforms.

07/01/2019, 14:00-15:45. Nonlinear resolvent of holomorphic generators. Prof. David Shoikhet, Holon Institute of Technology, Israel.
This talk is based on joint work with Mark Elin and Toshiyuki Sugawa. Let $f$ be the infinitesimal generator of a one-parameter semigroup $\left\{ F_{t}\right\} _{t>0}$ of holomorphic self-mappings of the open unit disk, i.e., $f=\lim_{t\rightarrow 0}\frac{1}{t}\left( I-F_{t}\right)$. In this work, we study properties of the resolvent family $R=\left\{ \left( I+rf\right) ^{-1}\right\} _{r>0}$ in the spirit of geometric function theory. We discovered, in particular, that $R$ forms an inverse Loewner chain and consists of starlike functions of order $\alpha >1/2$. Moreover, each element of $R$ satisfies the Noshiro-Warschawski condition $\operatorname{Re}\left[ \left( I+rf\right) ^{-1}\right] ^{\prime }\left( z\right) >0$. This, in turn, implies that all elements of $R$ are also holomorphic generators. Finally, we study the existence of repelling fixed points of this family.

24/12/2018, 14:00-15:25. Noise Stability and Majority Functions. Prof. Elchanan Mossel, Massachusetts Institute of Technology, USA.
Two important results in Boolean analysis highlight the role of majority functions in the theory of noise stability. Benjamini, Kalai, and Schramm (1999) showed that a Boolean monotone function is noise-stable if and only if it is correlated with a weighted majority. Mossel, O'Donnell, and Oleszkiewicz (2010) showed that simple majorities asymptotically maximize noise stability among low influence functions. In the talk, we will discuss and review progress from the last decade in our understanding of the interplay between majorities and noise stability. In particular, we will discuss a generalization of the BKS theorem to non-monotone functions, and stronger and more robust versions of the Majority is Stablest theorem and the Plurality is Stablest conjecture. We will also discuss what these results imply for voting.

17/12/2018, 14:00-15:30. On a local version of the fifth Busemann-Petty Problem. Prof. D. Ryabogin, Kent State University, Ohio, USA.
In 1956, Busemann and Petty posed a series of questions about symmetric convex bodies, of which only the first one has been solved. Their fifth problem asks the following. Let K be an origin-symmetric convex body in n-dimensional Euclidean space and let H_x be the hyperplane passing through the origin orthogonal to a unit direction x. Consider a hyperplane G parallel to H_x and supporting K, and let C(K,x)=vol(K\cap H_x)dist(0, G). If there exists a constant C such that for all directions x we have C(K,x)=C, does it follow that K is an ellipsoid? We give an affirmative answer to this problem for bodies sufficiently close to the Euclidean ball in the Banach-Mazur distance. This is a joint work with Maria Alfonseca, Fedor Nazarov and Vlad Yaskin.

10/12/2018, 15:05-16:00. Multivariable Hardy spaces and the classification of universal dilation algebras. Dr. Eli Shamovich, University of Waterloo, Waterloo, Ontario, Canada.
In this talk, we will discuss what is special about the Hardy space $H^2(\mathbb{D})$ and its multiplier algebra $H^{\infty}(\mathbb{D})$, from the point of view of operator algebras and function theory. I will present two generalizations of the pair $H^2$ and $H^{\infty}$ to the multivariable setting, one commutative and one noncommutative. We will then discuss a natural classification question that arises in the multivariable setup for algebras of analytic functions on subvarieties of the unit ball. These algebras arise naturally as universal operator algebras of a class of row contractions. Only basic familiarity with operators on Hilbert spaces and complex analysis is assumed.
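For orientation, the standard definitions (not from the abstract): on the unit disk $\mathbb{D}$,
$$H^2(\mathbb{D})=\Big\{f(z)=\sum_{n\ge 0} a_n z^n : \|f\|^2 = \sum_{n\ge 0}|a_n|^2<\infty\Big\},\qquad H^{\infty}(\mathbb{D})=\{\varphi \text{ holomorphic and bounded on } \mathbb{D}\},$$
and $H^{\infty}(\mathbb{D})$ is exactly the algebra of multipliers of $H^2(\mathbb{D})$, i.e. of those $\varphi$ with $\varphi\, H^2(\mathbb{D})\subseteq H^2(\mathbb{D})$, with multiplier norm equal to the supremum norm. The multivariable generalizations in the talk replace this pair by analogous reproducing-kernel and operator-algebraic objects.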
10/12/2018, 14:00-15:00. Radon Transforms over Horospheres in Real Hyperbolic Space. Prof. Boris Rubin, Louisiana State University, USA.
The horospherical Radon transform integrates functions on the n-dimensional real hyperbolic space over d-dimensional horospheres, where d is a fixed integer, $1\le d\le n-1$. Using the tools of real analysis, we obtain sharp existence conditions and explicit inversion formulas for these transforms acting on smooth functions and functions belonging to $L^p$. The case d = n-1 agrees with the classical Gelfand-Graev transform, which was studied before in terms of distribution theory on rapidly decreasing smooth functions. The results for $L^p$-functions and the case d < n-1 are new. This is a joint work with William O. Bray.

03/12/2018, 14:00-15:10. Finite sums of ridge functions on convex subsets of R^n. Dr. A. Kuleshov, Moscow State University, Russia.
We prove that each function of one variable forming a continuous finite sum of ridge functions on a convex body belongs to the VMO space on every compact interval of its domain. Also, we prove that for the existence of finite limits of the functions of one variable forming the sum at the corresponding boundary points of their domains, it suffices to assume the Dini condition on the modulus of continuity of some continuous sum of ridge functions on a convex body E at some boundary point. Further, we prove that the obtained (Dini) condition is sharp.

26/11/2018, 14:00-15:50. The Fourier transform of a convex function revisited. Prof. E. Liflyand, Bar-Ilan University.
Asymptotic-wise results for the Fourier transform of a function of convex type are proved. A certain refinement of known one-dimensional results due to Trigub gives a possibility to obtain their multidimensional generalizations.

12/11/2018, 14:00-15:25. Completely monotonic gamma ratio and infinitely divisible H-function of Fox. Prof. Dmitry Karp, Institute of Applied Mathematics, Far Eastern Branch of Russian Academy of Sciences, Russia.
We investigate conditions for the logarithmic complete monotonicity of a quotient of two products of gamma functions, where the argument of each gamma function has a different scaling factor. We give necessary and sufficient conditions in terms of non-negativity of some elementary functions, and some more practical sufficient conditions in terms of parameters. Further, we study the representing measure in Bernstein's theorem for both equal and non-equal scaling factors. This leads to conditions on the parameters under which Meijer's G-function or Fox's H-function represents an infinitely divisible probability distribution on the positive half-line.
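For orientation, the classical statement behind the "representing measure" mentioned above (not part of the abstract): Bernstein's theorem says that a function $f$ on $(0,\infty)$ is completely monotone, i.e. $(-1)^n f^{(n)}(x)\ge 0$ for all $n\ge 0$ and $x>0$, if and only if it is the Laplace transform of a non-negative measure,
$$f(x)=\int_0^{\infty} e^{-xt}\,d\mu(t),$$
and $\mu$ is that representing measure. A positive function is logarithmically completely monotone when $-(\log f)'$ is completely monotone; this is a stronger property than complete monotonicity.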
05/11/2018, 14:00-15:15. Dimension of attractors for Iterated Function Systems of linear fractional transformations and the Diophantine property of matrix semigroups. Dr. Yuki Takahashi, Bar-Ilan University.
We consider Iterated Function Systems of linear fractional transformations, and show that the Hausdorff dimension of the attractor is given by Bowen's pressure formula, if the Iterated Function System satisfies the exponential separation condition. We also show that almost every finite collection of $GL_n( \mathbb{R} )$ matrices is Diophantine if the matrices have positive entries. This is a joint work with Boris Solomyak.

26/10/2018, 14:05-15:05. On the weak Muckenhoupt-Wheeden conjecture. Prof. Andrei Lerner, Bar-Ilan University.
We construct an example showing the sharpness of certain weighted weak type (1,1) bounds for the Hilbert transform. This is joint work with Fedor Nazarov and Sheldy Ombrosi.

22/10/2018, 14:00-15:20. Spectral gap and sign changes of Gaussian stationary processes. Dr. Naomi Feldheim, Bar-Ilan University.
It is known that the Fourier transform of a measure which vanishes on [-a,a] must have asymptotically at least a/pi zeroes per unit interval. One way to quantify this further is using a probabilistic model: let f be a Gaussian stationary process on R whose spectral measure vanishes on [-a,a]. What is the probability that it has no zeroes on an interval of length L? Our main result shows that this probability is at most e^{-c a^2 L^2}, where c>0 is an absolute constant. This settles a question which was open for a while in the theory of Gaussian processes. I will explain how to translate the probabilistic problem into a problem of minimizing weighted L^2 norms of polynomials against the spectral measure, and how we solve it using tools from harmonic and complex analysis. Time permitting, I will discuss lower bounds. Based on a joint work with Ohad Feldheim, Benjamin Jaye, Fedor Nazarov and Shahaf Nitzan (arXiv:1801.10392).

15/10/2018, 14:00-15:10. Trace reconstruction for the deletion channel. Prof. Yuval Peres, Microsoft Research.
In the trace reconstruction problem, an unknown string $x$ of $n$ bits is observed through the deletion channel, which deletes each bit with some constant probability $q$, yielding a contracted string. How many independent outputs (traces) of the deletion channel are needed to reconstruct $x$ with high probability? The best lower bound known is of order $n^{1.25}$. Until 2016, the best upper bound available was exponential in the square root of $n$. With Fedor Nazarov, we improve the square root to a cube root using complex analysis (bounds for Littlewood polynomials on the unit circle). This upper bound is sharp for reconstruction algorithms that only use this statistical information. (Similar results were obtained independently and concurrently by De, O'Donnell and Servedio.) If the string $x$ is random and $q<1/2$, we can show that a subpolynomial number of traces suffices, by comparison to a biased random walk (joint work with Alex Zhai, FOCS 2017). With Nina Holden and Robin Pemantle (COLT 2018), we removed the restriction $q<1/2$ for random inputs.
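For orientation, a direct consequence of the channel definition rather than a claim from the talk: since each bit of $x\in\{0,1\}^n$ is deleted independently with probability $q$, the probability of observing a particular trace $y$ of length $m$ is
$$\Pr(Y=y \mid X=x) \;=\; \#\{\text{occurrences of } y \text{ as a subsequence of } x\}\cdot (1-q)^{m}\, q^{\,n-m},$$
so trace reconstruction amounts to recovering $x$ from independent samples of this distribution.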
11/06/2018, 14:00-15:45. Asymptotic dilation of regular homeomorphisms. Prof. Anatoly Golberg, Holon Institute of Technology.
We study the asymptotic behavior of the ratio $|f(z)|/|z|$ as $z\to 0$ for homeomorphic mappings differentiable almost everywhere in the unit disc with non-degenerate Jacobian. The main tools involve the length-area functionals and angular dilatations depending on some real number $p$. The results are applied to homeomorphic solutions of a nonlinear Beltrami equation. The estimates are illustrated by examples.

14/05/2018, 15:05-16:00. On cycles in asymmetric models of circular gene networks. Prof. Vladimir Golubyatnikov, Sobolev Institute of Mathematics, Novosibirsk, Russia.
We study the geometry and combinatorial structure of the phase portraits of some nonlinear kinetic dynamical systems arising as models of circular gene networks, in order to find conditions for the existence of cycles of these systems. Some sufficient conditions for the existence of stable cycles are obtained as well.

14/05/2018, 14:00-15:00. Spectrality of Product Domains. Rachel Greenfeld, Bar-Ilan University.
A set $\Omega$ in $R^d$ is called spectral if the space $L^2(\Omega)$ admits an orthogonal basis consisting of exponential functions. Which sets $\Omega$ are spectral? This question is known as "Fuglede's spectral set problem". In the talk we will be focusing on the case of product domains, namely, when $\Omega = A\times B$. In this case, it is conjectured that $\Omega$ is spectral if and only if the factors A and B are both spectral. We will discuss some new results, joint with Nir Lev, supporting this conjecture, and their applications to the study of spectrality of convex polytopes.
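Two standard observations for orientation (well-known facts, not results from the talk): the cube $\Omega=[0,1]^d$ is spectral, since the exponentials $\{e^{2\pi i \langle k, x\rangle}\}_{k\in\mathbb{Z}^d}$ form an orthogonal basis of $L^2([0,1]^d)$; and if $A\subset\mathbb{R}^m$ and $B\subset\mathbb{R}^n$ are spectral with spectra $S$ and $T$, then $A\times B$ is spectral with spectrum $S\times T$, because
$$L^2(A\times B)\;\cong\; L^2(A)\otimes L^2(B)$$
and products of the two orthogonal bases form an orthogonal basis. The nontrivial direction of the conjecture quoted above is the converse: that spectrality of $A\times B$ forces both factors to be spectral.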
07/05/2018, 14:00-15:45. Methods of Nonassociative Algebras in Differential Equations. Prof. Ya. Krasnov, Bar-Ilan University.
Many well-known (classes of) differential equations may be viewed as an equation in a certain commutative nonassociative algebra. We develop further the principal idea of L. Markus for deriving algebraic properties of solutions to ODEs and PDEs directly from the equations defining them. Our main purpose is (a) to show how the algebraic formalism can be applied with great success to a remarkably elegant description of the geometry of curves that are solutions to homogeneous polynomial ODEs, and (b) to motivate the recent interest in applications of nonassociative algebra methods to PDEs. More precisely, given a differential equation on an algebra A, we are interested in the following two problems: 1. Which properties of the differential equation determine certain algebraic structures on $A$, such as being power associative, unital, or a division algebra? 2. In the converse direction, which properties of $A$ imply certain qualitative information about the differential equation, for example topological equivalence classes, existence of bounded, periodic, or ray solutions, ellipticity, etc.? We also define and discuss syzygies between Peirce numbers, which provide an effective tool for our study. (Some results here are based on a recent joint work with V. Tkachev.)

30/04/2018, 14:00-15:35. Holomorphic extensions of trace formulas. Dr. Serge Itshak Lukasuewicz, Bar-Ilan University.
The Chazarain-Poisson summation formula for Riemannian manifolds (which generalizes the Poisson summation formula) computes the distribution trace. In the case of Riemannian surfaces with constant (sectional) curvature, we study the holomorphic extension of the shifted trace. We have three generic cases according to the sign of the curvature: the sphere, the torus, and the compact hyperbolic surfaces of negative constant curvature. We use the shifted Laplacian in order to be able to use the Selberg trace formula. Our results concern the case of the torus, the case of a compact Riemannian surface with constant (sectional) negative curvature, and the case of a compact Riemannian manifold of dimension $n\ge 3$ and constant curvature.

16/04/2018, 14:00-15:15. Smooth parametrizations (of semi-algebraic, o-minimal, … sets), and their applications in Dynamics, Analysis, and Diophantine geometry (and, maybe, in Complex Hyperbolic Geometry). Prof. Yosef Yomdin, Weizmann Institute.
Smooth parametrization consists in a subdivision of the mathematical object under consideration into simple pieces, and then a parametric representation of each piece, while keeping control of high order derivatives. The main examples for this talk are C^k or analytic parametrizations of semi-algebraic and o-minimal sets. We provide an overview of some results, open and recently solved problems on smooth parametrizations, and their applications in several apparently rather separated domains: Smooth Dynamics, Diophantine Geometry, and Analysis. The structure of the results, open problems, and conjectures in each of these domains shows in many cases a remarkable similarity, which we plan to stress. We consider a special case of smooth parametrization: "doubling coverings" (or "conformal invariant Whitney coverings"), and "doubling chains". We present some new results on the complexity bounds for doubling coverings and doubling chains, and on the resulting bounds in the Kobayashi metric and doubling inequalities. We also plan to present a short report on the remarkable progress recently achieved in this (large) direction by two independent groups (G. Binyamini and D. Novikov on one side, and R. Cluckers, J. Pila and A. Wilkie on the other).

09/04/2018, 14:00-15:35. An $L^2$ identity and the pinned distance problem. Dr. Bochen Liu, Bar-Ilan University.
Given a measure on a subset of Euclidean space.
The $L^2$ spherical averages of the Fourier transform of this measure were originally used to attack the Falconer distance conjecture, via Mattila's integral. In this talk, we will consider the pinned distance problem, a stronger version of the Falconer distance problem, and show that spherical averages imply the same dimensional threshold on both problems. In particular, with the best known spherical averaging estimates, we improve a result of Peres and Schlag on the pinned distance problem significantly. The idea is to reduce the pinned distance problem to an integral where spherical averages apply. The key new ingredient is an identity between square functions.

19/03/2018, 14:00-15:40. Differential and Difference Inclusions and the Filippov Theorem. Prof. Elza Farkhi, Tel-Aviv University.
The talk surveys joint works with T. Donchev and more recent ones with R. Baier. We discuss some (continuous and discrete) versions of the celebrated Filippov theorem on approximate solutions of differential (and difference) equations and inclusions, which extend classical stability results for differential equations with continuous and discontinuous right-hand sides. We present some applications related to the numerical solution of differential equations and inclusions.

05/03/2018, 14:00-15:35. Ternary generalizations of graded algebras and their applications in physics. Prof. Richard Kerner, University Pierre et Marie Curie - Sorbonne Universit\'es, Paris, France.
We discuss cubic and ternary algebras which are a direct generalization of Grassmann and Clifford algebras, but with $Z_3$-grading replacing the usual $Z_2$-grading. Elementary properties and structures of such algebras are discussed, with special interest in low-dimensional ones, with two or three generators. Invariant antisymmetric quadratic and cubic forms on such algebras are introduced, and it is shown how the $SL(2,C)$ group arises naturally in the case of lowest dimension, with two generators only, as the symmetry group preserving these forms. We also show how the calculus of differential forms can be extended to include second differentials $d^2 x^i$, and how the $Z_3$ grading naturally appears when we assume that $d^3 = 0$ instead of $d^2 = 0$. A ternary analogue of the commutator is introduced, and its relation with usual Lie algebras is investigated, as well as its invariance properties. We shall also discuss certain physical applications. In particular, a $Z_3$-graded gauge theory is briefly presented, as well as a ternary generalization of Pauli's exclusion principle and a ternary Dirac equation for quarks.

15/01/2018, 14:00-15:30. On the dimension of Furstenberg measure for $SL_2(\mathbb{R})$ random matrix products. Prof. Boris Solomyak, Bar-Ilan University.
Let $\mu$ be a finitely-supported measure on $SL_{2}(\mathbb{R})$ generating a non-compact and totally irreducible subgroup, and let $\nu$ be the associated stationary (Furstenberg) measure.
We prove that if the support of $\mu$ is "Diophantine," then $\dim\nu=\min\{1,\frac{h_{RW}(\mu)}{2\chi(\mu)}\}$, where $h_{RW}(\mu)$ is the random walk entropy of $\mu$, $\dim$ denotes pointwise dimension, and $\chi$ is the Lyapunov exponent of the random walk generated by $\mu$. In particular, for every $\delta>0$, there is a neighborhood $U$ of the identity in $SL_{2}(\mathbb{R})$ such that if $\mu$ has support in $U$ on matrices with algebraic entries, is atomic with all atoms of size at least $\delta$, and generates a group which is non-compact and totally irreducible, then its stationary measure $\nu$ satisfies $\dim\nu=1$. This is a joint work with M. Hochman. In my talk, I will try to explain the concepts and motivate the result.

08/01/2018, 14:00-15:15. Hardy spaces over manifolds. Prof. Shai Dekel, Tel-Aviv University.
Maximal and atomic Hardy spaces $H^p$ and $H_A^p$, $0 < p \le 1$, are considered in the setting of a doubling metric measure space in the presence of a non-negative self-adjoint operator whose heat kernel has Gaussian localization. It is shown that $H^p = H_A^p$ with equivalent norms.

25/12/2017, 15:05-16:00. CLT for small scale mass distribution of toral Laplace eigenfunctions. Dr. Nadav Yesha, King's College London, UK.
In this talk we discuss the fine scale $L^2$-mass distribution of toral Laplace eigenfunctions with respect to random position. For the 2-dimensional torus, under certain flatness assumptions on the Fourier coefficients of the eigenfunctions and generic restrictions on energy levels, both the asymptotic shape of the variance and the limiting Gaussian law are established, in the optimal Planck-scale regime. We also discuss the 3-dimensional case, where the asymptotic behaviour of the variance is analysed in a more restrictive scenario. This is joint work with Igor Wigman.

25/12/2017, 14:00-15:00. Approximations of convex bodies by measure-generated sets. Dr. Boaz Slomka, University of Michigan, USA.
We present a construction of convex bodies from Borel measures on ${\mathbb R}^n$. This construction allows us to study natural extensions of problems concerning the approximation of convex bodies by polytopes. In particular, we study a variation of the vertex index which, in a sense, measures how well a convex body can be inscribed into a polytope with a small number of vertices. We discuss several estimates for these quantities, as well as an application to bounding certain average norms. Based on joint work with Han Huang.

11/12/2017, 14:00-15:30. Mappings with integrally controlled moduli: regularity properties. Prof. A. Golberg, Holon Institute of Technology.
We consider the classes of homeomorphisms of domains in $\mathbb R^n$ with $p$-moduli of the families of curves and surfaces integrally bounded from above and below. These classes essentially extend the well-known classes of mappings such as quasiconformal, quasi-isometric, Lipschitzian, etc. In the talk, we survey the known results in this field but mainly establish new differential properties of such mappings.
A collection of related open problems will also be presented.

04/12/2017, 14:00-15:10. Fourier frames for singular measures and pure type phenomena. Prof. Nir Lev, Bar-Ilan University.
Let $\mu$ be a positive, finite measure on $R^d$. Is it possible to construct a Fourier system which would constitute a frame in the space $L^2(\mu)$? In the talk, I will explain the notion of a Fourier frame, discuss what is known about the problem, and present some recent results.

13/11/2017, 14:00-15:40. On low discrepancy sequences and the lattice points problem for parallelepipeds. Prof. M. Levin, Bar-Ilan University.
We will consider the connections between very well uniformly distributed sequences in the s-torus, Quasi-Monte Carlo integration, and the lattice points problem for parallelepipeds. The lattices here are determined from totally real algebraic number fields and from "totally real" function fields.

06/11/2017, 14:00-15:20. Convolution semigroups on quantum groups and non-commutative Dirichlet forms. Dr. Ami Viselter, University of Haifa.
We will discuss convolution semigroups of states on locally compact quantum groups. They generalize the families of distributions of Levy processes from probability. We are particularly interested in semigroups that are symmetric in a suitable sense. These are proved to be in one-to-one correspondence with KMS-symmetric Markov semigroups on the $L^{\infty}$ algebra that satisfy a natural commutation condition, as well as with non-commutative Dirichlet forms on the $L^2$ space that satisfy a natural translation invariance condition. This Dirichlet forms machinery turns out to be a powerful tool for analyzing convolution semigroups as well as proving their existence. We will use it to derive geometric characterizations of the Haagerup Property and of Property (T) for locally compact quantum groups, unifying and extending earlier partial results. We will also show how examples of convolution semigroups can be obtained via a cocycle twisting procedure. Based on joint work with Adam Skalski.

30/10/2017, 14:00-15:55. Non-stationary extensions of the Banach fixed-point theorem, with applications to fractals. Prof. David Levin, Tel-Aviv University.
Iterated Function Systems (IFS) have been at the heart of fractal geometry almost from its origin, and several generalizations of the notion of an IFS have been suggested. Subdivision schemes are widely used in computer graphics, and attempts have been made to link limits generated by subdivision schemes to fractals generated by IFS. With an eye towards establishing a connection between non-stationary subdivision schemes and fractals, this talk introduces a non-stationary extension of the Banach fixed-point theorem. We introduce the notion of "trajectories of maps defined by function systems", which may be considered as a new generalization of the traditional IFS. The significance and the convergence properties of 'forward' and 'backward' trajectories are presented.
Unlike ordinary fractals, which are self-similar at different scales, the attractors of these trajectories may have different structures at different scales. Joint work with Nira Dyn and Puthan Veedu Viswanathan.

12/06/2017, 14:00-15:35. Maximal and Riesz potential operators with rough kernel in non-standard function spaces: unabridged version. Dr. Humberto Rafeiro, Pontificia Universidad Javeriana, Bogota, Colombia.
In this talk we will discuss the boundedness of the maximal operator with rough kernel in some non-standard function spaces, e.g. variable Lebesgue spaces, variable Morrey spaces, Musielak-Orlicz spaces, among others. We will also discuss the boundedness of the Riesz potential operator with rough kernel in variable Morrey spaces. This is based on joint work with S. Samko.

22/05/2017, 14:00-15:10. Around Property (T) for quantum groups. Dr. Ami Viselter, Haifa University.
Kazhdan's Property (T) is a notion of fundamental importance, with numerous applications in various fields of mathematics such as abstract harmonic analysis, ergodic theory and operator algebras. By using Property (T), Connes was the first to exhibit a rigidity phenomenon of von Neumann algebras. Since then, the various forms of Property (T) have played a central role in operator algebras, and in particular in Popa's deformation/rigidity theory. This talk is devoted to some recent progress in the notion of Property (T) for locally compact quantum groups. Most of our results are concerned with second countable discrete unimodular quantum groups with low duals. In this class of quantum groups, Property (T) is shown to be equivalent to Property (T)$^{1,1}$ of Bekka and Valette. As applications, we extend to this class several known results about countable groups, including theorems on "typical" representations (due to Kerr and Pichot) and on connections of Property (T) with spectral gaps (due to Li and Ng) and with strong ergodicity of weakly mixing actions on a particular von Neumann algebra (due to Connes and Weiss). Joint work with Matthew Daws and Adam Skalski. The talk will be self-contained: no prior knowledge of quantum groups or Property (T) for groups is required.

15/05/2017, 14:00-15:40. On the growth of Lebesgue constants for convex polyhedra. Dr. Yu. Kolomoitsev, University of Luebeck, Germany.
The talk is devoted to the Lebesgue constants of polyhedral partial sums of Fourier series. New upper and lower estimates of the Lebesgue constant in the case of anisotropic dilations of general convex polyhedra will be presented. The obtained estimates generalize and give sharper versions of the corresponding results of E.S. Belinsky (1977), A.A. Yudin and V.A. Yudin (1985), J.M. Ash and L. De Carli (2009), and J.M. Ash (2010).
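For orientation, one standard way to state the quantity (not taken from the abstract): for a bounded set $W\subset\mathbb{R}^d$, the Lebesgue constant of the corresponding partial sum of Fourier series is the normalized $L^1$ norm of its Dirichlet kernel,
$$L(W) \;=\; \frac{1}{(2\pi)^d}\int_{\mathbb{T}^d}\Big|\sum_{k\in W\cap\mathbb{Z}^d} e^{i\langle k, x\rangle}\Big|\,dx,$$
which equals the operator norm of the partial-sum operator $f\mapsto \sum_{k\in W\cap\mathbb{Z}^d}\widehat f(k)\,e^{i\langle k,x\rangle}$ on $C(\mathbb{T}^d)$ and on $L^1(\mathbb{T}^d)$. The estimates above concern the growth of $L(W)$ when $W$ is a convex polyhedron stretched by different factors in different coordinate directions.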
08/05/2017, 14:00-15:30. Milnor fibers of real singularities. Prof. E. Shustin, Tel-Aviv University.
Milnor fibers of isolated hypersurface singularities carry the most important information on the singularity. We review the works by A'Campo and Gusein-Zade, who showed that, in the case of real plane curve singularities, one can use special deformations (so-called morsifications) in order to recover the topology of the Milnor fiber, the intersection form in vanishing homology, the monodromy operator and other invariants. We prove that any real plane curve singularity admits a morsification and discuss its relation to the Milnor fiber, which is still an open problem of a complex-analytic nature. Joint work with P. Leviant.

27/03/2017, 14:00-15:30. The non-Euclidean lattice points counting problem. Prof. Amos Nevo, Technion.
Euclidean lattice points counting problems, the primordial example of which is the Gauss circle problem, are an important topic in classical analysis. Their non-Euclidean analogs in irreducible symmetric spaces (such as hyperbolic spaces and the space of positive-definite symmetric matrices) are equally significant, and we will present an approach to establishing such results in considerable generality. Our method is based on dynamical arguments together with representation theory and non-commutative harmonic analysis, and produces the current best error estimate in the higher rank case. We will describe some of the remarkably diverse applications of lattice point counting problems, as time permits.

20/03/2017, 14:00-15:40. Asymptotic Relations for Sharp Constants of Approximation Theory. Prof. Michael I. Ganzburg, Hampton University, Virginia, USA.
In this talk we discuss asymptotic relations between sharp constants of approximation theory in a general setting. We first present a general model that includes a circle of problems of finding sharp or asymptotically sharp constants in some areas of univariate and multivariate approximation theory, such as inequalities for approximating elements, approximation of individual elements, and approximation on classes of elements. Next we discuss sufficient conditions that imply limit inequalities and equalities between various sharp constants. Finally, we present applications of these results to sharp constants in Bernstein-V. A. Markov type inequalities of different metrics for univariate and multivariate trigonometric and algebraic polynomials and entire functions of exponential type.

13/02/2017, 14:00-16:00. Equilateral triangles in subsets of ${\mathbb R}^d$ of large Hausdorff dimension. Bochen Liu, University of Rochester, NY, USA.
I will discuss how large the Hausdorff dimension of a set $E\subset{\mathbb R}^d$ needs to be to ensure that it contains vertices of an equilateral triangle. An argument due to Chan, Laba and Pramanik (2013) implies that a Salem set of large Hausdorff dimension contains equilateral triangles. We prove that, without assuming the set is Salem, this result still holds in dimensions four and higher. In ${\mathbb R}^2$, there exists a set of Hausdorff dimension $2$ containing no equilateral triangle (Maga, 2010). I will also introduce some interesting parallels between the triangle problem in Euclidean space and its counterpart in vector spaces over finite fields. This is joint work with Alex Iosevich.
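For orientation, the standard definitions (not from the abstract): the Fourier dimension of a set $E\subset\mathbb{R}^d$ is the supremum of all $s\in[0,d]$ for which $E$ supports a probability measure $\mu$ with
$$|\widehat{\mu}(\xi)| \;\le\; C\,|\xi|^{-s/2}\qquad\text{for all }\xi\neq 0,$$
and the Fourier dimension never exceeds the Hausdorff dimension. A Salem set is one for which the two dimensions coincide, so the hypothesis being removed in the result above is precisely this extra Fourier-decay information.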
23/01/2017, 14:00-15:30. Tame dynamical systems. Prof. Michael Megrelishvili, Bar-Ilan University.
Tame dynamical systems were introduced by A. K\"{o}hler in 1995 and their theory was developed during the last decade in a series of works by several authors. Connections were established to other areas of mathematics, such as Banach spaces, model theory, tilings, and cut-and-project schemes. A metric dynamical $G$-system $X$ is tame if every element $p \in E(X)$ of the enveloping semigroup $E(X)$ is a limit of a sequence of elements from $G$. In a recent joint work with Eli Glasner we study the following general question: which finite colorings $G \to \{0, \dots ,d\}$ of a discrete countable group $G$ define a tame minimal symbolic system $X \subset \{0, \dots ,d\}^G$? Any Sturmian bisequence $\mathbb{Z} \to \{0,1\}$ on the integers is an important prototype. As closely related directions, we study cutting coding functions coming from circularly ordered systems, as well as generalized Helly sequential compactness type theorems about families with bounded total variation. We show that circularly ordered dynamical systems are tame and that several Sturmian-like symbolic $G$-systems are circularly ordered.

16/01/2017, 14:00-15:05. Matrices over local rings. Prof. D. Kerner, Ben-Gurion University.
Linear algebra over a field has been studied for centuries. In many branches of mathematics one faces matrices over a ring; these arise, e.g., as "matrices of functions" or "matrices depending on parameters". Linear algebra over a (commutative, associative) ring is infinitely more complicated. Yet, some particular questions can be solved. I will speak about two problems: block-diagonalization (block-diagonal reduction) of matrices, and stability of matrices under perturbations by higher-order terms.

09/01/2017, 14:00-17:35. Continuous valuations on convex sets and Monge-Ampere operators. Prof. S. Alesker, Tel-Aviv University.
Finitely additive measures on convex sets are called valuations. Valuations continuous in the Hausdorff metric are of special interest and have been studied in convexity for a long time. In this talk I will present a non-traditional method of constructing continuous valuations using various Monge-Ampere (MA) operators, namely the classical complex MA operator and the quaternionic MA operators introduced by the speaker (if time permits, I will briefly discuss also the octonionic case). In several aspects the analytic properties of the latter are very similar to the properties of the former, but the geometric meaning is different. The construction of the quaternionic MA operator uses non-commutative determinants.

19/12/2016, 14:00-15:45. Exotic Poisson summation formulas. Prof. Nir Lev, Bar-Ilan University.
By a crystalline measure in R^d one means a measure whose support and spectrum are both discrete closed sets. I will survey the subject and discuss recent results obtained jointly with Alexander Olevskii.
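For orientation, the classical formula that the "exotic" examples generalize (a standard statement, not from the abstract): for a sufficiently nice function $f$ on $\mathbb{R}$, with $\widehat{f}(\xi)=\int_{\mathbb{R}} f(x)\,e^{-2\pi i \xi x}\,dx$,
$$\sum_{n\in\mathbb{Z}} f(n) \;=\; \sum_{k\in\mathbb{Z}} \widehat{f}(k).$$
Equivalently, the measure $\sum_{n\in\mathbb{Z}}\delta_n$ has Fourier transform $\sum_{k\in\mathbb{Z}}\delta_k$, so its support and spectrum are both discrete closed sets: it is the prototype of a crystalline measure.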
05/12/2016, 14:00-15:30. More on Differential Inequalities and Normality. Tomer Manket, Bar-Ilan University.
Differential inequalities and their connection to normality (and quasi-normality) have been studied since Marty's Theorem in 1935. We discuss when these inequalities imply some degree of normality, and present a new result, joint with S. Nevo and J. Grahl.

28/11/2016, 14:00-15:55. Differential inequalities and normality. Prof. Shahar Nevo, Bar-Ilan University.
Following Marty's Theorem, we present recent results about differential inequalities that imply (or do not imply) some degree of normality. We deal with inequalities in which the sign of the inequality is reversed relative to that in Marty's Theorem, i.e. $|f^{(k)}(z)| > h(|f(z)|)$.

21/11/2016, 14:00-15:15. Asymptotic relations for the Fourier transform of a function of bounded variation. Prof. E. Liflyand, Bar-Ilan University.
Earlier and recent one-dimensional estimates and asymptotic relations for the cosine and sine Fourier transforms of a function of bounded variation are refined in such a way that they become applicable for obtaining multidimensional asymptotic relations for the Fourier transform of a function with bounded Hardy variation.

14/11/2016, 14:00-15:35. Boundary triples and Weyl functions of symmetric operators. Prof. V. Derkach, Vasyl Stus Donetsk University, Ukraine.
Selfadjoint extensions of a closed symmetric operator A in a Hilbert space with equal deficiency indices were described in the 1930s by J. von Neumann. Another approach, based on the notion of an abstract boundary triple, originates in the works of J.W. Calkin and was developed by M.I. Visik, G. Grubb, F.S. Rofe-Beketov, M.L. Gorbachuk, A.N. Kochubei and others. In Calkin's approach, all selfadjoint extensions of the symmetric operator A can be parametrized via "multivalued" selfadjoint operators in an auxiliary Hilbert space. Spectral properties of these extensions can be characterized in terms of the abstract Weyl function associated to the boundary triple. In the present talk some recent developments in the theory of boundary triples will be presented. Applications to boundary value problems for the Laplacian operator in bounded domains with smooth and rough boundaries will be discussed.
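A sketch of the underlying definition, in one standard formulation (conventions vary between authors, and this is not quoted from the talk): a boundary triple for the adjoint $A^*$ of a symmetric operator $A$ with equal deficiency indices is a triple $\{\mathcal{H},\Gamma_0,\Gamma_1\}$, where $\mathcal{H}$ is an auxiliary Hilbert space and $\Gamma_0,\Gamma_1:\operatorname{dom}(A^*)\to\mathcal{H}$ are linear maps such that $f\mapsto(\Gamma_0 f,\Gamma_1 f)$ is surjective onto $\mathcal{H}\oplus\mathcal{H}$ and the abstract Green identity
$$(A^*f,g)-(f,A^*g)\;=\;(\Gamma_1 f,\Gamma_0 g)_{\mathcal{H}}-(\Gamma_0 f,\Gamma_1 g)_{\mathcal{H}}$$
holds for all $f,g\in\operatorname{dom}(A^*)$. For the Laplacian on a smooth bounded domain, $\Gamma_0$ and $\Gamma_1$ can be taken as (suitably regularized) Dirichlet and Neumann traces, which is what connects the abstract scheme to boundary value problems.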
In this talk we introduce the notion of a two-phase mother body (the terminology comes from potential theory) as a union of two distributions $\mu _j$ with integrable densities of sinks and sources, allowing one to control the evolution of the interface, such that $\mathrm{supp}\, \mu _j \subset\Omega _j$. We use the Schwarz function approach and the introduced two-phase mother body to find the evolution of the curve $\Gamma (t)$ as well as two harmonic functions $p_j$, the pressures, defined almost everywhere in $\Omega_j$ and satisfying prescribed boundary conditions on $\Gamma (t)$.

20/06/2016 - 14:00 - 20/06/2016 - 15:00 Spectrality and tiling by cylindric domains. Rachel Greenfeld, Bar-Ilan University.
A bounded set $O$ in $\mathbb{R}^d$ is called spectral if the space $L^2(O)$ admits an orthogonal basis consisting of exponential functions. In 1974 Fuglede conjectured that spectral sets can be characterized geometrically by their ability to tile the space by translations. Although spectral sets have been intensively studied since then, the connection between spectrality and tiling is still unresolved in many aspects. I will focus on cylindric sets and discuss a new result, joint with Nir Lev, on the spectrality of such sets. Since the tiling analogue of the result also holds, it provides further evidence of the strong connection between these two properties.

30/05/2016 - 14:00 - 30/05/2016 - 15:10 On pointwise domination of Calderon-Zygmund operators by sparse operators. Prof. A. Lerner, Bar-Ilan University.
In this talk we survey several recent results establishing a pointwise domination of Calder\'on-Zygmund operators by sparse operators defined by $${\mathcal A}_{\mathcal S}f(x)=\sum_{Q\in {\mathcal S}}\Big(\frac{1}{|Q|}\int_Qf\Big)\chi_{Q}(x),$$ where ${\mathcal S}$ is a sparse family of cubes from ${\mathbb R}^n$. In particular, we present a simple proof of M. Lacey's theorem about Calder\'on-Zygmund operators with Dini-continuous kernels in its quantitative form obtained by T. Hyt\"onen-L. Roncal-O. Tapiola.

23/05/2016 - 14:00 - 23/05/2016 - 16:00 Finite volume scheme for a parabolic equation with a non-monotone diffusion function. Prof. Pauline Lafitte-Godillon, D\'epartement de Math\'ematiques & Laboratoire MICS, France.
Evans and Portilheiro introduced in 2004 the functional framework that allows one to tackle the problem of a forward-backward diffusion equation with a cubic-like diffusion function, which is classically ill-posed. The key is to consider its ``entropy'' formulation, obtained by viewing the equation as the singular limit of a third-order pseudo-parabolic equation. Obtaining numerical simulations is not easy, since the ill-posedness related to the negativity of the diffusion coefficient induces severe oscillations. However, we showed that, in 1D, the regularization offered by the basic Euler-in-time, centered-finite-differences-in-space scheme yields a fairly good numerical solution, except for the fact that the entropy condition is violated. We thus proposed an adapted entropic scheme in 1D. The finite volume framework has since allowed us to prove new properties of the problem.
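A minimal numerical sketch for orientation (assumptions: the 1D model equation is taken to be $u_t=(\phi(u))_{xx}$ with a cubic-like, non-monotone flux, here $\phi(u)=u^3-u$; this shows only the naive explicit Euler-in-time, centered-in-space discretization mentioned above, not the adapted entropic scheme of the talk):

```python
import numpy as np

def phi(v):
    # Hypothetical cubic-like, non-monotone diffusion function (phi'(v) < 0 for |v| < 1/sqrt(3)).
    return v**3 - v

def step(u, dt, dx):
    """One explicit Euler step of u_t = (phi(u))_xx with centered differences and periodic boundaries."""
    f = phi(u)
    return u + dt / dx**2 * (np.roll(f, -1) - 2.0 * f + np.roll(f, 1))

# Usage: smooth initial data on a periodic grid; oscillations develop where phi'(u) < 0,
# which is exactly the ill-posed (backward-diffusion) regime that the entropy formulation controls.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = 0.5 * np.sin(2.0 * np.pi * x)
for _ in range(100):
    u = step(u, dt=1e-6, dx=x[1] - x[0])
```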
09/05/2016 - 14:00 - 09/05/2016 - 15:40 Hardy spaces and variants of the div-curl lemma. Prof. Galia Dafni, Concordia University, Montreal, Canada.
The theory of real Hardy spaces has been applied to the study of partial differential equations in many different contexts. In the 1990's, one of the main results in this direction was the div-curl lemma of Coifman, Lions, Meyer and Semmes. We discuss some variants of this lemma in the context of the local Hardy spaces of Goldberg, and of weighted Hardy spaces. This is joint work with Der-Chen Chang and Hong Yue.

11/04/2016 - 15:05 - 11/04/2016 - 16:05 On the Riemann–Hilbert problem for the Beltrami equations in quasidisks. Prof. V. Ryazanov, Institute of Applied Mathematics and Mechanics, Ukraine.
For the nondegenerate Beltrami equations in quasidisks and, in particular, in smooth Jordan domains, we prove the existence of regular solutions of the Riemann–Hilbert problem with coefficients of bounded variation and boundary data that are measurable with respect to the absolute harmonic measure (logarithmic capacity).

11/04/2016 - 14:00 - 11/04/2016 - 15:05 On sharp inequalities for orthonormal polynomials along a contour. Prof. F. Abdullayev, Mersin University, Turkey.
For a system of polynomials orthonormal with weight on a curve in the complex plane, the problem of sharp estimates of these polynomials is of considerable importance. We discuss known conditions and inequalities and present certain refinements of them.

28/03/2016 - 14:00 - 28/03/2016 - 15:40 On the Fourier transform of a function of several variables. Prof. R. Trigub.
For functions $f(x_{1},x_{2})=f_{0}\big(\max\{|x_{1}|,|x_{2}|\}\big)$ from $L_{1}(\mathbb{R}^{2})$, necessary and sufficient conditions are obtained for their Fourier transform $\widehat{f}$ to belong to $L_{1}(\mathbb{R}^{2})$, as well as for the function $t\cdot \sup\limits_{y_{1}^{2}+y_{2}^{2}\geq t^{2}}\big|\widehat{f}(y_{1},y_{2})\big|$ to belong to $L_{1}(\mathbb{R}^{1}_{+})$. As for the positivity of $\widehat{f}$ on $\mathbb{R}^{2}$, it is completely reduced to the same question on $\mathbb{R}^{1}$ for the function $f_{1}(x)=|x|f_{0}\big(|x|\big)+\int\limits_{|x|}^{\infty}f_{0}(t)dt$.

21/03/2016 - 14:00 - 21/03/2016 - 15:55 Comparing the degrees of unconstrained and constrained approximation. Prof. D. Leviatan, Tel-Aviv University.
It is quite obvious that one should expect the degree of constrained approximation to be worse than the degree of unconstrained approximation. However, it turns out that in certain cases we can deduce the behavior of the degrees of the former from information about the latter. Let $E_n(f)$ denote the degree of approximation of $f\in C[-1,1]$ by algebraic polynomials of degree $<n$, and assume that we know that for some $\alpha>0$ and $\mathcal{N}\ge1$, $$n^\alpha E_n(f)\leq1,\quad n\geq\mathcal{N}.$$ Suppose that $f\in C[-1,1]$ changes its monotonicity or convexity $s\ge0$ times in $[-1,1]$ ($s=0$ means that $f$ is monotone or convex, respectively).
We are interested in what may be said about its degree of approximation by polynomials of degree $<n$ that are comonotone or coconvex with $f$. Specifically, suppose $f$ changes its monotonicity or convexity at $Y_s:=\{y_1,\dots,y_s\}$ ($Y_0=\emptyset$), and denote the degrees of comonotone and coconvex approximation by $E^{(q)}_n(f,Y_s)$, $q=1,2$, respectively. We investigate when one can say that $$n^\alpha E^{(q)}_n(f,Y_s)\le c(\alpha,s,\mathcal{N}),\quad n\ge\mathcal{N}^*,$$ for some $\mathcal{N}^*$. Clearly, $\mathcal{N}^*$, if it exists at all (we prove it always does), depends on $\alpha$, $s$ and $\mathcal{N}$. However, it turns out that for certain values of $\alpha$, $s$ and $\mathcal{N}$, $\mathcal{N}^*$ depends also on $Y_s$, and in some cases even on $f$ itself, and this dependence is essential.

14/03/2016 - 14:00 - 14/03/2016 - 16:00 Mappings with integrally controlled $p$-moduli. Prof. A. Golberg, Holon Institute of Technology.
We consider classes of mappings (with controlled moduli) whose $p$-module of the families of curves/surfaces is restricted by integrals containing measurable functions and arbitrary admissible metrics. In the talk we discuss various properties of mappings with controlled moduli, including their differential features (Lusin's $N$- and $N^{-1}$-conditions, Jacobian bounds, estimates for distortion dilatations, H\"older/logarithmically H\"older continuity) and their topological structure (openness, discreteness, invertibility, finiteness of the multiplicity function). This allows us to investigate the interconnection between mappings of bounded and finite distortion defined analytically and mappings with controlled moduli having no analytic assumptions.

18/01/2016 - 14:00 - 18/01/2016 - 16:00 Integral formulae for codimension-one foliated Finsler spaces. Prof. Vladimir Rovenski, University of Haifa.
Recent decades have brought increasing interest in Finsler spaces $(M,F)$, especially in the extrinsic geometry of their hypersurfaces. Randers metrics (i.e., $F=\alpha+\beta$, where $\alpha$ is the norm of a Riemannian structure and $\beta$ is a 1-form of $\alpha$-norm smaller than $1$ on $M$), which appeared in Zermelo's control problem, are of special interest. After a short survey of the above, we will discuss integral formulae, which provide obstructions to the existence of foliations (or of compact leaves of them) with given geometric properties. The first known integral formula (due to G. Reeb) for codimension-one foliated closed manifolds tells us that the total mean curvature $H$ of the leaves is zero (thus, either $H\equiv0$ or $H(x)H(y)<0$ for some $x,y\in M$). Using a unit normal to the leaves of a codimension-one foliated $(M,F)$, we define a new Riemannian metric $g$ on $M$, which in the Randers case depends nicely on $(\alpha,\beta)$. For that $g$ we derive several geometric invariants of a foliation in terms of $F$, and then express them in terms of invariants of $\alpha$ and $\beta$. Using our results for the Riemannian case, we present new integral formulae for codimension-one foliated $(M, F)$ and $(M, \alpha+\beta)$. Some of them generalize Reeb's formula.

04/01/2016 - 14:30 - 04/01/2016 - 16:00 Diffraction theory for aperiodic point sets in Lie groups. Prof. Tobias Hartnick, Technion.
The study of aperiodic point sets in Euclidean space is a classical topic in harmonic analysis, combinatorics and geometry. Aperiodic point sets in $\mathbb{R}^3$ are models for quasi-crystals, and in this context it is of interest to study their diffraction measure, i.e. the way they scatter an incoming laser or x-ray beam. By a classical theorem of Meyer, every sufficiently regular aperiodic point set in a Euclidean space is a shadow of a periodic one in a larger locally compact abelian group. The diffraction of these "model sets" can be computed in terms of a certain group of irrational rotations of an associated torus. In this talk, I will review the classical theory of diffraction of Euclidean model sets and then explain how the theory generalizes to model sets in arbitrary (non-abelian) locally compact groups. We will explain the construction of new examples of different flavours, and how the classical theory has to be modified in order to accommodate these new examples. We will focus on the case of model sets in groups admitting a Gelfand pair, since for these the (spherical) diffraction theory is particularly accessible. No previous knowledge of model sets or diffraction theory is assumed. This is based on joint work with Michael Bjorklund and Felix Pogorzelski.

21/12/2015 - 14:00 - 21/12/2015 - 20:35 Extremal and approximation problems for positive definite functions. Dr. Panagiotis Mavroudis, University of Crete, Greece.
Let $\Omega$ be an open $0$-symmetric subset of $\mathbb R^d$ which contains $0$, and let $f$ be a continuous positive definite function vanishing off $\Omega$, that is, $\mathrm{supp}\, f$ is contained in the closure of $\Omega$. The problem is to approximate $f$ by continuous positive definite functions $F$ supported in $\Omega$. We prove that this is possible when (1) $d=1$; (2) $\Omega$ is strictly star-shaped; (3) $f$ is a radial function. We also consider the following problem: given a measure $\mu$ supported in $\Omega$, does there exist an extremal function for the problem $\sup \int f \, d\mu$, where the sup is taken over the cone of continuous positive definite functions $f$ supported in $\Omega$ with $f(0)=1$?

07/12/2015 - 14:00 - 07/12/2015 - 23:05 Tiling by translates of a function. Dr. Nir Lev, Bar-Ilan University.
A function $f$ on the real line is said to tile by translates along a discrete set $\Lambda$ if the sum of all the functions $f(x-\lambda)$, $\lambda \in \Lambda$, is identically equal to one. Which functions can tile by translates, and what can be said about the translation set $\Lambda$?
I will survey the subject and discuss some recent results, joint with Mihail Kolountzakis.

23/11/2015 - 14:00 - 23/11/2015 - 18:20 A tale of two Hardy spaces. Prof. E. Liflyand, Bar-Ilan University.
New relations between the Fourier transform of a function of bounded variation and the Hilbert transform of its derivative are revealed. The main result is an asymptotic formula for the {\bf cosine} Fourier transform. Such relations have previously been known only for the sine Fourier transform. Interrelations of various function spaces are studied in this context, first of all of two types of Hardy spaces. The obtained results are used for proving completely new results on the integrability of trigonometric series.

16/11/2015 - 14:00 - 16/11/2015 - 18:00 Quantifying isolated singularity in DEs. Prof. Y. Krasnov, Bar-Ilan University.
Consider a polynomial map $f: \mathbb{C}^n\to \mathbb{C}^n$, vanishing at some point $z_0$ in $\mathbb{C}^n$. In differential equations, such points are called equilibria of the vector field $z' = f(z)$, or their singular points. The question is "how singular": can we quantify the singularity of $f$ at $z_0$? Attempting only to demystify the problem, in this presentation we make an effort to quantify singularity in the sense of differential equations and also discuss connections of this theory to analysis, topology and commutative algebra.

09/11/2015 - 14:00 - 09/11/2015 - 15:30 Some new partial answers to a 52 year old interpolation question. Prof. M. Cwikel, Technion.
It is now more than 52 years since Studia Mathematica received Alberto Calder\'on's very remarkable paper about his theory of complex interpolation spaces. And one of the questions which Calder\'on implicitly asked in that paper, by solving it in a significant special case, is apparently still open today: DOES COMPLEX INTERPOLATION PRESERVE THE COMPACTNESS OF AN OPERATOR? After briefly surveying attempts to solve this question over several decades, I will also report on a few new partial answers obtained recently, some of them (arXiv:1411.0171) jointly with Richard Rochberg. Among other things there is an interplay with Jaak Peetre's "plus-minus" interpolation method (arXiv:1502.00986), a method which probably deserves to be better known. Banach lattices and UMD spaces also have some roles to play. Several distinguished mathematicians have expressed the belief that the general answer to this question will ultimately turn out to be negative. Among other things, I will try to hint at where a counterexample might perhaps be hiding. You are all warmly invited to seek it out, or prove that it does not exist. A fairly recent survey which discusses this question is available at arXiv:1410.4527.

26/10/2015 - 14:00 - 01/11/2015 - 11:30 Hardy spaces on the Klein-Dirac quadric and multidimensional annulus: applications to Interpolation, Moment problems, and Cubature. Prof. Ognyan Kounchev, IZKS, University of Bonn, Germany, and Institute of Mathematics and Informatics, Bulgarian Academy of Sciences.
We present a new construction of Hardy spaces on the Klein-Dirac quadric; we show that the quadric is obtained as a complexification of the unit ball in $\mathbb{R}^n$. We also introduce Hardy spaces on the complexified multidimensional annulus. We show some natural properties of these Hardy spaces, in particular a Cauchy-type formula and a brothers Riesz-type theorem. We present applications to the multidimensional moment problem, multidimensional interpolation theory, and cubature formulas.

08/06/2015 - 14:00 - 08/06/2015 - 15:00 Convexity and Teichm\"{u}ller spaces. Prof. S. Krushkal, Bar-Ilan University.
We provide restricted negative answers to the Royden-Sullivan problem of whether any Teichm\"{u}ller space of dimension greater than $1$ is biholomorphically equivalent to a bounded domain in a complex Banach space. The only known result here is Tukia's theorem of 1977 that there is a real analytic homeomorphism of the universal Teichm\"{u}ller space onto a convex domain in some Banach space. We prove: (a) Any Teichm\"{u}ller space $\mathbf T(0,n)$ of punctured spheres (surfaces of genus zero) with a sufficiently large number of punctures $(n \ge n_0 > 4)$ cannot be mapped biholomorphically onto a bounded convex domain in $\mathbf C^{n-3}$. (b) The universal Teichm\"{u}ller space is not biholomorphically equivalent to a bounded convex domain in a uniformly convex Banach space, in particular, to a convex domain in Hilbert space. The proofs involve the existence of conformally rigid domains established by Thurston and some interpolation results for bounded univalent functions.

01/06/2015 - 14:00 - 01/06/2015 - 15:00 Wavelets on fractals. Prof. Palle Jorgensen, University of Iowa, USA.
The class of fractals referred to consists of those which may be specified by a finite system of affine transformations with contractive scaling, together with their corresponding selfsimilar measures $\mu$. They include standard Cantor spaces such as the middle-third Cantor set and the planar Sierpinski gaskets in various forms, with their corresponding selfsimilar measures, but the class is more general than this, including fractals realized in $\mathbb R^d$ for $d > 2$. In part 1, we motivate the need for wavelets in the harmonic analysis of these selfsimilar measures $\mu$. While some of the Hilbert spaces $L^2(\mu)$ have Fourier bases, it is known (the speaker and Pedersen) that many do not; for example, the middle-third Cantor measure can have no more than two orthogonal Fourier frequencies. In part 2 of the talk, we outline a construction by the speaker and Dutkay to the effect that all the affine systems do have wavelet bases; this entails what we call thin Cantor spaces.

28/05/2015 - 14:00 - 28/05/2015 - 15:00 On the Zariski Cancellation Problem. Prof. Mikhail Zaidenberg, Fourier Institute, Grenoble, France.
Given complex affine algebraic varieties $X$ and $Y$, the general Zariski Cancellation Problem asks whether the existence of an isomorphism $X\times\mathbb{C}^n\cong Y\times\mathbb{C}^n$ implies that $X\cong Y$.
Or, in other words, whether varieties with isomorphic cylinders must be isomorphic. This turns out to be true for affine curves (Abhyankar, Eakin, and Heinzer $'72$) and false for affine surfaces (Danielewski $'89$). The special Zariski Cancellation Problem asks the same question provided that $Y=\mathbb{C}^k$. In this case, the answer is "yes" in dimension $k=2$ (Miyanishi-Sugie $'80$ and Fujita $'79$), and unknown in higher dimensions, where the situation turns out to be quite mysterious (indeed, over a field of positive characteristic, there is a recent counter-example due to Neena Gupta $'14$). The birational counterpart of the special Zariski Cancellation Problem asks whether stable rationality implies rationality. The answer turns out to be negative; the first counter-example was constructed by Beauville, Colliot-Th\'el\`ene, Sansuc, and Swinnerton-Dyer $'85$. We will survey the subject, covering both some classical results and very recent developments, reporting in particular on joint work with Hubert Flenner and Shulim Kaliman.

18/05/2015 - 15:05 - 18/05/2015 - 16:05 Bounded approximation and radial interpolation in the unit disc and related questions. Prof. A. Danielyan, University of South Florida, Tampa, USA.
The talk is devoted to some bounded approximation and interpolation problems and theorems in the unit disc related to the work of P. Fatou, W. Rudin, L. Carleson, L. Zalcman, and other authors. Among other results, a new theorem due to S. Gardiner on radial interpolation will be presented. We also show that the classical Rudin-Carleson interpolation theorem is a simple corollary of Fatou's much older interpolation theorem (of 1906).

18/05/2015 - 14:00 - 18/05/2015 - 15:00 Criteria for the Poincare-Hardy inequalities. Prof. V. Maz'ya, University of Liverpool and University of Linkoeping.
A number of topics in the qualitative spectral analysis of the Schr\"odinger operator $-\Delta + V$ are surveyed, in particular results concerning the positivity and semiboundedness of this operator. The attention is focused on conditions that are both necessary and sufficient, as well as on their sharp corollaries.

04/05/2015 - 14:00 - 04/05/2015 - 15:00 Gegenbauer-Chebyshev Integrals and Radon Transforms. Prof. B. Rubin, Louisiana State University, Baton Rouge, USA.
The Radon transform $R$ assigns to a function $f$ on $\mathbb{R}^n$ a collection of integrals of that function over hyperplanes in $\mathbb{R}^n$. Suppose that $Rf$ vanishes on all hyperplanes that do not meet a fixed convex set. {\it Does it follow that $f$ is zero in the exterior of that set?} I am planning to discuss new results related to this question and the corresponding injectivity problems. If time allows, some projectively equivalent modifications of $R$ will be considered.

20/04/2015 - 14:00 - 20/04/2015 - 15:00 Infinitesimal Hilbert 16th problem. Prof. S. Yakovenko, Weizmann Institute.
I will describe the current state of affairs in both the original Hilbert 16th problem (on limit cycles of polynomial planar vector fields) and its relaxed version on zeros of Abelian integrals.
It turns out that the latter belong to a natural class of Q-functions described by integrable systems of linear differential equations with quasiunipotent monodromy, defined over the field of rational numbers. Functions of this class admit explicit (albeit very excessive) bounds for the number of their isolated zeros, in a way similar to algebraic functions. This result lies at the core of the solution of the infinitesimal Hilbert problem, achieved with Gal Binyamini and Dmitry Novikov. The talk is aimed at a broad audience.

13/04/2015 - 14:00 - 13/04/2015 - 15:00 On summability methods for Fourier series and Fourier integrals. Prof. R. Trigub, Donetsk National University, Ukraine.
A new sufficient condition is obtained for the summability, at a point where the derivative of the indefinite integral exists, of Fourier series and Fourier integrals of integrable functions. In the case of "arithmetic means" the corresponding condition is also necessary. Exact rates of approximation by the classical Gauss-Weierstrass, Bochner-Riesz, and Marcinkiewicz-Riesz means, as well as by the non-classical Bernstein-Stechkin means, are found. These problems are related to the representability of a function as an absolutely convergent Fourier integral. For this, new conditions are obtained, and for radial functions even a criterion.

19/01/2015 - 14:00 - 19/01/2015 - 15:00 Planar Sobolev extension domains and a Square Separation Theorem. Prof. P. Shvartsman, Technion.
For each positive integer $m$ and each $p>2$ we characterize bounded simply connected Sobolev $W^m_p$-extension domains $\Omega$ in $\mathbb{R}^2$. Our criterion is expressed in terms of certain intrinsic subhyperbolic metrics in $\Omega$. Its proof is based on a series of results related to the existence of special chains of squares joining given points $x$ and $y$ in $\Omega$. An important geometrical ingredient for obtaining these results is a new "Square Separation Theorem". It states that under certain natural assumptions on the relative positions of a point $x$ and a square $S\subset\Omega$ there exists a similar square $Q\subset\Omega$ which touches $S$ and has the property that $x$ and $S$ belong to distinct connected components of $\Omega\setminus Q$. This is a joint work with Nahum Zobin.

12/01/2015 - 15:05 - 12/01/2015 - 16:05 On Boutroux's Tritronqu\'ee Solutions of the First Painlev\'e Equation. Michael Twito, University of Sydney, Australia.
The triply truncated solutions of the first Painlev\'e equation were specified by Boutroux in his famous paper of 1913 as those having no poles (of large modulus) except in one sector of angle $2\pi/5$. There are five such solutions and each of them can be obtained from any other one by applying a certain symmetry transformation. One of these solutions is real on the real axis. We will discuss a characteristic property of this solution (discovered by Prof. Joshi and Prof. Kitaev), different from the asymptotic description given by Boutroux.

12/01/2015 - 14:00 - 12/01/2015 - 15:00 Bernoulli convolution measures and their Fourier transforms. Prof. B. Solomyak.
For $\lambda\in (0,1)$, the Bernoulli convolution measure $\nu_\lambda$ may be defined as the distribution of the random series $\sum_{n=0}^\infty \pm \lambda^n$, where the signs are chosen independently with equal probabilities. For $\lambda =1/3$, this is the familiar Cantor-Lebesgue measure (up to a linear change of variable). The Fourier transform of $\nu_\lambda$ has an infinite product formula: $$\widehat{\nu}_\lambda(t) = \prod_{n=0}^\infty \cos(2\pi \lambda^n t).$$ The properties of $\nu_\lambda$ and their Fourier transforms have been studied since the 1930's by many mathematicians, among them Jessen, Wintner, Erd\H{o}s, Salem, Kahane and Garsia. In particular, it was proved by Erd\H{o}s and Salem that $\widehat{\nu}_\lambda(t)$ does not vanish at infinity (i.e. $\nu_\lambda$ is not a Rajchman measure) if and only if $1/\lambda$ is a Pisot number (an algebraic integer greater than one with all conjugates inside the unit circle). However, very little is known about the rate of decay, especially for specific $\lambda$, as opposed to "typical" ones. In this talk I will survey known results and open problems in this direction. Recently, in a joint work with A. Bufetov, we proved that if $1/\lambda$ is an algebraic integer with at least one conjugate outside of the unit circle, then the Fourier transform of $\nu_\lambda$ has at least a logarithmic decay rate at infinity.

05/01/2015 - 14:00 - 05/01/2015 - 15:00 Zeros of solutions of linear differential equations. Prof. A. Eremenko, Purdue University.
This is a joint work with Walter Bergweiler. We construct differential equations of the form $w''+Aw=0$, where $A$ is an entire function of finite order, with the property that two linearly independent solutions have finite exponent of convergence of zeros. This solves a problem proposed by Bank and Laine in 1982.

29/12/2014 - 14:00 - 29/12/2014 - 15:00 On some properties of linear spaces and linear operators in the case of quaternionic scalars. Dr. M. Elena Luna-Elizarrarás, Departamento de Matemáticas, E.S.F.M. del I.P.N., 07338 México D.F.
In recent years the study of quaternionic linear spaces has been widely developed by mathematicians and has been widely used by physicists. At the same time it turns out that some basic and fundamental properties of those spaces are not treated properly, and this requires developing the corresponding theory. In this talk we will analyze certain peculiarities of the situation via the notion of quaternionic extension of real and complex linear spaces, as well as using the notion of internal quaternionization. We will see, for example, how the norms of some operators behave when they are "quaternionically extended".

22/12/2014 - 14:00 - 22/12/2014 - 15:00 Sets of bounded discrepancy for multi-dimensional irrational rotation. Dr. Nir Lev, Bar-Ilan University.
Hecke, Ostrowski and Kesten characterized the intervals on the circle for which the ergodic sums of their indicator function, under an irrational rotation, stay at a bounded distance from their integral with respect to the Lebesgue measure on the circle.
In this talk I will discuss this phenomenon in the multi-dimensional setting. Based on joint work with Sigrid Grepstad.

15/12/2014 - 14:00 - 15/12/2014 - 15:00 Three faces of equivariant degree. Prof. Z. Balanov, University of Texas at Dallas.
Topological methods based on the usage of degree theory have proved to be an important tool for the qualitative study of solutions to nonlinear differential systems (including such problems as existence, uniqueness, multiplicity, bifurcation, etc.). During the last twenty years the equivariant degree theory emerged in Nonlinear Analysis. In short, the equivariant degree is a topological tool allowing one to "count" orbits of solutions to symmetric equations in the same way as the usual Brouwer degree does, but according to their symmetry properties. This method is an alternative and/or complement to the equivariant singularity theory developed by M. Golubitsky et al., as well as to a variety of methods rooted in Morse theory/Lusternik–Schnirelman theory. In fact, the equivariant degree has different faces reflecting a diversity of symmetric equations related to applications. In the two talks, I will discuss three variants of the equivariant degree: (i) the non-parameter equivariant degree, (ii) the twisted equivariant degree with one parameter, and (iii) the gradient equivariant degree. Each of the three variants will be illustrated by appropriate examples of applications: (i) boundary value problems for the vector symmetric pendulum equation, (ii) Hopf bifurcation in symmetric neural networks (simulation of legged locomotion), and (iii) bifurcation of relative equilibria in the Lennard-Jones three-body problem. The talk is addressed to a general audience, without any special knowledge of the subject.

08/12/2014 - 14:00 - 09/12/2014 - 13:20 Riesz sequences and arithmetic progressions. Itay Londner, Tel-Aviv University.
In the talk, which is joint work with Alexander Olevskii, I will present our study of the relationship between the existence of arithmetic progressions with specified lengths and step sizes and the lower Riesz bounds of systems of complex exponentials, indexed by a set of integers $\Lambda$, on subsets of the circle.

01/12/2014 - 14:00 Certain problems in Fourier Analysis. Prof. R. Trigub, Donetsk National University, Ukraine.
The following problems (or some of them) will be discussed. 1. Generalization of the Abel-Poisson summation method. 2. Generalization of the Riemann-Lebesgue lemma. 3. Strengthening of the Hardy-McGehee-Pigno-Smith inequality. 4. Generalization of the Euler-Maclaurin formula. 5. Absolute convergence of grouped Fourier series. 6. Comparison of linear differential operators with constant coefficients. 7. Positive definite functions and splines. 8. Strong converse theorems in approximation theory; Bernstein-Stechkin polynomials.

24/11/2014 - 14:00 Entire functions of exponential type represented by pseudo-random and random Taylor series. Prof. M. Sodin, Tel-Aviv University.
We study the angular distribution of zeroes of Taylor series with pseudo-random and random coefficients, and show that the distribution of zeroes is governed by certain autocorrelations of the coefficients. Using this guiding principle, we consider several examples of random and pseudo-random sequences $\xi$ and, in particular, answer some questions posed by Chen and Littlewood in 1967. As a by-product we show that if $\xi$ is a stationary random integer-valued sequence, then either it is periodic, or its spectral measure has no gaps in its support. The same conclusion is true if $\xi$ is a complex-valued stationary ergodic sequence that takes values from a uniformly discrete set (joint work with Alexander Borichev and Alon Nishry).

17/11/2014 - 14:00 - 17/11/2014 - 23:00 Ruled common nodal surfaces. Prof. M. Agranovsky, Bar-Ilan University.
Nodal sets are zero loci of Laplace eigenfunctions (e.f.). The study of nodal sets is important for understanding wave processes. The geometry of a single nodal set may be very complicated and can hardly be well understood. More realistic might be describing the geometry of sets which are nodal for a large family of e.f. (the condition of simultaneous vanishing, resonance, of a large packet of e.f. on a large set is overdetermined and hence may be expected to occur only for exclusive sets). Indeed, it was proved that common nodal curves for large (in different senses) families of e.f. in $\mathbb R^2$ are straight lines (non-periodic case: Quinto and the speaker, '96; periodic case: Bourgain and Rudnick, '11). It was conjectured that in a Euclidean space of arbitrary dimension, common nodal hypersurfaces for large families of e.f. are cones, more precisely, are translates of zero sets of harmonic homogeneous polynomials. The talk will be devoted to a recent result confirming the conjecture for ruled hypersurfaces in $\mathbb R^3$. The relation to the injectivity problem for the spherical Radon transform will be explained.

17/11/2014 - 14:00 - 17/11/2014 - 23:10 Entire functions of exponential type represented by pseudo-random and random Taylor series. Prof. M. Sodin, Tel-Aviv University.

03/11/2014 - 14:00 - 17/11/2014 - 23:00 Beurling's method in the theory of quasianalytic functions. Avner Kiro, Tel-Aviv University.
The talk will be devoted to two questions in the theory of quasianalytic Carleman classes. The first one is how to describe the image of a quasianalytic Carleman class under the Borel map $f\to\{f^{(n)}(0)/n!\}_{n\geq 0}$. The second one is how to sum the formal Taylor series of functions in quasianalytic Carleman classes. In the talk, I will present a method of Beurling that gives a solution to both of the problems for some quasianalytic Carleman classes. If time permits, I will also discuss the image problem in some non-quasianalytic classes.

02/06/2014 - 14:00 Hypergroups and their convolution algebras of multilinear forms. Prof. Bert Schreiber, Wayne State University, Detroit, USA.
We will begin by introducing the notion of a hypergroup, give some examples, and describe the convolution of measures on a hypergroup.
After a review of some basic operator space theory, we shall describe how to extend the notion of convolution to the space of completely bounded multilinear forms on a Cartesian product of spaces of continuous functions on hypergroups, thus making that space into a Banach algebra. When the hypergroups are commutative, we introduce and study a notion of Fourier transform in this setting.

26/05/2014 - 14:00 Essential spectrum of Operators of Quantum Mechanics and Limit Operators. Prof. V. Rabinovich, National Polytechnic Institute of Mexico.
The talk is devoted to applications of limit operators to the study of the essential spectra and the exponential decay of eigenfunctions of the discrete spectra of Schr\"{o}dinger and Dirac operators for wide classes of potentials. Outline of the talk: 1) Fredholm property and location of the essential spectrum of systems of partial differential operators with variable bounded coefficients; 2) exponential estimates of solutions of systems of partial differential operators with variable bounded coefficients; 3) location of the essential spectrum of Schr\"{o}dinger and Dirac operators and exponential estimates of eigenfunctions of the discrete spectrum.

19/05/2014 - 14:00 Optimal estimates for derivatives of analytic functions and solutions to Laplace, Lam\'e and Stokes equations. Prof. Gershon Kresin, Ariel University.
Two types of optimal estimates for derivatives of analytic functions with bounded real part are considered. The first of them is a pointwise inequality for derivatives of analytic functions in the complement of a convex closed domain in ${\mathbb C}$. The second type of inequality is a limit relation for derivatives of analytic functions in an arbitrary proper subdomain of ${\mathbb C}$. Optimal estimates for derivatives of a vector field with bounded harmonic components, as well as optimal estimates for the divergence of an elastic displacement field and the pressure in a fluid in subdomains of ${\mathbb R}^n$, are discussed.

12/05/2014 - 14:00 Stability theorems for exponential bases in $L^2$. Prof. L. De Carli, Florida International University.
Let $D$ be a domain of $\mathbb{R}^d$; we say that $L^2(D)$ has an exponential basis if there exists a sequence of functions ${\mathcal B}=\{ e^{2\pi i \langle s_m, x\rangle}\}_{m \in \mathbb{Z}^d}$, with $s_m\in\mathbb{R}^d$, with the following property: every function in $L^2(D)$ can be written in a unique way as $\sum_{m\in\mathbb{Z}^d} c_m e^{2\pi i \langle s_m, x\rangle}$, with $c_m \in \mathbb{C}$. For example, $\{ e^{2\pi i mx}\}_{m \in \mathbb{Z}}$ is an exponential basis of $L^2(0,1)$. Exponential bases are very useful in applications, especially when they are orthogonal; however, the existence or non-existence of exponential bases has been proved only for very special domains of $\mathbb{R}^d$. In particular, it is not known whether the unit ball in $\mathbb{R}^2$ has an exponential basis or not. An important property of exponential bases is their stability.
That is, if $\{ e^{2\pi i \langle s_m, x\rangle}\}_{m \in \mathbb{Z}^d}$ is an exponential basis of $L^2(D)$ and $\Delta=\{\delta_m\}_{m \in \mathbb{Z}^d}$ is a sequence of sufficiently small real numbers, then $\{ e^{2\pi i \langle s_m+\delta_m, x\rangle}\}_{m \in \mathbb{Z}^d}$ is also an exponential basis of $L^2(D)$. In this talk I will discuss the existence and stability of exponential bases on special 2-dimensional domains called trapezoids. I will also generalize a celebrated theorem of M. Kadec and obtain stability bounds for exponential bases on domains of $\mathbb{R}^d$. The results that I will present are part of joint projects with my students A. Kumar and S. Pathak.

07/04/2014 - 14:00 A Hardy-Littlewood theorem revisited. Prof. E. Liflyand, Bar-Ilan University.
If a function and its conjugate (in a special sense) both have bounded variation, then their Fourier transforms are integrable. This recent extension of a classical (for Fourier series) Hardy-Littlewood theorem gives rise to new thoughts and results.

31/03/2014 - 14:00 Operator norms of finite Hankel matrices. Dr. R. Bessonov, TAU, St. Petersburg State University.
The aim of this talk is to present a simple two-sided estimate for the operator norm of a finite Hankel matrix in terms of its standard symbol. We will also discuss several reformulations and consequences of this estimate, including Fefferman's classical duality theorem for the Hardy space $H^1$.

24/03/2014 - 14:00 Eisenstein Series and Breakdown of Semiclassical Approximations. Dr. Shimon Brooks, Bar-Ilan University.
We consider the wave flow on a surface of constant negative curvature. For short times, the propagation is approximated by the geodesic flow, with errors controlled by the "semiclassical expansion" coming from geometric optics. In negative curvature, this expansion is useful up to the Ehrenfest time $|\log{\hbar}|$, after which the error terms in the expansion become as large as the main term. It is believed that the approximation of wave propagation by the geodesic flow should hold for much larger times, perhaps all the way up to the Heisenberg time $1/\hbar$. However, we show that this cannot hold in general, and exhibit explicit examples where the semiclassical approximation breaks down at a constant multiple of the Ehrenfest time. These examples come from Eisenstein series on the modular surface, are intimately tied to the arithmetic structure, and are highly non-generic. We will also discuss these non-generic features of the arithmetic setting, and whether this breakdown at the Ehrenfest time is likely to be a more generic phenomenon or not. This includes joint work with Roman Schubert.

10/03/2014 - 14:00 Spectral Properties of Differential Equations in Algebras. Prof. Yakov Krasnov, Bar-Ilan University.
The aim of this work is to establish a number of elementary properties of the topology and algebra of real quadratic homogeneous mappings and Riccati-type ODEs occurring in non-associative algebras.
We construct a series of examples of quadratic vector fields to show the impact of their spectral properties on the qualitative theory.

03/03/2014 - 14:00 Sharp Ul'yanov type inequalities in Lebesgue spaces. Prof. Yu. Kolomoitsev, Inst. Applied Math. Mech., Donetsk, Ukraine.
We present sharp Ul'yanov type inequalities for fractional moduli of smoothness and K-functionals for the values of the parameters $0<p<1$, $p<q$. We also provide a generalization of Kolyada's inequality and relations between fractional moduli of smoothness of a function and its derivatives in the spaces $L_p$, $0<p<1$.

24/02/2014 - 15:05 On some sufficient conditions for Fourier multipliers and absolute convergence of the Fourier transform. Prof. Yu. Kolomoitsev, Inst. Applied Math. Mech., Donetsk, Ukraine.
We present new sufficient conditions for Fourier multipliers. These conditions are given in terms of the simultaneous behavior of (quasi-)norms of a function in different Lebesgue and Besov spaces. We also provide some sufficient conditions for the representation of a function as an absolutely convergent Fourier integral, in terms of the function belonging simultaneously to several spaces of smooth functions.

24/02/2014 - 14:00 Higher Order Elliptic Problems in Arbitrary Domains. Prof. V. Maz'ya, University of Liverpool and University of Linkoeping.
We discuss sharp continuity and regularity results for solutions of the polyharmonic equation in an arbitrary open set. The absence of information about the geometry of the domain puts the question of regularity properties beyond the scope of applicability of the methods devised previously, which typically rely on specific geometric assumptions. Positive results have been available only when the domain is sufficiently smooth, Lipschitz or diffeomorphic to a polyhedron. The techniques developed recently allow us to establish the boundedness of derivatives of solutions to the Dirichlet problem for the polyharmonic equation under no restrictions on the underlying domain and to show that the order of the derivatives is maximal. An appropriate notion of polyharmonic capacity is introduced which allows one to describe the precise correlation between the smoothness of solutions and the geometry of the domain. We also study the 3D Lam\'e system and establish its weighted positive definiteness for a certain range of elastic constants. By modifying the general theory developed by Maz'ya (Duke, 2002), we then show, under the assumption of weighted positive definiteness, that the divergence of the classical Wiener integral for a boundary point guarantees the continuity of solutions to the Lam\'e system at this point. The talk is based on my joint work with S. Mayboroda (Minnesota) and Guo Luo (Caltech).

13/01/2014 - 14:00 FROM CRYSTAL OPTICS TO DIRAC OPERATORS: A SPECTRAL THEORY OF FIRST-ORDER SYSTEMS. Prof. Matania Ben-Artzi, Hebrew University, Jerusalem.
First-order systems of partial differential equations appear in many areas of physics, from the Maxwell equations to the Dirac equation. The aim of the talk is to describe a general method for the study of the spectral density of all such systems, connecting it to traces on the (geometric-optical) "slowness surfaces". The H\"older continuity of the spectral density leads to a derivation of the limiting absorption principle and global spacetime estimates (based on joint work with Tomio Umeda).

06/01/2014 - 14:00 Some elementary observations on multi-linear operators arising in geometric settings. Prof. A. Iosevich, University of Rochester, NY, USA.
The beautiful and extensive Coifman-Meyer theory, developed in the 70s and 80s to study singular multi-linear operators, does not apply to many naturally arising operators with positive kernels. We shall describe some elementary approaches to such operators and apply them to some problems in geometric measure theory and classical harmonic analysis.

30/12/2013 - 14:00 Porosity and the bounded linear regularity property. Prof. Simeon Reich, The Technion.
H. H. Bauschke and J. M. Borwein showed that in the space of all tuples of bounded, closed and convex subsets of a Hilbert space with a nonempty intersection, a typical tuple has the bounded linear regularity property. This property is important because it leads to the convergence of infinite products of the corresponding nearest point projections to a point in the intersection. We show that the subset of all tuples possessing the bounded linear regularity property has a porous complement. Moreover, our result is established in all normed spaces and for tuples of closed and convex sets which are not necessarily bounded. This is joint work with A. J. Zaslavski.

23/12/2013 - 14:00 How do wave packets spread? Time evolution on Ehrenfest time scales. Dr. Roman Schubert, University of Bristol, UK.
We derive an extension of the standard time-dependent WKB theory, which can be applied to propagate coherent states and other strongly localized states for long times. In particular, it allows us to give a uniform description of the transformation from a localized coherent state into a delocalized Lagrangian state, which takes place at the Ehrenfest time. The main new ingredient is a metaplectic operator that is used to modify the initial state in such a way that the standard time-dependent WKB theory can then be applied for the propagation. This is based on joint work with Raul Vallejos and Fabricio Toscano, but in this talk we will focus on the special case of propagation on a manifold of negative curvature.

09/12/2013 - 14:00 Normal families of mappings with controlled $p$-module. Prof. A.
Golberg, Holon Institute of Technology.
We consider generic discrete open mappings in ${\mathbb R}^n$ under which the perturbation of extremal lengths of curve collections is controlled integrally via $\int Q(x)\eta^p(|x-x_0|) dm(x)$ with $n-1<p<n$, where $Q$ is a measurable function on ${\mathbb R}^n$ and $\int\limits_{r_1}^{r_2} \eta(r) dr \ge 1$ for any $\eta$ on a given interval $[r_1,r_2]$. The main results state that the family of all open discrete mappings of the above type is normal under appropriate restrictions on the majorant $Q$. We also provide conditions ensuring the local H\"older continuity of such mappings with respect to Euclidean distances (in the general case, with respect to their logarithms). The inequalities defining the continuity are sharp with respect to the order. This is a joint work with R. Salimov and E. Sevost'yanov.

25/11/2013 - 14:00 Local geometry of trajectories of parabolic type semigroups. Prof. M. Elin, Ort Braude, Karmiel.
It is well known that the geometric nature of semigroup trajectories essentially depends on the semigroup type. In this work, we concentrate on parabolic type semigroups of holomorphic self-mappings of the open unit disk and of the right half-plane, and study the structure of semigroup trajectories near the Denjoy--Wolff point. In particular, we find the limit order of contact and the limit curvature of trajectories and their `closeness', and determine whether these trajectories have asymptotes. For these purposes, we assume that two terms in the asymptotic power expansion of the semigroup generator are known. Our methods are based on the asymptotic expansion of the semigroup, which we find as the first step. Inter alia, this enables us to establish a new rigidity property for semigroups of parabolic type. The talk is based on a joint work with F. Jacobzon.

18/11/2013 - 14:00 Newton's levitation theorem and photoacoustic tomography. Prof. Victor Palamodov, Tel-Aviv University.
A generalization of Newton's attraction theorem will be discussed. The same analytic method is applied for reconstruction in photoacoustic geometry.

11/11/2013 - 14:00 Exceptional functions/values wandering on the sphere and normal families. Dr. Shahar Nevo, Bar-Ilan University.
We extend Caratheodory's generalization of Montel's fundamental normality test to "wandering" exceptional functions (i.e. depending on the respective function in the family under consideration), and we give a corresponding result on shared functions. Furthermore, we prove that if we have a family of pairs $(a,b)$ of functions meromorphic in a domain such that $a$ and $b$ uniformly "stay away from each other", then the families of the functions $a$ resp. $b$ are normal. The proofs are based on a "simultaneous rescaling" version of Zalcman's Lemma. We also present a somewhat "strange" result about assumptions on shared wandering values that imply normality.

28/10/2013 - 14:00 Lyapunov theorem for $q$-concave Banach spaces. Dr. Anna Novikova, Weizmann Institute of Sciences.
Let $X$ be a Banach space and $(\Omega,\Sigma)$ a measurable space, where $\Omega$ is a set and $\Sigma$ is a $\sigma$-algebra of subsets of $\Omega$. If $m:\Sigma\rightarrow X$ is a $\sigma$-additive $X$-valued measure, then the range of $m$ is the set $m(\Sigma)=\{m(A): \ A\in\Sigma\}$. The measure $m$ is {\it non-atomic} if for every set $A\in\Sigma$ with $m(A)\neq0$ there exists $B\subset A$, $B\in\Sigma$, such that $m(B)\neq0$ and $m(A \setminus B)\neq0$. An $X$-valued measure is called a {\it Lyapunov measure} if the closure of its range is convex, and a Banach space $X$ is a {\it Lyapunov space} if every $X$-valued non-atomic measure is Lyapunov. Theorem: let $X$ be a $q$-concave Banach space, $q<\infty$, with an unconditional basis, which does not contain an isomorphic copy of $l_2$. Then $X$ is a Lyapunov space.

21/10/2013 - 14:00 Brennan conjecture for composition operators on Sobolev spaces. Prof. V. Goldshtein, Ben-Gurion University.
We show that Brennan's conjecture about the integrability of derivatives of conformal homeomorphisms is equivalent to the boundedness of composition operators on homogeneous Sobolev spaces $L^{1,p}$. This result is used for the description of embedding operators of homogeneous Sobolev spaces $L^{1,p}$ into weighted Lebesgue spaces with so-called "conformal weights" induced by conformal homeomorphisms of simply connected plane domains onto the unit disc. Applications to elliptic boundary value problems will be discussed.

17/06/2013 - 14:00 Pseudo-differential calculus and quantum ergodicity on regular graphs. Dr. Etienne Le Masson, Universit\'e Paris-Sud 11, Orsay, France.
I will present a quantum ergodicity theorem on large regular graphs. This is a result of spatial equidistribution of most eigenfunctions of the discrete Laplacian in the limit of large regular graphs. It is analogous to the quantum ergodicity theorem on Riemannian manifolds, which is concerned with the eigenfunctions of the Laplace-Beltrami operator in the high frequency limit. I will also talk about pseudo-differential calculus on regular graphs, one of the tools constructed for the proof of the theorem. This is a joint work with Nalini Anantharaman.

10/06/2013 - 14:00 Inversion formulas for the spherical mean transform with data on an ellipsoid in two and three dimensions. Yehonatan Salman.
This presentation is devoted to the problem of recovering a function from its spherical means with centers located on an ellipsoid $\Sigma$ in the two- and three-dimensional spaces. We will show how to generalize methods for obtaining inversion formulas from the case when $\Sigma$ is a sphere to the case when $\Sigma$ is an ellipsoid.

22/04/2013 - 14:00 Generalized Analytic Functions in elliptic complex numbers. Dr. Daniel Alayon-Solarz, Bar-Ilan University.
In this talk we will introduce a definition of Generalized Analytic Functions (in the sense of Vekua) in elliptic complex numbers.
One advantage of this definition is that its Canonical Form is more general than Vekua's and in many cases the reduction from the elliptic and linear partial differential equation of first order can be done without solving an associated Beltrami equation. Finally, using techniques of Vekua we will show that these functions satisfy a representation formula that generalizes the Similarity Principle in the ordinary case. 08/04/2013 - 14:00 Generalized Convexity, Blaschke-type Condition in Unbounded Domains, and Application in Spectral Perturbation Theory of Linear Operators Prof. S. Favorov, Kharkov University, Ukraine Prof. S. Favorov, Kharkov University, Ukraine Generalized Convexity, Blaschke-type Condition in Unbounded Domains, and Application in Spectral Perturbation Theory of Linear Operators We introduce a notion of r-convexity for subsets of the complex plane. It is a pure geometric characteristic that generalizes the usual notion of convexity. Next, we investigate subharmonic functions that grow near the boundary in unbounded domains with r-convex compact complement. We obtain the Blaschke-type bounds for its Riesz measure and, in particular, for zeros of unbounded analytic functions in unbounded domains. These results are based on a certain estimates for Green functions on complements of some neighborhoods of $r$-convex compact set. Also, we apply our results in perturbation theory of linear operators in a Hilbert space. More precisely, we find quantitative estimates for the rate of condensation of the discrete spectrum of a perturbed operator near its the essential spectrum. 11/03/2013 - 14:00 Multi-tiling and Riesz bases Dr. Sigrid Grepstad, Norwegian University of Science and Technology Dr. Sigrid Grepstad, Norwegian University of Science and Technology Multi-tiling and Riesz bases Let S be a bounded, Riemann measurable set in R^d, and L be a lattice. By a theorem of Fuglede, if S tiles R^d with translation set L, then S has an orthogonal basis of exponentials. We show that, under the more general condition that S multi-tiles R^d with translation set L, S has a Riesz basis of exponentials. The proof is based on Meyer's quasicrystals. This is a joint work with Nir Lev. 04/03/2013 - 14:00 Intersections of fractal sets, multi-linear operators, and Fourier analysis Dr. Krystal Taylor, Technion Dr. Krystal Taylor, Technion Intersections of fractal sets, multi-linear operators, and Fourier analysis A classical theorem due to Mattila says that if $A,B \subset {\Bbb R}^d$ of Hausdorff dimension $s_A, s_B$, respectively, with $s_A+s_B \ge d$, $s_B>\frac{d+1}{2}$ and $dim_{{\mathcal H}}(A \times B)=s_A+s_B\ge d$, then $$ dim_{{\mathcal H}}(A \cap (z+B)) \leq s_A+s_B-d$$ for almost every $z \in {\Bbb R}^d$, in the sense of Lebesgue measure. We obtain a variable coefficient variant of this result in which we are able to replace the Hausdorff dimension with the upper Minkowski dimension on the left-hand-side of the first inequality. This is joint work with Alex Iosevich and Suresh Eswarathasan. Fourier Integral Operator bounds and other techniques of harmonic analysis play a crucial role in our investigation. 23/01/2013 - 14:00 Order preserving and order preserving operators on the class of convex functions in Banach spaces Dr. Daniel Reem, IMPA, Rio de Janeiro, Brasil Dr. Daniel Reem, IMPA, Rio de Janeiro, Brasil Order preserving and order preserving operators on the class of convex functions in Banach spaces Recently S. Artstein-Avidan and V. 
Milman have developed an abstract duality theory and proved the following remarkable result: up to linear terms, the only fully order preserving operator (namely, an invertible operator whose inverse also preserves the pointwise order between functions) acting on the class of lower semicontinuous proper convex functions defined on R^n is the identity operator, and the only fully order reversing operator acting on the same set is the Fenchel conjugation (Legendre transform). We establish a suitable extension of their result to infinite dimensional Banach spaces. This is a joint work with Alfredo N. Iusem and Benar F. Svaiter 14/01/2013 - 14:00 A simple proof of the $A_2$ conjecture Prof. A. Lerner, Bar-Ilan University A simple proof of the $A_2$ conjecture The $A_2$ conjecture says that the $L^2(w)$ operator norm of any Calder\'on-Zygmund operator is bounded linearly by the $A_2$ constant of the weight $w$. This conjecture was completely solved in 2010 by T. Hyt\"onen. The proof was based on a rather difficult representation of a general Calder\'on-Zygmund operator in terms of the Haar shift operators. In this talk we shall discuss a recent simpler proof completely avoiding the notion of the Haar shift operator. 31/12/2012 - 14:00 Equidistribution of Fekete points Dr. Nir Lev, Bar-Ilan University Equidistribution of Fekete points We consider an extremal configuration of points on a manifold, called Fekete points, and study their equidistribution through their relation to Beurling-Landau theory of sampling and interpolation (joint work with Joaquim Ortega-Cerda). 24/12/2012 - 14:00 On semiconjugate rational functions Prof. Fedor Pakovich, Ben-Gurion University Prof. Fedor Pakovich, Ben-Gurion University On semiconjugate rational functions A classification of commuting rational functions, that is of rational solutions of the functional equation A(X)=X(A), was obtained in the beginning of the past century by Fatou, Julia, and Ritt. In the talk we will present a solution of a more general problem of description of semiconjugate rational functions, that is of rational solutions of the functional equation A(X)=X(B) in terms of groups acting properly discontinuously on the Riemann sphere or complex plane. 17/12/2012 - 14:00 Fourier Reconstruction of Piecewise-Smooth Functions with "Smooth" Accuracy Prof. Y. Yomdin, Weizmann Institute Prof. Y. Yomdin, Weizmann Institute Fourier Reconstruction of Piecewise-Smooth Functions with "Smooth" Accuracy I plan to discuss a recent progress in Eckhoff Conjecture obtained via "Algebraic Sampling" approach, and a general bound on sampling accuracy provided by a combination of Kolmogorov's entropy and Johnson-Lindenstrauss dimensionality reduction. This is a joint work with D. Batenkov. 10/12/2012 - 14:00 THE CALDER\'ON PROBLEM - FROM THE PAST TO THE PRESENT Dr. Leo Tzou, Academy of Finland/University of Helsinki Dr. Leo Tzou, Academy of Finland/University of Helsinki THE CALDER\'ON PROBLEM - FROM THE PAST TO THE PRESENT The problem of determining the electrical conductivity of a body by making voltage and current measurements on the object's surface has various applications. We will look at the connection between this applied analysis problem with seemingly unrelated fields such as symplectic geometry and differential topology as well as geometric scattering theory. 26/11/2012 - 14:00 The Nitsche conjecture and affine module Prof. Daoud Bshouty, Technion Prof. 
Daoud Bshouty, Technion The Nitsche conjecture and affine module The Nitsche conjecture was solved recently by Iwaniec, Kovalov and Onninen and in the same paper they pose the same problem from Teichmuller domain onto Teiuchmuller domain. We present a solution to this problem. 19/11/2012 - 14:00 Quasimodes that do not Equidistribute Dr. Shimon Brooks, Bar-Ilan University Quasimodes that do not Equidistribute We will discuss the case of surfaces of constant negative curvature; in particular, we will explain how to construct examples of sufficiently weak quasimodes that do not satisfy QUE, and show how they fit into the larger theory. 12/11/2012 - 14:00 Characterizating Sobolev Spaces for Arbitrary Open Sets Dr. Daniel Spector Dr. Daniel Spector Characterizating Sobolev Spaces for Arbitrary Open Sets In this talk I will discuss some recent results obtained in collaboration with G. Leoni on new characterizations of Sobolev spaces for arbitrary open sets. The motivation for such a characterization stems from a 2001 paper of Bourgain, Brezis, and Mironescu that gives a related one for smooth and bounded domains, and an open question on the extension of these results to arbitrary open sets. 05/11/2012 - 14:00 On discrete Fourier expansion, influences, and noise sensitivity Dr. Nathan Keller Dr. Nathan Keller On discrete Fourier expansion, influences, and noise sensitivity In 1996, Talagrand established a lower bound on the second-level Fourier coefficients of a monotone Boolean function, in terms of its first-level coefficients. This lower bound and its enhancements were used in various applications to correlation inequalities, noise sensitivity, geometry, percolation, etc. In this talk we present a new proof of Talagrand's inequality, which is somewhat simpler than the original proof, and allows to generalize the result easily to non-monotone functions (with influences replacing the first-level coefficients) and to more general measures on the discrete cube. We then apply our proof to obtain a quantitative version of a theorem of Benjamini-Kalai-Schramm on the relation between influences and noise sensitivity. Time permitting, we shall present recent results and open questions, related to an application of Talagrand's lower bound to correlation inequalities. The first part of the talk is joint work with Guy Kindler. 29/10/2012 - 14:00 On Fourier multipliers and comparison of differential operators Prof. R. Trigub, Donetsk National University, Ukraine On Fourier multipliers and comparison of differential operators The talk consists of two parts. 1) Fourier multipliers and absolute convergence of Fourier integrals. Based on the paper by Liflyand-Samko-Trigub "The Winer algebra of absolutely convergent Fourier integrals: an overview", Analysis and Math. Physics 2(2012), 1-68. 2) Comparison of linear differential operators with constant coefficients by their norms in $L_p,$ $1\le p\le \infty$. In particular, three criteria of comparison are obtained for functions on the circle, on the axis, and on the half-axis, as well as one sharp inequality. 18/06/2012 - 14:00 Duals of Anisotropic Hardy Spaces Tal Weissblat Tal Weissblat Duals of Anisotropic Hardy Spaces In the talk we first review the highly anisotropic Hardy spaces]. We then discuss a careful approximation argument that is needed when analyzing dual spaces of Hardy spaces. One cannot assume that a linear functional, uniformly bounded on all atoms, is automatically bounded on spaces that have atomic representations (e.g. Hardy spaces). 
04/06/2012 - 14:00 On Density in Radial Basis Approximation Prof. Vitaly E. Maiorov, Technion Prof. Vitaly E. Maiorov, Technion On Density in Radial Basis Approximation We characterize the radial basis functions whose scattered shifts form a fundamental system in the space $L_{p}(\rrd)$. In particular, we show that for any even function $h$ from the space $L_{2,{\rm loc}}(\rrd)$ the space formed by all possible linear combinations of shifted radial basis functions $h(\|x+a\|)$, $a\in \rrd$, is dense in the space $L_p(\rrd)$, $1\le p\le 2$, if and only if the function $h$ is not a polynomial. 21/05/2012 - 14:00 Relations between the Fourier and the Hilbert transform E. Liflyand, Bar-Ilan University E. Liflyand, Bar-Ilan University Relations between the Fourier and the Hilbert transform In this talk we discuss various conditions of the integrability of the Fourier transform of a function of bounded variation and their connections to the behavior of the Hilbert transform of a related function. Correspondingly, the considered spaces of functions with integrable Fourier transform are intimately related with the real Hardy space. One of the most important connections for the two transforms is given by the space introduced (for different purposes) by Johnson and Warner. 14/05/2012 - 14:00 Multidimensional theorems of Teichm\"uller-Wittich-Belinskii type Prof. Anatoly Golberg, Holon Institute of Technology Multidimensional theorems of Teichm\"uller-Wittich-Belinskii type The classical Teichm\"uller-Wittich-Belinskii theorem implies the conformality of a planar continuous mapping at a point under rather general integral restrictions for the dilatation of this mapping near the point. This theorem is very rich in applications and has been generalized by many authors in various directions (weak conformality, differentiability, multidimensional analogs, etc.). Certain complete generalizations are due to Reshetnyak and Bishop, Gutlyanskii, Martio, Vuorinen. I will show in the talk that the assumptions under which the main results have been obtained, can be essentially weakened and give much stronger estimate for the limit of $|f(x)|/|x|$ as $x$ approaches $0$. We essentially improve the underlying modular technique. 30/04/2012 - 14:00 A family of operators satisfying a Poisson-type summation formula, and a characterization of the Fourier transform Dmitry Faifman Dmitry Faifman A family of operators satisfying a Poisson-type summation formula, and a characterization of the Fourier transform We study to which extent the Poisson summation formula determines the Fourier transform. The answer is positive under certain technical smoothness and rate of decay conditions. This study leads to a class of unitary operators on L^2 that satisfy a weighted form of the Poisson summation formula, which we explicitly diagonalize, with eigenvalues related to associated L-functions. 23/04/2012 - 14:00 On the polarity transform Prof. Shiri Artstein, Tel-Aviv University Prof. Shiri Artstein, Tel-Aviv University On the polarity transform I shall describe the polarity transform for functions, were it came from, and some of its properties, especially in comparison with the well known Legendre transform ("duality" versus "polarity"). Then we shall study its (sub)differential structure, and show that it may be used to solve new families of first order Hamilton--Jacobi type equations as well as some second order Monge-Ampere type equations. 
The first part is based on joint work with Vitali Milman, and the second part on joint work with Yanir Rubinstein. 16/04/2012 - 15:05 The perturbed Bessel equation and Duality Theorem , Swinburne University of Technology, Hawthorn, Australia Prof. V.P. Gurarii , Swinburne University of Technology, Hawthorn, Australia Prof. V.P. Gurarii The perturbed Bessel equation and Duality Theorem The Euler-Gauss linear transformation formula for the hypergeometric function was extended by Goursat for the case of logarithmic singularities. By replacing the perturbed Bessel differential equation by a monodromic functional equation, and studying this equation separately from the differential equation by an appropriate Laplace-Borel technique, we associate with the latter equation another monodromic relation in the dual complex plane. This enables us to prove a duality theorem and to extend Goursat's formula to much larger classes of functions. 16/04/2012 - 14:00 Flows of metrics on a fiber bundle Prof. Rovenski Vladimir, University of Haifa Prof. Rovenski Vladimir, University of Haifa Flows of metrics on a fiber bundle Let $(M^{n+p},g)$ be a closed Riemannian manifold, and $\pi: M\to B$ a smooth fiber bundle with compact and orientable $p$-dimensional fiber $F$. Denote by $D_F$ ($D$) the distribution tangent (orthogonal, resp.) to fibers. We discuss conformal flows of the metric restricted to $D$ with the speed proportional to (i) the divergence of the mean curvature vector $H$ of $D$, (ii) the mixed scalar curvature $Sc_{mix}$ of the distributions. (If $M$ is a surface, then $Sc_{mix}$ is the gaussian curvature $K$). For (i), we show that the flow is equivalent to the heat flow of the 1-form dual to $H$, provided the initial 1-form is $D_F$-closed. We use known long-time existence results for the heat flow to show that our flow has a global solution $g_t$. It converges to a limiting metric, for which $D$ is harmonic (i.e., $H=0$); actually under some topological assumptions we can prescribe $H$. For (ii) on a twisted product, we observe that $H$ satisfies the Burgers type PDE, while the warping function satisfies the heat equation; in this case the metrics $g_t$ converge to the product. We consider illustrative examples of flows similar to (i) and (ii) on a surface (of revolution), they yield convection-diffusion PDEs for curvature of $D$-curves (parallels) and solutions -- non-linear waves. For $M$ with general $D$, we modify the flow (ii) with the help of a measure of ``non-umbilicity" of $D_F$, and the integrability tensor of $D$, while the fibers are totally geodesic. Let $\lambda_0$ be the smallest eigenvalue of certain Schrödinger operator on the fibers. We assume $H$ to be $D_F$-potential and show that -- $H$ satisfies the forced Burgers type PDE; -- the flow has a unique solution converging to a metric, for which $Sc_{mix}\ge-n\lambda_0$, and $H$ depends only on the $D$-conformal class of the initial metric. -- if $D$ had constant rate of ``non-umbilicity" on fibers, then the limiting metric has the properties: $Sc_{mix}$ is quasi-positive, and $D$ is harmonic. 26/03/2012 - 14:00 Generalized Frobenius theorems on involutivity and overdetermined PDE systems I, II Prof. CHONG KYU HAN, Seoul National University, Republic of Korea Prof. CHONG KYU HAN, Seoul National University, Republic of Korea Generalized Frobenius theorems on involutivity and overdetermined PDE systems I, II By using the generalized Frobenius theorems we study the existence of solutions of overdetermined PDE systems. 
In particular, we discuss local geometry of Levi-forms associated with the minimality and the existence of complex submanifolds of generic CR manifolds. 19/03/2012 - 14:00 Inequalities for moduli of smoothness versus embeddings of function spaces Prof. Walter Trebels, Technical University, Darmstadt, Germany Prof. Walter Trebels, Technical University, Darmstadt, Germany Inequalities for moduli of smoothness versus embeddings of function spaces Define on $\, L^p({\mathbb R}^n),\, p\ge 1,$ moduli of smoothness of order $\, r,\, r \in {\mathbb N},$ by \omega_r(t,f)_p:=\sup _{|h| <t} \| \Delta_h^rf\|_p\, ,\quad t>0,\; \; \Delta_hf(\cdot)= f(\cdot +h)-f(\cdot),\; \Delta_h^r=\Delta_h \Delta^{r-1}_h . Trivially one has $\, \omega_r(t,f)_p \lesssim \omega_k(t,f)_p\, ,\; k<r.$ Its converse is known as Marchaud inequality. M.F. Timan 1958 proved a sharpening of the converse, nowadays called {\it sharp Marchaud inequality}, which in the present context takes the \omega_k(t,f)_p \lesssim t^k \left( \int_{t}^{\infty} [s^{-k} \omega_r(u,f)_p]^q \frac{du}{u} \right)^{1/q},\qquad t>0,\quad k<r. where $\, q:=\min (p,2),\, 1<p<\infty.$ Here we will show that the sharp Marchaud inequality as well as further sharp inequalities for moduli of smoothness like Ulyanov and Kolyada type ones are equivalent to (known) embeddings between Besov and potential spaces.\\ To this end one has to make use of moduli of smoothness of fractional order which can be characterized by Peetre's (modified) $\, K$-functional, living on $\, L^p$ and associated Riesz potential spaces. Limit cases of the Holmstedt formula (connecting different $\, K$-functionals) show that the embeddings imply the desired inequalities. Conversely, the embeddings result from the inequalities for moduli of smoothness by limit procedures. 12/03/2012 - 14:00 Harmonic analysis on lattices and points count in positive characteristic Prof. Mikhail Zaidenberg, Institut Fourier, Grenoble, France Prof. Mikhail Zaidenberg, Institut Fourier, Grenoble, France Harmonic analysis on lattices and points count in positive characteristic This is a survey talk on special cellular automata related to the game `Lights out'. This game, commercialized by `Tiger Electronics', became a source of inspiration for the work of Sutner, Goldwasser-Klostermayer-Ward, Barua-Sarkar, Hunziker-Machiavello-Park e.a.
Efficient Bayesian compressed sensing-based channel estimation techniques for massive MIMO-OFDM systems
Hayder AL-Salihi and Mohammad Reza Nakhai
EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number: 38 (2017). Published: 23 February 2017.
Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited by the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sparsity to achieve optimum performance, and conventional CS techniques also show poor recovery performance at low signal-to-noise ratio (SNR). To overcome these shortcomings, in this paper, an efficient channel estimation approach is proposed for massive MIMO systems using Bayesian compressed sensing (BCS), based on prior statistical knowledge of the channel sparsity. Furthermore, by utilizing the common sparsity feature inherent in the massive MIMO system channel, we extend the proposed Bayesian algorithm to a multi-task (MT) version, so that the developed MT-BCS obtains better performance than the single-task version. Several computer simulation experiments confirm that the proposed methods reconstruct the original channel coefficients more accurately than the conventional channel estimator.
Recent research has identified the major targets for the next generation of mobile communications, the so-called fifth generation, as achieving 1000 times the system capacity, 10 times the spectral efficiency, energy efficiency and data rate, and 25 times the average cell throughput [1]. From a high-level perspective, massive multiple input multiple output (MIMO) is a promising technology for reaching these fifth generation targets. Massive MIMO can be defined as a system using a large number of antennas at the base station; accordingly, significant beamforming gains can be achieved and the system can serve a large number of users [2]. Compared to conventional MIMO systems, massive MIMO has several advantages. First, as the number of antennas at the base station grows large, the simplest coherent combiner and linear precoder turn out to be optimal. Second, by exploiting channel reciprocity, additional antennas increase the network capacity significantly without the need for additional feedback overhead. Third, the power reduction enabled in both the uplink and the downlink creates the potential for shrinking cell sizes [3]. The major limiting factor in massive MIMO is the availability of accurate, instantaneous channel state information (CSI) at the base station. The CSI is typically acquired by transmitting predefined pilot signals and estimating the channel coefficients from the received signals by applying an appropriate estimation algorithm [1–3].
Channel estimation accuracy depends on having perfectly orthogonal pilots allocated to the users; however, to achieve high spectral efficiency, the same carrier frequency should be reused in neighbouring cells following a specific reuse pattern. This creates spatially correlated inter-cell interference, known as pilot contamination, which reduces the estimation performance and spectral efficiency [1–3]. The pilot contamination problem was analyzed in [4], where it was shown that the precoded downlink signal of the base station in the serving cell contaminates the received signal of users roaming in other cells. The authors of [5] analyzed the pilot contamination problem in multi-cell massive MIMO systems relying on a large number of antennas at the base station, and demonstrated that the pilot contamination problem persists in large-scale MIMO [6]. However, pilot contamination can be reduced by reducing the number of pilots. A multi-user scenario therefore needs to use fewer pilots without degrading the quality of the channel impulse response (CIR) estimate. Hence, the development of efficient channel estimation techniques for massive MIMO that are computationally less complex and require fewer pilots is a challenge that should be thoroughly addressed [7]. Recently, compressed sensing (CS) techniques have received attention since they can recover unknown signals from only a small number of measurements, using significantly fewer samples than the conventional Nyquist rate requires. The signal recovery schemes developed for CS exploit the sparse nature of signals, that is, the property that only a small number of components in a signal vector are non-zero. CS allows accurate system parameter estimation with fewer pilots, thereby addressing the pilot contamination problem and improving the bandwidth efficiency [8, 9]. However, classical CS algorithms require prior knowledge of channel sparsity, which is usually unknown in practical scenarios. In addition, to apply CS algorithms, the sampling matrix must satisfy the restricted isometry property (RIP) to guarantee reliable estimators. Such a condition cannot be easily verified because doing so is computationally demanding [10, 11]. To overcome these shortcomings of CS-based channel estimation in massive MIMO systems, in this paper, we propose an improved channel estimation scheme based on the theory of Bayesian CS (BCS), which introduces relevance vector machines (RVM) and statistical learning information (SLI) into standard CS, whereby probabilistic a priori information regarding the channel sparsity can be exploited for more reliable channel recovery to mitigate the pilot contamination problem. Also, the sampling matrix condition is efficiently circumvented by the probabilistic formulation [12–14]. Compared with the classical schemes, our simulation results indicate that the proposed channel estimation methods provide improved estimation accuracy and can address the pilot contamination problem. Furthermore, by exploiting the common statistical sparsity inherent in different multipath signals, we extend the BCS algorithm to a multi-task version for simultaneously reconstructing multiple signals, thus leading to MT-BCS [15, 16]. The main contributions of this paper are summarised as follows: The BCS-based channel estimation algorithm has been proposed for massive MIMO to address the pilot contamination problem.
We have also proposed to enhance the performance of the BCS-based estimator through the principle of thresholding to select the most significant taps to improve the channel estimation accuracy. In addition, we have exploited the common statistical sparsity distribution to enhance the estimation accuracy performance through the proposed MT-BCS-based estimator. To provide the benchmark for the minimum performance error of the BSC and MT-BCS, the Cramer Rao bound (CRB) has been drawn for BCS and it has been derived and drawn for MT-BCS. The remainder of this paper is organized as follows. The multi-cell massive MIMO system model is presented in Section 2. The BSC-based and the MT-BSC based channel estimation details are reviewed in Sections 3 and 4, respectively. In section 5, we provide the Cramer-Rao bound analysis. Section 6 presents the simulation results. Finally, the final conclusions are drawn in Section 7. The following notation is adopted throughout the paper: $\mathbb {C}$ denotes the complex number field. For ${A} \in \mathbb {C}$, we have A=A R +j A I , where $j=\sqrt {-1}$, while A R and A I are the real and imaginary parts of A, respectively. For any matrix A, A i,j denotes the (i,j)th element. The transpose, inverse and Hermitian transpose operators are denoted by (.)T, (.)−1, and (.)H, respectively. Upper bold font are used to denote matrices while lower light font are used to denote vectors, lower and upper case represents the time domain and frequency domain, respectively. The I denotes an identity matrix, $diag\{\underline {\mathbf {X}}\}$ denotes the diagonal matrix with the diagonal entries equal to the elements of X and $\hat {X}$ represents the estimate of $\hat {X}$. The Frobenius and spectral norms of a matrix x are denoted by ∥x∥ F and ∥x∥2 respectively. E{.} has been employed to denote expectation with regard to all random variables within the brackets. A Gaussian stochastic variable o is the denoted by o∼N(r,q), where r is the mean and q is the variance. Also, a random vector x having the prober complex Gaussian distribution of mean μ and covariance Σ is indicated by x∼C N(x;μ,Σ), where, $ N(\mathbf {x};\boldsymbol {\mu },\boldsymbol {\Sigma })=\frac {1}{det(\pi \boldsymbol {\Sigma })} e^{-(\mathbf {x}-\boldsymbol {\mu })\boldsymbol {\Sigma }^{-1}(\mathbf {x}-\boldsymbol {\mu })}$, for simplicity we refer to C N(x;μ,Σ) as x∼C N(μ,Σ). Massive MIMO system model We consider a time division duplexing (TDD) multi-cell massive MIMO system with C cells as shown in Fig. 1. Each cell comprises of M antennas at the BS and N single antenna users. To improve the spectral efficiency, orthogonal frequency division multiplexing (OFDM) is adopted [17, 18]. Illustration of the system model of a multi-cell multi-user massive MIMO At the beginning of the transmission, all mobile stations in all cells synchronously transmit OFDM pilot symbols to their serving base stations. Let the OFDM pilot symbol of user n in the c-th cell be denoted by $\mathbf {x}^{n}_{c}=[{X}^{n}_{c}[1]\ {X}^{n}_{c}[2] \cdots {X}^{n}_{c}[K]]^{T}$, where K is the number of subcarriers. The OFDM transmission partition the multipath channel between the user and each antenna of the BS into K parallel independent additive white Gaussian noise (AWGN) sub-channels in the frequency domain. Each sub-channel is associated with a subcarrier. Let ${H}^{n}_{c^{*},c,i}[k]$ denote the k-th sub-channel coefficient between the n-th user in the c-th cell and the i-th antenna of the BS of cell c ∗ in the uplink. 
The received signal $\phantom {\dot {i}\!}{Y}_{c^{*},i}$ by the i-th antenna element of the cell c ∗ at the k-th subcarrier can be expressed as $$\begin{array}{*{20}l} {Y}_{c^{*},i}[k]&= \sum_{n=1}^{N}{H}^{n}_{c^{*},c^{*},i}[k] {X}^{n}_{c^{*}}[k] \\ &+\sum_{c=1, c\neq{c^{*}}}^{C}\sum_{n=1}^{N}{H}^{n}_{c^{*},c,i}[k] {X}^{n}_{c}[k]+V_{c^{*},i}[k], \end{array} $$ for all 1≤i≤M and 1≤c≤C, where ${V}_{c^{*},i}[k]\phantom {\dot {i}\!}$ is the AWGN at the i-th antenna of the BS in cell c ∗ at the k-th subcarrier. Letting $\phantom {\dot {i}\!}\mathbf {y}_{c^{*},i}=[Y_{c*,i}[1]\cdots Y_{c*,i}[K]]^{T}$, we can write (1) for all subcarriers at the i-th antenna of the BS in cell c ∗ in the compact form as $$\begin{array}{*{20}l} \mathbf{y}_{c^{*},i}&= \sum_{n=1}^{N}\mathbf{X}^{n}_{c^{*}} \mathbf{h}^{n}_{c^{*},c^{*},i}+ \sum_{c=1, c\neq{c^{*}}}^{C} \sum_{n=1}^{N}\mathbf{X}^{n}_{c} \mathbf{h}^{n}_{c^{*},c,i} \\ &+\mathbf{v}_{c^{*},i}, \end{array} $$ where $\mathbf {X}^{n}_{c^{*}}=\text {diag}\{\mathbf {x}^{n}_{c^{*}}\}$, $\mathbf {h}^{n}_{c^{*},c,i}=[{H}^{n}_{c^{*},c,i}[1]\cdots {H}^{n}_{c^{*},c,i}[K]]^{T}$ and $\mathbf {v}_{c^{*},i}=[{V}_{c^{*},i}[1]\cdots {V}_{c^{*},i}[K]]^{T} \sim CN(0,{\sigma }_{v}^{2})$. Let $\mathbf {g}^{n}_{c^{*},c,i}=[g^{n}_{c^{*},c,i}[1] \cdots g^{n}_{c^{*},c,i}[\ell ] \cdots g^{n}_{c^{*},c,i}[L]]^{T}$ collect the samples of the sampled multipath CIR between the n-th user of the c-th cell and the i-th antenna of the BS in cell c ∗, where L is the number of the channel taps and $g^{n}_{c^{*},c,i}[\ell ]$ corresponds to the ℓ-th channel tap. The K frequency domain channel coefficients, i.e., $\mathbf {h}^{n}_{c^{*},c,i}$, can be calculated as the K-point DFT of the CIR samples, i.e., $\mathbf {g}^{n}_{c^{*},c,i} \in \mathbb {C}^{L \times 1}$, e.g., [18]. Hence, $$ \mathbf{h}^{n}_{c^{*},c,i}= \mathbf{F} \mathbf{g}^{\prime n}_{c^{*},c,i}, $$ where $\mathbf {F} \in \mathbb {C}^{K \times K}$ represents the discrete Fourier transform (DFT) matrix, whose element in row s and column r is given by $[\frac {1}{\sqrt {K}}e^{{-j2 \pi *(K-r)(K-s)}/{K}}]$, 1≤r≤K and 1≤s≤K and $\mathbf {g}^{\prime n}_{c^{*},c,i}\in \mathbb {C}^{K \times 1}$ is $\mathbf {g}^{n}_{c^{*},c,i}\in \mathbb {C}^{L \times 1}$ augmented with K−L zeros. Using (3) in (2), we get $$\begin{array}{*{20}l} \mathbf{y}_{c^{*},i}&= \sum_{n=1}^{N}\mathbf{X}^{n}_{c^{*}}\mathbf{F} \mathbf{g}^{\prime n}_{c^{*},c,i} +\sum_{c=1, c\neq{c^{*}}}^{C}\sum_{n=1}^{N}\mathbf{X}^{n}_{c} \mathbf{F} \mathbf{g}^{\prime n}_{c^{*},c,i} \\&+\mathbf{v}_{c^{*},i}. \end{array} $$ The channel coefficient is modelled as $g^{n}_{c^{*},c,i}[\ell ]=\sqrt {{\phi }_{c^{*},c,i}}[\ell ] {\psi }_{c^{*},c,i}[\ell ]$ for 1≤ℓ≤L, where ${\phi }_{c^{*},c,i}\phantom {\dot {i}\!}$ model the path-loss and shadowing (large-scale fading), while the term $\phantom {\dot {i}\!}{\psi }_{c^{*},c,i}$ is assumed to be independent identical distribution (i.i.d) of unknown random variables with C N(0,1) (small-scale fading) [3]. 
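For concreteness, the frequency-domain channel model of (1) to (4) can be reproduced numerically as in the short sketch below. It assumes Python with NumPy; the values of K, L, the number of non-zero taps and the noise variance are illustrative assumptions rather than the paper's settings, and the inter-cell interference term is lumped into the additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)

K, L, S = 256, 64, 6        # subcarriers, CIR length, non-zero taps (illustrative values)
phi = 1.0                   # large-scale fading (path-loss and shadowing) coefficient

# Sparse CIR: S non-zero taps, each sqrt(phi) * psi with psi ~ CN(0, 1)
g = np.zeros(L, dtype=complex)
support = rng.choice(L, size=S, replace=False)
g[support] = np.sqrt(phi) * (rng.standard_normal(S) + 1j * rng.standard_normal(S)) / np.sqrt(2)

# Zero-pad to length K and apply the K-point DFT: h = F g'
g_aug = np.concatenate([g, np.zeros(K - L, dtype=complex)])
h = np.fft.fft(g_aug, norm="ortho")          # frequency-domain sub-channel coefficients

# Received pilot vector at one BS antenna (inter-cell interference lumped into the noise term)
x = np.exp(1j * 2 * np.pi * rng.random(K))   # unit-modulus pilot symbols
sigma2 = 0.01
v = np.sqrt(sigma2 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = x * h + v                                # y = diag(x) F g' + v, cf. (2) and (4)
```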
The received signal of (4) can be re-written as $$\begin{array}{*{20}l} \mathbf{y}_{c^{*},i}&= \sum_{n=1}^{N}\mathbf{X}^{n}_{c^{*}}\mathbf{F} \mathbf{g}^{\prime n}_{c^{*},c,i} +\mathbf{z}_{c^{*},i}, \end{array} $$ where the term $\mathbf {z}_{c^{*},i}= \sum _{c=1, c\neq {c^{*}}}^{C}\sum _{n=1}^{N}\mathbf {X}^{n}_{c} \mathbf {F} \mathbf {g}^{\prime n}_{c^{*},c,i}+\mathbf {v}_{c^{*},i}$ in (5) represents the net sum of inter-cell interference plus the receiver noise, the variance interference ${{\sigma }_{I}^{2}}$ of the inter-cell interference term caused during pilot transmission can be expressed as $$\begin{array}{*{20}l} {\sigma}_{I}^{2}&= E \left\{ \left(\sum_{c=1, c\neq{c^{*}}}^{C}\sum_{n=1}^{N}\mathbf{X}^{n}_{c} \mathbf{F} \mathbf{g}^{\prime n}_{c^{*},c,i}\right) \right.\\ & \left.\quad\times\left(\sum_{c=1, c\neq{c^{*}}}^{C}\sum_{n=1}^{N}\mathbf{X}^{n}_{c} \mathbf{F} \mathbf{g}^{\prime n}_{c^{*},c,i}\right)^{H} \right\}. \end{array} $$ We define the measurement matrix $\mathbf {A}^{n}_{c^{*}}= \mathbf {X}^{n}_{c^{*}}\mathbf {F}$, then (5) can be rewritten as $$ {\mathbf{y}}_{c^{*},i}= \sum_{n=1}^{N}{\mathbf{A}}^{n}_{c^{*}} {\mathbf{g}}^{\prime n}_{c^{*},c,i}+\mathbf{z}_{c^{*},i}. $$ Based on the physical properties of outdoor electromagnetic propagation, the CIR in wireless communications usually contain a few significant channel taps as can be shown in Fig. 2, i.e., the CIR are sparse; hence, the number of non-zero taps of the channel is much smaller than the channel length, then the CS techniques can be applied for sparse channel estimation. This sparse property can be exploited to reduce the necessary channel parameters to be estimated. In this case, we can address the pilot contamination problem by using fewer pilots than the unknown channel coefficients [7, 19, 20]. Illustration of the rich scatterers wireless channel and the resulting channel impulse response is sparse BCS-based channel estimation In common literature, channel estimation methods are classified into parametric and Bayesian approaches. A standard parametric approach is the best linear unbiased estimator, which is often referred to as least squares channel estimation. In contrast to parametric methods, the Bayesian approach treats the desired parameters as random variable with a-priori known statistics. Clearly, the a priori probability density function (PDF) of the channel is assumed to be perfectly known at the receiver [21, 22]. Based on the Bayesian channel estimation philosophy, the estimation of unknown parameters is the expectation of the posterior probabilistic distribution that is proportional to the prior probability and the likelihood of the unknown parameters. In this section, BCS-based channel estimation is presented in the context of massive MIMO channel estimation. Following the general procedure of BCS in [23] and [24], the full posterior distribution over unknown parameters of interest for the problem at hand can be given as $${} \begin{aligned} P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i},\boldsymbol{\beta},{\sigma}^{2}|\mathbf{y}_{c^{*},i}\right)\,=\,\frac{P\left(\mathbf{y}_{c^{*},i}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i},\boldsymbol{\beta},{\sigma}^{2}\right)P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i},\boldsymbol{\beta},{\sigma}^{2}\right)}{P(\mathbf{y}_{c^{*},i}) }, \end{aligned} $$ where β represents the hyperparameters that control the sparsity of the channel while σ 2 is the net sum of the noise variance and interference variance. 
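Under this sparsity assumption, the observation model of (7) can be restricted to a small set of pilot subcarriers, so that the number of measurements is smaller than the number of unknown channel taps. The sketch below (Python/NumPy) builds such a measurement matrix and the corresponding compressed observation; the random pilot pattern and the sizes are our own illustrative assumptions and not the paper's pilot design.

```python
import numpy as np

rng = np.random.default_rng(1)

K, L, P = 256, 64, 32                        # subcarriers, CIR taps, pilot subcarriers (P < L)
pilot_idx = np.sort(rng.choice(K, size=P, replace=False))   # illustrative random pilot placement

# K x K unitary DFT matrix F, restricted to the pilot rows and to the first L columns
F = np.fft.fft(np.eye(K), norm="ortho")
x_p = np.exp(1j * 2 * np.pi * rng.random(P))                # pilot symbols on the selected subcarriers
A = x_p[:, None] * F[np.ix_(pilot_idx, np.arange(L))]       # A = X F, cf. (7)

# Sparse CIR and the P-dimensional compressed observation of the L unknown taps
g = np.zeros(L, dtype=complex)
g[rng.choice(L, size=6, replace=False)] = (rng.standard_normal(6) + 1j * rng.standard_normal(6)) / np.sqrt(2)
sigma2 = 0.01
z = np.sqrt(sigma2 / 2) * (rng.standard_normal(P) + 1j * rng.standard_normal(P))
y_p = A @ g + z                              # fewer measurements (P) than unknowns (L)
```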
However, the probability of the observation vector, $\phantom {\dot {i}\!}P(\mathbf {y}_{c^{*},i})$, is defined by the following equation $$\begin{array}{*{20}l} P(\mathbf{y}_{c^{*},i})&=\int\int\int P(\mathbf{y}_{c^{*},i}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i},{\sigma}^{2},\boldsymbol{\beta}) \\*& P(\mathbf{g}^{\prime n}_{c^{*},c^{*},i},\boldsymbol{\beta},{\sigma}^{2}) d\mathbf{g}^{\prime} \ d\boldsymbol{\beta} \ d{\sigma}^{2}, \end{array} $$ cannot be computed analytically. So, the posterior distribution can be decomposed as $$\begin{array}{*{20}l} P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i},\boldsymbol{\beta},{\sigma}^{2}|\mathbf{y}_{c^{*},i}\right)& \equiv P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\mathbf{y}_{c^{*},i},\boldsymbol{\beta},{\sigma}^{2}\right) \\& P\left(\boldsymbol{\beta},{\sigma}^{2}|\mathbf{y}_{c^{*},i}\right). \end{array} $$ The first term of (10), $P\left (\mathbf {g}^{\prime n}_{c^{*},c^{*},i}|\mathbf {y}_{c^{*},i},\boldsymbol {\beta },\mathbf {\sigma }^{2}\right)$, the posterior distribution over the channel coefficient can be expressed based on Bayes' rule as $$ P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\mathbf{y}_{c^{*},i},\boldsymbol{\beta},{\sigma}^{2}\right)=\frac{P\left(\mathbf{y}_{c^{*},i}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i},{\sigma}^{2} \right)P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\boldsymbol{\beta}\right)}{P\left(\mathbf{y}_{c^{*},i}|\boldsymbol{\beta},{\sigma}^{2} \right) }. $$ The posterior distribution given above is Gaussian distribution with mean $\boldsymbol {\mu }^{n}_{c^{*},c^{*},i}$ and the variance $\boldsymbol {\Sigma }^{n}_{c^{*},c^{*},i}$ are given by $$ \boldsymbol{\mu}^{n}_{c^{*},c^{*},i}= {\sigma}^{-2} \boldsymbol{\Sigma} \mathbf{A}^{n}_{c^{*}} \mathbf{y}_{c^{*},i}, $$ $$ \boldsymbol{\Sigma}^{n}_{c^{*},c^{*},i}=\left(\boldsymbol{\zeta}+{\sigma}^{-2} \left(\mathbf{A}^{n}_{c^{*}}\right)^{H} \mathbf{A}^{n}_{c^{*}}\right)^{-1}, $$ where ζ=d i a g{β 1,β 2,…,β K }. The estimated channel based on Bayesian estimation approaches to minimize the mean square error (MSE) is the expectation of $P\left (\mathbf {g}^{\prime n}_{c^{*},c^{*},i}|\mathbf {y}_{c^{*},i},\boldsymbol {\beta },{\sigma }^{2}\right)$, so the estimated channel can be expressed as $$ \hat{\mathbf{g}}^{\prime n}_{c^{*},c^{*},i}=E\left(P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\mathbf{y}_{c^{*},i},\boldsymbol{\beta},{\sigma}^{2}\right)\right)=\boldsymbol{\mu}^{n}_{c^{*},c^{*},i}. $$ Now, to obtain the estimated channel $\hat {\mathbf {g}}^{\prime n}_{c^{*},c^{*},i}$, we need to find the heyparmarpater σ 2 and β that can be obtained from the second term on the right-hand side of (10) by applying a type −I I maximum likelihood procedure by operating a RVM. Based on Bayes' theorem, the posterior distribution $P\left (\boldsymbol {\beta },{\sigma }^{2}|\mathbf {y}_{c^{*},i}\right)$ is proportional $P\left (\mathbf {y}_{c^{*},i}|\boldsymbol {\beta },{\sigma }^{2}\right)$ [23], Then, the type −I I maximum likelihood is applied to the log marginal likelihood as follows $$ P(\mathbf{y}_{c^{*},i}|\boldsymbol{\beta},{\sigma}^{2})= \int\limits_{-\infty}^{\infty} {P\left(\mathbf{y}_{c^{*},i}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i},{\sigma}^{2}\right)P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\boldsymbol{\beta}\right)} d\mathbf{g}^{\prime}. 
$$ Based on the assumption of the RVM approach in [23], the term $P(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\boldsymbol{\beta})$ follows a zero-mean Gaussian distribution and can be expressed as $$\begin{array}{*{20}l} P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}|\boldsymbol{\beta}\right) &= (2\pi)^{\frac{-K}{2}} \prod_{k=1}^{K} \beta_{k}^{\frac{1}{2}} \\ & \quad\times \exp\left[\frac{-1}{2} \left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i}\right)^{H} \boldsymbol{\zeta}\, \mathbf{g}^{\prime n}_{c^{*},c^{*},i}\right], \end{array} $$ while the Gaussian likelihood function of $\mathbf{y}_{c^{*},i}$ can be written as $$ P\left(\mathbf{y}_{c^{*},i}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i},{\sigma}^{2} \right)= \left(2\pi{\sigma}^{2}\right)^{\frac{-K}{2}} \exp\left(\frac{-1}{2{\sigma}^{2}} ||\mathbf{y}_{c^{*},i}-\mathbf{A}^{n}_{c^{*}}\mathbf{g}^{\prime n}_{c^{*},c^{*},i}||_{2}^{2}\right). $$ By substituting (16) and (17) into (15) and carrying out the Gaussian integral, the log marginal likelihood can be expressed as $$\begin{array}{*{20}l} \log P(\mathbf{y}_{c^{*},i}|\boldsymbol{\beta},{\sigma}^{2})= -\frac{1}{2}\left[K\log(2\pi)+\log\left|\mathbf{C}\right| +\mathbf{y}_{c^{*},i}^{H}\mathbf{C}^{-1}\mathbf{y}_{c^{*},i}\right], \end{array} $$ where $\mathbf{C}={\sigma}^{2}\mathbf{I}+\mathbf{A}^{n}_{c^{*}}\boldsymbol{\zeta}^{-1}\left(\mathbf{A}^{n}_{c^{*}}\right)^{H}$. Each $\beta_{k}$ can be obtained by differentiating the log marginal likelihood with respect to $\beta_{k}$ and equating the derivative to zero, which gives $$ (\beta_{k})^{new}=\frac{1-\beta_{k} \left(\boldsymbol{\Sigma}^{n}_{c^{*},c^{*},i}\right)_{kk}}{\left|\left(\boldsymbol{\mu}^{n}_{c^{*},c^{*},i}\right)_{k}\right|^{2}}, $$ while ${\sigma}^{2}$ is obtained by differentiating the log marginal likelihood with respect to ${\sigma}^{2}$ and setting the derivative to zero, which yields $$ ({\sigma}^{2})^{new}=\frac{||\mathbf{y}_{c^{*},i}-\mathbf{A}^{n}_{c^{*}} \boldsymbol{\mu}^{n}_{c^{*},c^{*},i}||_{2}^{2}} {K-\sum_{k=1}^{K}\left(1-\beta_{k}\left(\boldsymbol{\Sigma}^{n}_{c^{*},c^{*},i}\right)_{kk}\right)}. $$ The $\beta_{k}$ and ${\sigma}^{2}$ which maximize the log marginal likelihood are found iteratively: $\boldsymbol{\beta}$ and ${\sigma}^{2}$ are set to initial values, $\boldsymbol{\mu}^{n}_{c^{*},c^{*},i}$ and $\boldsymbol{\Sigma}^{n}_{c^{*},c^{*},i}$ are computed from (12) and (13), new estimates of $\beta_{k}$ and ${\sigma}^{2}$ are obtained from (19) and (20), and the procedure is repeated until a convergence criterion is met. Further details of the BCS algorithm can be found in [23, 24]. The procedure for implementation of the proposed technique is summarized in Algorithm 1. The performance of the conventional BCS-based estimator can be further improved by thresholding, which keeps only the most significant taps: the channel taps whose energy exceeds a threshold value ϱ are retained and the remaining taps are set to zero, where ϱ is determined by the energy of the channel impulse response.
Multi-task BCS based channel estimation
With a high probability of user movements, the massive MIMO system channel may vary.
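Before the multi-task formulation is developed, the single-task iteration of the previous section, that is, Eqs. (12) and (13) followed by the updates (19) and (20) and the final tap thresholding, can be summarized in the short NumPy sketch below. The initialisation, convergence test and threshold fraction are illustrative assumptions and the sketch is not the authors' Algorithm 1; the noise update follows the standard RVM form of (20).

```python
import numpy as np

def bcs_rvm_estimate(y, A, max_iter=200, tol=1e-6, thresh_frac=0.01):
    """Minimal sketch of the single-task BCS/RVM channel estimator with tap thresholding."""
    P, L = A.shape
    beta = np.ones(L)                       # sparsity hyperparameters, one per tap
    sigma2 = 0.1 * np.mean(np.abs(y) ** 2)  # noise-plus-interference variance (illustrative init)
    mu = np.zeros(L, dtype=complex)

    for _ in range(max_iter):
        # Posterior covariance and mean, Eqs. (13) and (12)
        Sigma = np.linalg.inv(np.diag(beta) + (A.conj().T @ A) / sigma2)
        mu_new = (Sigma @ (A.conj().T @ y)) / sigma2

        # Hyperparameter re-estimation, Eqs. (19) and (20)
        gamma = 1.0 - beta * np.real(np.diag(Sigma))
        beta = gamma / (np.abs(mu_new) ** 2 + 1e-12)
        resid = y - A @ mu_new
        sigma2 = np.real(resid.conj() @ resid) / max(P - gamma.sum(), 1e-12)

        if np.linalg.norm(mu_new - mu) <= tol * (np.linalg.norm(mu_new) + 1e-12):
            mu = mu_new
            break
        mu = mu_new

    # Thresholding: keep only taps whose energy exceeds a fraction of the strongest tap
    energy = np.abs(mu) ** 2
    rho = thresh_frac * energy.max() if energy.max() > 0 else 0.0
    return np.where(energy >= rho, mu, 0.0)
```

In a driver script, the estimate would be obtained as g_hat = bcs_rvm_estimate(y_p, A) with y_p and A generated as in the earlier sketches.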
Consequently, the channels at different time instants/locations are different but share the same common statistical property. As a result, to estimate the current channel, we can exploit the previous compressive vectors in addition to the current compressive vector [15]. Given the system model in II, the received signals of (7) can have the following formulation $$ \mathbf{y}_{c^{*},i,j}= \sum_{n=1}^{N}\mathbf{A}^{n}_{c^{*},j} \mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}+ \mathbf{z}_{c^{*},i,j}, $$ for j=1,2,…J where J is the number of the task, $\mathbf {A}^{n}_{c^{*},j}, \mathbf {g}^{\prime n}_{c^{*},c^{*},i,j}\phantom {\dot {i}\!}$ and $\phantom {\dot {i}\!}\mathbf {z}_{c^{*},i,j}$ represents the jth measurement matrices,channel vector and the noise vector, respectively [15]. The main target is to estimate the channel $\mathbf {g}^{\prime n}_{c^{*},c^{*},i,j}$ which can be computed based on Bayesian channel estimation philosophy as the mean of the channel posterior distribution that can be represented as $$ \hat{\mathbf{g}}^{\prime n}_{c^{*},c^{*},i,j}=E(P(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\mathbf{y}_{c^{*},i,j},\boldsymbol{\Xi}_{j},{\xi}_{0})), $$ where ξ 0 represents the inverse of the net sum of the noise variance and interference variance, while Ξ j represent the hyperparameters that control the sparsity of the channel. Based on Bayes' rule the posterior distribution can be given as $$\begin{array}{*{20}l} P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\mathbf{y}_{c^{*},i,j},\boldsymbol{\Xi}_{j},{\xi}_{0}\right) \end{array} $$ $$\begin{array}{*{20}l} =\frac{P\left(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}\right) P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j}\right)}{{\int} P\left(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}\right) P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},\boldsymbol{\Xi}_{j}\right)d\mathbf{g}^{\prime} } \\ \sim N\left(\boldsymbol{\mu}^{n}_{c^{*},i,j},\boldsymbol{\Sigma}^{n}_{c^{*},i,j}\right), \end{array} $$ the mean and covariance can be given by $$ \boldsymbol{\mu}^{n}_{c^{*},i,j}= {\xi}_{0} \boldsymbol{\Sigma}^{n}_{c^{*},i,j} \mathbf{A}^{n}_{c^{*},j} \mathbf{y}_{c^{*},i,j}, $$ $$ \boldsymbol{\Sigma}^{n}_{c^{*},i,j}=\left(\boldsymbol{\psi}+\boldsymbol{\Xi}_{j} (\mathbf{A}^{n}_{c^{*},j})^{H} \mathbf{A}^{n}_{c^{*},j}\right)^{-1}, $$ where ψ=d i a g(ψ 0,ψ 1,ψ 2,…,ψ K ). The likelihood function for the parameter $\mathbf {g}^{\prime n}_{c^{*},c^{*},i,j}$ and ξ 0 based on the received signal $\mathbf {y}_{c^{*},i,j}\phantom {\dot {i}\!}$ and can be expressed as $$\begin{array}{*{20}l}{} P\left(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}\right)&= \left(\frac{2\pi}{{\xi}_{0}}\right)^{\frac{-N}{2}}\\ &exp\left(\frac{-{\xi}_{0}}{2}||\mathbf{y}_{c^{*},i,j}-\mathbf{A}^{n}_{c^{*},j}\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}||_{2}^{2}\right). 
\end{array} $$ The channel coefficients $\mathbf {g}^{\prime n}_{c^{*},c^{*},i,j}$ are assumed to be drawn from a product of zero-mean Gaussian distributions that are shared by all tasks as follow $$\begin{array}{*{20}l} P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j}\right)&= \prod_{i=1}^{N} {\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|0,\boldsymbol{\Xi}_{j}^{-1}\right)} \\& =(2\pi)^{\frac{-N}{2}} \prod_{i=1}^{N} \boldsymbol{\Xi}_{j}^{\frac{1}{2}} \\& \quad\times exp\left[{\frac{-1}{2} \left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}\right)^{H} \boldsymbol{\Xi}_{j}\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}}\right]. \end{array} $$ To obtain the estimated channel, we need to estimate Ξ j and ξ 0 by applying the same procedure in Section 3 to the posterior distribution $P\left (\mathbf {y}_{c^{*},i,j}|,\boldsymbol {\Xi }_{j},{\xi }_{0}\right)$ that can be inference as [16] $$\begin{array}{*{20}l} P\left(\mathbf{y}_{c^{*},i,j}|\boldsymbol{\Xi}_{j},{\xi}_{0}\right) & \equiv P\left(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}\right) \\*& P\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j}\right). \end{array} $$ Now, by maximizing the log marginal likelihood and then differentiating with respect to Ξ j and ξ 0 and setting to zero yields $$ (\boldsymbol{\Xi}_{j})^{new}=\frac{J-\boldsymbol{\Xi}_{j} \sum_{j=1}^{J} \boldsymbol{\Sigma}^{n}_{c^{*},c^{*},i,j}}{\sum_{j=1}^{J}\left(\boldsymbol{\mu}^{n}_{c^{*},c^{*},i,j}\right)^{2}}, $$ $$ ({\xi}_{0})^{new}=\frac{\sum_{j=1}^{J}\left(K-J+\sum_{i=1}^{J} \boldsymbol{\Sigma}^{n}_{c^{*},c^{*},i,j} \boldsymbol{\Xi}_{j}\right)}{\sum_{j=1}^{J}||\mathbf{y}_{c^{*},i,j}-\mathbf{A}^{n}_{c^{*},j} \mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}||_{2}^{2}}. $$ Further information on MT-BCS can be found in [16]. CRB for BCS-based estimator In this section, we analyse the CRB for the proposed BCS and MT-BCS based channel estimation techniques to provide a benchmark for the minimum estimation error that can be achieved by the proposed algorithm. The CRB on the covariance of any estimator $\hat {\boldsymbol \theta }$ can be given as $$\begin{array}{*{20}l} E\left\{(\hat{\boldsymbol {\theta}}-\boldsymbol \theta)(\hat{\boldsymbol \theta}-\boldsymbol \theta)^{H}\right\} \geq J^{-1}(\boldsymbol \theta), \end{array} $$ where J(θ) is the Fisher information matrix (FIM) corresponding to the observation f, and can be given as $$\begin{array}{*{20}l} J(\boldsymbol \theta)= E\left(\frac{\partial }{\partial {\boldsymbol \theta}} log l(\boldsymbol \theta,f)\right)\left(\frac{\partial }{\partial {\boldsymbol \theta}} log l(\boldsymbol \theta,f)\right)^{T}, \end{array} $$ where l(θ,f) is the likelihood function corresponding to the observation f, parameterized by θ [25]. Therefore, given the system model in 2, the closed form expression of the Bayesian CRB (BCRB) for the proposed BCS can be given as $$\begin{array}{*{20}l} J(\mathbf{g}^{\prime n}_{c^{*},c^{*},i})\geq \left(\frac{1}{\boldsymbol{\beta}}+\frac{\mathbf{A}^{n}_{c^{*}}(\mathbf{A}^{n}_{c^{*}})^{H}}{{\sigma}^{2}}\right)^{-1}. \end{array} $$ Theorem 1 Given (28), the closed form expression of the BCRB for the proposed MT-BCS can be given as $$\begin{array}{*{20}l} J\left(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}\right)\geq \left(\frac{1}{\boldsymbol{\Xi}}_{j}+\frac{\mathbf{A}^{n}_{c^{*},j}\left(\mathbf{A}^{n}_{c^{*},j}\right)^{H}}{{{\xi}_{0}}}\right)^{-1}. \end{array} $$ See Appendix 1. 
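As a numerical reference, the bound of (34) and (35) can be evaluated as the trace of the inverse of the prior-plus-data information matrix. The short sketch below writes the data term as $(\mathbf{A}^{n}_{c^{*}})^{H}\mathbf{A}^{n}_{c^{*}}$ so that its dimensions match the posterior covariance in (13); the matrix sizes and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def bcrb_trace(A, beta, sigma2):
    """Trace of the Bayesian CRB of (34)/(35): inverse of the prior-plus-data information matrix."""
    J = np.diag(beta) + (A.conj().T @ A) / sigma2
    return np.real(np.trace(np.linalg.inv(J)))

# Illustrative evaluation: P pilot measurements of L channel taps
rng = np.random.default_rng(2)
P, L = 32, 64
A = (rng.standard_normal((P, L)) + 1j * rng.standard_normal((P, L))) / np.sqrt(2 * P)
beta = np.full(L, 10.0)          # large beta_k encodes a strong prior that tap k is (near) zero
print(bcrb_trace(A, beta, sigma2=0.01))
```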
□ To verify the accuracy of our analytical results, the simulation parameters can be summarized as follows: the number of antennas is 100, the number of users is 100, the number of the channel taps is 500, the number of subcarrier K is 4096 and the convergence δ is 10−6. The simulation results are obtained by averaging over 1000 realizations. To compare the accuracy of the channel estimation techniques, the normalized (MSE) is used for performance evaluation and is computed as $$ MSE= \frac{||\hat{\mathbf{g}}^{\prime n}_{c^{*},c,i,j}-\mathbf{g}^{\prime n}_{c^{*},c,i,j}||_{2}^{2}}{||\mathbf{g}^{\prime n}_{c^{*},c,i,j}||^{2}_{2}}. $$ Figure 3 shows the MSE performance comparison among a BCS-based channel estimation of three scenarios under small pilot contamination ($\phantom {\dot {i}\!}{\phi _{c^{*},c^{*},i}}=1$ and $\phantom {\dot {i}\!}{\phi _{c^{*},c,i}}=0.1$), strong pilot contamination (${\phi _{c^{*},c^{*},i}}=1\phantom {\dot {i}\!}$ and $\phantom {\dot {i}\!}{\phi _{c^{*},c,i}}=0.5$), very strong pilot contamination ($\phantom {\dot {i}\!}{\phi _{c^{*},c^{*},i}}=1$ and $\phantom {\dot {i}\!}{\phi _{c^{*},c,i}}=0.9$), regularized least square (RLS)-based estimator with no pilot contamination as a benchmark and the BCRB for BCS as a reference line. The results have shown significant improvement in estimation accuracy and addressing the pilot contamination problem for SNR values of −40 to 40 dB for the proposed technique compared with R-LS. This is a result of exploiting the prior statistical of channel sparsity. Furthermore, the results still show enhanced estimation performance for high SNR. MSE performance comparison between BSC, BCRB for ϕ c∗,c,i ={0.1,0.5,0.9} and R-LS versus SNR Figure 4 shows the (MSE) performance versus SNR with a different value of setting to the number of subcarrier K={100,200 and 300}, so the compression ratio (CR) (i.e., L/K) is to be C R={0.2,0.1and 0.06}, while the experiment is run under small pilot contamination ($\phantom {\dot {i}\!}{\phi _{c^{*},c^{*},i}}=1$ and $\phantom {\dot {i}\!}{\phi _{c^{*},c,i}}=0.1$). The results prove that the estimation accuracy is better performed by decreasing the values of the number of subcarriers, accordingly with increasing CR. MSE of BSC for K={100,200, and 300} and C R={0.2,0.1, and 0.06}, respectively Figure 5 demonstrates the MSE of the BSC-based channel estimation versus SNR for three scenarios of different settings to the number of antennas at the base station M={100,200, and 300}, the system under strong pilot contamination ($\phantom {\dot {i}\!}{\phi _{c^{*},c^{*},i}}=1$ and $\phantom {\dot {i}\!}{\phi _{c^{*},c,i}}=0.7$). The results show that the estimation accuracy of the proposed algorithm is enhanced by increasing the number of antennas. Thus, according to the law of large numbers, more coordinated BS antennas could provide more accurate support estimation. MSE of BSC for M={100,200, and 300} versus SNR Figure 6 shows the (MSE) performance versus SNR for BCS with different values for the number of pilots: 1000, 500, 100, 50, and 25, where the number of subcarrier K is 4096. The number of the CIR path is 500 while the experiments run under strong pilot contamination. For cases of the number of the pilots is greater than the number of channel taps (i.e., 1000 and 500), the BCS provides inefficient estimation accuracy, while for the other cases of the number of the pilot of (100, 50, and 25), which is less than 500, the estimation accuracy is enhanced significantly. 
In addition, there is no significant improvement for the cases of the number of the pilots 100, 50, and 25. In these cases, we can address pilot contamination by employing small values for the number of the pilot, i.e., 25. MSE performance comparison of BSC based estimator for different values of the number of the pilot 100, 50, and 10 versus SNR Figure 7 compares the (MSE) performance versus SNR among BCS, threshold-BSC, MT-BCS, LS, OMP and the Bilinear Approximate Message Passing (Bi-AMP) [26]. The number of subcarrier K is 1024 and the number of the CIR path is 100. Results show the proposed MT-BCS enjoys significant performance improvement over all the other estimators as a result of exploiting the statistical prior information on a large scale. However, this advantage is at the expense of a relatively high complexity of BCS and MT-BCS over other estimators as depicted in Table 1, which compares the computational complexity Bi-AMP [26], BCS [23], OMP [27], LS [28], and the MT-BCS [16]. Also, the results showed that the thresholding approach enhances the estimation accuracy of the conventional BCS, as the CIR contains so many taps with no significant energy. By setting the threshold and neglecting these taps, a huge part of the noise and interference from pilot contamination will be eliminated. MSE performance comparison between BCS, thresholded BCS, LS, MT-BCS, OMP, and BiAMP-based estimators versus SNR Table 1 Complexity analysis To address the pilot contamination problem in massive MIMO systems, we proposed a BCS-based channel estimation algorithm for the multi-cell multi-user massive MIMO. The simulation results have revealed that the BCS-based channel estimation algorithm has tremendous improvement over conventional-based channel estimation algorithms and can address the pilot contamination problem. Furthermore, the proposed technique can be enhanced by thresholding the CIR to a certain value and also by exploiting the common sparsity feature inherent in the system channel. In addition, the number of antennas and the compression ratio should be selected wisely to achieve optimum estimation accuracy. 
Appendix 1: Proof of Theorem 1 Following Section 5, we can write the FIM as $$ J(\mathbf{y}_{c^{*},i,j})\geq - E\left(\frac{\partial^{2} log (P_{\mathbf{y}_{c^{*},i,j}|\boldsymbol{\Xi}_{j},{\xi}_{0}}(P(\mathbf{y}_{c^{*},i,j}|\boldsymbol{\Xi}_{j},{\xi}_{0}))) }{\partial^{2}{\mathbf{g}^{\prime}}}\right)^{-1} $$ Based on Bayes' rule in (32), the FIM can be decomposed into two terms $$\begin{array}{@{}rcl@{}} & - E\left(\frac{\partial^{2} log (P_{\mathbf{y}_{c^{*},i,j}|\boldsymbol{\Xi}_{j},{\xi}_{0}}(P(\mathbf{y}_{c^{*},i,j}|\boldsymbol{\Xi}_{j},{\xi}_{0}))) }{{\partial^{2}{\mathbf{g}^{\prime}}}}\right)= -\\ &E\left(\frac{\partial^{2} log(P_{\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}}(P(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0})) }{\partial^{2}{\mathbf{g}^{\prime}}}\right)- \\ &E\left(\frac{\partial^{2} log(P_{\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j}}(P(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j})))}{\partial^{2}{\mathbf{g}^{\prime}}}\right), \end{array} $$ using (28), the first term can be computed as follow $$\begin{array}{*{20}l} - \frac{\partial^{2} log(P_{\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}}(P(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}))} {\partial^{2}{\mathbf{g}^{\prime}}}= \\ \frac{\partial}{{\partial{\mathbf{g}^{\prime}}}} \left[-log(2\pi)^{\frac{1}{2}}{\xi}_{0}^{-1}- \frac{\xi_{0}}{2}||\mathbf{y}_{c^{*},i,j}-\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}\mathbf{A}^{n}_{c^{*},j}||_{2}^{2}\right], \end{array} $$ $${} \frac{\partial^{2} log(P_{\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0}}(P(\mathbf{y}_{c^{*},i,j}|\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j},{\xi}_{0})) }{\partial^{2}{\mathbf{g}^{\prime}}}= \frac{\mathbf{A}^{n}_{c^{*},j}(\mathbf{A}^{n}_{c^{*},j})^{H}}{{\xi}_{0}}. $$ By applying the same procedure in (38 and 39) to the second term of (37) gives $$\begin{array}{*{20}l} E\left(\frac{\partial^{2} log(P_{\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j}}(P(\mathbf{g}^{\prime n}_{c^{*},c^{*},i,j}|\boldsymbol{\Xi}_{j})))}{\partial^{2}{\mathbf{g}^{\prime}}}\right)=(\boldsymbol{\Xi}_{j})^{-1}. \end{array} $$ C-X Wang, et al, Cellular architecture and key technologies for 5G wireless communication Networks. IEEE Comm. Mag. 52(2), 122–130 (2014). V Jungnickel, K Manolakis, W Zirwas, B Panzner, V Braun, M Lossow, M Sternad, R Apelfrojd, T Svensson, The role of small cells, coordinated multipoint, and massive MIMO in 5G. IEEE Commun. Mag. 52(5), 44–51 (2014). H Zhang, S Gao, D Li, H Chen, L Yang, On superimposed pilot for channel estimation in multi-cell multiuser MIMO uplink: large system analysis. IEEE Trans. Vehicular Technol. 65:, 99 (2015). J Jose, A Ashikhmin, TL Marzetta, et al, Pilot contamination problem in multi-cell TDD systems. Proc. IEEE Int.Symp. Inf. Theory, Seoul. 28:, 2184–2188 (2009). J Jose, A Ashikhmin, TL Marzetta, et al, Pilot contamination and precoding in multi-cell TDD systems. IEEE Trans. Wireless Commun. 10(8), 2640–2651 (2011). J Zhang, B Zhang, S Chen, X Mu, M El-Hajjar, L Hanzo, Pilot contamination elimination for large-scale multiple-antenna aided OFDM systems. IEEE J. Selected Topics Signal Process. 8(5), 759–772 (2014). M Masood, L Afify, TY Al-Naffouri, Efficient coordinated recovery of sparse channels in massive MIMO. IEEE Trans. Signal Process. 63(1), 104–118 (2015). N Sinh. Compressive sensing for multi-channel and large scale MIMO networks. PhD. thesis, Dept. 
This work is supported by the Iraqi Higher Committee of Educational Development (HCED). The authors would like to acknowledge its financial support. King's College London, Department of Informatics, Strand, London, WC2R 2LS, UK Hayder AL-Salihi & Mohammad Reza Nakhai Correspondence to Hayder AL-Salihi.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Massive multiple input multiple output (MIMO), channel estimation, Bayesian compressed sensing (BCS), pilot contamination, channel state information (CSI), multi-task Bayesian compressed sensing (MTBCS). Emerging Air Interfaces and Management Technologies for the 5G Era
CommonCrawl
arXiv:1005.0583 [cond-mat.mes-hall] (Condensed Matter > Mesoscale and Nanoscale Physics)
Title: Projective Ribbon Permutation Statistics: a Remnant of non-Abelian Braiding in Higher Dimensions
Authors: Michael Freedman, Matthew B. Hastings, Chetan Nayak, Xiao-Liang Qi, Kevin Walker, Zhenghan Wang
(Submitted on 4 May 2010 (v1), last revised 7 Oct 2010 (this version, v4))
Abstract: In a recent paper, Teo and Kane proposed a 3D model in which the defects support Majorana fermion zero modes. They argued that exchanging and twisting these defects would implement a set R of unitary transformations on the zero mode Hilbert space which is a 'ghostly' recollection of the action of the braid group on Ising anyons in 2D. In this paper, we find the group T_{2n} which governs the statistics of these defects by analyzing the topology of the space K_{2n} of configurations of 2n defects in a slowly spatially-varying gapped free fermion Hamiltonian: T_{2n} \equiv \pi_1(K_{2n}). We find that the group T_{2n} = Z \times T^r_{2n}, where the 'ribbon permutation group' T^r_{2n} is a mild enhancement of the permutation group S_{2n}: T^r_{2n} \equiv Z_2 \times E((Z_2)^{2n} \rtimes S_{2n}). Here, E((Z_2)^{2n} \rtimes S_{2n}) is the 'even part' of (Z_2)^{2n} \rtimes S_{2n}, namely those elements for which the total parity of the element in (Z_2)^{2n} added to the parity of the permutation is even. Surprisingly, R is only a projective representation of T_{2n}, a possibility proposed by Wilczek. Thus, Teo and Kane's defects realize 'Projective Ribbon Permutation Statistics', which we show to be consistent with locality. We extend this phenomenon to other dimensions, co-dimensions, and symmetry classes. Since it is an essential input for our calculation, we review the topological classification of gapped free fermion systems and its relation to Bott periodicity.
Comments: Missing figures added. Fixed some typos. Added a paragraph to the conclusion.
Subjects: Mesoscale and Nanoscale Physics (cond-mat.mes-hall)
Journal reference: Phys. Rev. B 83, 115132 (2011)
DOI: 10.1103/PhysRevB.83.115132
Cite as: arXiv:1005.0583 [cond-mat.mes-hall] (or arXiv:1005.0583v4 [cond-mat.mes-hall] for this version)
From: Chetan Nayak
Submission history: [v1] Tue, 4 May 2010 17:00:50 UTC (156 KB); [v2] Wed, 5 May 2010 17:03:22 UTC (167 KB); [v3] Tue, 18 May 2010 17:57:42 UTC (167 KB); [v4] Thu, 7 Oct 2010 17:45:40 UTC (168 KB)
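The 'even part' construction quoted in the abstract is easy to check by brute force for small n. The following short Python sketch (not from the paper; the enumeration is purely illustrative) lists pairs (v, s) with v in (Z_2)^{2n} and s in S_{2n}, keeps those whose total parity is even, and confirms that the even part contains exactly half of the 2^{2n}·(2n)! elements of the semidirect product; by the definition quoted above, the extra Z_2 factor then gives T^r_{2n} order 2^{2n}·(2n)!.

```python
# Brute-force check (illustrative only) of the counting behind the 'even part'
# E((Z_2)^{2n} ⋊ S_{2n}): an element (v, s) is kept when the parity of v
# (sum of its entries mod 2) plus the parity of the permutation s is even.
# The even part should contain exactly half of all 2^{2n} * (2n)! elements.
from itertools import product, permutations
from math import factorial

def perm_parity(p):
    """Return 0 for an even permutation, 1 for an odd one (via inversion count)."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j]) % 2

def even_part_size(n):
    count = 0
    for v in product((0, 1), repeat=2 * n):
        v_parity = sum(v) % 2
        for s in permutations(range(2 * n)):
            if (v_parity + perm_parity(s)) % 2 == 0:
                count += 1
    return count

for n in (1, 2):
    expected = 2 ** (2 * n) * factorial(2 * n) // 2
    print(n, even_part_size(n), expected)   # the two counts agree
```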
CommonCrawl
Discrete & Continuous Dynamical Systems - A October 2004, Volume 10, Issue 4 A special issue on Traveling Waves and Shock Waves Guest Editors: Xiao-Biao Lin and Stephen Schecter Traveling waves and shock waves Xiao-Biao Lin and Stephen Schecter 2004, 10(4): i-ii doi: 10.3934/dcds.2004.10.4i Traveling waves and shock waves are physically important solutions of partial differential equations. Papers in this special issue address two aspects of the theory of traveling waves and shock waves: (1) the linearized stability of traveling waves and (2) the Dafermos regularization of a system of conservation laws. The Evans function and stability criteria for degenerate viscous shock waves Peter Howard and K. Zumbrun 2004, 10(4): 837-855 doi: 10.3934/dcds.2004.10.837 It is well known that the stability of certain distinguished waves arising in evolutionary PDE can be determined by the spectrum of the linear operator found by linearizing the PDE about the wave. Indeed, work over the last fifteen years has shown that spectral stability implies nonlinear stability in a broad range of cases, including asymptotically constant traveling waves in both reaction-diffusion equations and viscous conservation laws. A critical step toward analyzing the spectrum of such operators was taken in the late eighties by Alexander, Gardner, and Jones, whose Evans function (generalizing earlier work of John W. Evans) serves as a characteristic function for the above-mentioned operators. Thus far, results obtained through working with the Evans function have made critical use of the function's analyticity at the origin (or its analyticity over an appropriate Riemann surface). In the case of degenerate (or sonic) viscous shock waves, however, the Evans function is certainly not analytic in a neighborhood of the origin, and does not appear to admit analytic extension to a Riemann manifold. We surmount this obstacle by dividing the Evans function (plus related objects) into two pieces: one analytic in a neighborhood of the origin, and one sufficiently small. Eigenvalues and resonances using the Evans function Todd Kapitula and Björn Sandstede In this expository paper, we discuss the use of the Evans function in finding resonances, which are poles of the analytic continuation of the resolvent. We illustrate the utility of the general theory developed in [13, 14] by applying it to two physically interesting test cases: the linear Schrödinger operator and the linearization associated with the integrable nonlinear Schrödinger equation. Todd Kapitula, Björn Sandstede. Eigenvalues and resonances using the Evans function. Discrete & Continuous Dynamical Systems - A, 2004, 10(4): 857-869. doi: 10.3934/dcds.2004.10.857. Multiple viscous wave fan profiles for Riemann solutions of hyperbolic systems of conservation laws Weishi Liu For a system of hyperbolic conservation laws in one space dimension, we study the viscous wave fan admissibility of Riemann solutions.
In particular, we show that structurally unstable Riemann solutions with compressive and overcompressive viscous shocks, and with constant portions crossing the hypersurfaces of eigenvalues admit viscous wave fan profiles. The main tool used in the study is the center manifold theorem for invariant sets and the exchange lemmas for singular perturbation problems. Weishi Liu. Multiple viscous wave fan profiles for Riemann solutions of hyperbolic systems of conservation laws. Discrete & Continuous Dynamical Systems - A, 2004, 10(4): 871-884. doi: 10.3934/dcds.2004.10.871. An Evans function approach to spectral stability of small-amplitude shock profiles Ramon Plaza and K. Zumbrun In recent work, the second author and various collaborators have shown using Evans function/refined semigroup techniques that, under very general circumstances, the problems of determining one- or multi-dimensional nonlinear stability of a smooth shock profile may be reduced to that of determining spectral stability of the corresponding linearized operator about the wave. It is expected that this condition should in general be analytically verifiable in the case of small amplitude profiles, but this has so far been shown only on a case-by-case basis using clever (and difficult to generalize) energy estimates. Here, we describe how the same set of Evans function tools that were used to accomplish the original reduction can be used to show also small-amplitude spectral stability by a direct and readily generalizable procedure. This approach both recovers the results obtained by energy methods, and yields new results not previously obtainable. In particular, we establish one-dimensional stability of small amplitude relaxation profiles, completing the Evans function program set out in Mascia&Zumbrun [MZ.1]. Multidimensional stability of small amplitude viscous profiles will be addressed in a companion paper [PZ], completing the program of Zumbrun [Z.3]. Ramon Plaza, K. Zumbrun. An Evans function approach to spectral stability of small-amplitude shock profiles. Discrete & Continuous Dynamical Systems - A, 2004, 10(4): 885-924. doi: 10.3934/dcds.2004.10.885. A nonlocal eigenvalue problem for the stability of a traveling wave in a neuronal medium Jonathan E. Rubin Past work on stability analysis of traveling waves in neuronal media has mostly focused on linearization around perturbations of spike times and has been done in the context of a restricted class of models. In theory, stability of such solutions could be affected by more general forms of perturbations. In the main result of this paper, linearization about more general perturbations is used to derive an eigenvalue problem for the stability of a traveling wave solution in the biophysically derived theta model, for which stability of waves has not previously been considered. The resulting eigenvalue problem is a nonlocal equation. This can be integrated to yield a reduced integral equation relating eigenvalues and wave speed, which is itself related to the Evans function for the nonlocal eigenvalue problem. I show that one solution to the nonlocal equation is the derivative of the wave, corresponding to translation invariance. Further, I establish that there is no unstable essential spectrum for this problem, that the magnitude of eigenvalues is bounded, and that for a special but commonly assumed form of coupling, any possible eigenfunctions for real, positive eigenvalues are nonmonotone on $(-\infty,0)$. Jonathan E. Rubin. 
A nonlocal eigenvalue problem for the stability of a traveling wave in a neuronal medium. Discrete & Continuous Dynamical Systems - A, 2004, 10(4): 925-940. doi: 10.3934/dcds.2004.10.925. Evans function and blow-up methods in critical eigenvalue problems Björn Sandstede and Arnd Scheel Contact defects are one of several types of defects that arise generically in oscillatory media modelled by reaction-diffusion systems. An interesting property of these defects is that the asymptotic spatial wavenumber is approached only with algebraic order O(1/x) (the associated phase diverges logarithmically). The essential spectrum of the PDE linearization about a contact defect always has a branch point at the origin. We show that the Evans function can be extended across this branch point and discuss the smoothness properties of the extension. The construction utilizes blow-up techniques and is quite general in nature. We also comment on known relations between roots of the Evans function and the temporal asymptotics of Green's functions, and discuss applications to algebraically decaying solitons. Björn Sandstede, Arnd Scheel. Evans function and blow-up methods in critical eigenvalue problems. Discrete & Continuous Dynamical Systems - A, 2004, 10(4): 941-964. doi: 10.3934/dcds.2004.10.941. Computation of Riemann solutions using the Dafermos regularization and continuation Stephen Schecter, Bradley J. Plohr and Dan Marchesin We present a numerical method, based on the Dafermos regularization, for computing a one-parameter family of Riemann solutions of a system of conservation laws. The family is obtained by varying either the left or right state of the Riemann problem. The Riemann solutions are required to have shock waves that satisfy the viscous profile criterion prescribed by the physical model. The system is not required to satisfy strict hyperbolicity or genuine nonlinearity; the left and right states need not be close; and the Riemann solutions may contain an arbitrary number of waves, including composite waves and nonclassical shock waves. The method uses standard continuation software to solve a boundary-value problem in which the left and right states of the Riemann problem appear as parameters. Because the continuation method can proceed around limit point bifurcations, it can successfully compute multiple solutions of a particular Riemann problem, including ones that correspond to unstable asymptotic states of the viscous conservation laws. Stephen Schecter, Bradley J. Plohr, Dan Marchesin. Computation of Riemann solutions using the Dafermos regularization and continuation. Discrete & Continuous Dynamical Systems - A, 2004, 10(4): 965-986. doi: 10.3934/dcds.2004.10.965.
CommonCrawl
Field phenotyping of grapevine growth using dense stereo reconstruction Maria Klodt1, Katja Herzog2, Reinhard Töpfer2 & Daniel Cremers1 BMC Bioinformatics volume 16, Article number: 143 (2015) Cite this article The demand for high-throughput and objective phenotyping in plant research has been increasing during the last years due to large experimental sites. Sensor-based, non-invasive and automated processes are needed to overcome the phenotypic bottleneck, which limits data volumes on account of manual evaluations. A major challenge for sensor-based phenotyping in vineyards is the distinction between the grapevine in the foreground and the field in the background – this is especially the case for red-green-blue (RGB) images, where similar color distributions occur both in the foreground plant and in the field and background plants. However, RGB cameras are a suitable tool in the field because they provide high-resolution data at fast acquisition rates with robustness to outdoor illumination. This study presents a method to segment the phenotypic classes 'leaf', 'stem', 'grape' and 'background' in RGB images that were taken with a standard consumer camera in vineyards. Background subtraction is achieved by taking two images of each plant for depth reconstruction. The color information is furthermore used to distinguish the leaves from stem and grapes in the foreground. The presented approach allows for objective computation of phenotypic traits like 3D leaf surface areas and fruit-to-leaf ratios. The method has been successfully applied to objective assessment of growth habits of new breeding lines. To this end, leaf areas of two breeding lines were monitored and compared with traditional cultivars. A statistical analysis of the method shows a significant (p <0.001) determination coefficient R 2= 0.93 and root-mean-square error of 3.0%. The presented approach allows for non-invasive, fast and objective assessment of plant growth. The main contributions of this study are 1) the robust segmentation of RGB images taken from a standard consumer camera directly in the field, 2) in particular, the robust background subtraction via reconstruction of dense depth maps, and 3) phenotypic applications to monitoring of plant growth and computation of fruit-to-leaf ratios in 3D. This advance provides a promising tool for high-throughput, automated image acquisition, e.g., for field robots. Grapevines (Vitis vinifera L ssp. vinifera) are highly susceptible to several fungal diseases (e.g. powdery mildew and downy mildew) and require substantial effort to protect the plants. This susceptibility is the major reason for extended grapevine breeding activities around the world which aim at selecting new cultivars with high disease resistance and high quality characteristics [1]. Representing a perennial woody crop plant, grapevine phenology, analysis of growth habits and yield traits can only be evaluated in the field. The analysis of growth habits is an important aspect in viticulture for site specific canopy management. The aim is to improve grape yield and wine quality [2]. Three factors in particular describe the relationship between canopy structure, light microclimate and grape quality: 1) the geometrical dimensions of the canopy, 2) the foliage density as an indicator of leaf exposure to sunlight, and 3) the bunch exposure to sunlight [3]. 
The determination of grapevine architecture, e.g., canopy surface area including vigor during the vegetative period and the respective position of organs (leaves, stems and bunches), can be used for dynamic characterization of breeding material and site-specific canopy management. The overall aim is to achieve an optimal canopy microclimate, especially for the grape cluster zone, i.e., minimal shade and aerated conditions [2]. In addition, a balanced ratio between vegetative (shoots and leaves) and fruit growth is important to avoid excess or deficient leaf areas in relation to the weight of the fruit [2]. This fruit-to-leaf characterization requires quantifications of the canopy surface dimension and the grapes. In traditional breeding programs, phenotyping of grapevines is performed by visual inspection. Thus, data acquisition is time consuming, laborious, and the resulting phenotypic data are the subjective assessment of the personnel in charge. Traits can be described with OIV descriptors [4] or the BBCH scale [5]. The OIV descriptor 351 [4] is used to classify grapevine vigor to five categories (1 = very weak; 3 = weak; 5 = medium; 7 = strong; 9 = very strong vigor). Accurate characterization of grapevine growth from a large number of cultivars (viticulture) or breeding material (grapevine breeding) requires simple, fast and sensor-based methods which are applicable from a moving platform for high-throughput data acquisition [6]. Numerous indirect and non-invasive methods have been studied to characterize grapevine foliage directly in vineyards [3,7-9]. Most of the studies are based on costly sensor techniques, e.g. electromagnetic scanners [3], ultrasonic sensors [10], laser scanners [9], infrared sensors [11], fish-eye optical sensors [11-13] or model based strategies [7]. Some of these methods correlate with destructive sampling from direct measurements taken with a leaf area meter [12,14,15]. Electromagnetic and laser scanners directly obtain 3D point clouds of a scene, however provide no volumetric and surface information. Other active sensors include time-of-flight and structured light sensors, which are specialized for indoor environments. These types of sensors can have difficulties in scenes with bright illumination or large distances as often occur in outdoor environments [16]. Image analysis provides a promising technique for non-invasive plant phenotyping [17]. RGB cameras are a practical sensor for usage in the field because they are portable, provide fast data aquisition and are suitable for outdoor illuminations. However, only a few studies exist on automated approaches for monitoring grapevine growth habits directly in vineyards using low-cost consumer cameras [6,8,18]. Color is an important indicator for vegetation and can be used to detect leaves in images [19]. When the foreground plant should be segmented from the background however, a color image alone is often not sufficient. This is due to the fact that the foreground plant and the background containing the field and other plants usually have the same color distributions. The use of single RGB images then requires elaborate installation of artificial backgrounds in the field, to determine the canopy dimensions from grapevines [8]. Furthermore, the image projection process can create size distortions in the 2D image plane, e.g. if some parts of the plant are closer to the camera than others [6,18]. 
From this point of view, additional 3D information can help to increase the precision of phenotypic data and to eliminate the background automatically [6]. Full 3D models of plants from images were computed in [20] and [21] for segmentation of leaves and stems. In [22] an image based method for 4D reconstruction of plants based on optical flow is introduced. Depth maps, providing 3D information in the image domain, have been used for the determination of leaf inclination angles [23] and bud detection [6]. Stereo reconstruction methods have been intensively studied in the field of computer vision [24,25]. Respective methods can be divided into sparse reconstruction where 3D point clouds are computed [26,27], and dense reconstruction which aims at computing surfaces [24]. The use of sparse 3D information yields little information in homogenous image regions and can result in inaccurate classification results for phenotyping as has been shown in [6]. Thus, dense 3D surfaces are essential for a reliable detection of the foreground (i.e., the grapevine) and elimination of redundant background (i.e., the field). This study presents a novel approach for non-invasive, fast and objective field phenotyping of grapevine canopy dimensions. The method is based on dense depth map reconstruction, color classification and image segmentation from image pairs. An overview of the method is shown in Figure 1. The main contributions of this study are the following: We present a method to robustly segment RGB images of grapevine to the phenotypic classes 'leaf', 'stem', 'grapes' and 'background'. The segmentation is based on color and depth information. The results allow for objective phenotypic assessment of large data sets which yields a step towards overcoming the phenotypic bottleneck. Workflow of the proposed image-based phenotyping method. Stereo image pairs (A, B) are captured from a moving platform in a vineyard. From these images the method computes a dense depth map (C), a color classification based on the the green and blue color channels (D), and an edge detector (E). These features are used to segment the image domain to 'leaf', 'stem' and 'background' (F). The image segmentation allows for objective computation of phenotypic indicators like the visible leaf area and fruit-to-leaf ratio. The method is robust for background subtraction in particular, because of the use of dense depth maps from stereo reconstruction. We developed a stereo reconstruction algorithm that is particularly suitable for the fine-scaled features of plant geometry. This avoids elaborate application of artificial backgrounds during image acquisition, even for images where the foreground and background have similar color distributions. Using a standard consumer camera, data acquisition is fast, simple and portable. The method is thus particularly practical for phenotyping of grapevine, which can only be evaluated directly in the fields. The method has been successfully applied to non-invasive and objective monitoring of grapevine growth and computation of fruit-to-leaf ratios in 3D. Furthermore, we show how growth habits of new breeding lines are classified by comparison to known cultivars. This study presents a method for objective computation of phenotypic traits of grapevine from RGB images captured in vineyards. 
The method is based on 1) the automated elimination of the background from RGB field images by using dense stereo reconstruction, 2) the automated detection of leaves, grapes and stem in the foreground, and 3) the quantification of the visible leaf area. The following shows phenotypic applications to monitoring of grapevine growth and computation of fruit-to-leaf ratios in 3D. It furthermore shows how growth monitoring enables the classification of new breeding lines by comparing their growth habits to known cultivars. Monitoring of grapevine growth In the following, we show how the presented method can be applied for the analysis of growth habits of breeding material with unknown properties. Objective assessment is achieved by monitoring leaf areas over time and comparing them to traditional cultivars that are used as a reference. To this end, we monitored two breeding lines and sample plants of two traditional cultivars with different growth habits ('Riesling' with medium shoot growth and 'Villard Blanc' with weak shoot growth) during a season. Figure 2A shows the computed leaf area per breeding line and the average and standard deviation of leaf areas for the two traditional cultivars.Standard deviations were only computable for the reference cultivars 'Riesling' and 'Villard Blanc'. From both cultivars three plants were used respectively as biological repetitions at each time point. From the investigated breeding lines only one single plant was available per time point. This is due to the fact that only one biological repetition is available from the breeding lines and thus, no standard deviation was calculated. Figure 2B shows the images of two sample plants that were monitored at multiple time points. As expected, none of the investigated genotypes displayed a detectable vegetative growth before bud burst. For all genotypes, an increasing leaf area can be observed between the 90th day and the end of the experiment on day 160. Differences in the percentage were used for objective scoring of plant growth. Image-based monitoring of grapevine growth. Two breeding lines with unknown growth characteristics are compared to the known cultivars 'Riesling' and 'Villard Blanc'. The progression graph (A) shows increasing leaf areas for the four cultivars during the vegetative growth phase. Numbers were labeled with reference to two of the sample plants that were monitored (B). We observed a ten times faster growth of 'Riesling' compared to the cultivar 'Villard Blanc'. As also shown in Figure 2A, the genotypes offer the major differences in plant growth at day 120. Two weeks later two groups were observed: group 1 consisting of 'Riesling' and breeding line 1; and group 2 consisting of 'Villard Blanc' and breeding line 2. At day 160, breeding line 1 almost displayed the maximum feasible leaf area of 100 %. This genotype also exhibited the fastest growth during the entire experiment. The second breeding line grew at a slower rate and had a smaller digital leaf area at day 160 and thus seems to be more related to 'Villard Blanc'. These results show that the presented method enables an objective distinction of cultivars. This is essential for a reliable identification of subtle differences in visible canopy dimensions and for the objective, comparable characterization of breeding material with unknown phenotypic properties, e.g. on different field sites or different vineyard management conditions. The images were captured at different time points until the first canopy reduction. 
This enables an objective monitoring of the vegetative growth in a defined time scale. We observed two groups of growth habits which facilitate an objective evaluation of the investigated breeding lines. Thus, the method is also a promising tool for the identification of genotype specific differences in growth rates or for efficiency analysis of plant protection efforts. This kind of fast, objective and comparative monitoring of plant development further enables the study of growing dynamics with respect to climatic influences or soil properties. Computation of fruit-to-leaf-ratios in 3D The segmentation of the images to 'grapes', 'leaves' and 'background' allows for an investigation of bunch positions in the canopy. This involves the analysis of which grape bunches are overlaid with leaves, the amount of bunch exposure to sunlight and whether a grapevine site shows a well-balanced fruit-to-leaf ratio. The use of dense depth maps enables a scaling of each pixel according to its depth which corresponds to the actual size of the area captured in the pixel. The respective area computation in 3D can increase accuracy of the resulting complex phenotypic data, in comparison to area computation in the 2D image plane. Figure 3 shows a comparison of the average grapes-to-leaf ratio in 2D (pixel) and 3D (actual size). Computing the ratio in 3D results in a 10% decreased ratio compared to the computation in 2D.It can balance out the fact that some leaves are closer to the camera and thus occupy a disproportionally larger area in the 2D image plane than the grapes that are farther away. This effect can also be observed in the Additional file 1 which shows a 3D view of the surface shown in Figure 3C. Generalization to the phenotypic class 'grape' and computation of fruit-to-leaf ratios. The input image (A) is segmented to 'grape', 'leaf' and 'background'. The segmentation can be used to compute the grape-to-leaf ratio in the 2D image domain (B). A more accurate ratio can be computed in the depth weighted 3D space using the reconstructed surface (C). Thus, the presented computation of fruit-to-leaf ratios provides an efficient method to objectively evaluate yield efficiency of red grapevine cultivars. Statistical evaluation and error analysis For validation of the method, 22 images of the data set (as the example shown in Figure 4A) were manually segmented and used as ground truth (Figure 4B). The ground truth was used for comparison with the computed segmentation results of the algorithm (Figure 4C). Evaluation of the segmentation results by comparison to ground truth data. A set of 22 images (A) was used to compare manually labeled ground truth images (B) to the computed segmentation results (C). The confusion matrix (D) shows that the proposed method correctly classifies the majority of pixels as 'leaf' (Lv), 'stem' (St) or 'background' (Bg). A confusion matrix was generated from the 22 images in order to investigate the precision of the computed classification results (Figure 4D). The matrix represents the relation between actual classifications in rows and predicted classifications in columns, for the three classes 'leaf', 'stem' and 'background'. It reveals that the major percentage of all three classes was correctly classified by the automated segmentation algorithm. The best results can be observed for the classes 'background' (95% correct classifications) and 'leaf' (87%). The segmentation of the class 'stem' shows the highest false classification rate with 47% pixels classified as 'leaf'. 
A reason for this might be the fact that young branches often have green color and therefore get falsely classified as 'leaf'. This indicates that the distinction between these two plant organs is not accurate enough when only color information is considered. Additional analysis of geometric information would be required for more accurate classifications, similar to the approaches recently published for point clouds in [28] and dense surfaces in [20,21]. Such an extension might be a promising improvement of the presented method. The accuracy of the computed leaf area was evaluated using the software R for statistical computing. To this end, the computed leaf area was compared with the ground truth leaf area. Figure 5A shows the linear regression analysis where a linear equation was determined from the segmentation results. The regression analysis showed a determination coefficient of R 2=0.93. Further, the estimated regression line (y=0.997x+1.47) was used to predict the leaf area. The slope of 0.997 implies that the error is not systematical. An error analysis of the computed leaf area was performed by calculation of the frequency distribution of observed residue and the root-mean-square error (RMSE). Figure 5B shows the results of this analysis. Every pixel of the 22 computed classifications was compared to the respective ground truth classification, in order to determine the precision of the developed method. The leaf areas were normalized to a range of 0 to 100 by the size of the image domain. The residuals are given as absolute values. The ground truth reference data was plotted against the computed leaf area and a root-mean-square error of 3.083% was calculated. This implies that the regression line approximately represents the reference data. Furthermore, 68% of the residuals are within a bound of ±2.5 around the mean 1.4, 95% are within a bound of ±3.9 and 99.7% are within a bound of ±7.7. Validation and error analysis with N = 22 test images.A. The linear regression analysis shows the difference of leaf area from reference classifications (ground truth) and the computed image segmentations (predicted leaf area). B. Frequency distribution of observed residue and the root-mean-square error (RMSE) of the difference between predicted leaf area and ground truth. Performance and efficiency of the method To compute globally optimal depth reconstructions, we optimize the stereo problem in a higher dimensional space. This requires the respective amount of memory and run-time. In the applications used in this study, the depth map computation was processed off-line after image capturing, and is thus not time-critical, whereas the image capturing can be processed in real-time. The examples shown in this study were computed on an Nvidia GeForce GTX Titan GPU with 6 GB memory. We chose an image resolution of 1024 × 1024 and a depth resolution of 256, because it fits the memory of the GPU. The computation time for this resolution takes about 10 minutes using the Cuda programming language for parallel processing. We use a single camera to capture the stereo image pairs from the grapevines of interest. This makes the image acquisition simple and inexpensive, however the relative camera positions are different for each image pair. In consequence, the depth range is also variable, and needs to be manually adjusted for each image pair. This corresponds to a user input of one value for each image pair, making the method practicable for the applications shown in this study. 
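As a side note to the regression-based error analysis reported above, the following minimal Python sketch (not the authors' code; the analysis in R mentioned in the text may differ in detail) shows one standard way to obtain a determination coefficient and an RMSE from paired predicted and ground-truth leaf areas.

```python
# Illustrative only: R^2 of a least-squares regression line and RMSE between
# predicted and ground-truth leaf areas (e.g. given as % of the image domain).
import numpy as np

def leaf_area_errors(ground_truth, predicted):
    """ground_truth, predicted: 1D arrays of paired leaf-area values."""
    a, b = np.polyfit(ground_truth, predicted, deg=1)   # regression line y = a*x + b
    fitted = a * ground_truth + b
    ss_res = np.sum((predicted - fitted) ** 2)          # residual sum of squares
    ss_tot = np.sum((predicted - np.mean(predicted)) ** 2)
    r_squared = 1.0 - ss_res / ss_tot                   # determination coefficient
    rmse = np.sqrt(np.mean((predicted - ground_truth) ** 2))
    return r_squared, rmse
```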
Furthermore, in 5% of the image pairs used in the experiments, the relative position of the camera capture positions could not be reconstructed. This is usually the case when the distance or orientation angle of the camera positions are too large, and hence not enough corresponding points can be found in the two images. Typically, these images do not have enough overlap, or the images do not contain enough structure. We further observed that wind can cause the plant to move from one image capturing moment to the next. In this case, the depth maps contain incorrect parts and the image acquisition has to be repeated. This problem can be overcome by using two cameras that capture simultaneously as used in [23]. Applying a similar stereo system would be an interesting extension for future work. Then, the relative position of the cameras and the depth range could be calibrated once, which would be an efficient way to save computation time and eliminate the need for user interaction. High-throughput field phenotyping of perennial plants like grapevine requires a combination of automated data recording directly in the field and automated data analysis. Using only image data from unprepared fields, the segmentation into foreground (grapevine) and background (field) constituted the major challenge in this study. Especially at the beginning of a growing season, an automated segmentation based on color only is impossible in single field images as very similar color distributions occur in foreground and background. To overcome this problem, most related works either install artificial backgrounds behind the plants or use depth information generated by e.g. 3D laser scanners. We presented a novel approach for the segmentation of field images to the phenotypic classes 'leaf', 'stem', 'grape', and 'background', with a minimal need for user input. In particular, only one free parameter needs to be manually adjusted for each input image pair, which corresponds to the depth segmentation of foreground and background. The method is based on RGB image pairs, which requires just a low-cost standard consumer camera for data aquisition. We showed robust background subtraction in field images by the use of dense depth maps. This avoids the necessity of costly 3D sensor techniques or elaborate preparation of the scene. We further showed how the method allows for objective computation of canopy dimensions (digital leaf area) which enables the monitoring and characterization of plant growth habits and computations of fruit-to-leaf ratios. Future plans for the application of this approach include the installation of a stereo camera system where the cameras are mounted with fixed position to each other, for a standardized image acquisition setup. Thus, the depth parameter for the image segmentation can be set constant, in order to reduce the need for user interaction. Furthermore, refinements of the method are possible, including an automated detection of wires in the images and other objects that appear in the foreground but do not belong to the plant. The consideration of geometric information to distinguish between leaves and stem would be interesting to investigate. This might be important in order to reduce false positive classifications and thus, enhance the accuracy of the method. The presented method provides a promising tool for high-throughput image-based phenotyping in vineyards. 
The ability to accurately and quickly monitor phenotypic plant growth, particularly after bud burst, facilitates an improvement to vineyard management, and the early detection of growth defects. Furthermore, the automated analysis of phenotypic traits like fruit-to-leaf ratios, that were usually acquired manually in the past, allows for processing of large data sets of plants. Thus, the method might provide a step towards the automated validation or determination of optimal fruit-to-leaf ratios from a large variety of plants and cultivars. The workflow of the proposed image-based phenotyping approach is shown in Figure 1: First, a stereo image pair is captured in a vineyard with a standard RGB camera. These image pairs are rectified in a pre-processing step in order to transform the image planes to a normalized case (Figure 1A,B). The rectified images facilitate the computation of dense depth maps (Figure 1C). Furthermore, one of the two images is classified by a color classifier enhancing green plant organs (Figure 1D), and image edges are detected in order to preserve fine-scaled structures of the plant (Figure 1E). These features are used to compute a segmentation of the image domain to the phenotypic classes 'leaf', 'stem' and 'background' (Figure 1F). The resulting segmentations are applicable for phenotypic computations like the quantification of visible 3D leaf areas. Field experimental setup The method was validated and tested with a database of 90 images of grapevine plants, captured at five different dates during the 2011 season. For digital phenotyping, we chose images from genotypes with similar phenology, i.e. similar time of bud burst and flowering. Therefore, images of 'Riesling', 'Villard Blanc' and two breeding lines were selected for further investigation. Experimental site The experiments involved plants of the Vitis vinifera cultivars 'Riesling' and 'Villard Blanc' (three plants per cultivar) as well as two breeding lines (F1 generation of the crossing Gf.Ga.47-42 × 'Villard Blanc') at the experimental vineyard of Geilweilerhof located in Siebeldingen, Germany (N 49°21.747, E 8°04.678). For the breeding lines only one plant per genotype is available. Bud burst at BBCH 10 of all selected genotypes was detected at the 100th day of the year 2011, and the flowering began at the 145th day of the year 2011. Hence, the selected genotypes showed similar phenology. 'Villard Blanc' displayed a slow growth rate (OIV 351 class 3) whereas 'Riesling' displayed a medium growth rate (OIV 351 class 5) [unpublished data]. Image acquisition A single-lens reflex (SLR) camera (Canon EOS 60D) was used to capture RGB images in the vineyard. The SLR camera was fixed with variable height mounting above ground level (1.00 – 1.30 m) in the middle of a wheeled carrier vehicle (Fetra GmbH, Hirschberg, Germany). Image acquisitions were carried out in front of the grapevine plants with a distance of at least 1 m by dragging this platform between the grapevine rows as described in [6]. Due to the limited space between two rows this enables a stable distance to the plants and hence allows for comparison of images taken at different time points. The image pairs were captured in the field under natural illumination conditions with manually controlled exposure. No predefined exposure time was used. Image pairs of each plant were captured from different vantage points, in a way that the depicted areas are overlapping. 
The height of the camera above ground level was adapted in order to standardize the image acquisition. The woody cane of the grapevines was used as reference for the captured grapevine section (the cane or parts of the cane must be visible in the image). This implies that the center of the grapevine is acquired. Image rectification In a pre-processing step, the captured images are rectified, using the software of [26] for identification of key points and the software of [29] for the subsequent rectification transformation. In rectified images, epipolar lines are parallel, which simplifies the subsequent computation of depth maps. Rectification implies a reprojection of two images, such that both projected images lie in the same plane and geometrical distortions are corrected. To rectify an image pair, the camera parameters are calibrated, i.e. lens distortion and relative camera positions are estimated. With these parameters, homographies are computed that facilitate the image transformation. Dense depth reconstruction Depth reconstruction aims at inferring 3D structure from a pair of rectified 2D images, in the following denoted by \(I_{1},I_{2}:\Omega \rightarrow \mathbb{R}^{3}\). The images are defined in the image domain \(\Omega \subseteq \mathbb{R}^{2}\). We represent 3D information with a dense depth map \(d:\Omega \rightarrow \mathbb{R}\) which assigns to each pixel x of the reference image \(I_{1}\) the distance (also referred to as the depth) of the respective 3D point to the camera. Depth can be computed from disparity, which is the displacement of image locations of an object point. In order to deduce disparity from the rectified images, pixel pairs that show the same object point have to be identified. Given the images \(I_{1},I_{2}:\Omega \rightarrow \mathbb{R}^{3}\), we compute a dense disparity map \(v:\Omega\rightarrow\Gamma:=[0,\gamma_{\text{max}}]\) by minimizing over a higher-dimensional variable \(\phi:\Omega\times\Gamma\rightarrow[0,1]\) as in [30,31]. Optimization in the product space Ω×Γ of the image domain Ω and the range of disparities Γ allows for a convex formulation of the stereo reconstruction problem, and therefore enables global optimization. Thus, the disparity map v is computed by integration over ϕ: $$ v(x) = \int_{\Gamma} \phi \,\mathrm{d}\gamma, $$ and ϕ is a minimizer of $$ \begin{aligned} &\min_{\phi\in C} \left\lbrace \int_{\Omega\times\Gamma} |I_{1}(x)-I_{2}(x+\gamma)|\, |\partial_{\gamma}\phi| \,\mathrm{d}x\,\mathrm{d}\gamma \right.\\ &\,\,\qquad+ \left.\lambda \int_{\Omega\times\Gamma} |D\nabla\phi| \,\mathrm{d}x\,\mathrm{d}\gamma \right\rbrace, \end{aligned} $$ where $$ C = \left\lbrace \phi:\Omega\times\Gamma\rightarrow[0,1] \,:\, \phi(x,0) = 1,\, \phi(x,\gamma_{\text{max}}) = 0\right\rbrace. $$ The first term in (2) is the data fidelity term which measures point-wise color differences between the two images. The second term is the regularizer term, weighted by a smoothness parameter \(\lambda \in \mathbb{R}\). The \(L_{1}\) norm in the regularizer yields piecewise smooth solutions while preserving edges. We further weight the gradient norm with an anisotropy tensor D which serves as an edge enhancing function, in order to preserve the fine-scaled structures of the plants. A visual representation of D is shown in Figure 1E, where the color of each pixel encodes the direction of the local image gradient and the intensity encodes the length of this vector. The optimization problem can be globally optimized as described in [31], while the constraint set C ensures that the global minimum of (2) is not the trivial solution.
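To make the role of the data fidelity term more concrete, the following deliberately simplified Python sketch (purely illustrative, not the paper's implementation) evaluates only that term with a naive winner-takes-all search over disparities, aggregating the point-wise differences |I1(x) − I2(x + γ)| over a small box window. The paper's method instead solves the full convex relaxation above, with the edge-weighted regularizer and the constraint set C; the array names and parameters (max_disp, window) are assumptions made for this example.

```python
# Naive block-matching stereo on a rectified grayscale image pair: winner-takes-all
# over the data term only (no regularizer), so the result is noisier and less
# fine-structured than the globally optimized disparity maps described above.
import numpy as np
from scipy.ndimage import uniform_filter

def naive_disparity(I1, I2, max_disp=64, window=9):
    """I1, I2: rectified images as 2D float arrays of equal shape.
    Returns an integer disparity map with values in [0, max_disp]."""
    h, w = I1.shape
    best_cost = np.full((h, w), np.inf)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp + 1):
        shifted = np.empty_like(I2)
        if d == 0:
            shifted[:] = I2
        else:
            shifted[:, :w - d] = I2[:, d:]   # I2 sampled at x + d along each row
            shifted[:, w - d:] = I2[:, -1:]  # crude border handling: repeat last column
        cost = uniform_filter(np.abs(I1 - shifted), size=window)  # windowed |I1 - I2(x+d)|
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```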
Disparity maps give measurements in pixel units, while depth is measured in absolute scale. The depth d is proportional to the inverse of the disparity v and is computed by $$ d(x) = \frac{f b}{v(x)}, $$ where f is the focal length of the camera and b is the baseline, i.e. the distance between the two camera capturing positions. Image segmentation using color and depth Besides computing depth information from images, the images are segmented with respect to the color and depth information. Image segmentation is the partitioning of the image domain into meaningful regions, i.e. each pixel in the image domain Ω gets assigned a label l∈L={1,…,n}. We segment the image domain into n=3 regions corresponding to 'stem', 'leaf' and 'background'. An example of a segmented image is shown in Figure 1F, where green regions represent the class 'leaf', brown regions 'stem', and white regions 'background'. To compute the segmentation, we use the method of [32], using the following two classifiers \(f_{\text{depth}}\) (5) and \(f_{\text{color}}\) (6): First, the reconstructed depth map d gives information about the location of the foreground and background. We use the following function $$ f_{\text{depth}}(x) = d(x) - c_{\text{depth}}, $$ to implement the assumption that the background is farther away from the camera capturing position than the foreground plant. An example for \(f_{\text{depth}}\) is shown in Figure 1C, where the color encodes the depth of each pixel, ranging from 'red = near' to 'blue = far'. The free parameter \(c_{\text{depth}}\in \mathbb{R}\) depends on the maximum depth that the foreground plant can take. It can be assumed constant for standardized image capturing processes, or if distances of the camera capturing positions and plants vary only in a specified range. In the experiments shown in this paper \(c_{\text{depth}}\) was adjusted manually for each image pair. Second, the foreground is classified as 'leaf' or 'stem', using the color information of the reference image \(I_{1}\): $$ f_{\text{color}}(x) = I_{1}^{\text{green}}(x)-I_{1}^{\text{blue}}(x) - c_{\text{color}}. $$ Subtracting the blue color channel \(I_{1}^{\text{blue}}\) from the green color channel \(I_{1}^{\text{green}}\) yields a robust classifier for vegetation [19]. An example for \(f_{\text{color}}\) is shown in Figure 1D, where green regions represent high function values and blue regions represent low values. The free parameter \(c_{\text{color}}\in \mathbb{R}\) depends on the type and stage of the plant, as well as the prevalent illumination and weather conditions of the scene. In the experiments shown in this paper \(c_{\text{color}}=20\) was chosen by experiments, for RGB values ranging from 0 to 255. The implementation of the additional class 'grape' classifies red and blue colored pixels as grapes, enabling the computation of fruit-to-leaf ratios. Computation of leaf surface areas The 3D digital leaf surface area is computed from the segmented images and the depth maps by weighting the pixel sizes according to their depth. The area of region \(\Omega_{i}\) is computed from the segmentation u and depth map d by: $$ \text{Area}(\Omega_{i}) = \int_{\Omega} d(x)^{2} u_{i}(x) \,\mathrm{d}x, $$ where the size of a pixel is computed as \(d(x)^{2}\), normalized by the focal length f of the camera, as in [33]. The weighting balances out the fact that due to projection in the image capturing process, the depicted objects in the image do not appear according to their actual size – objects that are near the camera occupy a larger region in the image than parts that are farther away.
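A minimal numpy sketch of how the two classifiers and the depth-weighted area of Equations (4)–(7) could be combined is given below. It is illustrative only: the paper feeds f_depth and f_color into the variational multi-label segmentation of [32] rather than thresholding them per pixel, and the focal length, baseline, depth threshold c_depth and the exact focal-length normalization of the pixel footprint are assumptions made for this example (c_color = 20 follows the value stated above).

```python
# Illustrative per-pixel version of the classifiers (5)-(6) and the depth-weighted
# area (7); not the paper's variational segmentation.
import numpy as np

def segment_and_measure(image_rgb, disparity, f, b, c_depth, c_color=20.0):
    """image_rgb: HxWx3 array with values in 0..255; disparity: HxW array (pixels).
    f: focal length in pixels, b: baseline in metres (assumed calibration values).
    Returns a label map (0 = background, 1 = leaf, 2 = stem) and the leaf area."""
    img = image_rgb.astype(np.float32)
    depth = (f * b) / np.maximum(disparity, 1e-6)      # Eq. (4): d = f*b / v
    f_depth = depth - c_depth                          # Eq. (5): > 0 -> background
    f_color = img[..., 1] - img[..., 2] - c_color      # Eq. (6): green - blue - c
    labels = np.zeros(depth.shape, dtype=np.uint8)     # background by default
    foreground = f_depth < 0
    labels[foreground & (f_color >= 0)] = 1            # leaf
    labels[foreground & (f_color < 0)] = 2             # stem
    # Eq. (7): each leaf pixel contributes d(x)^2; dividing by f^2 is one reading of
    # "normalized by the focal length" (pinhole-camera pixel footprint at depth d).
    leaf_area = float(np.sum(depth[labels == 1] ** 2) / f ** 2)
    return labels, leaf_area
```

A fruit-to-leaf ratio in 3D could then be obtained by evaluating the same depth-weighted sum over a 'grape' label and dividing it by the leaf area.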
Töpfer R, Hausmann L, Harst M, Maul E, Zyprian E, Eibach R.New horizons for grapevine breeding. Methods Temperate Fruit Breed. 2011; 5:79–100. Smart RE, Dick JK, Gravett IM, Fisher BM. Canopy management to improve grape yield and wine quality-principles and practices. S Afr J Enolgy Viticulture. 1990; 11(1):3–17. Mabrouk H, Sinoquet H.Indices of light microclimate and canopy structure of grapevines determined by 3d digitising and image analysis, and their relationship to grape quality. Aust J Grape Wine Res. 1998; 4(1):2–13. OIV.http://www.oiv.int/oiv/info/enpoint2013?lang=en 2013. Lorenz DH, Eichhorn KW, Bleiholder H, Klose R, Meier U, Weber E.Growth stages of the grapevine: Phenological growth stages of the grapevine (vitis vinifera l. ssp. vinifera)-codes and descriptions according to the extended bbch scale. Aust J Grape Wine Res. 1995; 1(2):100–3. Herzog K, Roscher R, Wieland M, Kicherer A, Läbe T, Förstner W, et al.Initial steps for high-throughput phenotyping in vineyards. Vitis. 2014; 53(1):1–8. Louarn G, Lecoeur J, Lebon E.A three-dimensional statistical reconstruction model of grapevine (vitis vinifera) simulating canopy structure variability within and between cultivar/training system pairs. Ann Bot. 2008; 101(8):1167–84. Diago M-P, Correa C, Millán B, Barreiro P, Valero C, Tardaguila J.Grapevine yield and leaf area estimation using supervised classification methodology on rgb images taken under field conditions. Sensors. 2012; 12(12):16988–17006. Arnó J, Escolà A, Vallès J, Llorens J, Sanz R, Masip J, et al.Leaf area index estimation in vineyards using a ground-based lidar scanner. Precision Agric. 2013; 14(3):290–306. Mazzetto F, Calcante A, Mena A, Vercesi A.Integration of optical and analogue sensors for monitoring canopy health and vigour in precision viticulture. Precision Agric. 2010; 11(6):636–49. Bates T, Grochalsky B, Nuske S.Automating measurements of canopy and fruit to map crop load in commercial vineyards. Res Focus: Cornell Viticulture Enology. 2011; 4:1–6. Cutini A, Matteucci G, Mugnozza GS.Estimation of leaf area index with the li-cor lai 2000 in deciduous forests. Forest Ecol Manag. 1998; 105(1–3):55–65. Johnson LF, Pierce LL.Indirect measurement of leaf area index in california north coast vineyards. HortScience. 2004; 39(2):236–38. Behera SK, Srivastava P, Pathre UV, Tuli R.An indirect method of estimating leaf area index in jatropha curcas l. using lai-2000 plant canopy analyzer. Agric Forest Meteorol. 2010; 150(2):307–11. Jonckheere I, Fleck S, Nackaerts K, Muys B, Coppin P, Weiss M, et al.Review of methods for in situ leaf area index determination: Part i. theories, sensors and hemispherical photography. Agric Forest Meteorol. 2004; 121(1–2):19–35. Abbas SM, Muhammad A.Outdoor rgb-d slam performance in slow mine detection. In: Robotics; Proceedings of ROBOTIK 2012; 7th German Conference On. Germany: VDE: 2012. p. 1–6. Fiorani F, Schurr U.Future scenarios for plant phenotyping. Annu Rev Plant Biol. 2013; 64:267–91. Roscher R., Herzog K., Kunkel A., Kicherer A., Töpfer R., Förstner W.Automated image analysis framework for high-throughput determination of grapevine berry sizes using conditional random fields. Comput Electronics Agric. 2014; 100(0):148–58. Meyer GE.Machine vision identification of plants. Recent Trends for Enhancing the Diversity and Quality of Soybean Products. Krezhova D (ed.) Croatia: InTech; 2011. DOI: 10.5772/18690. Klodt M, Cremers D.High-resolution plant shape measurements from multi-view stereo reconstruction. 
In: ECCV Workshop on Computer Vision Problems in Plant Phenotyping. Zürich, Switzerland: 2014. Paproki A, Sirault X, Berry S, Furbank R, Fripp J.A novel mesh processing based technique for 3d plant analysis. BMC Plant Biol. 2012; 12(1):63. Schuchert T, Scharr H.Estimation of 3d object structure, motion and rotation based on 4d affine optical flow using a multi-camera array. In: Proceedings of the 11th European Conference on Computer Vision: Part IV. ECCV'10. Berlin, Heidelberg: Springer: 2010. p. 596–609. Biskup B, Scharr H, Schurr U, Rascher U. A stereo imaging system for measuring structural parameters of plant canopies. Plant Cell Environ. 2007; 10(30):1299–308. Hirschmüller H.Stereo processing by semiglobal matching and mutual information. IEEE Trans Pattern Anal Mach Intell. 2008; 30(2):328–41. Ranftl R, Gehrig S, Pock T, Bischof H.Pushing the Limits of Stereo Using Variational Stereo Estimation. In: IEEE Intelligent Vehicles Symposium. IEEE Intelligent Transportation Systems Society (ITSS): 2012. Lowe DG.Distinctive image features from scale-invariant keypoints. Int J Comput Vis (IJCV). 2004; 60(2):91–110. Snavely N, Seitz SM, Szeliski R.Photo tourism: Exploring photo collections in 3d. In: SIGGRAPH Conference Proceedings. New York: ACM Press: 2006. p. 835–46. Paulus S, Dupuis J, Mahlein A-K, Kuhlmann H.Surface feature based classification of plant organs from 3d laserscanned point clouds for plant phenotyping. BMC Bioinform. 2013; 14:238. Fusiello A, Irsara L.Quasi-euclidean epipolar rectification of uncalibrated images. Mach Vis Appl. 2011; 22(4):663–70. Ishikawa H.Exact optimization for markov random fields with convex priors. IEEE Trans Pattern Anal Mach Intell. 2003; 25(10):1333–6. Pock T, Schoenemann T, Graber G, Bischof H, Cremers D.A convex formulation of continuous multi-label problems. In: European Conference on Computer Vision (ECCV). Marseille, France: Springer-Verlag GmbH: 2008. Pock T, Cremers D, Bischof H, Chambolle A.An algorithm for minimizing the piecewise smooth mumford-shah functional. In: IEEE International Conference on Computer Vision (ICCV). Kyoto, Japan: 2009. Klodt M, Sturm J, Cremers D.Scale-aware object tracking with convex shape constraints on rgb-d images. In: German Conference on Pattern Recognition (GCPR). Saarbrücken, Germany: Springer.: 2013. This work was supported by the AgroClustEr: CROP.SENSe.net (FKZ 0315534D) which is funded by the German Federal Ministry of Education and Research (BMBF). The authors thank Rudolph Eibach (Julius Kühn-Institut, Siebeldingen, Germany) for providing phenotypic OIV and BBCH data from years ago. The authors gratefully thank Iris Fechter (Julius Kühn-Institut, Siebeldingen, Germany) and Katherine Petsch (Cold Spring Harbor Laboratory, New York, USA) for critically reading the manuscript. Department of Informatics, Technische Universität München, Boltzmannstraße 3, 85748, Garching, Germany Maria Klodt & Daniel Cremers Julius Kühn-Institute - Federal Research Centre for Cultivated Plants, Institute for Grapevine Breeding Geilweilerhof, 76833, Siebeldingen, Germany Katja Herzog & Reinhard Töpfer Search for Maria Klodt in: Search for Katja Herzog in: Search for Reinhard Töpfer in: Search for Daniel Cremers in: Correspondence to Maria Klodt. MK and KH designed the study and drafted the manuscript. MK carried out the programming and produced the results. KH aquired the data and interpreted the results. RT and DC directed the research and gave initial input. All authors read and approved the final manuscript. 
Sperm donation
Sperm donation is the provision by a man of his sperm with the intention that it be used in the artificial insemination or other 'fertility treatment' of a woman or women who are not his sexual partners, in order that they may become pregnant by him. The man is known as a 'sperm donor' and the sperm he provides is known as 'donor sperm' because the intention is that the man will give up all legal rights to any child produced from his sperm and will not be the legal father. However conception is achieved, the nature and course of the pregnancy will be the same as one achieved by sexual intercourse, and the sperm donor will be the biological father of every child born from his donations.[1] Sperm donation enables a man to father a child for third-party women and is therefore categorized as a form of third-party reproduction.
Sperm may be donated by the donor directly to the intended recipient woman, or through a sperm bank or fertility clinic. Pregnancies are usually achieved by using donor sperm in assisted reproductive technology (ART) techniques, which include artificial insemination (either by intracervical insemination (ICI) or intrauterine insemination (IUI) in a clinic, or intravaginal insemination at home). Less commonly, donor sperm may be used in in vitro fertilization (IVF). The primary recipients of donor sperm are single women, lesbian couples and heterosexual couples suffering from male infertility.[2]
Donor sperm and 'fertility treatments' using donor sperm may be obtained at a sperm bank or fertility clinic. Sperm banks or clinics may be subject to state or professional regulations, including restrictions on donor anonymity and the number of offspring that may be produced, and there may be other legal protections of the rights and responsibilities of both recipient and donor. Some sperm banks, either by choice or regulation, limit the amount of information available to potential recipients; a desire to obtain more information on donors is one reason why recipients may choose to use a known donor or a private donation (i.e. a donor whose identity is known to the recipient).[1]
A sperm donor is generally not intended to be the legal or de jure father of a child produced from his sperm. The law may, however, make provision in relation to legal fatherhood or the absence of a legal father. The law may also govern the fertility process through sperm donation in a fertility clinic. It may make provision as to whether a sperm donor may be anonymous or not, and it may give adult donor-conceived offspring the right to trace their biological father. In the past it was considered that the method of insemination was crucial to determining the legal responsibility of the male as the father. A recent case (see 'Natural insemination' below) has held that it is the purpose, rather than the method, of insemination which will determine responsibility. Laws regulating sperm donation address issues such as permissible reimbursement or payment to sperm donors, rights and responsibilities of the donor towards his biological offspring, the child's right to know his or her father's identity, and procedural issues.[3] Laws vary greatly from jurisdiction to jurisdiction. In general, laws are more likely to disregard the sperm donor's biological link to the child, so that he will have neither child support obligations nor rights to the child.
In the absence of specific legal protection, courts may order a sperm donor to pay child support or recognize his parental rights, and will invariably do so where the insemination is carried out by natural, as opposed to artificial means.[4][5][6] Laws in many jurisdictions limit the number of offspring that a sperm donor can give rise to, and who may be a recipient of donor sperm. Lawsuit over donor qualification In 2017, a lawsuit was brought in U.S. District Court for the Northern District of Illinois regarding autism diagnoses among multiple offspring of Donor-H898.[7] The suit asserts that false information was presented regarding a donor who should not have been considered an appropriate candidate for a sperm donation program because of a diagnosis of ADHD. Reportedly, the situation is being studied by some of the world's foremost experts in the genetics of autism because of the numbers of his offspring being diagnosed with autism. The purpose of sperm donation is to provide pregnancies for women whose male partner is infertile or, more commonly, for women who do not have a male partner. Direct sexual contact between the parties is avoided since the donor's sperm is placed in the woman's body by artificial means (but see Natural Insemination). Sperm donation preserves the sexual integrity of a recipient, but a woman who becomes pregnant by a sperm donor benefits from his reproductive capacity. Donor sperm is prepared for use in artificial insemination in intrauterine insemination (IUI) or intra-cervical insemination (ICI). Less commonly, donor sperm is prepared for use in other assisted reproductive techniques such as IVF and intracytoplasmic sperm injection (ICSI). Donor sperm may also be used in surrogacy arrangements either by artificially inseminating the surrogate (known as traditional surrogacy) or by implanting in a surrogate embryos which have been created by using donor sperm together with eggs from a donor or from the 'commissioning female' (known as gestational surrogacy).[8] Spare embryos from this process may be donated to other women or surrogates. Donor sperm may also be used for producing embryos with donor eggs which are then donated to a female who is not genetically related to the child she produces. Procedures of any kind, e.g., artificial insemination or IVF, using donor sperm to impregnate a female who is not the partner of, nor related to the male who provided the sperm, may be referred to as donor treatments.[1] A Swedish study concluded that 94% of potential donors would be willing to donate to single women and 85% would be willing to donate to lesbian single women or lesbian couples.[9] A review of two studies found that 50 to 68% of actual donors would donate for lesbian couples, and 40 to 64% would donate to single women.[9] A sperm donor may donate sperm privately or through a sperm bank, sperm agency, or other brokerage arrangement. Donations from private donors are most commonly carried out using artificial insemination. Generally, a male who provides sperm as a sperm donor gives up all legal and other rights over the biological children produced from his sperm.[10] Private arrangements may permit some degree of co-parenting although this will not strictly be 'sperm donation', and the enforceability of those agreements varies by jurisdiction. Donors may or may not be paid, according to local laws and agreed arrangements. Even in unpaid arrangements, expenses are often reimbursed. 
Depending on local law and on private arrangements, men may donate anonymously or agree to provide identifying information to their offspring in the future.[11] Private donations facilitated by an agency often use a "directed" donor, when a male directs that his sperm is to be used by a specific person. Non-anonymous donors are also called "known donors", "open donors" or "identity disclosure donors".[1] A review of surveys among donors came to the results that the media and advertising are most efficient in attracting donors, and that the internet is becoming increasingly important in this purpose.[9] Recruitment via couples with infertility problems in the social environment of the sperm donor does not seem to be important in recruitment overall.[9] Main article: Sperm bank A sperm donor will usually donate sperm to a sperm bank under a contract, which typically specifies the period during which the donor will be required to produce sperm, which generally ranges from six to 24 months depending on the number of pregnancies which the sperm bank intends to produce from the donor. If a sperm bank has access to world markets e.g. by direct sales, or sales to clinics outside their own jurisdiction, a male may donate for a longer period than two years, as the risk of consanguinity is reduced (although local laws vary widely). The contract may also specify the place and hours for donation, a requirement to notify the sperm bank in the case of acquiring a sexual infection, and the requirement not to have intercourse or to masturbate for a period of usually 2–3 days before making a donation.[12] Sperm provided by a sperm bank will be produced by a donor attending at the sperm bank's premises in order to ascertain the donor's identity on every occasion. The donor masturbates to provide an ejaculate or by the use of an electrical stimulator, although a special condom, known as a collection condom, may be used to collect the semen during sexual intercourse. The ejaculate is collected in a small container, which is usually extended with chemicals in order to provide a number of vials, each of which would be used for separate inseminations. The sperm is frozen and quarantined, usually for a period of six months, and the donor is re-tested prior to the sperm being used for artificial insemination.[1] The frozen vials will then be sold directly to a recipient or through a medical practitioner or fertility center and they will be used in fertility treatments. Where a woman becomes pregnant by a donor, that pregnancy and the subsequent birth must normally be reported to the sperm bank so that it may maintain a record of the number of pregnancies produced from each donor. Sperm agencies In some jurisdictions, sperm may be donated through an agency. The agency may recruit donors, usually via the Internet. Donors may undergo the same kind of checks and tests required by a sperm bank, although clinics and agencies are not necessarily subject to the same regulatory regimes. In the case of an agency, the sperm will be supplied to the recipient female fresh rather than frozen.[13] A female chooses a donor and notifies the agency when she requires donations. The agency notifies the donor who must supply his sperm on the appropriate days nominated by the recipient. The agency will usually provide the sperm donor with a male collection kit usually including a collection condom and a container for shipping the sperm. 
This is collected and delivered by courier and the female uses the donor's sperm to inseminate herself, typically without medical supervision. This process preserves anonymity and enables a donor to produce sperm in the privacy of his own home. A donor will generally produce samples once or twice during a recipient's fertile period, but a second sample each time may not have the same fecundity of the first sample because it is produced too soon after the first one. Pregnancy rates by this method vary more than those achieved by sperm banks or fertility clinics. Transit times may vary and these have a significant effect on sperm viability so that if a donor is not located near to a recipient female the sperm may deteriorate. However, the use of fresh, as opposed to frozen, semen will mean that a sample has a greater fecundity and can produce higher pregnancy rates. Sperm agencies may impose limits on the number of pregnancies achieved from each donor, but in practice this is more difficult to achieve than for sperm banks where the whole process may be more regulated. Most sperm donors only donate for a limited period, however, and since sperm supplied by a sperm agency is not processed into a number of different vials, there is a practical limit on the number of pregnancies which are usually produced in this way. A sperm agency will, for the same reason, be less likely than a sperm bank to enable a female to have subsequent children by the same donor. Sperm agencies are largely unregulated and, because the sperm is not quarantined, may carry sexually transmitted diseases. This lack of regulation has led to authorities in some jurisdictions bringing legal action against sperm agencies. Agencies typically insist on STI testing for donors, but such tests cannot detect recent infections. Donors providing sperm in this way may not be protected by laws which apply to donations through a sperm bank or fertility clinic and will, if traced, be regarded as the legal father of each child produced.[13] Private or "directed" donations Couples or individuals who need insemination by a third-party may seek assistance privately and directly from a friend or family member, or may obtain a "private" or "directed" donation by advertising or through a broker. A number of web sites seek to link recipients with sperm donors, while advertisements in gay and lesbian publications are common. Recipients may already know the donor, or if arranged through a broker, the donor may meet the recipients and become known to them. Some brokers facilitate contact that maintains semi-anonymous identities for legal reasons. Where a private or directed donation is used, sperm need not be frozen. Private donations may be free of charge - avoiding the significant costs of a more medicalised insemination - and fresh rather than frozen semen is generally deemed to increase the chances of pregnancy. However, they also carry higher risks associated with unscreened sexual or body fluid contact. Legal treatment of donors varies across jurisdictions, and in most jurisdictions (e.g., Sweden)[14] personal and directed donors lack legal safeguards that may be available to anonymous donors. However, the laws of some countries (e.g. New Zealand) recognize written agreements between donors and recipients in a similar way to donations through a sperm bank.[13] Kits are available, usually on-line, for artificial insemination for private donor use, and these kits generally include a collection pot, a syringe, ovulation tests and pregnancy tests. 
A vaginal speculum and a soft cup may also be used. STI testing kits are also available but these only produce a 'snap-shot' result and, since sperm will not be frozen and quarantined, there will be risks associated with it. Natural insemination Insemination through sexual intercourse is known as natural insemination (NI). Where natural insemination is carried out by a person who is not the woman's usual sexual partner, and in circumstances where the express intention is to secure a pregnancy, this may be referred to as 'sperm donation by natural insemination'. Traditionally, a woman who becomes pregnant through natural insemination has always had a legal right to claim child support from the donor and the donor a legal right to the custody of the child. Conceiving through natural insemination is considered a natural process, so the biological father has also been seen as the legal and social father and was liable for child support and custody rights of the child.[15] The law therefore made a fine distinction based on the method of conception: the biological relationship between the father and the child and the reason for the pregnancy having been achieved will be the same whether the child was conceived naturally or by artificial means, but the legal position has been different.[16] In some countries and in some situations, sperm donors may be legally liable for any child they produce, but with NI the legal risk of paternity for a donor has always been absolute. Natural insemination donors will therefore often donate without revealing their identity. A case in 2019 in the Canadian province of Ontario has, however thrown doubt on this position. That case held that where the parties agreed in advance of the conception that the resulting child would not be the legal responsibility of the man, the courts would uphold that agreement. The court held that the method of conception was irrelevant: it was the purpose of it which mattered. Where an artificial means of conception is used, the reproductive integrity of the recipient woman will not be preserved, and the purpose of preserving sexual integrity by employing artificial means of insemination will not over-ride this effect. Some private sperm donors offer both natural and artificial insemination, or they may offer natural insemination after attempts to achieve conception by artificial insemination have failed. Many sperm donors are influenced by the fact that a woman who is not the donor's usual sexual partner will carry his child whatever the means of conception, and that for this reason, the actual method of insemination is irrelevant. Women may seek natural insemination for various reasons including the desire by them for a "natural" conception.[17][18] Natural insemination by a donor usually avoids the need for costly medical procedures that may require the intervention of third parties. It may lack some of the safety precautions and screenings usually built into the artificial insemination process[19] but proponents claim that it produces higher pregnancy rates.[13][20] A more 'natural' conception does not involve the intervention and intrusion of third parties. However, it has not been medically proved that natural insemination has an increased chance of pregnancy. 
NI is generally only carried out at the female's fertile time, as with other methods of insemination, in order to achieve the best chance of a pregnancy.[21] A variation of NI is PI, or partial intercourse, where penetration by the donor takes place immediately before ejaculation, thus avoiding prolonged physical contact between the parties. Because NI is an essentially private matter, the extent of its popularity is unknown. However, private on-line advertisements and social media comments indicate that it is increasingly used as a means of sperm donation.
Sperm bank processes
A sperm donor is usually advised not to ejaculate for two to three days before providing the sample, to increase the sperm count. A sperm donor produces and collects sperm at a sperm bank or clinic by masturbation or during sexual intercourse with the use of a collection condom.[12]
Preparing the sperm
Sperm banks and clinics may "wash" the sperm sample to extract sperm from the rest of the material in the semen. Unwashed semen may only be used for ICI (intra-cervical) inseminations, to avoid cramping, or for IVF/ICSI procedures. It may be washed after thawing for use in IUI procedures. A cryoprotectant semen extender is added if the sperm is to be placed in frozen storage in liquid nitrogen, and the sample is then frozen in a number of vials or straws.[22] One sample will be divided into 1–20 vials or straws depending on the quantity of the ejaculate, whether the sample is washed or unwashed, and whether it is being prepared for IVF use. Following analysis of an individual donor's sperm, straws or vials may be prepared which contain differing amounts of motile sperm post-thaw. The number of sperm in a straw prepared for IVF use, for example, will be significantly lower than the number of motile sperm in a straw prepared for ICI or IUI, and there will therefore be more IVF straws per ejaculate. Following the necessary quarantine period, the samples are thawed and used to inseminate women through artificial insemination or other ART treatments. Sperm banks typically screen potential donors for genetic diseases, chromosomal abnormalities and sexually transmitted infections that may be transmitted through sperm. The screening procedure generally also includes a quarantine period, in which the samples are frozen and stored for at least six months, after which the donor is re-tested for sexually transmitted infections (STIs). This is to ensure no new infections have been acquired or have developed during the period of donation. Provided the result is negative, the sperm samples can be released from quarantine and used in treatments. Children conceived through sperm donation have a birth defect rate of almost a fifth of that of the general population.[23]
Samples required per donor offspring
The number of donor samples (ejaculates) that is required to help give rise to a child varies substantially from donor to donor, as well as from clinic to clinic. However, the following equations generalize the main factors involved. For intracervical insemination:

N = \frac{V_s \times c \times r_s}{n_r}

[Figure: approximate pregnancy rate (r_s) as a function of the amount of sperm used per cycle (n_r). Values are for intrauterine insemination, with sperm number given as total sperm count, which may be approximately twice the total motile sperm count. Older data; rates are likely higher today.]

where:
N is the number of children a single sample can help give rise to;
V_s is the volume of a sample (ejaculate), usually between 1.0 mL and 6.5 mL;[24]
c is the concentration of motile sperm in a sample after freezing and thawing, approximately 5–20 million per mL, but varying substantially;
r_s is the pregnancy rate per cycle, between 10% and 35%;[25][26]
n_r is the total motile sperm count recommended for vaginal insemination (VI) or intra-cervical insemination (ICI), approximately 20 million.[27]
The pregnancy rate increases with an increasing number of motile sperm used, but only up to a certain degree, after which other factors become limiting. For a derivation of the equation, see Artificial insemination § Samples per child. With these numbers, one sample would on average help give rise to 0.1–0.6 children; that is, it actually takes on average 2–5 samples to make a child. For intrauterine insemination, a centrifugation fraction (f_c) may be added to the equation:

N = \frac{V_s \times f_c \times c \times r_s}{n_r}

where f_c is the fraction of the volume that remains after centrifugation of the sample, which may be about a half (0.5) to a third (0.33). Only 5 million motile sperm may be needed per cycle with IUI (n_r = 5 million).[25] Thus, only 1–3 samples may be needed for a child if used for IUI.
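To make the arithmetic concrete, the following is a minimal Python sketch that evaluates the sample-yield equations above. The function name, parameter names and the mid-range example values are illustrative assumptions drawn from the ranges quoted in the text, not taken from any published calculator.

```python
def children_per_sample(volume_ml, motile_conc_per_ml, pregnancy_rate,
                        required_motile_sperm, centrifugation_fraction=1.0):
    """Expected children per ejaculate: N = (V_s * f_c * c * r_s) / n_r.

    volume_ml               -- sample volume V_s in mL (roughly 1.0-6.5 mL)
    motile_conc_per_ml      -- post-thaw motile sperm concentration c, per mL
    pregnancy_rate          -- pregnancy rate per cycle r_s (e.g. 0.10-0.35)
    required_motile_sperm   -- total motile sperm needed per insemination, n_r
    centrifugation_fraction -- f_c, volume fraction kept after centrifugation
                               (1.0 for ICI; about 0.33-0.5 for IUI)
    """
    return (volume_ml * centrifugation_fraction * motile_conc_per_ml
            * pregnancy_rate) / required_motile_sperm


# Illustrative mid-range values from the text above (assumed, not measured).
n_ici = children_per_sample(volume_ml=3.0, motile_conc_per_ml=10e6,
                            pregnancy_rate=0.15, required_motile_sperm=20e6)
n_iui = children_per_sample(volume_ml=3.0, motile_conc_per_ml=10e6,
                            pregnancy_rate=0.15, required_motile_sperm=5e6,
                            centrifugation_fraction=0.4)

print(f"ICI: {n_ici:.2f} children per sample (~{1 / n_ici:.1f} samples per child)")
print(f"IUI: {n_iui:.2f} children per sample (~{1 / n_iui:.1f} samples per child)")
```

With these example inputs the sketch gives roughly 0.2 children per sample for ICI (about 4–5 samples per child) and 0.36 for IUI (about 3 samples per child), consistent with the ranges quoted above.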
Using ART treatments such as IVF can result in one donor sample (or ejaculate) producing on average considerably more than one birth. However, the actual number of births per sample will depend on the ART method used, the age and medical condition of the female bearing the child, and the quality of the embryos produced by fertilization. Donor sperm is less commonly used for IVF treatments than for artificial insemination. This is because IVF treatments are usually required only when there is a problem with the female conceiving, or where there is a 'male factor problem' involving the female's partner. Donor sperm is also used for IVF in surrogacy arrangements, where an embryo may be created in an IVF procedure using donor sperm and then implanted in a surrogate. Where IVF treatments are employed using donor sperm, surplus embryos may be donated to other women or couples and used in embryo transfer procedures. When donor sperm is used for IVF treatments, there is a risk that large numbers of children will be born from a single donor, since a single ejaculate may produce up to 20 straws for IVF use. A single straw can fertilise a number of eggs, and these can have a 40% to 50% pregnancy rate. 'Spare' embryos from donor treatments are frequently donated to other women or couples. Many sperm banks therefore limit the amount of semen from each donor which is prepared for IVF use, or they may restrict the period of time for which such a donor donates his sperm to perhaps as little as three months (about nine or ten ejaculates).
Choosing donors
Information about donor
In the US, sperm banks maintain lists or catalogues of donors which provide basic information such as racial origin, skin color, height, weight, color of eyes, and blood group.[28] Some of these catalogues are available via the Internet, while others are only made available to patients when they apply for treatment. Some sperm banks make additional information about each donor available for an additional fee, and others make additional basic information known to children produced from donors when those children reach the age of eighteen. Some clinics offer "exclusive donors" whose sperm is only used to produce pregnancies for one recipient female.
How accurate this is, or can be, is not known, and neither is it known whether the information produced by sperm banks, or by the donors themselves, is true. Many sperm banks will, however, carry out checks to verify the information requested, such as checking the identity of the donor and contacting his own doctor to verify medical details. Simply because such information is not verifiable does not imply that it is in any way inaccurate, and a sperm bank will rely upon its reputation which, in turn, will be based upon its success rate and upon the accuracy of the information about its donors which it makes available. In the UK, most donors are anonymous at the point of donation and recipients can only see non-identifying information about their donor (height, weight, ethnicity, etc.). Donors need to provide identifying information to the clinic and clinics will usually ask the donor's GP to confirm any medical details they have been given. Donors are asked to provide a pen portrait of themselves which is held by the HFEA and can be obtained by the adult conceived from the donation at the age of 16, along with identifying information such as the donor's name and last known address at 18. Known donation is permitted and it is not uncommon for family or friends to donate to a recipient couple. Qualities that potential recipients typically prefer in donors include the donors being tall, college educated, and with a consistently high sperm count.[29] A review came to the result that 68% of donors had given information to the clinical staff regarding physical characteristics and education but only 16% had provided additional information such as hereditary aptitudes and temperament or character.[9] Other screening criteria Sexually active gay men are prohibited or discouraged from donating in some countries, including the US. Sperm banks also screen out some potential donors based on height, baldness, and family medical history.[28] Number of offspring Where a donor donates sperm through a sperm bank, the sperm bank will generally undertake a number of checks to ensure that the donor produces sperm of sufficient quantity and quality and that the donor is healthy and will not pass diseases through the use of his sperm. The donor's sperm must also withstand the freezing and thawing process necessary to store and quarantine the sperm. The cost to the sperm bank for such tests is considerable, which normally means that clinics may use the same donor to produce a number of pregnancies in multiple women.[30] The number of children permitted from a single donor varies by law and practice. These laws are designed to protect the children produced by sperm donation as well as the donor's natural children from consanguinity in later life: they are not intended to protect the donor himself and those donating sperm will be aware that their donations may give rise to numerous pregnancies in different jurisdictions. Such laws, where they exist, vary from jurisdiction to jurisdiction, and a sperm bank may also impose its own limits. The latter will be based on the reports of pregnancies which the sperm bank receives, although this relies upon the accuracy of the returns and the actual number of pregnancies may therefore be somewhat higher. Nevertheless, sperm banks frequently impose a lower limit on geographical numbers than some jurisdictions and may also limit the overall number of pregnancies permitted from a single donor. 
The limitation on the number of children which a donor's sperm may give rise to is usually expressed in terms of 'families', on the expectation that children within the family are prohibited from sexual relations under incest laws. In effect, the term family means a "woman" and usually includes the donor's partner or ex-partner, so that multiple donations to the same woman are not counted in the limit. The limits usually apply within one jurisdiction, so that donor sperm may be used in other jurisdictions. Where a woman has had a child by a particular donor, there is usually no limit on the number of subsequent pregnancies which that woman may have by that same donor. There is no limit to the number of offspring which may be produced from private donors. Despite laws limiting the number of offspring, some donors may produce substantial numbers[31] of children, particularly where they donate through different clinics, where sperm is onsold or is exported to different jurisdictions, and where countries or jurisdictions do not have a central register of donors. Sperm agencies, in contrast to sperm banks, rarely impose or enforce limits on the number of children which may be produced by a single donor partly because they are not empowered to demand a report of a pregnancy from recipients and are rarely, if ever, able to guarantee that a female may have a subsequent sibling by the donor who was the biological father of her first or earlier children. In the media, there have been reports of some donors producing anywhere from over 40 offspring [32] to several hundred or more.[33] Where a female wishes to conceive additional children by sperm donation, she will often wish to use the same donor. The advantage of having subsequent children by the same donor is that these will be full biological siblings, having the same biological father and mother. Many sperm banks offer a service of storing sperm for future pregnancies, but few will otherwise guarantee that sperm from the original donor will be available. Sperm banks rarely impose limits on the numbers of second or subsequent siblings. Even where there are limits on the use of sperm by a particular donor to a defined number of families (as in the UK) the actual number of children produced from each donor will often be far greater. Since 2000, donor conceived people have been locating their biological siblings and even their donor through web services such as the Donor Sibling Registry as well as DNA testing services such as Ancestry.com and 23andMe. By using these services, donors can find offspring despite the fact that they may have donated anonymously.[34][35][30] Donor payment The majority of donors who donate through a sperm bank receive some form of payment, although this is rarely a significant amount. A review including 29 studies from 9 countries found that the amount of money donors received varied from $10 to €70 per donation or sample.[9] The payments vary from the situation in the United Kingdom where donors are only entitled to their expenses, to the situation with some US sperm banks where a donor receives a set fee for each donation plus an additional amount for each vial stored. At one prominent California sperm bank for example, TSBC, donors receive roughly $50 for each donation which has acceptable motility/survival rates both at donation and at a test-thaw a couple of days later. 
Because of the requirement for the two-day abstinence period before donation, and geographical factors which usually require the donor to travel, it is not a viable way to earn a significant income. Some private donors may seek remuneration although others donate for altruistic reasons. According to the EU Tissue Directive donors in EU may only receive compensation, which is strictly limited to making good the expenses and inconveniences related to the donation. A survey among sperm donors in Cryos International Sperm bank [36] showed that altruistic as well as financial motives were the main factors for becoming a donor. However, when the compensation was increased 100% in 2004 (to DKK 500) it did not significantly affect the numbers of new donor candidates coming in or the frequency of donations from the existing donors. When the compensation was reduced to the previous level (DKK 250) again one year later in 2005 there was no effect either. This led to the assumption that altruism is the main motive and that financial compensation is secondary. Equipment to collect, freeze and store sperm is available to the public notably through certain US outlets, and some donors process and store their own sperm which they then sell via the Internet. The selling price of processed and stored sperm is considerably more than the sums received by donors. Treatments with donor sperm are generally expensive and are seldom available free of charge through national health services. Sperm banks often package treatments into e.g. three cycles, and in cases of IVF or other ART treatments, they may reduce the charge if a patient donates any spare embryos which are produced through the treatment. There is often more demand for fertility treatment with donor sperm than there is donor sperm available, and this has the effect of keeping the cost of such treatments reasonably high. Onselling Main article: Onselling of sperm There is a market for vials of processed sperm and for various reasons a sperm bank may sell-on stocks of vials which it holds (known as 'onselling'). Onselling enables a sperm bank to maximize the sale and disposal of sperm samples which it has processed. The reasons for onselling may be where part of, or even the main business of, a particular sperm bank is to process and store sperm rather than to use it in fertility treatments, or where a sperm bank is able to collect and store more sperm than it can use within nationally set limits. In the latter case, a sperm bank may sell on sperm from a particular donor for use in another jurisdiction after the number of pregnancies achieved from that donor has reached its national maximum. Informing the child Many donees do not inform the child that they were conceived through sperm donation, or, when non-anonymous donor sperm has been used, they do not tell the child until they are old enough for the clinic to provide contact information about the donor. Some believe that it is a human right for a person to know who their biological mother and father are, and thus it should be illegal to conceal this information in any way and at any time. 
For donor conceived children who find out after a long period of secrecy, their main grief is usually not the fact that they are not the genetic child of the couple who have raised them, but the fact that the parent or parents have kept information from or lied to them, causing loss of trust.[37] There are certain circumstances where the child very likely should be told: When many relatives know about the insemination, so that the child might find it out from somebody else.[37] When the adoptive father carries a significant genetic disease, relieving the child from fear of being a carrier.[37] The parents' decision-making process of telling the child is influenced by many intrapersonal factors (such as personal confidence), interpersonal factors, as well as social and family life cycle factors.[38] For example, health care staff and support groups have been demonstrated to influence the decision to disclose the procedure.[38] The appropriate age of the child at disclosure is most commonly given at between 7 and 11 years.[38] Single mothers and lesbian couples are more likely to disclose from a young age. Donor conceived children in heterosexual coupled families are more likely to find out about their disclosure from a third party.[39] Families sharing same donor Having contact and meeting among families sharing the same donor generally has positive effects.[40][41] It gives the child an extended family and helps give the child a sense of identity[41] by answering questions about the donor.[40] It is more common among open identity-families headed by single men/women.[40] Less than 1% of those seeking donor-siblings find it a negative experience, and in such cases it is mostly where the parents have disagreed with each other about how the relationship should proceed.[42] Other family members Parents of donors, who are the grandparents of donor offspring and may therefore be the oldest surviving progenitors, may regard the donated genetic contribution as a family asset, and may regard the donor conceived people as their grandchildren.[43] A review came to the result that a minority of actual donors involved their partner in the decision-making process of becoming a donor.[9] In one study, 25% of donors felt they needed permission from their partner.[9] In another study, however, 37% of donors with a partner did not approve of a consent form for partners and rather felt that donors should make their own decisions.[9] In a Swedish study, donors reported either enthusiastic or neutral responses from their partners concerning sperm donation.[9] It is considered common for donors to not tell their spouses that they are or have been sperm donors.[44] Mother-child relation Studies have indicated that donor insemination mothers show greater emotional involvement with their child, and they enjoy motherhood more than mothers by natural conception and adoption. Compared to mothers by natural conception, donor insemination mothers tend to show higher levels of disciplinary aggression.[39] Studies have indicated that donor insemination fathers express more warmth and emotional involvement than fathers by natural conception and adoption, enjoy fatherhood more, and are less involved in disciplining their adolescent. Some donor insemination parents become overly involved with their children.[39] Adolescents born through sperm donation to lesbian mothers have reported themselves to be academically successful, with active friendship networks, strong family bonds, and overall high ratings of well-being. 
It is estimated that over 80% of adolescents feel they can confide in their mothers, and almost all regard their mothers to be good role models.[39] Motivation vs reluctance to donate A systematic review came to the result that altruism and financial compensation are the main motivations to donate, and to a lesser degree procreation or genetic fatherhood and questions about the donor's own fertility.[9] Financial compensation is generally more prevalent than altruism as a motivation among donors in countries where the compensation is large, which is largely explained by a larger number of economically driven people becoming donors in such countries.[9] Among men who do not donate, the main reason thereof has been stated to be a lack of motivation rather than concerns about the donation.[9] Reluctance to donate may be caused by a sense of ownership and responsibility for the well-being of the offspring.[45] Support for donors In the UK, the National Gamete Donation Trust[46] is a charity which provides information, advice and support for people wishing to become egg, sperm or embryo donors. The Trust runs a national helpline and online discussion list for donors to talk to each other. In one Danish study, 40% of donors felt happy thinking about possible offspring, but 40% of donors sometimes worried about the future of resulting offspring.[9] A review came to the result that one in three actual donors would like counselling to address certain implications of their donation, expecting that counselling could help them to give their decision some thought and to look at all the involved parties in the donation.[9] A systematic review in 2012 came to the conclusion that the psychosocial needs and experiences of the donors, and their follow-up and counselling are largely neglected in studies on sperm donation.[9] Ethical and legal issues Anonymous sperm donation occurs under the condition that recipients and offspring will never learn the identity of the donor. A non-anonymous donor, however, will disclose his identity to recipients. A donor who makes a non-anonymous sperm donation is termed a known donor, an open identity donor, or an identity release donor. Non-anonymous sperm donors are, to a substantially higher degree, driven by altruistic motives for their donations.[47] Even in the case of anonymous donation, some information about the donor may be released to recipients at the time of treatment. Limited donor information includes height, weight, eye, skin and hair colour. In Sweden, this is the extent of disclosed information. In the US, however, additional information may be given, such as a comprehensive biography and sound/video samples. Several jurisdictions (e.g., Sweden, Norway, the Netherlands, Britain, Switzerland, Australia and New Zealand, and others) only allow non-anonymous sperm donation. This is generally based on the principle that a child has a right to know his or her biological origins. In 2013, a German court precedent was set based on a case brought by a 21-year-old woman.[48] Generally, these jurisdictions require sperm banks to keep up-to-date records and to release identifying information about the donor to his offspring after they reach a certain age (15–18). See Sperm donation laws by country. Attitudes towards anonymity For most sperm recipients, anonymity of the donor is not of major importance at the obtainment or tryer-stage.[47] Anonymous sperm is often less expensive. 
Another reason that recipients choose anonymous donors is concern about the role that the donor or the child may want the donor to play in the child's life. Sperm recipients may prefer a non-anonymous donor if they anticipate disclosing donor conception to their child and anticipate the child's desire to seek more information about their donor in the future. A Dutch study found that lesbian couples are significantly more likely (98%) to choose non-anonymous donors than heterosexual couples (63%). Of the heterosexual couples that opted for anonymous donation, 83% intended never to inform their child of their conception via sperm donation.[49] For children conceived by an anonymous donor, the impossibility of contacting a biological father or the inability to find information about him can potentially be psychologically burdensome.[50] One study estimated that approximately 67% of adolescent donor conceived children with an identity-release donor plan to contact him when they turn 18.[39] Among donors and potential donors Among donors, a systematic review of 29 studies from nine countries concluded that 20–50% of donors would still be willing to donate even if anonymity could not be guaranteed.[9] Between 40 and 97% of donors agree to release non-identifying information such as physical characteristics and level of education.[9] The proportion of actual donors wishing for contact with their offspring varies between 10 and 88%.[9] Most donors are not open to contact with offspring, although more open attitudes are observed among single and homosexual donors.[9] About half of donors feel that degree of involvement should be decided by the intended parents.[9] Some of the donors prefer contact with offspring in a non-visible way, such as where the child can ask questions but the donor will not reveal his identity.[9] One study recruited donors through the Donor Sibling Registry who wanted contact with offspring or who had already made contact with offspring. It resulted that none of the donors said that there was "no relationship", a third of donors felt it was a special relationship, almost like a very good friend, and a quarter felt it was merely a genetic bond and nothing more. 
Fifteen percent of actual donors considered offspring to be "their own children".[9] On the whole, donors feel that the first step towards contact should come from offspring rather than parents or the donor himself.[9] Some even say that it is the moral responsibility of the donor not to seek contact with offspring.[9] The same review indicated that up to 37% of donors reported changes in their attitude towards anonymity before and after donation, with one in four being prepared to be more open about themselves after the donation than before (as a "potential donor").[9] Among potential donors, 30–46% of potential donors would still be willing to donate even if anonymity could not be guaranteed.[9] Still, more than 75% of these potential donors felt positive towards releasing non-identifying information to offspring, such as physical characteristics and level of education.[9] Single or homosexual men are significantly more inclined to release their identity than married, heterosexual men.[9] Potential donors with children are less inclined to want to meet offspring than potential donors without children (9 versus 30% in the review).[9] Potential donors in a relationship are less inclined to consider contact with offspring than single potential donors (7 versus 28% in the review).[9] From US data, 20% would actively want to know and meet offspring and 40% would not object if the child wished to meet but would not solicit a meeting themselves.[9] From Swedish data, where only non-anonymous donation is permitted in clinics, 87% of potential donors had a positive attitude towards future contact with offspring, although 80% of these potential donors did not feel that the donor had any moral responsibilities for the child later in life.[9] Also from UK data, 80% of potential donors did not feel responsible for whatever happened with their sperm after the donation.[9] With variation between different studies, between 33% and 94% of potential donors want to know at least whether or not the donation resulted in offspring.[9] Some of these potential donors merely wanted to know if a pregnancy had been achieved but did not want to know any specific information about the offspring (e.g. sex, date of birth).[9] Other potential donors felt that knowing the outcome of the donation made the experience more meaningful.[9] In comparison, a German study came to the result that 11% of donors actually asked about the outcome in the clinic where they donated.[9] An Australian study concluded that potential donors who would still be willing to donate without a guarantee of anonymity were not automatically more open to extended or intimate contact with offspring.[9] Donor tracking Even when donors choose to be anonymous, offspring may still find ways to learn more about their biological origins. Registries and DNA databases have been developed for this purpose. Registries that help donor-conceived offspring identify half-siblings from other mothers also help avoid accidental incest in adulthood.[51][52] Tracking by registries Further information: Donor sibling registration Offspring of anonymous donors may often have the ability to obtain their biological father's donor number from the fertility clinic or sperm bank used for their birth. They may then share their number on a registry. By finding shared donor numbers, offspring may find their genetic half-siblings. 
The donor may also find his number on a registry and choose to make contact with his offspring or otherwise reveal his identity.[51]
Tracking by DNA databases
Even sperm donors who have chosen anonymity and not to contact their offspring through a registry are now increasingly being traced by their children. Improved DNA technology has brought into question the possibility of assuring a donor's anonymity. For example, at least one child found his biological father using his own DNA test and internet research, and was able to identify and contact his anonymous donor.[53]
Fertility tourism and international sperm markets
Further information: Fertility tourism § Donor insemination destinations
Different factors motivate individuals to seek sperm from outside their home state. For example, some jurisdictions do not allow unmarried women to receive donor sperm. Jurisdictional regulatory choices, as well as cultural factors that discourage sperm donation, have also led to international fertility tourism and sperm markets. When Sweden banned anonymous sperm donation in 1980, the number of active sperm donors dropped from approximately 200 to 30.[54] Sweden now has an 18-month waiting list for donor sperm.[47] At least 250[47] Swedish sperm recipients travel to Denmark annually for insemination, partly because Denmark also allows single women to be inseminated.[55] After the United Kingdom ended anonymous sperm donation in 2005, the number of sperm donors went up, reversing a three-year decline.[56] However, there is still a shortage,[57][56] and some doctors have suggested raising the limit of children per donor.[58] Some UK clinics import sperm from Scandinavia. Despite the shortage, sperm exports from the UK are legal and donors may remain anonymous in this context. However, the HFEA does impose safeguards on the export of sperm, such as requiring that it be exported to fertility clinics only and that the result of any treatment be traceable. Sperm banks impose their own limits on the number of pregnancies obtained from exported sperm. Since 2009, the import of sperm via registered clinics for use in the UK has been authorised by the HFEA. The sperm must have been processed, stored and quarantined in compliance with UK regulations, and the donors must have agreed to be identified when the children produced with their sperm reach the age of eighteen. The number of children produced from such donors in the UK will, of course, be subject to HFEA rules (i.e. currently a limit of ten families), but the donors' sperm may be used worldwide in accordance with the clinic's own limit of one child per 200,000 of population, subject to any national or local limits which apply. By 2014 the UK was importing nearly 40% of its sperm requirements, up from 10% in 2005.[59] In 2018 it was reported that almost half of the sperm imported into Britain came from Denmark (3,000 units).[60] Korean Bioethics Law prohibits the selling and buying of sperm between clinics, and each donor may only help give rise to a child for one single couple.[61] The country suffers from a shortage of donor sperm.
Canada prohibits payment for gamete donation beyond the reimbursement of expenses.[62] Many Canadians import purchased sperm from the United States.[63] The United States, which permits monetary compensation for sperm donors, saw an increase in sperm donors during the late 2000s recession.[64]
Social controversy
The use of sperm donation is most common among single women and lesbians.[2] Some sperm banks and fertility clinics, particularly in the US, Denmark and the UK, report a predominance of women from these groups among those treated with donor sperm. This raises many ethical issues around the ideals of conventional parenting and has wider implications for society as a whole, including the role of men as parents, family support for children, and financial support for women with children.[65] The growth of sperm banks and fertility clinics, the use of sperm agencies and the availability of anonymous donor sperm have served to make sperm donation a more respectable, and therefore a more socially acceptable, procedure.[66] The intervention of doctors and others may be seen as making the whole process a respectable, merely medical procedure which raises no moral issues, where donor inseminations may be referred to as 'treatments' and donor children as 'resulting from the use of a donor's sperm' or 'born following donation', and subsequent children may be described as 'born using the same donor' rather than as biological children of the same male. A 2009 study indicated that both men and women view the use of donor sperm with more skepticism than the use of donor eggs, suggesting a unique underlying perception regarding the use of male donor gametes.[67] As acceptance of sperm donation has generally increased, so has the level of questioning as to whether 'artificial' means of conception are necessary, and some donor children, too, have been critical of the procedures which were taken to bring them into the world.[52] Against this background there has been an increase in the use of NI as a method of sperm donation. However, while some donors may be willing to offer this as a method of impregnation, it has many critics and it also raises further legal and social challenges. Some donor children grow up wishing to find out who their fathers were, but others may be wary of embarking on such a search since they fear they may find scores of half-siblings who have been produced from the same sperm donor. Even though local laws or rules may restrict the number of offspring from a single donor, there are no worldwide limitations or controls, and most sperm banks will onsell and export all their remaining stocks of vials of sperm when local maxima have been attained (see 'Onselling' above). One item of research has suggested that donor children have a greater likelihood of substance abuse, mental illness and criminal behavior when grown.[68] However, its motivation and credibility have been questioned.[69] Coming forward publicly with problems is difficult for donor-conceived people as these issues are very personal and a public statement may attract criticism. Additionally, it may upset their parents if they speak out. A website called Anonymous Us[70] has been set up where they can post details of their experiences anonymously; it contains many accounts of problems.
Religious responses Further information: Religious response to assisted reproductive technology There are a wide range of religious responses to sperm donation, with some religious thinkers entirely in support of the use of donor sperm for pregnancy, some who support its use under certain conditions, and some entirely against. Catholicism officially opposes both the donation of sperm and the use of donor sperm on the basis that it compromises the sexual unity of the marital relationship and the idea "that the procreation of a human person be brought about as the fruit of the conjugal act specific to the love between spouses."[71] Jewish thinkers hold a broad range of positions on sperm donation. Some Jewish communities are totally against sperm donation from donors that are not the husbands of the recipient, while others have approved the use of donor insemination in some form, while liberal communities accept it entirely.[72][73][74] The Southern Baptist Convention holds that sperm donation from a third party violates the marital bond.[75] In 1884, Professor William Pancoast of Philadelphia's Jefferson Medical College performed an insemination on the wife of a sterile Quaker merchant, which may be the first insemination procedure that resulted in the birth of a child. Instead of taking the sperm from the husband, the professor chloroformed the woman, then let his medical students vote which one of among them was "best looking", with that elected one providing the sperm that was then syringed into her cervix.[76] At the husband's request, his wife was never told how she became pregnant. As a result of this experiment, the merchant's wife gave birth to a son, who became the first known child by donor insemination. The case was not revealed until 1909, when a letter by Addison Davis Hard appeared in the American journal Medical World, highlighting the procedure.[77] Since then, a few doctors began to perform private donor insemination. Such procedures were regarded as intensely private, if not secret, by the parties involved. Records were usually not maintained so that donors could not be identified for paternity proceedings. Technology permitted the use of fresh sperm only, and it is thought that sperm largely came from the doctors and their male staff, although occasionally they would engage private donors who were able to donate on short notice on a regular basis.[78] In 1945, Mary Barton and others published an article in the British Medical Journal on sperm donation.[79] Barton, a gynecologist, founded a clinic in London which offered artificial insemination using donor sperm for women whose husbands were infertile. This clinic helped conceive 1,500 babies of which Mary Barton's husband, Bertold Weisner, probably fathered about 600.[80] The first successful human pregnancy using frozen sperm was in 1953.[81][82] "Donor insemination remained virtually unknown to the public until 1954".[83] In that year the first comprehensive account of the process was published in The British Medical Journal.[84] Donor insemination provoked heated public debate. In the United Kingdom, the Archbishop of Canterbury established the first in a long procession of commissions that, over the years, inquired into the practice. It was at first condemned by the Lambeth Conference, which recommended that it be made a criminal offence. A Parliamentary Commission agreed. 
In Italy, the Pope declared donor insemination a sin, and proposed that anyone using the procedure be sent to prison.[84] Sperm donation gained popularity in the 1980s and 1990s.[85] In many western countries, sperm donation is now a largely accepted procedure. In the US and elsewhere, there are a large number of sperm banks. A sperm bank in the US pioneered the use of on-line search catalogues for donor sperm, and these facilities are now widely available on the websites of sperm banks and fertility clinics.[65] Recent years have also seen sperm donation become relatively less popular among heterosexual couples, who now have access to more sophisticated fertility treatments, and more popular among single women and lesbian couples[2] - whose access to the procedure is relatively new and still prohibited in some jurisdictions.[86] In 1954, the Superior Court of Cook County, Illinois granted a husband a divorce because, regardless of the husband's consent, the woman's donor insemination constituted adultery, and that donor insemination was "contrary to public policy and good morals, and considered adultery on the mother's part." The ruling went on to say that, "A child so conceived, was born out of wedlock and therefore illegitimate. As such, it is the child of the mother, and the father has no rights or interest in said child."[87] However, the following year, Georgia became the first state to pass a statute legitimizing children conceived by donor insemination, on the condition that both the husband and wife consented in advance in writing to the procedure.[88] In 1973, the Commissioners on Uniform State Laws, and a year later, the American Bar Association, approved the Uniform Parentage Act. This act provides that if a wife is artificially inseminated with donor semen under a physician's supervision, and with her husband's consent, the husband is legally considered the natural father of the donor inseminated child. That law was followed by similar legislation in many states.[89] In the United Kingdom, the Warnock Committee was formed in July 1982 to consider issues of sperm donation and assisted reproduction techniques.[90] Donor insemination was already available in the UK through unregulated clinics such as BPAS. Many of these clinics had started to offer sperm donation before the widespread use of freezing techniques. 'Fresh sperm' was donated to order by donors at the fertile times of patients requiring treatments. Commonly, infertility of a male partner or sterilisation was a reason for treatment. Donations were anonymous and unregulated. The Warnock Committee's report was published on July 18, 1984.[90] and led to the passing of the Human Fertilisation and Embryology Act 1990. That act provided for a system of licensing for fertility clinics and procedures. It also provided that, where a male donates sperm at a licensed clinic in the UK and his sperm is used at a UK clinic to impregnate a female, the male is not legally responsible for the resulting child. The 1990 Act also established a UK central register of donors and donor births to be maintained by the Human Fertilisation and Embryology Authority (the 'HFEA'), a supervisory body established by the Act. Following the Act, for any act of sperm donation through a licensed UK clinic that results in a living child, information on the child and the donor must be recorded on the register. This measure was intended to reduce the risk of consanguinity as well as to enforce the limit on the number of births permitted by each donor. 
The natural child of any donor has access to non-identifying information about their donor, starting at the child's eighteenth birthday. The emphasis of the 1990 Act was on protecting the unborn child. However, a general shortage of donor sperm at the end of the 20th century, exacerbated by the announcement of the removal of anonymity in the UK, led to concerns about the excessive use of the sperm of some donors. These concerns centered on the export and exchange of donor sperm with overseas clinics, and also the interpretation of the term 'sibling use' to include donated embryos produced from one sperm donor, and successive births by surrogates using eggs from different women but sperm from the same sperm donor. Donors were informed that up to ten births could be produced from their sperm, but the words 'other than in exceptional circumstances' in the consent form could potentially lead to many more pregnancies. These concerns led to the SEED Report[91] commissioned by the HFEA, which was in turn followed by new legislation and rules meant to protect the interests of donors: When a male donates his sperm through a UK clinic, that sperm is not permitted to give rise to more than ten families total, anywhere in the world. On the global market, Denmark has a well-developed system of sperm export. This success mainly comes from the reputation of Danish sperm donors for being of high quality[92] and, in contrast with the law in the other Nordic countries, gives donors the choice of being either anonymous or non-anonymous to the receiving couple.[92] Furthermore, Nordic sperm donors tend to be tall, with rarer features like blond hair or different color eyes and a light complexion, and highly educated[93] and have altruistic motives for their donations,[93] partly due to the relatively low monetary compensation in Nordic countries. More than 50 countries worldwide are importers of Danish sperm, including Paraguay, Canada, Kenya, and Hong Kong.[92] Several UK clinics also export donor sperm, but they must take steps to ensure that the maximum number of ten families produced from each donor is not exceeded. The use of the sperm outside the UK will also be subject to local rules. Within the EU there are now regulations governing the transfer of human tissue including sperm between member states to ensure that these take place between registered sperm banks. However, the Food and Drug Administration (FDA) of the US has banned import of any sperm, motivated by a risk of mad cow disease, although such a risk is insignificant, since artificial insemination is very different from the route of transmission of mad cow disease.[94] The prevalence of mad cow disease is one in a million, probably less for donors. 
If prevalence was the case, the infectious proteins would then have to cross the blood-testis barrier to make transmission possible.[94] Transmission of the disease by an insemination is approximately equal to the risk of getting killed by lightning.[95] Fictional representation Movie plots involving artificial insemination by donor are seen in Made in America, Road Trip, The Back-Up Plan, The Kids Are All Right, The Switch, Starbuck, and Baby Mama, the latter also involving surrogacy.[96] Films and other fiction depicting emotional struggles of assisted reproductive technology have had an upswing first in the latter part of the 2000s (decade), although the techniques have been available for decades.[96] Yet, the number of people that can relate to it by personal experience in one way or another is ever growing, and the variety of trials and struggles is huge.[96] A 2012 Bollywood comedy movie, Vicky Donor, was based on sperm donation. The film release saw an effect; the number of men donating sperm increased in India.[97] A 2017 Kollywood movie Kutram 23 is also a movie based on sperm donation. Accidental incest Conception device Donor conceived people Egg donor Ferguson v. McKiernan Ford Island McD v. L Niyoga Posthumous sperm retrieval Rh Disease Semen quality Sperm bank Sperm donation laws by country Third party reproduction "Donor Sperm for Fertility Treatment | IVF Australia". www.ivf.com.au. Retrieved 2018-08-31. Rumbelow H (17 October 2016). "Looking for a sperm donor? Swipe right; A new app that allows women to 'shop' for fathers raises difficult questions". The Times. Retrieved 2016-10-17. Jacqueline Acker (2013). "The Case for Unregulated Private Sperm Donation". UCLA Women's Law Journal. Malvern J (2007-12-04). "Sperm donor forced to pay child support after lesbian couple split". The Times. London. Archived from the original on 2009-10-08. Retrieved 2009-06-29. Neil M (2007-05-10). "Court Says Sperm Donor Owes Child Support". ABA Journal. Retrieved 2018-03-12. Carne L (2007-12-02). "$PERM WAIL BY DONOR MUST PAY SORT 18 YRS. LATER". New York Post. Retrieved 2009-06-29. Cha, Arlana Eunjung, The children of Donor H898, The Washington Post, September 14, 2019 "What is Gestational Surrogacy, How it Works | Surrogate.com". surrogate.com. Retrieved 2018-08-31. Van den Broeck U, Vandermeeren M, Vanderschueren D, Enzlin P, Demyttenaere K, D'Hooghe T (2012). "A systematic review of sperm donors: demographic characteristics, attitudes, motives and experiences of the process of sperm donation". Human Reproduction Update. 19 (1): 37–51. doi:10.1093/humupd/dms039. PMID 23146866. "Family Law Act 1975 – SECT 60H: Children born as a result of artificial conception procedures". www5.austlii.edu.au. Retrieved 2018-08-31. "Frequently asked questions - Cryos International Sperm Bank". Archived from the original on 28 August 2013. Retrieved 25 April 2015. "Sperm Donor Process | Australia Needs Your Sperm | Help a family in need". Australia Needs Your Sperm. Retrieved 2018-08-31. Lewis P (2007-04-30). "Internet sperm agencies plan to sidestep new rules". the Guardian. Retrieved 2018-08-31. "Lag (2006:351) om genetisk integritet m.m." Retrieved 25 April 2015. "What is natural insemination?". Retrieved 4 December 2015. Natural insemination has not been recognised in any state as other than a natural procreation process whereby the sperm donor and biological father is liable for care and support of the child. 
A woman who becomes pregnant through natural insemination will therefore always have a legal right to claim child support from the donor and the donor has a legal right to the custody of the child. "ASSISTED REPRODUCTIVE TECHNOLOGY ACT 2007". www8.austlii.edu.au. Retrieved 2018-08-31. Washington S (September 18, 2009). "Secret world of sperm donations". BBC News. Retrieved May 23, 2010. John E (June 27, 2010). "Conceivable ideas: meet the modern sperm donor". The Guardian. London. Paterniti, Michael (2015-10-01). "How One Man Fathered 106 Babies (and Counting)". GQ. Retrieved 2018-09-05. Wylie R. "Would you have sex with a stranger to make a baby?". www.kidspot.com.au. Retrieved 2018-08-31. "Getting pregnant". Australia Healthdirect. 2018-08-01. Retrieved 2018-08-31. "The Male Fertility Crisis". Mother Earth News. Retrieved 25 April 2015. Essig MG (2007-02-20). Van Houten S, Landauer T (eds.). "Semen Analysis". Healthwise. WebMD. Retrieved 2007-08-05. Cryos International - What is the expected pregnancy rate (PR) using your donor semen? Archived 2011-01-07 at the Wayback Machine Utrecht CS News Subject: Infertility FAQ (part 4/4) "The Good, the Bad and the Ugly Sperm". LiveScience.com. Retrieved 25 April 2015. Almeling R (24 June 2016). "Selling Genes, Selling Gender: Egg Agencies, Sperm Banks, and the Medical Market in Genetic Material". American Sociological Review. 72 (3): 319–340. doi:10.1177/000312240707200301. https://www.nytimes.com/interactive/2019/06/26/magazine/sperm-donor-siblings.html?ref=cta&nl=top-stories?campaign_id=61&instance_id=0&segment_id=14786&user_id=579ae23cfcbd75c9aac87cb571cc201c&regi_id=72995439ries Whitman C (2014-01-31). "I fathered 34 children through sperm donation". the Guardian. Retrieved 2018-08-31. 44 siblings and counting One Sperm Donor, 150 Offspring https://www.cbc.ca/news/technology/sperm-donor-dna-testing-1.4500517 https://stluciatimes.com/2018/10/01/super-sperm-donor-may-have-1000-children/ Attitudes among sperm donors … Archived 2013-07-22 at the Wayback Machine Donor Insemination Edited by C.L.R. Barratt and I.D. Cooke. Cambridge (England): Cambridge University Press, 1993. 231 pages. Indekeu A, Dierickx K, Schotsmans P, Daniels KR, Rober P, D'Hooghe T (2013). "Factors contributing to parental decision-making in disclosing donor conception: a systematic review". Human Reproduction Update. 19 (6): 714–33. doi:10.1093/humupd/dmt018. PMID 23814103. Ilioi EC, Golombok S (2014). "Psychological adjustment in adolescents conceived by assisted reproduction techniques: a systematic review". Human Reproduction Update. 21 (1): 84–96. doi:10.1093/humupd/dmu051. PMC 4255607. PMID 25281685. Scheib JE, Ruby A (July 2008). "Contact among families who share the same sperm donor". Fertility and Sterility. 90 (1): 33–43. doi:10.1016/j.fertnstert.2007.05.058. PMID 18023432. T. Freeman, V. Jadva, W. Kramer, and S. Golombok. Gamete donation: parents' experiences of searching for their child's donor siblings and donor. Human Reproduction, 2008; 24 (3): 505 doi:10.1093/humrep/den469 Contact with donor siblings a good experience for most families Archived 2009-03-11 at the Wayback Machine By HAYLEY MICK. From Thursday's Globe and Mail. February 26, 2009 at 8:58 AM EDT My scattered grandchildren The Globe and Mail. Alison Motluk. Sunday, September 13, 2009 07:53PM EDT From sperm donor to 'Dad': When strangers with shared DNA become a family McMahon CA, Saunders DM (January 2009). "Attitudes of couples with stored frozen embryos toward conditional embryo donation". 
Fertility and Sterility. 91 (1): 140–7. doi:10.1016/j.fertnstert.2007.08.004. PMID 18053994. http://ngdt.co.uk/ National Gamete Donation Trust Ekerhovd E, Faurskov A, Werner C (2008). "Swedish sperm donors are driven by altruism, but shortage of sperm donors leads to reproductive travelling". Upsala Journal of Medical Sciences. 113 (3): 305–13. doi:10.3109/2000-1967-241. PMID 18991243. "Court rules sperm donors' children have right to know". Deutsche Welle. 6 February 2013. Retrieved 29 December 2016. Breaeys, A., de Bruyn, J.K., Louwe, L.A., Helmerhorst, F.M., "Anonymous or identity registered sperm donors? A study of Dutch recipients choices", Human Reproduction Vol. 20, No. 3 pp. 820–824, 2005 Malisow C (5 November 2008). "Donor Babies Search for Their Anonymous Fathers". houstonpress.com. Retrieved 29 December 2016. "DonorChildren". www.donorchildren.com. Retrieved 2018-08-31. "The reality of sperm donation is hitting home". ABC News. 2015-12-07. Retrieved 2018-08-31. New Scientist article about a 15-year-old who found his donor using a DNA test Sydsvenskan:[Här börjar livet för 100 svenska barn varje år (Google translate:Here begins the lives of 100 Swedish children each year).] By Karen Söderberg 17 April 2005 00:00 Single Women Head For Denmark For Guess What? The Impudent Observer. July 24, 2009 by Fred Stopsky team. "New donor registrations". Human Fertilisation and Embryology Authority, Strategy and Information Directorate, Web. hfea.gov.uk. Archived from the original on 18 January 2017. Retrieved 29 December 2016. "4Sperm, Egg and Embryo Donation". hfea.gov.uk. Archived from the original on 27 November 2012. Retrieved 29 December 2016. "Sperm donation in the UK". Retrieved 25 April 2015. "UK facing major sperm shortage due to donor shortfall - Herald Globe". Retrieved 25 April 2015. Olsen, Martine Berg (2018-08-25). "Why are British women using Danish sperm to get pregnant?". Metro. Retrieved 2018-09-04. Digital Chosun Ilbo: Sperm Donations Drying Up. Archived 2009-01-27 at the Wayback Machine Updated January 7, 2009 08:29 KST Assisted Human Reproduction Act Sperm donor shortage hits Canadian infertility clinics December 19, 2006. Retrieved February 4, 2009. WCBD: Well-paid sperm donations up during slumping economy Archived 2009-07-05 at the Wayback Machine Published: April 6, 2009. Retrieved on April 7, 2009 "The ins and outs of sperm donation". ABC News. 2015-08-30. Retrieved 2018-08-31. "Single women taking DIY path to motherhood". ABC News. 2017-07-21. Retrieved 2018-08-31. Eisenberg ML, Smith JF, Millstein SG, Walsh TJ, Breyer BN, Katz PP (August 2010). "Perceived negative consequences of donor gametes from male and female members of infertile couples". Fertility and Sterility. 94 (3): 921–6. doi:10.1016/j.fertnstert.2009.04.049. PMC 2888643. PMID 19523614. "New study shows sperm-donor kids suffer". Slate Magazine. Retrieved 25 April 2015. "BioNews - 'My Daddy's Name is Donor': Read with caution!". Retrieved 25 April 2015. "The Anonymous Us Project: Stories from Donor Conceived". Retrieved 25 April 2015. Congregation for the Doctrine of the Faith: Instruction on Dignitas Personae on Certain Bioethical Questions, para. 12, quoting Congregation for the Doctrine of the Faith, Instruction Donum vitae, II, B, 4: AAS 80 (1988), 92., and Congregation for the Doctrine of the Faith, Instruction Donum vitae, Introduction, 3: AAS 80 (1988), 75. "Artificial Insemination: Infertility and Judaism", Mazornet.com Dorff, Elliot, "Artificial Insemination in Jewish Law" Richard V. 
Grazi, MD, Joel B. Wolowelsky, PhD, "Donor Gametes for Assisted Reproduction in Contemporary Jewish Law and Ethics", Assisted Reproduction Reviews 2:3 (1992) Woodruff T (2010). Oncofertility: Ethical, Legal, Social, and Medical Perspectives. Springer. p. 267. ISBN 978-1441965172. Donor Babies Search for the Anonymous Fathers Archived 2008-12-09 at the Wayback Machine From: Donorconceived Adult. Posted: Sunday, December 7, 2008 "Letter to the Editor: Artificial Impregnation". The Medical World: 163–164. April 1909. Archived from the original on 2012-07-24. (cited in Gregoire AT, Mayer RC (1964). "The Impregnators. (William Pancoast), (Addison Davis Hard)". Fertility and Sterility. 16: 130–4. doi:10.1016/s0015-0282(16)35476-0. PMID 14256095. ) Braun W (2015-12-31). "A Doctor, a Quaker Woman and a Galvanized Rubber Syringe: The 19th Century Origins of Artificial Insemination in America". Huffington Post. Retrieved 2018-08-31. Barton M, Walker K, Wiesner BP (January 1945). "Artificial Insemination". British Medical Journal. 1 (4384): 40–3. doi:10.1136/bmj.1.4384.40. PMC 2056529. PMID 20785841. Smith R (2016-08-10). "British man 'fathered 600 children' at own fertility clinic - Telegraph". The Telegraph. Archived from the original on 2016-08-10. Retrieved 2016-10-17. CS1 maint: BOT: original-url status unknown (link) Kramer W (2016-05-10). "A Brief History of Donor Conception". Huffington Post. Retrieved 2018-08-31. Ombelet W, Van Robays J (2015). "Artificial insemination history: hurdles and milestones". Facts, Views & Vision in ObGyn. 7 (2): 137–43. PMC 4498171. PMID 26175891. Kapnistos PF (2015-04-08). Hitler's Doubles: Fully-Illustrated. Peter Fotis Kapnistos. ISBN 9781496071460. "Sperm Banking History | California Cryobank". cryobank.com. Retrieved 2018-08-31. "Google Ngram Viewer". google.com. Retrieved 29 December 2016. "Fertility Treatment Bans In Europe Draw Criticism". Fox News. 2012-04-13. Retrieved 2016-10-17. Doornbos v. Doornbos, 23 U.S.L.W. 2308 (Superior Court, Cook County, Ill., December 13, 1954) Allan S (2016-10-14). Donor Conception and the Search for Information: From Secrecy and Anonymity to Openness. Routledge. ISBN 9781317177814. "Uniform Parentage Act 1973" (PDF). Uniform Law Commission. 1973. Retrieved 31 August 2018. "How legislation on fertility treatment developed". Human Fertilisation and Embryology Authority, Strategy and Information Directorate. hfea.gov.uk. Archived from the original on 14 April 2012. Retrieved 29 December 2016. "Archived copy" (PDF). Archived from the original (PDF) on 2011-10-04. Retrieved 2010-07-13. CS1 maint: archived copy as title (link) Assisted Reproduction in the Nordic Countries ncbio.org FDA Rules Block Import of Prized Danish Sperm Posted August 13, 08 7:37 AM CDT in World, Science & Health The God of Sperm Archived 2008-07-04 at the Wayback Machine By Steven Kotler A 'BABY BJORN' SPERM CRISIS NEW YORK POST. September 16, 2007 chicagotribune.com --> Heartache of infertility shared on stage, screen Colleen Mastony, Chicago Tribune. June 21, 2009 Sharma R (7 May 2012). "'Vicky Donor' causes flood of sperm donors". The Times of India. Ahmedabad, India. Retrieved 2 June 2012. 
External links
Free Sperm Donation Forum
Sperm donation in the UK (the UK's regulatory body – lists all UK clinics offering sperm donation)
Sperm Donation at Curlie
CommonCrawl
How to calculate critical values ($W_{\alpha}$) for the Shapiro-Wilk test?
When performing the Shapiro-Wilk test, we can obtain the critical $W_{\alpha}$ statistic from tables, given $\alpha$ is 0.05 or 0.01. If $\alpha$ is nonstandard, say 0.07 or 0.02 for example, how can we calculate the value of $W_{\alpha}$? goodness-of-fit normality-assumption Glen_b Mohammad Mawed
I think you're asking how to calculate a critical value for the Shapiro-Wilk test at significance levels other than the usual tabulated ones (I mention this because it's possible you're really interested in how to compute a p-value, which is a closely related issue, though now Chris and I have edited your question, my doubt looks a bit odd.) If your desired significance level is between tabulated ones, such as wanting 2% and having 5% and 1%, you could use interpolation, which will at least be approximately suitable. It looks to me like $\Phi^{-1}(1-p)$ is close to linear* in $\log(n(1-W))$ for $n\ge 12$, and even down at $n=5$ it's locally close enough that linear interpolation should work fine if done on those scales. More generally, computer software is the most obvious choice. Some software offers direct calculation of p-values for the Shapiro-Wilk that may avoid the need to use critical values at all. Finally, simulation is an option; one can simulate the statistic and hence obtain simulations from the distribution under the null; this allows one to compute estimates of quantiles of the distribution. * Edit: looking at the p-value code in R, that shift in nearness-to-linearity between n=11 and n=12 is because R is using a different approximation to compute p-values for $n$ below 12; that shouldn't affect the suitability of linear interpolation at say 12, but it does suggest to me that its appearing to be very close to linear is more that the approximation is close to linear for $n\ge 12$; the actual transformed distribution is probably somewhat less linear down that low. [Now I look, the help even gives the information that a different approximation is used below 12.] R also offers the following references, in case you want to write your own code: [1] Patrick Royston (1982) An extension of Shapiro and Wilk's W test for normality to large samples. Applied Statistics, 31, 115-124. [2] Patrick Royston (1982) Algorithm AS 181: The W test for Normality. Applied Statistics, 31, 176-180. [3] Patrick Royston (1995) Remark AS R94: A remark on Algorithm AS 181: The W test for normality. Applied Statistics, 44, 547-551. – Glen_b
I think this depends quite a lot on what you intend to do, and in what detail you would like to define your answer. The $P$-value of the Shapiro-Wilk test is not a trivial thing to compute, unfortunately. I'll give a simple answer to identify specific levels of $W_{\alpha}$ with a small degree of error, then we'll trace how you can actually calculate the $P$-value yourself.
Quick and dirty method
Assuming that you are working in R, we can simply generate a number of data sets and test them until we find the actual value of $W$ that gives a particular $\alpha$ value. Note that $W_{\alpha}$ is dependent upon the sample size that you are using, so you will have to perform this method with your own sample size $n$. We're going to basically generate a normally distributed random sequence of numbers of size $n$ with the function rnorm(n), then calculate the $W$ statistic from R until we find one that we like.
find.W <- function(alpha = 0.05, error = 0.000001, n = 100){
  not.done <- TRUE
  while(not.done){
    a <- shapiro.test(rnorm(n))
    if(a$p.value < alpha + error && a$p.value > alpha - error){
      not.done = FALSE
      W <- a$statistic
    }
  }
  return(W)
}
Doing this once for sample size 100, we find:
w <- find.W(alpha = 0.07, error = 0.001, n = 100); w
This is of course stochastic. If we repeat this 1000 times to get a better idea of the real number:
w <- vector()
for(i in 1:1000){
  w[i] <- find.W(alpha = 0.07, error = 0.001, n = 100)
}
mean(w); sd(w)
[1] 0.9764422
[1] 4.544819e-05
We find that for $\alpha = 0.07$, $W_{\alpha} \approx 0.9764$. If we look into it a little bit further, we find that this is only an approximation of the true $W$ statistic. To find the real value of $W$, let's trace how this one is calculated.
Theoretical value of $W$
If we look into the source code for R, we can pretty easily find how the $W$ statistic is calculated. Looking at the source of the shapiro.test() function, we find that it calls a C file named S_Wilks.c. Looking into it, we find the source code here [1]. Inside the code, a paper gets referenced for the theoretical aspect. After a little digging, it appears that the approximation method comes from this [2,3] paper. You can read it yourself if you would like to know the actual theoretical aspects of manually calculating $P$-values. The methods used in the original Shapiro & Wilk 1965 paper are from an unpublished manuscript that has since been lost to time (at least, I can't find it). [1] Source code which R uses: http://www.matrixscience.com/msparser/help/group___shapiro_wilk_source_code.html [2] Approximation of $W$ paper: Some Techniques for Assessing Multivariate Normality Based on the Shapiro-Wilk W, J. P. Royston, Journal of the Royal Statistical Society. Series C (Applied Statistics), Vol. 32, No. 2 (1983), pp. 121-133 [3] Approximation of $W$ technique: Statistical Algorithms: Algorithm AS, J. P. Royston, Journal of the Royal Statistical Society. Series C (Applied Statistics), Vol. 32, No. 2 (1983), pp. 176-180. Archive on JSTOR found here for that particular issue. – Chris C
Comments: is there a method to calculate it with a simple scientific calculator? – Mohammad Mawed Sep 12 '15 at 2:03 | If you read @Glen_b's answer, he links to a linear interpolation method that you could perform using a calculator, but it would not be optimal. – Chris C Sep 12 '15 at 2:05 | Mohammad -- For largish $n$ (say bigger than 30 or so), the transformed relationship I mention should be adequate even for moderate extrapolation outside the tabulated values, assuming you have decent normal tables (or a normal cdf function - and its inverse - on your calculator; some do). Of course, the algorithms given by Royston can in principle be performed on a calculator. – Glen_b Sep 12 '15 at 2:07 | If you're looking to perform this calculation on a calculator, I would definitely go with what @Glen_b has mentioned. Thanks by the way Glen_b, I learned a lot from reading your answer and the linear interpolation method. – Chris C Sep 12 '15 at 2:11
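As a complement to the two answers above, here is a minimal R sketch (mine, not from either answer) of the two other routes Glen_b mentions: estimating $W_{\alpha}$ directly as the empirical $\alpha$-quantile of $W$ simulated under the null of normality, and interpolating between two tabulated critical values on the transformed scales he describes. The function names are invented for illustration, the number of simulations is arbitrary, and the tabulated values passed to interp.W must come from your own table (the example call leaves them symbolic rather than inventing numbers).

    # Simulation: W_alpha is the lower alpha-quantile of W under H0 (small W => reject)
    sim.W.alpha <- function(alpha = 0.07, n = 100, nsim = 10000){
      W <- replicate(nsim, shapiro.test(rnorm(n))$statistic)
      unname(quantile(W, probs = alpha))
    }
    set.seed(1)
    sim.W.alpha(alpha = 0.07, n = 100)   # should land close to the ~0.9764 found above

    # Interpolation on the transformed scales suggested above:
    # treat qnorm(1 - alpha) as roughly linear in log(n * (1 - W))
    interp.W <- function(alpha.new, alpha.tab, W.tab, n){
      x <- log(n * (1 - W.tab))                    # transformed tabulated critical values
      y <- qnorm(1 - alpha.tab)                    # transformed tabulated alpha levels
      x.new <- approx(y, x, xout = qnorm(1 - alpha.new))$y
      1 - exp(x.new) / n                           # back-transform to the W scale
    }
    # e.g. for n = 20, with W01 and W05 taken from a published table of critical values:
    # interp.W(alpha.new = 0.02, alpha.tab = c(0.01, 0.05), W.tab = c(W01, W05), n = 20)

The simulated value should agree with the search-based estimate above to within simulation noise; the interpolation helper is only as accurate as the approximate linearity described in the first answer.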
CommonCrawl
A robot-based behavioural task to quantify impairments in rapid motor decisions and actions after stroke Teige C. Bourke1, Catherine R. Lowrey1, Sean P. Dukelow5, Stephen D. Bagg2, Kathleen E. Norman1,3 & Stephen H. Scott1,4 Stroke can affect our ability to perform daily activities, although it can be difficult to identify the underlying functional impairment(s). Recent theories highlight the importance of sensory feedback in selecting future motor actions. This selection process can involve multiple processes to achieve a behavioural goal, including selective attention, feature/object recognition, and movement inhibition. These functions are often impaired after stroke, but existing clinical measures tend to explore these processes in isolation and without time constraints. We sought to characterize patterns of post-stroke impairments in a dynamic situation where individuals must identify and select spatial targets rapidly in a motor task engaging both arms. Impairments in generating rapid motor decisions and actions could guide functional rehabilitation targets, and identify potential of individuals to perform daily activities such as driving. Subjects were assessed in a robotic exoskeleton. Subjects used virtual paddles attached to their hands to hit away 200 virtual target objects falling towards them while avoiding 100 virtual distractors. The inclusion of distractor objects required subjects to rapidly assess objects located across the workspace and make motor decisions about which objects to hit. As many as 78 % of the 157 subjects with subacute stroke had impairments in individual global, spatial, temporal, or hand-specific task parameters relative to the 95 % performance bounds for 309 non-disabled control subjects. Subjects with stroke and neglect (Behavioural Inattention Test score <130; n = 28) were more often impaired in task parameters than other subjects with stroke. Approximately half of subjects with stroke hit proportionally more distractor objects than 95 % of controls, suggesting they had difficulty in attending to and selecting appropriate objects. This impairment was observed for affected and unaffected limbs including some whose motor performance was comparable to controls. The proportion of distractors hit also significantly correlated with the Montreal Cognitive Assessment scores for subjects with stroke (r s < = − 0.48, P < 10−9). A simple robot-based task identified that many subjects with stroke have impairments in the rapid selection and generation of motor responses to task specific spatial goals in the workspace. Moving and interacting in the world requires rapid processing of the visual environment to identify potential motor goals, select a movement and finally move in a timely manner. For example, when packing groceries, we must decide where to put items based on their shape, size, fragility and other features. The selection, planning and execution of motor actions must be done rapidly to keep pace with the flow of groceries from the cashier. There is growing evidence that sensory feedback is rapidly integrated into motor decisions [1–3]. Sensory feedback is integrated with higher-level behavioural goals to make rapid decisions on how to move and interact in the environment. Selective attention refines spatial representations of the environment into potential movement targets [1, 4]. The choice between these internal representations is then based on 'decisional factors' [1]. 
One such factor is the recognition of combinations of visual features and their behavioural relevance [1, 5, 6]. Thus, the sensorimotor system rapidly integrates information on the environment to guide motor decisions. Another important aspect of voluntary motor control is the ability to inhibit a motor action [7]. When instructed, it is very automatic to simply reach towards spatial targets as they appear in the workspace [8]. In contrast, it can be hard to avoid reaching towards a target when instructed to move in the opposite direction. In this anti-reach condition, subjects can make erroneous initial motor responses to the spatial goal and are delayed in moving in the opposite direction. This task requires the voluntary override of an automatic response to reach towards the target and involves many brain areas including frontal and parietal cortex [7, 9–12]. This ability can be impaired in persons with stroke [12], mild cognitive impairment [13], Alzheimer's Disease [14], and a history of concussion [15], Thus, successful voluntary motor control involves processing sensory feedback not only to select motor actions but also to avoid making others. Post-stroke disability stems from a variety of motor, sensory, and/or cognitive deficits [16]. The ability to pack groceries described above highlights that impairments in these functional tasks may reflect not only motor impairments but also cognitive impairments. When driving, one must quickly decide on actions to apply pressure to the brake or accelerator pedals, or turn the wheel based on information from street signs, traffic signals, other traffic, and pedestrians. However, neuropsychological tests or cognitive screening tools generally separate motor and cognitive assessments – the latter often requiring verbal or written responses – and typically do not impose time limits to perform the tasks [17, 18]. Few neuropsychological assessments focus on rapid motor decisions beyond simple reaction time tests [18, 19], or timed cognitive tasks such as trail making [20], even though complex and time sensitive demands are often required for everyday activities. Furthermore, many standard assessments of post-stroke functioning have problems of subjectivity, coarse ordinal scales, criteria-based scoring, and lack of responsiveness (including floor and ceiling effects) [21]. Thus, we developed a novel approach of using a robotic assessment to provide objective, continuous measures of performance that are compared to a normative model of healthy control performance. We recently used an object hit task to quantify simultaneous upper limb bimanual sensorimotor performance [22]. Although this task quantified rapid motor skills, decisional processes required to perform the task were limited to identifying the trajectory of an object and selecting a limb to hit the object. As all objects were targets in this task, it did not require cognitive processes related to attending to object qualities (rather than just spatial location) to select a motor action nor require inhibiting inappropriate motor responses. These processes can be impaired following stroke [9, 23, 24]. The goal of the present study was to develop a task that examined rapid motor skills with both arms that also required greater cognitive processing. We developed a variant of the object hit task [22] by requiring subjects to hit 2 possible targets while avoiding all other objects in the workspace. 
We hypothesize that individual subjects with stroke will be impaired in enacting or inhibiting a motor response to a potential target based on sensory feedback of the object's features and their relevance to the ongoing task. The performance of subjects with stroke was compared to a large cohort of non-disabled control subjects. Clinically, knowledge of impairments in these more complex visuomotor skills can guide novel rehabilitation strategies to regain the ability to rapidly process sensory information for motor actions. As well, it may help to identify if individuals should return to more complex daily activities such as driving.
Participants included patients recruited from Providence Care (St. Mary's of the Lake Hospital, Kingston, ON), the Dr. Vernon Fanning Centre and Foothills Hospital (Calgary, AB). Prospective subjects were excluded if they had other significant neurologic diagnoses (e.g., Parkinson's disease), acute medical illness, and/or ongoing upper extremity musculoskeletal injuries. Subjects were also excluded if they appeared fatigued, reported pain associated with attempting robotic assessments, or reported pain during clinical testing of strength or range of motion that would be relevant to the robotic task. Non-disabled control subjects were recruited from the Kingston, ON and Calgary, AB communities. This study was approved by the Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board (#ANAT-024-05) and the University of Calgary's Conjoint Health Research Ethics Board (#22123), and subjects provided informed consent.
Experimental setup
Details of the robotic set-up have been reported previously [25, 26]. Briefly, the behavioural task was performed using a bimanual exoskeleton robot which measures limb motion (KINARM, BKIN Technologies Ltd, Kingston, ON, Canada). Participants sat in a modified wheelchair base, and their arms were fitted in supports permitting movement in the horizontal plane. Arm supports were adjusted such that the robot's linkages aligned with the subject's elbows and shoulders. Subjects received visual feedback from a virtual reality system which displayed fingertip position and virtual objects in the same plane as arm motion via a two-way mirror. Direct vision of the hands and arms was occluded.
Behavioural task
Subjects were assessed in an object hit and avoid task (Fig. 1a), which is based on a previous object hit task [22]. At the beginning of the task, subjects were presented two shapes on the screen. Subjects were instructed to hit these two shapes ('targets') away from them and avoid all other shapes ('distractors'). Subjects could use both hands, which were represented by horizontal paddles. Both target objects and distractor objects dropped from one of 10 locations along the top of the screen 8 cm apart (virtual bins). A total of 30 objects (20 targets and 10 distractors) were released at each bin (200 targets and 100 distractors total). Objects were released from all 10 bins before a bin was reused.
Fig. 1 Task details and exemplar subjects: (a) screenshot of a subject performing the task, with 2 target shapes (chosen from 6 pair variants) and 6 distractor shapes (4 are shapes used as targets in other task variants and 2 were always distractors); (b) task performance summary of a 62-year-old right-handed male control subject, plotting the number of targets (top) and distractors (bottom) dropped from each bin, with left-hand hits in blue, right-hand hits in red, missed objects in white, task time running from the top to the bottom of each plot, and hand transition and miss bias indicated with dashed and dotted lines, respectively; (c) performance of a 65-year-old right-handed, right-affected male subject 5 days post-stroke; (d) performance of a 63-year-old right-handed, left-affected male subject 8 days post-stroke with a BIT score of 67 (indicative of visual neglect).
Objects dropped at an increasing rate following the equation: $$ \mathrm{Drop}\ \mathrm{Rate} = 0.5\ \mathrm{objects}/\mathrm{second} + \left[0.025\ \mathrm{objects}/{\mathrm{second}}^2 \times \left(\mathrm{time}\left(\mathrm{s}\right)\right)\right] $$ The maximum number of objects possible to appear on the screen simultaneously increased from 1 to 16 over the course of the task. The speed of the objects moving towards the subject was 50 to 100 % of maximum drop speed, which increased following the equation: $$ \max\ \mathrm{drop}\ \mathrm{speed} = 15\ \mathrm{cm}/\mathrm{s} + \left[0.3\ \mathrm{cm}/{\mathrm{s}}^2 \times \left(\mathrm{time}\left(\mathrm{s}\right)\right)\right] $$ Thus targets moved at ~10 cm/s in the beginning of the task and increased to ~50 cm/s by the end of the task. Position of the objects and hand position were recorded at 200 Hz. The task took just over 2 min to complete. One of 6 task variants was used, with varying shapes designated as targets and distractors. Target pairs had similar width but were always different heights and different classes of shapes (Fig. 1a). Distractors consisted of the remaining unused shapes (shapes used as targets in other task variants), as well as two wider shapes. Every effort was made to ensure subjects understood the task instructions. Operators usually obtained verbal confirmation that subjects understood which targets to hit when showing the target objects before starting the task. Reminders to hit the specific target shapes and avoid all others would be given early in the task, especially if there seemed to be confusion with similar distractor shapes (for example, tall target rectangle vs. wide rectangle distractor). As well, targets hit by a paddle were knocked away and haptic feedback of the contact was provided by the robot [22], whereas distractors simply passed through the paddle to provide immediate feedback that it was a distractor.
Data were analyzed using MATLAB (Mathworks Inc., Massachusetts). Hand speed was filtered using a sixth-order double-pass Butterworth filter (cutoff frequency 10 Hz).
Task parameters
We used 14 metrics to quantify task performance in order to characterize a diverse range of sensory, motor and cognitive functions examined in this task. Most of the parameters paralleled the metrics in our previously published object hit task [22], a simpler version of this task in which there were no distractors. As well, a few parameters were added or modified to capture the addition of distractors in the present task.
Global Performance was evaluated using five parameters:
Targets hit: The number of target objects hit away from the body.
Distractors hit: The number of distractor objects hit.
Objects hit: The number of objects hit (target hits + distractor hits).
Distractor proportion: Distractors hit divided by objects hit. In the case of multiple hits for the same object, the first hit is used to determine which hand/paddle hit the ball.
Object processing rate (objects/second): The rate of correctly processed objects (number of targets hit + distractors missed per second) at 80 % of task completion. The rate of correctly processed objects was determined at every time step (every 0.005 s) from the time the first object was hit or left the screen to the time the 240th object (80 % of the task complete) was hit or left the screen. To filter this signal, we convolved the rate with a Gaussian window (MATLAB function normpdf). From this rate signal, the optimal growth curve (y = <max height> × (1 − exp(−<curvature> × <data>))) [27, 28] for the data was calculated, and the value of this curve when 80 % of the task was completed was used to approximate the maximal object processing rate for each individual subject. The rate was taken at 80 % of task completion so that performance was at or near maximum, but not at 100 %, as the ratio of distractor objects dropping statistically increased at the end of the task. This is because there is always a 66 % chance of dropping a target object, but objects are sampled without replacement, leading to the statistical scenario of running out of target objects and only being able to drop distractor objects from a given bin at the end of the task.
All other parameters were defined in the same way as the object hit task [22]. (A short computational sketch illustrating the drop schedule and several of the simpler parameter definitions is given further below.)
Spatial and Temporal Performance
Miss bias: Spatial position quantifying the extent to which the number of target misses deviates from being equally distributed on either side. Computed as the sum of target misses in each bin (m), multiplied by the bin position (x), and then divided by the total number of target misses (sum(mx)/sum(m)). Given that the centre of the bins is x = 0, the more the mean location of misses deviates from 0, the more misses were on the left or right side of the workspace (depending on whether the miss bias is negative or positive, respectively).
Hand transition: Spatial transition point in the subject's hand preference for hitting targets. This is the mean of the right hand's and the left hand's weighted means of their respective target hit distributions. The weighted mean of each hand only includes target hits by that hand in bins where both hands have been used to hit targets (overlapping bins) and one additional bin beside the overlapping bins (where that hand has been used to hit targets). If bins do not overlap, only the leftmost bin with target hits from the right hand and the rightmost bin with target hits from the left hand are used in the weighted means.
Median error (% of targets): Point in time when subjects missed half of the target objects that they missed over the entire task.
Hand Specific Performance
Movement Area: The areas of space used by each hand during the task. Computed as the area of the convex hull – a polygon which captures the boundaries of the movement trajectories of each hand [22, 29]. Calculated for each hand separately.
Hand speed: The average hand speed calculated from each time step (5 ms) over the course of the task. Calculated for each hand separately.
Hand bias hits: The difference between the number of target hits with the right hand and the number of target hits with the left hand, divided by the total number of target hits.
Hand selection overlap: The number of times successive target hits from a given bin were with different hands, divided by the total number of target hits.
Hand movement bias area: Difference in movement area of the right and left hands divided by the sum of the movement area of the right and left hands.
Hand bias speed: The difference in mean hand speed of the right and left hands divided by the sum of the mean hand speed of the right and left hands. Performance of control subjects was analyzed for any effects of age, sex, or handedness. Control values were age-regressed and Box Cox transforms were used to normalize control distributions when necessary [30, 31]. Control parameter values were then assessed for any effect of sex or handedness, and values subdivided into respective categories if effects were significant, and age regressed and Box Cox transformed again if necessary. All parameter values were converted to z-scores of the model to allow for comparison across all subjects (because age, sex, and handedness are now accounted for in the model). Individual subjects with stroke were defined as having impaired performance on a task parameter, when their z-score was >1.65 or < −1.65 for one tailed tests, or > |1.96| for two tailed tests. A subset of subjects was assessed a second time by a different operator within 7 days of their initial assessment. An intraclass correlation was used to determine interrater reliability (significant if P < 0.05, acceptable if ICC > 0.8). Clinical assessments Subjects with stroke were evaluated by a trained physician, physiotherapist, or occupational therapist using a number of standardized clinical assessments. Both arms were assessed using the Chedoke-McMaster Stroke Assessment (CMSA) [32] to determine arm function. The CMSA is based on Brunnstrom's stages of motor recovery post-stroke [33]. Subjects were broadly categorized as "Left-Affected" (LA) or "Right-Affected" (RA) depending on the clinically most affected side of the body. Elbow flexor spasticity was measured by the Modified Ashworth Scale, which categorizes the amount of resistance produced by the arm in response to passively moving it through its range of motion [34]. Functional abilities were measured with the Functional Independence Measure (FIM), which has both a motor and cognitive component [35]. The conventional subtests of the Behavioural Inattention Test (BIT) were used to screen for deficits in spatial attention [36]. Subjects with stroke who scored <130 on the BIT were defined as having visuospatial neglect and were analyzed separately (neglect subjects). Subjects were also screened for cognitive deficits using the Montreal Cognitive Assessment (MoCA) [37]. The handedness of controls and subjects with stroke was determined by the Modified Edinburgh Handedness test [38]. Subject demographics and clinical information Table 1 shows the demographic information and clinical scores for the 157 subjects with stroke and 309 control subjects. The majority of subjects with stroke were assessed early, with only 17/157 subjects being assessed >28 days of their stroke. Exclusion of these 17 subjects did not substantively alter the present results. Subjects with stroke were usually assessed either on the same day (n = 90) or within 1 day (n = 40) of the robotic assessment. Some subjects with stroke were assessed within 2–4 days (n = 18) and a few within 5–10 days (n = 9). Twenty eight subjects with stroke displayed visual neglect as indicated by scores of <130 on the BIT. These subjects were analyzed separately to assess differences in the patterns of task performance with stroke and visual neglect. 
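To make the object-dropping schedule and a few of the simpler definitions from the Task parameters and data-analysis description above concrete, here is a minimal R sketch. This is illustrative only: the paper's analyses were done in MATLAB, the function names are invented for this example, and the inputs shown (hit counts, bin positions, z-scores) are hypothetical.

    # Object drop schedule, from the two equations in the Behavioural task section
    drop.rate      <- function(t) 0.5 + 0.025 * t   # objects per second at time t (s)
    max.drop.speed <- function(t) 15 + 0.3 * t      # cm/s at time t (s)
    max.drop.speed(120)                             # ~51 cm/s near the end of the ~2 min task

    # Distractor proportion: distractors hit divided by total objects hit
    distractor.proportion <- function(distractor.hits, target.hits){
      distractor.hits / (distractor.hits + target.hits)
    }

    # Hand bias hits: (right-hand target hits - left-hand target hits) / total target hits
    hand.bias.hits <- function(right.hits, left.hits){
      (right.hits - left.hits) / (right.hits + left.hits)
    }

    # Miss bias: mean bin position of missed targets, with the centre of the bins at x = 0
    miss.bias <- function(misses.per.bin, bin.x){
      sum(misses.per.bin * bin.x) / sum(misses.per.bin)
    }

    # Impairment flags from z-scores relative to the control normative model
    impaired.one.tailed <- function(z, tail = c("low", "high")){
      tail <- match.arg(tail)
      if (tail == "low") z < -1.65 else z > 1.65
    }
    impaired.two.tailed <- function(z) abs(z) > 1.96

    # Hypothetical example: a subject who hit 90 targets and 30 distractors
    distractor.proportion(distractor.hits = 30, target.hits = 90)  # 0.25

Parameters that need the full kinematic record (movement area via the convex hull, hand transition, or the growth-curve fit behind object processing rate) involve more bookkeeping and are not sketched here.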
Table 1 Demographic information of subjects included in the experiment
Exemplar subjects
Figure 1b-d displays the distribution of target and distractor hits and misses for a control subject and two subjects with stroke. The control subject (Fig. 1b) was very effective in hitting targets (136/200) and avoiding distractors (94/100). Control subjects gradually missed more targets, especially lateral ones, as task difficulty increased. The RA subject with stroke (Fig. 1c) hit fewer targets (117/200) and more distractors (47/100) than the control. The LA neglect subject (Fig. 1d) hit even fewer targets (83/200) and a similar number of distractors (40/100). This subject also hit very few objects with their left hand and very few on the left side of the workspace.
Impairments identified using the robot-based task
Each parameter classified a varying number of subjects with stroke as impaired (Table 2). Target hits identified the largest number of subjects as impaired. The number of targets hit by controls depended on age and sex (Fig. 2a). Subjects with stroke were considered to be impaired in targets hit if their performance fell below the 5 % performance level of controls after correcting for age and sex. Overall, 78 % of LA subjects (left-affected subjects with stroke), 68 % of RA subjects (right-affected subjects with stroke), and 96 % of neglect subjects (subjects with stroke and visual neglect) were impaired in targets hit. Similarly, age effects were found for objects hit. In total, 64 % of LA subjects, 51 % of RA subjects, and 86 % of neglect subjects hit fewer objects than the lower cutoff of the age normative model.
Table 2 Task performance, interrater reliability, and clinical correlations. Task parameter sensitivity is defined by the corresponding z-score cutoff range
Fig. 2 Global performance in task parameters: scatter plots of age versus target hits (a), age versus distractor proportion (b), age versus estimated maximum object processing rate (c), and object hits versus distractor proportion (d, as z-scores of the normative models). Grey markers show control performance (filled/empty for male/female in a); leftward- and rightward-pointing triangles show left- and right-affected subjects with stroke (filled if the subject also had a BIT score <130, indicative of visual neglect); solid and dashed lines show the median and cutoff of the age normative model, with arrows indicating the impaired side of each cutoff; in (d), dashed lines mark the impairment cutoffs and the control performance range is the quadrant marked 'CR'.
Distractor proportion identified more individual subjects with stroke as impaired (39 % LA, 51 % RA, 79 % neglect; see Fig. 2b and Table 2) compared to distractors hit (15 % of subjects with stroke impaired).
Object processing rate was also a parameter that identified a large proportion of subjects with stroke (Fig. 2c): controls mostly had a processing rate between 1.5 and 3 objects per second, whereas the object processing rate of most subjects with stroke was below 2. Subjects with stroke who hit fewer objects also tended to hit a higher proportion of distractors (Spearman correlation; controls: r_s = 0.16, P = 0.006; subjects with stroke: r_s = −0.33, P = 3×10^−5). Twenty-nine percent of subjects with stroke displayed impairments in both object hits and distractor proportion (Fig. 2d, upper left quadrant). In contrast, 29 % of subjects with stroke had impairments in only object hits (Fig. 2d, lower left quadrant) and 16 % had impairments in only distractor proportion (Fig. 2d, upper right quadrant). All neglect subjects were impaired in at least one of these two parameters, and 64 % were impaired in both parameters. Almost all subjects with stroke (92 %) hit fewer objects with their affected arm than with their unaffected arm (Fig. 3a). Similarly, 77 % of subjects with stroke showed a greater distractor proportion with their affected arm than with their unaffected arm (Fig. 3b). Overall, 57 % of LA subjects, 60 % of RA subjects, and 50 % of neglect subjects had impaired object hits with their affected arm only.
Fig. 3 Hand-specific performance in task parameters: scatter plots of object hits (z-scores) with the right versus the left hand (a), distractor proportion with the right versus the left hand (b), and hand speed versus distractor proportion for the affected arm of subjects with stroke and the non-dominant arm of controls (c) and for the unaffected arm of subjects with stroke and the dominant arm of controls (d). Symbols as in Fig. 2.
Motor and distractor-related impairments in performance of the affected arm commonly co-occurred. Of the subjects whose affected arm was impaired in distractor proportion, the same arm was usually also impaired in objects hit (97 % LA, 81 % RA) and/or hand speed (85 % LA, 63 % RA) (Fig. 3c). Distractor proportion was also negatively correlated with the number of object hits by the affected arm of subjects with stroke (r_s = −0.58, P < 10^−10). Although this would be expected, a negative correlation was not observed in the non-dominant arm of controls (r_s = 0.15, P = 0.008). In contrast, this coupling of motor and distractor-related impairments was less common for the unaffected arm. For the subjects with stroke who had impaired distractor proportion with their unaffected arm, the majority did not have impaired object hits (47 % LA, 82 % RA) or hand speed (58 % LA, 75 % RA) with the same arm (Fig. 3d). The correlation between distractor proportion and objects hit was weaker for unaffected arm performance of subjects with stroke (r_s = −0.15, P = 0.07). A negative correlation was not observed for the dominant arm of controls (r_s = 0.18, P = 0.001). Neglect subjects who had impaired distractor proportion with their affected arm were usually impaired in hitting objects and/or hand speed with the same arm (100 and 92 % impaired in both parameters, respectively). Impairments in distractor proportion for their unaffected arm were less likely to co-occur with impairments in objects hit and/or hand speed with that arm (36 and 36 % impaired in both parameters, respectively). We aggregated the number of task parameters that each subject was impaired in (Table 2). Most subjects with stroke (82 %) were impaired in more task parameters than 95 % of controls (>5 parameters).
This included all but one subject with neglect.
Correlations with standard clinical assessments
Task parameter measures were compared to scores on the FIM, MoCA, and BIT (Table 3). Object hits showed moderate correlations with BIT (r_s = 0.40, P = 2×10^−7) and FIM scores (r_s = 0.45, P = 3×10^−9) and weak correlations with MoCA scores (r_s = 0.23, P = 0.004). Distractor hits displayed modest correlations with MoCA (r_s = −0.31, P = 1×10^−4), but distractor proportion displayed moderate correlations with BIT (r_s = −0.43, P = 2×10^−8), FIM (r_s = −0.45, P = 4×10^−9), and MoCA (r_s = −0.49, P = 2×10^−10) (Fig. 4a, b). Of the 36 % of subjects with stroke who passed the MoCA (scored ≥26), 27 % were impaired in distractor proportion. Object hits with the affected arm correlated better with the FIM (r_s = 0.51, P = 1×10^−11) than object hits with the unaffected arm (r_s = 0.26, P = 8×10^−4). Object hits with the unaffected arm showed a modest correlation with BIT (r_s = 0.30, P = 1×10^−4) and a weak correlation with MoCA (r_s = 0.17, P = 0.04). Distractor proportion with the unaffected arm showed moderate correlations with BIT (r_s = −0.41, P = 9×10^−8) and MoCA (r_s = −0.48, P = 4×10^−10). MoCA scores correlated most strongly with overall and unaffected-arm distractor proportion (r_s ≤ −0.48, P < 10^−9). The number of parameters impaired was also moderately correlated with FIM (r_s = −0.61, P = 2×10^−17) and BIT (r_s = −0.43, P = 3×10^−8) scores.
Table 3 The relationship between task performance of subjects with stroke and Functional Independence Measure (FIM), Montreal Cognitive Assessment (MoCA), and Behavioural Inattention Test (BIT) scores is shown by the corresponding Spearman correlations
Fig. 4 Clinical correlations with task performance: scatter plots of Montreal Cognitive Assessment (MoCA) score (a) and Behavioural Inattention Test (BIT) score (b) versus overall distractor proportion. Symbols as in Fig. 2.
Spasticity, as measured by the Modified Ashworth Scale, showed modest correlations with a few task parameters: the number of objects hit with the right hand (r_s = −0.32, P = 4.2×10^−5), movement area with the right hand (r_s = −0.34, P = 1.6×10^−5), and hand speed with the right hand (r_s = −0.33, P = 2.6×10^−5).
Interrater reliability
The interrater reliability of the task parameters is shown in Table 2 for subjects (13 controls and 10 subjects with stroke) assessed in the task twice. Intraclass correlation coefficients were often high (ICC ≥ 0.8 for 76 % of parameters). Lower reliability values were generally associated with parameters that identified fewer subjects with stroke as impaired and thus had a relatively small range of values across the control and stroke populations. In contrast, higher reliability values tended to be associated with parameters that identified more subjects with stroke as impaired and thus tended to have a larger range of values across the inter-rater sample.
Discussion
The current study quantified impairments in the ability of stroke survivors to rapidly hit certain objects (targets) while avoiding all other objects (distractors). Up to 78 % of subjects with stroke had impairments in individual global, spatial, temporal, or hand-specific task parameters. The task instructions were simple, minimizing the impact of comorbid language impairment [39]. The task was completed in ~3 min yet provided a wide range of information related to sensorimotor and cognitive function.
Most parameters had high inter-rater reliability, providing an objective approach to measure impairments and track recovery. The object hit and avoid task is a variant of an object hit task in which subjects had to rapidly locate and hit all objects moving in the workspace [22]. The present task extended this approach by requiring the subject to select amongst many options when moving and interacting in the environment. Total objects hit quantified each subject's ability to make rapid motor actions, regardless of whether they hit the correct objects or not. Subjects with stroke almost always hit fewer objects with their more affected side, and this arm's performance correlated more strongly with FIM scores than that of the unaffected side. Thus, the reduction and asymmetry of the ability to make rapid motor actions is quantitatively measured by the object hit and avoid task, and may have importance in the ability to complete activities of daily living. We used a large number of parameters to quantify a broad range of sensory, motor and cognitive functions necessary to perform this task. For healthy subjects, some of these measures were highly correlated, but nevertheless captured different functions. For instance, the correlation between target hits and object hits was very strong for controls (r = 0.81). Both parameters were measured, rather than choosing only one, because it was important to differentiate between the ability to make fast and accurate movements, and the ability to make correct motor decisions on whether an object was a correct reach target or not. Thus, these metrics represent different domains of performance. Furthermore, subjects with stroke do not necessarily follow this typical pattern of performance. As shown in Fig. 2d, some subjects with stroke hit a high proportion of distractors and others do not, showing the value of each parameter to identify different impairments that do not necessarily co-occur in some individuals with stroke. The inclusion of both target and distractor objects in the current task added an additional cognitive load to the previous object hit task. This is important as many different cognitive processes are necessary to perform daily activities, and their impairment after stroke is a significant cause of disability [40]. The present object hit and avoid task focused on a few key processes. First, demands on the attentional system are high in a visual search task, as it requires differentiating target and distractor stimuli [41]. Rapid parallel processing of the entire visual workspace can be employed to find a target amongst many distractors with minimal effort if the target has a unique feature separate from distractors that makes it 'pop out'. In contrast, focused attention is required to serially analyze each stimulus if the target can only be differentiated from the distractors by a conjunction of features. The greater attentional demands required for a conjunction versus a feature visual search task result in greater reaction times for both controls and subjects with stroke who do not have visuospatial neglect [23]. Subjects with visuospatial neglect also show significantly increased times to detect targets in a conjunction search task (regardless of which side of the workspace was tested), when compared to the performance of controls and subjects with stroke.
The object hit and avoid task is representative of a conjunctive visual search as targets could only be differentiated from distractors by attending to the geometry (circular, three- or four-sided) and relative dimensions (tall, wide or equal) of each object (see Methods-Behavioural Task). Correspondingly, BIT scores correlated with many individual task parameters, as well as the total number of parameters impaired. Although correlations were weak to moderate, all were in the expected direction: greater task impairment associated with greater clinical impairment. In the current study, participants were required to either enact a reach toward the target, or actively avoid hitting a distractor. Despite visual feedback, haptic feedback, and initial reminders on the need to hit only two types of objects and avoid the rest, over half of the subjects with stroke hit a greater proportion of distractors than 95 % of controls. Subjects with stroke were twice as likely to be impaired in this parameter if they also had neglect. The ability to inhibit a motor action is an important cognitive function of voluntary motor behaviour [7]. Motor decisional processes mediate between the automatic motor response initiated by a new stimulus and the voluntary response required by the task [42]. This ability to inhibit stimulus-driven and enact task-driven motor responses can be measured by eye movements in the anti-saccade task [43] and arm movements in an anti-pointing task [8]. In both tasks, subjects must inhibit a movement towards the appearance of a visual stimulus and move to the equal and opposite location. Subjects with stroke and damage to the frontal lobes have been shown to make erroneous saccades towards a stimulus in an anti-saccade task [9]. Subjects with stroke and visual neglect show greater endpoint errors and longer reaction times in an anti-pointing condition (on both sides of space) than controls or subjects with stroke who do not have visuospatial neglect [24]. Distractor proportion in the current study correlated with BIT scores just as anti-pointing impairments correlated with the severity of neglect. The assessment of rapid visuomotor skills post-stroke has potentially useful applications when rehabilitation goals are to regain high function. The object hit and avoid task may be very predictive of the ability to drive, return to work, or maintain complete independence as these skills require the ability to make many rapid motor decisions daily. We show that impairments in these skills are not always captured by currently used pen and paper cognitive screening tools such as the MoCA. Also, since this task relies on many domains of function to be successful, it may be a good indicator of overall stroke recovery. Cognitive function after stroke, as measured by the MoCA, correlated moderately with distractor proportion, but only modestly with the number of distractor hits. Distractor proportion also identified more subjects as impaired than distractor hits did. These differences reflect the fact that some control subjects hit a substantial number of distractors, but they also hit many targets. This is why we measured distractor proportion, which quantifies the ratio of distractors hit to total objects hit. This task is also part of a larger research program to design a battery of robotic assessment tasks to create a quantitative diagnostic assessment of sensory, motor, and cognitive impairments post-stroke [21].
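Since the argument above turns on how distractor proportion differs from a raw distractor count, here is a minimal sketch of the metric as described in the text (distractors hit divided by total objects hit); the numbers are invented purely to show why a very active control with many raw distractor hits can still have a low proportion.

```python
def distractor_proportion(targets_hit: int, distractors_hit: int) -> float:
    """Share of all contacted objects that were distractors."""
    total_objects_hit = targets_hit + distractors_hit
    if total_objects_hit == 0:
        return float("nan")  # subject never contacted any object
    return distractors_hit / total_objects_hit

# An active control: many distractor hits in absolute terms, low proportion.
print(distractor_proportion(targets_hit=200, distractors_hit=20))  # ~0.09
# A subject hitting far fewer targets: same raw count, much higher proportion.
print(distractor_proportion(targets_hit=60, distractors_hit=20))   # 0.25
```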
The use of a robotic assessment provides objective, continuous measures of performance that are responsive to small changes and compared to a normative model of healthy control performance. This overcomes issues of subjectivity, coarse ordinal scales, criteria-based scoring, and lack of responsiveness (including floor and ceiling effects) seen in many standard assessments of post-stroke functioning. We have also developed assessments of visually-guided reaching [44], bimanual control [30], limb position sense [26], kinesthesia [45], and limb afferent feedback for action [46]. The goal is that information from this assessment battery may be used collectively to provide more precise and responsive tools to guide individualized rehabilitation care. Successful performance in the current task requires many sensorimotor and cognitive skills, thus failure can reflect many potential impairments in sensory, motor and cognitive functions. In order to identify unique impairments in individual participants, it is important to consider the type of parameters that show poor performance. For example, subjects who have impairments in the number objects hit, but not distractor proportion, may have underlying sensorimotor impairments, but no cognitive impairments. These subjects may be better candidates for sensorimotor rather than more cognitive-related rehabilitation. Future work is required to identify whether these patterns of impairment can predict the best type of rehabilitation for each individual. The object hit and avoid task provides a simple and fast approach to quantify the use of attention and selection to perform rapid motor actions with the arms. Most subjects with stroke were found to be impaired when performing this task, especially those with neglect. Many parameters had high inter-rater reliability and correlated with various clinical measures of impairments and ability to perform daily activities. Affected arm BIT: Behavioural inattention test C + SC: Cortical + subcortical Cb: Cerebellar Cb + Br: Cerebellar + brainstem CMSA: Chedoke-McMaster stroke assessment CR: Control range Dominant arm FIM: Functional independence measure Left-affected LH: MoCA: Montreal cognitive assessment Mx: NDA: Non-dominant arm RA: Right-affected RH: Subcortical Unaffected arm Cisek P. Cortical mechanisms of action selection: the affordance competition hypothesis. Philos Trans R Soc Lond B Biol Sci. 2007;362:1585–99. Cisek P, Pastor-Bernier A. On the challenges and mechanisms of embodied decisions. Philos Trans R Soc Lond B Biol Sci. 2014;369. Scott SH. A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci. 2016;39:512–26. Treue S. Neural correlates of attention in primate visual cortex. Trends Neurosci. 2001;24:295–300. Tanaka K, Saito H, Fukada Y, Moriya M. Coding visual images of objects in the inferotemporal cortex of the macaque monkey. J Neurophysiol. 1991;66:170–89. Eskandar EN, Richmond BJ, Optican LM. Role of inferior temporal neurons in visual memory. I. Temporal encoding of information about visual images, recalled images, and behavioral context. J Neurophysiol. 1992;68:1277–95. Munoz DP, Everling S. Look away: the anti-saccade task and the voluntary control of eye movement. Nat Rev Neurosci. 2004;5:218–28. Day BL, Lyon IN. Voluntary modification of automatic arm movements evoked by motion of a visual target. Exp Brain Res. 2000;130:159–68. Guitton D, Buchtel HA, Douglas RM. 
Frontal lobe lesions in man cause difficulties in suppressing reflexive glances and in generating goal-directed saccades. Exp Brain Res. 1985;58:455–72. Pierrot-Deseilligny C, Muri RM, Ploner CJ, Gaymard B, Demeret S, Rivaud-Pechoux S. Decisional role of the dorsolateral prefrontal cortex in ocular motor behaviour. Brain. 2003;126:1460–73. Hawkins KM, Sayegh P, Yan X, Crawford JD, Sergio LE. Neural activity in superior parietal cortex during rule-based visual-motor transformations. J Cogn Neurosci. 2013;25:436–54. Tippett WJ, Alexander LD, Rizkalla MN, Sergio LE, Black SE. True functional ability of chronic stroke patients. J Neuroeng Rehabil. 2013;10:1. Salek Y, Anderson ND, Sergio L. Mild cognitive impairment is associated with impaired visual-motor planning when visual stimuli and actions are incongruent. Eur Neurol. 2011;66:283–93. Tippett WJ, Sergio LE. Visuomotor integration is impaired in early stage Alzheimer's disease. Brain Res. 2006;1102:92–102. Brown JA, Dalecki M, Hughes C, Macpherson AK, Sergio LE. Cognitive-motor integration deficits in young adult athletes following concussion. BMC Sports Sci Med Rehabil. 2015;7:1. Teasell R, Hussein N. Clinical consequences of stroke. In: Evidence based review of stroke rehabilitation (wwwebrsrcom). 2013. Salter K, Campbell N, Richardson M, Mehta S, Jutai J, Zettler L, Moses M, McClure A, Mays R, Foley N, Teasell R. Outcome measures in stroke rehabilitation. In: Evidence Based Review of Stroke Rehabilitation (wwwebrsrcom). 2013. Cumming TB, Brodtmann A, Darby D, Bernhardt J. Cutting a long story short: reaction times in acute stroke are associated with longer term cognitive outcomes. J Neurol Sci. 2012;322:102–6. Ballard C, Stephens S, Kenny R, Kalaria R, Tovee M, O'Brien J. Profile of neuropsychological deficits in older stroke survivors without dementia. Dement Geriatr Cogn Disord. 2003;16:52–6. Sanchez-Cubillo I, Perianez J, Adrover-Roig D, Rodriguez-Sanchez J, Rios-Lago M, Tirapu J, Barcelo F. Construct validity of the trail making test: role of task-switching, working memory, inhibition/interference control, and visuomotor abilities. J Int Neuropsychol Soc. 2009;15:438. Scott SH, Dukelow SP. Potential of robots as next-generation technology for clinical assessment of neurological disorders and upper-limb therapy. J Rehabil Res Dev. 2011;48:335–53. Tyryshkin K, Coderre AM, Glasgow JI, Herter TM, Bagg SD, Dukelow SP, Scott SH. A robotic object hitting task to quantify sensorimotor impairments in participants with stroke. J Neuroeng Rehabil. 2014;11:47. Erez AB, Katz N, Ring H, Soroker N. Assessment of spatial neglect using computerised feature and conjunction visual search tasks. Neuropsychol Rehabil. 2009;19:677–95. Rossit S, Malhotra P, Muir K, Reeves I, Duncan G, Harvey M. The role of right temporal lobe structures in off-line action: evidence from lesion-behavior mapping in stroke patients. Cereb Cortex. 2011;21:2751–61. Scott SH. Apparatus for measuring and perturbing shoulder and elbow joint positions and torques during reaching. J Neurosci Methods. 1999;89:119–27. Dukelow SP, Herter TM, Moore KD, Demers MJ, Glasgow JI, Bagg SD, Norman KE, Scott SH. Quantitative assessment of limb position sense following stroke. Neurorehabil Neural Repair. 2010;24:178–87. von Bertalanffy L. On the von Bertalanffy growth curve. Growth. 1966;30:123–4. Allen KR. A method of fitting growth curves of the von bertalanffy type to observed data. J Fish Res Board Can. 1966;23:163–79. Cormen TH, Leiserson CE, Rivest RL. Introduction to algorithms. 
Cambridge: MIT press; 1990. Lowrey C, Jackson C, Bagg S, Dukelow S, Scott S. A novel robotic task for assessing impairments in bimanual coordination post-stroke. Int J Phys Med Rehabil S. 2014;3:2. Box GE, Cox DR. An analysis of transformations. J R Stat Soc B Methodol. 1964;211–252. Gowland C, Stratford P, Ward M, Moreland J, Torresin W, Van Hullenaar S, Sanford J, Barreca S, Vanspall B, Plews N. Measuring physical impairment and disability with the chedoke-McMaster stroke assessment. Stroke. 1993;24:58–63. Brunnstrom S. Motor testing procedures in hemiplegia: based on sequential recovery stages. Phys Ther. 1966;46:357. Bohannon RW, Smith MB. Interrater reliability of a modified Ashworth scale of muscle spasticity. Phys Ther. 1987;67:206–7. Keith RA, Granger CV, Hamilton BB, Sherwin FS. The functional independence measure: a new tool for rehabilitation. Adv Clin Rehabil. 1987;1:6–18. Wilson B, Cockburn J, Halligan P. Development of a behavioral test of visuospatial neglect. Arch Phys Med Rehabil. 1987;68:98–102. Nasreddine ZS, Phillips NA, Bedirian V, Charbonneau S, Whitehead V, Collin I, Cummings JL, Chertkow H. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53:695–9. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. Pasi M, Salvadori E, Poggesi A, Inzitari D, Pantoni L. Factors predicting the Montreal cognitive assessment (MoCA) applicability and performances in a stroke unit. J Neurol. 2013;260:1518–26. Cicerone KD, Dahlberg C, Kalmar K, Langenbahn DM, Malec JF, Bergquist TF, Felicetti T, Giacino JT, Harley JP, Harrington DE. Evidence-based cognitive rehabilitation: recommendations for clinical practice. Arch Phys Med Rehabil. 2000;81:1596–615. Treisman AM, Gelade G. A feature-integration theory of attention. Cogn Psychol. 1980;12:97–136. Theeuwes J, Kramer AF, Hahn S, Irwin DE. Our eyes do not always go where we want them to go: capture of the eyes by new objects. Psychol Sci. 1998;9:379–85. Hallett PE. Primary and secondary saccades to goals defined by instructions. Vision Res. 1978;18:1279–96. Coderre AM, Zeid AA, Dukelow SP, Demmer MJ, Moore KD, Demers MJ, Bretzke H, Herter TM, Glasgow JI, Norman KE, et al. Assessment of upper-limb sensorimotor function of subacute stroke patients using visually guided reaching. Neurorehabil Neural Repair. 2010;24:528–41. Semrau JA, Herter TM, Scott SH, Dukelow SP. Robotic identification of kinesthetic deficits after stroke. Stroke. 2013;44:3414–21. Bourke TC, Coderre AM, Bagg SD, Dukelow SP, Norman KE, Scott SH. Impaired corrective responses to postural perturbations of the arm in individuals with subacute stroke. J Neuroeng Rehabil. 2015;12:7. The authors would like to thanks S Appaqaq, H Bretzke, MJ Demers, M Metzler, K Moore, J Peterson, M Piitz, and J Yajure for their help with data collection, patient recruitment, and technical support. This work was supported by the Ontario Research Fund – Research Excellence (ORF-RE 04–47), Canadian Institute of Health Research operating grants (MOP 106662), and the Heart and Stroke Foundation of Canada (G-13-0003029). SH Scott was supported by a GSK-CIHR chair in Neuroscience (XGG124631 & 279791). None of these funding sources had any role in the design of the study, the collection, analysis, and interpretation of data, nor in writing the manuscript. Please contact the author for data requests. 
TCB assisted in data collection, and was the primary person responsible for conducting the data analysis, and writing the manuscript. CRL assisted with data analysis and drafting of the manuscript. SPD assisted with data collection and drafting of the manuscript. SDB assisted with data collection and drafting of the manuscript. KEN assisted with data analysis and drafting of the manuscript. SHS participated in the design of the study, data analysis, and drafting of the manuscript. All authors read and approved the final manuscript. SHS is co-founder and chief scientific officer of BKIN Technologies that commercializes the KINARM robot. This study was approved by the Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board (#ANAT-024-05) and the University of Calgary's Conjoint Health Research Ethics Board (#22123) and subjects provided informed consent. Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada Teige C. Bourke, Catherine R. Lowrey, Kathleen E. Norman & Stephen H. Scott Department of Physical Medicine and Rehabilitation, Queen's University, Kingston, ON, Canada Stephen D. Bagg School of Rehabilitation Therapy, Queen's University, Kingston, ON, Canada Kathleen E. Norman Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada Stephen H. Scott Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada Sean P. Dukelow Teige C. Bourke Catherine R. Lowrey Correspondence to Teige C. Bourke. Bourke, T.C., Lowrey, C.R., Dukelow, S.P. et al. A robot-based behavioural task to quantify impairments in rapid motor decisions and actions after stroke. J NeuroEngineering Rehabil 13, 91 (2016). https://doi.org/10.1186/s12984-016-0201-2 Cognitive impairments
Hot answers tagged emergency Why do airplanes use MAYDAY when in danger but ships send SOS? The difference here isn't between ships and aircraft: it's between Morse code and voice. The SOS signal is only for Morse code. It's short, easy to send, and easy to recognise. But it's not as convenient to say. It doesn't actually mean "save our souls". The letters were chosen just to form the simple Morse pattern, and "save our souls" is a backformation: ... emergency radio-communications phraseology Dan Hulme Why are commercial flights not equipped with parachutes for the passengers? Qualification: I worked at a sport parachute center as an instructor for 10 years and I hold an FAA Master Parachute Rigger certificate. I believe that qualifies me as an expert on the subject. The majority of the above other statements here are correct. In summary: The door of a pressurized passenger plane cannot be opened in flight for the stated reasons.... safety commercial-aviation emergency parachute How miraculous was the miracle on the Hudson? This would be an extremely difficult question to answer without being opinionated, but I'll give a try. As part of the investigation, the NTSB, along with other agencies carried out a number of tests on simulator, where pilots were tasked with either landing or ditching the aircraft. There were three different scenarios contemplated: Normal landings ... emergency engine-failure aeroalias Why don't pilots parachute from small planes that are in distress? Mainly because in the situation that you describe, the airplane is perfectly capable of flying. You don't need an engine to fly as airplanes are designed to glide without it. Part of every pilots training is how to land the airplane when this happens. Many of the same issues also apply in the smaller airplanes. Unless the pilot and the passengers fly ... safety emergency general-aviation parachute Lnafziger What are the advantages of squawking 7700? If you make a radio call, unless you are on 121.5 (or 243 military), then only the station you are talking to will initially know about the emergency. Initial calls should always be with the unit you are working with unless you are VFR. If you squawk 7700, then all stations in transponder range, including possible airborne stations such as AWACS and SAR ... emergency radio-communications procedure transponder Why not just drop an engine on fire? The current position of regulators like the FAA is that dropping things off of airplanes in flight is a bad thing. Dropping extremely large and heavy things like engines would be extremely bad. This poses an unacceptable risk to people and property on the ground (or even in the water). Most aircraft with engine fires do not kill anyone on board, so why risk ... jet-engine emergency engine-failure fire fooot♦ Why is the time of useful consciousness only seconds at high altitudes, when I can hold my breath much longer at ground level? When you are breathing, oxygen ($\mathrm{O}_2$) and carbon dioxide ($\mathrm{CO}_2$) are exchanged between the alveoli in your lungs and the environment. This gas exchange is based on diffusion, which means the partial pressures of each gas involved will move towards equalization: Henry's law states that the amount of a specific gas that dissolves in a ... emergency hypoxia physiological Bianfable If a typical passenger plane had total failure of all engines mid-flight, is it possible for passengers to survive? Air Transat Flight 236 experienced a complete power loss over the Atlantic Ocean in 2001. 
Yes, all passengers and crew survived after the aircraft glided 75 miles to a runway on the Azores islands. Even in the event of the loss of all engines, an aircraft can keep its critical electrical systems running thanks to the ram air turbine which allows the crew to ... safety aerodynamics airliner emergency shanet Why are there many pilots who don't say "mayday, mayday, mayday" to declare an emergency? I'm a controller, not a pilot, so I can only speak from my own perspective. What we are taught in ATC school is that many pilots are reluctant to use the word mayday because they feel it might escalate a situation unnecessarily and potentially create a lot of paperwork. I guess, mentally, it seems like calling mayday is a significant, irreversible step ... air-traffic-control emergency phraseology J. Hougaard What prevents a passenger from opening the emergency door on his own will, mid-flight? At cruising altitude there is between 4 and 8 tons of pressure acting on the inside of the door. There aren't too many passengers capable of exerting that much force on the handle (and even fewer handles that won't just snap off). Latch type doors have interlocks or over-center latches that prevent operation with a pressurized cabin. It's theoretically ... emergency cabin-design Why did the Swiss International Air LX40 (a 777-300ER) emergency land at Iqaluit airport? Given that the 777-300ER could have easily flown the remainder to LA with one engine -- which means it could have easily diverted to a nearby larger airport -- why didn't it? Standard operating procedure for engine failure on a twin calls for emergency and landing as soon as possible. That was their closest diversion point at the time, so there they landed. ... emergency etops Why is laser illumation of a cockpit an emergency? Let me posit a hypothetical game ("Don't Try This at... well Anywhere"). It's simple - there are just 3 rules: Give your friend a nice laser pointer and have them stand at the end of a road A half-mile or so should be plenty. Get in your car at the other end of the road and drive toward your friend. Your friend's goal is to shine the laser in ... safety emergency laser-illumination voretaq7 What level of damage to an aircraft is acceptable to let it depart? Aircraft Maintenance Engineer here. To evaluate if an aircraft is safe to fly we use three main reference documents: 1. MEL (minimum equipment list). This document contains the list of elements that are allowed to be inoperative. For example a pneumatic valve, a computer, a seat, etc... All unserviceable elements will have to be fixed within a ... emergency accidents preflight dispatch ant Bldel When are aircraft required to dump fuel for emergency landings? For many medium and large sized jets the maximum gross takeoff weight is higher than the maximum landing weight. If the airplane has an emergency that requires an air return or other landing in the early part of flight, it is very likely overweight for landing. The plane has 3 options at this point: Land overweight Dump fuel (if able) Fly around at low ... safety landing emergency fuel-dumping Why evacuate upwind? There are two problems linked to the wind after accidents: Inflating the slides. Running away from the aircraft when on the ground @DavidRicherby listed the reasons related to running away upwind to try to avoid the effects of flames and fumes (visibility, heat, toxicity). This is part of the IATA guidelines for post-evacuation: Post-evacuation. Once ... 
safety landing emergency evacuation aircraft-failure What is the procedure when an aircraft with an emergency can't land due to a blocked runway? What's the procedure? The procedure is, be creative to save as many lives as possible! Really. The procedure is to determine a course of action which will likely result in the best outcome for everyone, utilizing all resources and given all constraints. Period. It is as simple as that. There are infinitely many scenarios, and one cannot be trained for ... air-traffic-control landing emergency emergency-procedures Why does ATC ask a crew who has declared an emergency if their aircraft will be overweight when landing? If an aircraft encounters a serious problem quite soon after departure that forces it to land immediately, the aircraft may be above its certified maximum landing weight. This is because there is still a lot of fuel in the tanks, which adds a lot of weight. As @RonBeyer mentioned in a comment, landing overweight can have a number of serious consequences. ... air-traffic-control landing emergency What things can a passenger look out for, to indicate an emergency? Honestly, as a passenger, you're not really qualified to look for problems. If you're a pilot qualified and with experience in that type then you might see something. I've had passengers tell my flight attendants that they saw flames coming out of a seam in the engine cowling. It was actually a section of orange rubbery material that was sticking out and ... safety passenger emergency Ralgha Pilot passed out in a small GA plane. What can a passenger do? Best case scenario: You're straight and level, on frequency with some form of human being, there's no immediate danger and you have the know-how to transmit. In that case, that human will provide you with everything they possibly can to help you. Most important thing for you to do is keep the aircraft away from clouds, away from terrain and not panic. You'll ... emergency general-aviation Jamiec♦ Do jet aircraft have an emergency propeller? I think you might have heard about the Ram Air Turbine, which is deployed in case of some aircraft in case of loss of main electrical power supply. From A320 Systems briefing: In case of total loss of all main generators, the RAT is automatically extended and drives the emergency generator via a hydraulic motor. The location of the ram air turbine ... jet-engine emergency propeller How should passengers report SOS signal sightings? There is a procedure, at least for the aircraft crew. It is appropriate for a flight crew to transmit a "pan pan" or even a "mayday relay" so a passenger should immediately inform the senior cabin crew member, usually by asking to speak to the "purser". SOS is an internationally recognised distress signal. If the purser did not act upon it, I would ... emergency procedure Why can't you ditch your aircraft in the sea? The writer was dramatizing things a bit maybe, it's possible to ditch a jet fighter and survive, however your chances are much better ejecting. Ditching is an option for any aircraft, with some airplanes ditching is the only option if there's a loss of power over water, for example commercial jets have no mode of egress other than the doors. I fly light ... emergency ditching oceanic GdD This headline made the news this week: Passenger Snaps Photo of Fuel Pouring Out of a Dreamliner's Wing: The passenger, Ann Kristin Balto from Tromsø, noticed the highly disconcerting leak as the plane was taxiing to the runway—before it actually took off. 
After alerting the stewardess, the flight was immediately cancelled. So fuel leaks are one thing ... Danny Beckett Why are pilots deemed unfit to fly after emergency ejection? Ejection Seats are not a free ticket out. They are incredibly violent and rough on your body. This newspaper article has a more chilling quote from an interview: About one in three will get a spinal facture, due to the force when the seat is ejected - the gravitational force is 14 to 16 times normal gravity and it might be applied at 200G per second. ... emergency pilots procedure ejection-seat Thunderstrike Why can't an Airbus A330 dump fuel in an emergency? To supplement Jimmy's answer, if they had to land right away, they could have; it just would've resulted in an overweight landing being recorded, and which on most airliners triggers a special inspection of the landing gear and its attaching structure, and if nothing is permanently bent or cracked or broken, you are good to go. An overweight landing in ... emergency airbus-a330 fuel-dumping Do you have to explain what your emergency is? The order is "Aviate, Navigate, Communicate". As a pilot, your first responsibility is to keep the plane flying. After that it is to avoid hitting something or to avoid getting lost. After that comes communication. There are many instances of in-flight emergencies where the pilots never talked to anybody about it, because they were busy flying the plane... ... faa-regulations emergency Ron Beyer Is there any emergency situation in which jet engines would not be shut down before passenger evacuation of a transport aircraft? It is possible for the engine to not respond to shutdown sequence, so you'd have to get the passengers off before dealing with the rogue engine. In the Qantas Flight 32 one engine had an un-contained turbine failure in engine No. 2 leaving them unable to shutdown engine No. 1. This was due to a piece of debris severing all communication to the engine, ... airliner landing emergency emergency-procedures Notts90 is off to codidact.org I just saw a plane drop off online radar; should I do anything about it? Really there's nothing you should do in cases like these. Flightradar24, FlightAware, and similar services should not be used for flight safety purposes, and most of them specifically state so in their terms of service (such as sections 12 and 14 of Flightradar24's terms and conditions). Their sources may go down for whatever reason -- that does not mean ... safety air-traffic-control emergency online-tracking flightradar24 Qantas 94 Heavy Parachutes are heavy, expensive, difficult to use and will be useless in pretty much any air disaster. In order to parachute from a commercial aircraft it would need to be in a stable attitude, at low speed and below about 12 000 ft. Short of an aircraft losing power to all engines like the Gimli Glider, in which case ditching in the ocean or finding an ... Darren Olivier If there is a fire, any smoke and flames will blow downwind. If there is a fuel leak, the fuel vapours will be blown downwind, risking a fire or explosion there and potentially making it hard to breathe. 
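To make the cabin-door figure quoted earlier concrete ("between 4 and 8 tons of pressure acting on the inside of the door"), a rough back-of-the-envelope check: assuming a typical cruise cabin differential of about 8 psi (≈55 kPa) and a door area of roughly 1.2 m² — both assumed values, since door sizes and pressurisation schedules vary by aircraft type — the net force is

$$ F = \Delta P \cdot A \approx 5.5\times10^{4}\,\mathrm{Pa} \times 1.2\,\mathrm{m^{2}} \approx 6.6\times10^{4}\,\mathrm{N} \approx 6.7\ \text{tonnes-force}, $$

which sits inside the quoted 4-8 ton range; plug-type doors exploit exactly this load by seating the door against its frame from the inside.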
How does support vector regression work intuitively? All the examples of SVMs are related to classification. I don't understand how an SVM for regression (support vector regressor) could be used in regression. From my understanding, an SVM maximizes the margin between two classes to find the optimal hyperplane. How would this possibly work in a regression problem? regression svm In short: maximising the margin can more generally be seen as regularising the solution by minimising $w$ (which is essentially minimising model complexity); this is done both in classification and in regression. But in the case of classification this minimisation is done under the condition that all examples are classified correctly, and in the case of regression under the condition that the value $y$ of all examples deviates less than the required accuracy $\epsilon$ from $f(x)$. In order to understand how you go from classification to regression it helps to see how in both cases one applies the same SVM theory to formulate the problem as a convex optimisation problem. I'll try putting both side by side. (I'll ignore slack variables that allow for misclassifications and deviations above accuracy $\epsilon$.) Classification: in this case the goal is to find a function $f(x)= wx +b$ where $f(x) \geq 1$ for positive examples and $f(x) \leq -1$ for negative examples. Under these conditions we want to maximise the margin (distance between the 2 red bars), which is nothing more than minimising the derivative $f'=w$. The intuition behind maximising the margin is that this will give us a unique solution to the problem of finding $f(x)$ (i.e. we discard for example the blue line) and also that this solution is the most general under these conditions, i.e. it acts as a regularisation. This can be seen as follows: around the decision boundary (where the red and black lines cross) the classification uncertainty is the biggest, and choosing the lowest value for $f(x)$ in this region will yield the most general solution. The data points at the 2 red bars are the support vectors in this case; they correspond to the non-zero Lagrange multipliers of the equality part of the inequality conditions $f(x) \geq 1$ and $f(x) \leq -1$. Regression: in this case the goal is to find a function $f(x)= wx +b$ (red line) under the condition that $f(x)$ is within a required accuracy $\epsilon$ of the value $y(x)$ (black bars) of every data point, i.e. $|y(x) -f(x)|\leq \epsilon$, where $\epsilon$ is the distance between the red and the grey line. Under this condition we again want to minimise $f'(x)=w$, again for the reason of regularisation and to obtain a unique solution as the result of the convex optimisation problem. One can see how minimising $w$ results in a more general case, as the extreme value of $w=0$ would mean no functional relation at all, which is the most general result one can obtain from the data. The data points at the 2 red bars are the support vectors in this case; they correspond to the non-zero Lagrange multipliers of the equality part of the inequality condition $|y -f(x)|\leq \epsilon$. Both cases result in the following problem: $$ \min \frac{1}{2}w^2 $$ under the condition that: All examples are classified correctly (Classification) The value $y$ of all examples deviates less than $\epsilon$ from $f(x)$.
(Regression) Lejafar In an SVM for a classification problem we try to separate the classes as far as possible from the separating hyperplane and, unlike logistic regression, we create a safety margin on both sides of the hyperplane (the difference between logistic regression and SVM classification lies in their loss functions). The aim is, eventually, to keep the data points of the different classes as far as possible from the hyperplane. In an SVM for a regression problem, we want to fit a model that predicts a quantity for the future, so we want the data points (observations) to be as close as possible to the hyperplane, unlike in SVM classification. SVM regression is inherited from simple regression such as ordinary least squares, with the difference that we define an epsilon range on both sides of the hyperplane to make the regression function insensitive to errors within that band, whereas in SVM classification the boundary is defined to keep future decisions (predictions) safe. Eventually, SVM regression has a boundary just as SVM classification does, but the boundary in regression makes the fitted function insensitive to small errors, while the boundary in classification only serves to keep points far from the hyperplane (decision boundary) so that classes can be distinguished in the future (that is why we call it a safety margin). morteza
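To connect the ε-tube described in these answers to something runnable, here is a minimal sketch of ε-insensitive support vector regression; it assumes scikit-learn is installed, and the toy data, kernel and hyperparameter values are arbitrary choices for illustration rather than recommendations.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)

# epsilon is the half-width of the insensitive tube: deviations smaller than
# epsilon contribute no loss. C trades flatness (small w) against deviations
# that fall outside the tube.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)

# Only points on or outside the epsilon-tube end up as support vectors,
# mirroring the non-zero Lagrange multipliers discussed above.
print("support vectors:", len(model.support_vectors_), "of", len(X))
print("prediction at x = 2.5:", model.predict([[2.5]])[0])
```

Shrinking epsilon narrows the tube, so more points become support vectors and the fit tracks the data more closely.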
2MASS J154043.42-510135.7: a new addition to the 5 pc population
Bibcode 2014A&A...567A...6P 10.1051/0004-6361/201423615
Pérez Garrido, A.; Lodieu, N.; Béjar, V. J. S.; Ruiz, M. T.; Gauza, B.; Rebolo, R.; Zapatero Osorio, M. R.
Bibliographical reference Astronomy & Astrophysics, Volume 567, id.A6, 8 pp.
Advertised on: Astronomy & Astrophysics
Aims: The aim of the project is to find the stars closest to the Sun and to contribute to the completion of the stellar and substellar census of the solar neighbourhood.
Methods: We identified a new late-M dwarf within 5 pc, looking for high proper motion sources in the 2MASS-WISE cross-match. We collected astrometric and photometric data available from public large-scale surveys. We complemented this information with low-resolution (R ~ 500) optical (600-1000 nm) and near-infrared (900-2500 nm) spectroscopy with instrumentation on the European Southern Observatory New Technology Telescope to confirm the nature of our candidate. We also present a high-quality medium-resolution VLT/X-shooter spectrum covering the 400 to 2500 nm wavelength range.
Results: We classify this new neighbour as an M7.0 ± 0.5 dwarf using spectral templates from the Sloan Digital Sky Survey and spectral indices. Lithium absorption at 670.8 nm is not detected in the X-shooter spectrum, indicating that the M7 dwarf is older than 600 Myr and more massive than 0.06 M⊙. We also derive a trigonometric distance of 4.4 (+0.5/−0.4) pc, in agreement with the spectroscopic distance estimate, making 2MASS J154043.42-510135.7 (2M1540) the nearest M7 dwarf to the Sun. This trigonometric distance is somewhat closer than the ~6 pc distance reported by the ALLWISE team, who independently identified this object recently. This discovery represents an increase by 25% in the number of M7-M8 dwarfs already known at distances closer than 8 pc from our Sun. We derive a density of ρ = 1.9 ± 0.9 × 10−3 pc−3 for M7 dwarfs in the 8 pc volume, a value similar to those quoted in the literature.
Conclusions: This new ultracool dwarf is among the 50 closest systems to the Sun, demonstrating that our current knowledge of the stellar census within the 5 pc sample remains incomplete. 2M1540 represents a unique opportunity to search for extrasolar planets around ultracool dwarfs due to its proximity and brightness.
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile.
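As a quick consistency check on the quoted space density (an illustrative calculation, not part of the paper's analysis), multiplying the density by the volume of an 8 pc sphere gives the implied number of M7 dwarfs in that volume:

$$ V = \tfrac{4}{3}\pi \,(8\,\mathrm{pc})^{3} \approx 2.1\times10^{3}\,\mathrm{pc^{3}}, \qquad N \approx \rho V \approx 1.9\times10^{-3}\,\mathrm{pc^{-3}} \times 2.1\times10^{3}\,\mathrm{pc^{3}} \approx 4, $$

i.e. of order a handful of M7 dwarfs within 8 pc, consistent with a single new discovery raising the M7-M8 count by about 25%.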
Association between dietary acid load and the risk of hypertension among adults from South China: result from nutrition and health survey (2015–2017) Shao-wei Chen1, Gui-yuan Ji1, Qi Jiang1, Ping Wang1, Rui Huang1, Wen-jun Ma1, Zi-hui Chen1 & Jie-wen Peng1 Higher dietary acid load (DAL) was considered to be associated with an elevated risk of hypertension, while related data from mainland China remains scarce and incomplete. We aim to evaluate the association between DAL and the risk of hypertension among adults from South China. We conducted a nutrition and health survey in Guangdong Province located in southern China from 2015 to 2017. A four-stage probability sampling method was utilized to select representative samples of citizens aged ≥18 years old. DAL was assessed by potential renal acid load (PRAL) and net endogenous acid production (NEAP). Participants were divided to 4 groups (Q1-Q4) according to the quartile points of PRAL or NEAP distributions. Generalized linear mixed effects models were applied to evaluate the association between DAL and the risk of hypertension. A total of 3501 individuals were eligible for this study and 45.9% was male participants. Hypertension rate was 30.7%. A higher PRAL was associated with higher prevalence rate of hypertension among the male (P-trend = 0.03). OR for Q2 was 1.34 (95%CI, 0.94–1.91), Q3 was 1.53 (95%CI = 1.08, 2.16) and Q4 was 1.51 (95%CI, 1.08–2.16) among the male. However, as for total participants, the female, the participants with ≤55 years or participants with > 55 years, the associations were lack of significance. With respect to association between NEAP and hypertension, non-significant results were identified. The current study indicated male hypertension was associated with higher PRAL, while given to this study was cross-sectional design, further studies are warranted to verify the association. Hypertension is a leading cause of premature death around the world on account of its high prevalence and significant impact on cardiovascular disease mortality [1,2,3]. Large-scale and the most recent surveys exhibit that hypertension is commonly prevalent in China and its prevalence rate is up to 30% in adults [4]. Findings from previous studies have estimated that above 750,000 cardiovascular disease deaths could be attributable to abnormally high blood pressure in China in 2010 [5]. In addition, situation about hypertension control is not so optimistic for hypertension treatment rate is < 50% [5]. It is paramount and indispensable to prevent the hypertension from its etiological factors. For the etiology of hypertension, besides genetic, social and environmental factors, diet pattern could also exert a role in the ongoing epidemics of hypertension [6,7,8,9,10]. There are both potentially adverse and preventive effects of dietary factors on hypertension. Some previous publications [6, 11, 12] showed an inverse risk of developing hypertension associated with high quality and/or health-related dietary patterns, such as Mediterranean diets. These dietary patterns are perceived to be associated with inflammation suppression, which are involved in biological pathways of preventive effect on hypertension [13]. Another possible mechanism comes to endogenous acid and base production [14]. Individuals with highly consuming plant food are perceived to get a more sound equilibrium of acid-base as consumption of these food generally could obtain more alkali-rich nutrients such as magnesium and potassium [14]. 
The alkali-rich nutrients help to produce potential bicarbonate precursors in vivo [15]. However, people with habitually animal products might be much easier to get a chronic and mild metabolic acidosis because phosphorus-rich and sulfur-rich components from these food could provide proton load and lower pH value in blood [15, 16]. Several studies revealed that acid-base equilibrium affects blood pressure toward kidneys and renin-angiotensin-aldosterone system [15, 16]. Dietary acid load (DAL) was proposed to evaluate acid-base equilibrium resulted from dietary intake [17,18,19]. Two methods were commonly applied for computing DAL, including potential renal acid load (PRAL) and net endogenous acid production (NEAP) [17,18,19]. Some observational studies exhibited a higher risk of hypertension in citizens with higher DAL [14, 20, 21], but it is still under discussion as some other studies reported a contradictory result [22]. Up to now, to our best understanding, there is still lack of data from mainland China. Guangdong province locates in the south of China and has a population of 110 million in 2016. We have previously reported the prevalence trend of hypertension from 23.6 to 40.8% within a period from 2002 to 2010 [23]. Risk factors of hypertension such as lifestyle, industrialization and urbanization were also studied [23], but data on dietary impacts were lacking. Thus, we perform a cross-sectional health survey from 2015 to 2017 including a population of 3501 in Guangdong province to explore whether DAL is associated with the risk of hypertension. This study employed data from the Guangdong Nutrition and Health Survey (NHS) 2015, which was performed by the Guangdong Provincial Center for Disease Control and Prevention from 2015 to 2017. The NHS was cross-sectional design and was a part of the China National Nutrition and Health Survey 2015, as previously described [24]. A four-stage probability sampling method was utilized to select representative samples of citizens who lived in Guangdong province for continuous 6 months within the last year. A total of 14 counties in Guangdong province were selected and the procedure was similar to the NHS conducted in 2002 and 2012 [23, 25]. Briefly, 125 counties in Guangdong province were stratified into urban or rural areas according to economic capabilities. Eight counties from urban layer and six counties from rural layer were randomly selected. After then, 3 communities (urban) or townships (rural) were randomly sampled from each selected county. Two residential committees (urban) or villages (rural) were further extracted from the target communities or townships. Finally, about 20 households were included and all the adults aged ≥18 years old in the households attended our investigations with signed informed consent. To obtain sufficient number of participants, about 270 households and 612 participants in each county were required. Additional households and participants were obtained from neighboring area, if it did not meet the requirement in the specific selected county. Dietary consumption assessment A 24-h dietary recall in consecutive 3 days was applied to collect dietary consumption at the individual level. Food species and intake amount were recorded by trained interviewers. Cooking oil and condiments at the household were weighted on daily basis. Individual data about cooking oil and condiments was calculated by individual ratio of dietary energy among household members. 
Questionnaire applied for collecting food consumption in this study was developed from the China National Nutrition and Health Survey 2015, as described in previous study [24]. Daily dietary nutrients and energy were computed by food consumption and food composition data which was derived from the Chinese Food Composition Tables (2004 and 2009). DAL was assessed by PRAL and NEAP. PRAL evaluated endogenous acid load though synthesizing effect of dietary protein, phosphorus, potassium, calcium and magnesium, and NEAP though dietary protein and potassium. PRAL and NEAP were computed using the following formulas, respectively [17,18,19]: $$ \mathrm{PRAL}\ \left(\mathrm{mEq}/\mathrm{d}\right)=0.4888\times \mathrm{protein}\ \mathrm{intake}\ \left(\mathrm{g}/\mathrm{d}\right)+0.0366\times \mathrm{phosphorus}\ \left(\mathrm{mg}/\mathrm{d}\right)-0.0205\times \mathrm{potassium}\ \left(\mathrm{mg}/\mathrm{d}\right)-0.0125\times \mathrm{calcium}\ \left(\mathrm{mg}/\mathrm{d}\right)-0.0263\times \mathrm{magnesium}\ \left(\mathrm{mg}/\mathrm{d}\right); $$ $$ \mathrm{NEAP}\ \left(\mathrm{mEq}/\mathrm{d}\right)=\left(54.5\times \mathrm{protein}\ \mathrm{intake}\ \left(\mathrm{g}/\mathrm{d}\right)\div \mathrm{potassium}\ \mathrm{intake}\ \left(\mathrm{mEq}/\mathrm{d}\right)\right)-10.2 $$ Anthropometry of participants were measured by well-trained researchers based on standard procedures. Weight (kg) and height (cm) of participants were assessed by electronic instruments after removing the heavy clothes and shoes. Body mass index (BMI) was computed for each individual (weight kg/height m2). Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured by an electronic device (OMRON Corporation, HBP1300) for 3 times in each participant. We applied the mean of the 3 measured values in the final analysis. According to guidelines of the World Health Organization (WHO), hypertension was defined as a SBP ≥ 140 mmHg and/or a DBP ≥ 90 mmHg, or the use of medication for hypertension [26]. Baseline information and hypertension related factors such as smoking, alcohol intake, physical activity and etc. were also collected by face to face interview. Collation of the data from all participants was conducted by 2 investigators. Data on continuous variables was exhibited as mean and standard deviation (SD), and data on categorical variables was as number and proportion. If there was abnormal or incomplete data, we excluded them from analysis of association between DAL and hypertension. Participants were divided to 4 groups according to the quartile points of DAL (PRAL and NEAP) distributions. Correlation between blood pressure (SBP and DBP) and different nutrients intake (protein, calcium, potassium, phosphorus and magnesium) was evaluated by scatter plot. We employed analysis methods for complex samples as the data in this study was collected by multi-stage complex sampling. A generalized linear mixed effects model was employed to examine the relationship between DAL and hypertension. As participant selection was stratified into urban or rural areas, we put region layers (urban layer or rural layer) as fixed effect in the model. Considering there might be family cluster of hypertension and food consumption, household id was put as random effect in the model. Crude odds ratio (OR) and its corresponding 95% confidence interval (CI) for difference levels of DAL (PRAL or NEAP) were calculated. 
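As an aside, the two formulas above translate directly into code. The sketch below assumes potassium is converted from mg/d to mEq/d by dividing by 39.1 (potassium is monovalent), and the example intake values are invented, chosen only to land near the cohort means reported later.

```python
def pral_mEq_per_day(protein_g, phosphorus_mg, potassium_mg, calcium_mg, magnesium_mg):
    """Potential renal acid load, using the coefficients given in the text."""
    return (0.4888 * protein_g
            + 0.0366 * phosphorus_mg
            - 0.0205 * potassium_mg
            - 0.0125 * calcium_mg
            - 0.0263 * magnesium_mg)

def neap_mEq_per_day(protein_g, potassium_mg):
    """Net endogenous acid production; potassium converted from mg/d to mEq/d."""
    potassium_mEq = potassium_mg / 39.1  # ~39.1 mg of potassium per mEq
    return 54.5 * protein_g / potassium_mEq - 10.2

# Illustrative one-day intake (made-up values).
print(round(pral_mEq_per_day(70, 1000, 1800, 450, 280), 1))  # ~20.9 mEq/d
print(round(neap_mEq_per_day(70, 1800), 1))                  # ~72.7 mEq/d
```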
In addition, as there were potential confounding factors for hypertension, including age, sex, smoking, drinking, BMI, sedentary leisure time, physical activity time, sodium intake, education and marital status, we also put these variables in the model to compute adjusted OR. Adjusted model 1 was deemed as generalized linear mixed effects model adjusted for sex and age, and adjusted model 2 was generalized linear mixed effects model adjusted for sex, age, smoking, drinking, body mass index, sedentary leisure time, physical activity time, sodium intake, education and marital status. Subgroups analysis concerning different gender groups (male and female) and age groups (≤55 years and > 55 years) was performed. Statistical analyses in this study were carried out by SAS Enterprise Guide (SAS Institute Inc., Cary, NC, USA). Figures were produced in R version 3.5.1 (R Core Development Team). A p value below 0.05 was conducted to characterize statistically significant results. Characteristic of participants A total of 3643 adults was included in our study and 3501 adults completed the survey and anthropometry measurements. The response rate was 96.1%. Among the 3501 individuals, their average age was 52.0 ± 15.0 years and 45.9% was male participants. Prevalence of hypertension was 30.7% (29.1% in male and 32.7% in female). The mean value of PRAL and NEAP were 22.1 ± 18.6 and 86.8 ± 53.9 mEq/d, respectively. Nutrient intake and other variables about characteristic of participants are summarized in Table 1. A total of 3237 participants and 3233 participants were included in generalized linear mixed effects model for PRAL and NEAP analysis, respectively, because 268 participants and 264 participants had abnormal high value of PRAL and NEAP or hypertension patients were not under control after taking antihypertensive drugs. Table 1 Characteristic of eligible participants Scatter plots did not show obvious linear relationship between blood pressure (SBP and DBP) and different nutrients intake (protein, calcium, potassium, phosphorus and magnesium), as exhibited in Fig. 1. Scatter plots regarding association between blood pressure and five nutrients Association between PRAL and hypertension We employed data from the first quartile (Q1) as reference group, crude ORs of the second quartile (Q2), the third quartile (Q3) and the fourth quartile (Q4) were 1.00, 1.10 and 1.05, respectively, which were lack of statistical significance (P-values > 0.05 and P-trend = 0.13). After adjusting for sex and age, the associations between PRAL and hypertension were still not statistically significant. In addition, after adjusting for a broad of potential confounding factors including age, sex, smoking, drinking, BMI, sedentary leisure time, physical activity time, sodium intake, education and marital status, the non-significant results remain (Table 2). Table 2 Association between potential renal acid load and the risk of hypertension Results from subgroup analysis indicated that higher PRAL was associated with higher prevalence rate of hypertension among male adults (P-trend = 0.03). Crude OR for Q2 was 1.34 (95%CI, 0.94–1.91), Q3 was 1.53 (95%CI = 1.08, 2.16) and Q4 was 1.51 (95%CI, 1.08–2.16). However, these ORs had a reduction after adjusting for a broad of potential confounding factors (adjusted ORs for Q2, Q3 and Q4 were 1.29, 1.42 and 1.49 respectively). 
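For readers who want to see the shape of the quartile analysis, the following is a deliberately simplified sketch: it codes PRAL quartiles with Q1 as the reference level and fits a plain logistic regression with statsmodels on synthetic data. The paper's actual model is a generalized linear mixed effects model with a household random effect, region as a fixed effect and additional covariates, none of which are reproduced here; the variable names and the fake data-generating step are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic stand-in for the analytic dataset.
n = 3000
df = pd.DataFrame({
    "pral": rng.normal(22, 18, n),
    "age": rng.integers(18, 80, n),
    "male": rng.integers(0, 2, n),
})
# Fake outcome, loosely tied to age, purely for illustration.
df["hypertension"] = (rng.random(n) < 0.1 + 0.004 * (df["age"] - 18)).astype(int)

# PRAL quartile groups, Q1 as the reference level.
df["pral_q"] = pd.qcut(df["pral"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Plain logistic regression; household random effects and survey design omitted.
fit = smf.logit("hypertension ~ C(pral_q) + age + male", data=df).fit(disp=False)
print(np.exp(fit.params))      # odds ratios relative to Q1
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```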
Among females and in both age groups (≤55 years and > 55 years), the associations between elevated PRAL and hypertension risk were not pronounced (Fig. 2). Forest plots regarding the association between potential renal acid load and the risk of hypertension. (Plot a, base model; Plot b, adjusted model) Association between NEAP and hypertension For the association between NEAP and hypertension, the crude ORs for Q2, Q3 and Q4 were 1.19, 1.16 and 1.24, respectively, none of which was statistically significant (P values > 0.05; P-trend = 0.76). Both adjusted model 1 and adjusted model 2 also gave non-significant results (Table 3). Subgroup analysis showed that the associations between elevated NEAP and hypertension risk were not pronounced in either sex or in either age group (Fig. 3). Forest plots regarding the association between net endogenous acid production and the risk of hypertension. (Plot a, base model; Plot b, adjusted model) To the best of our knowledge, this is the first study from mainland China to evaluate the association between DAL and hypertension in adults using a large sample of 3501 participants. Our analysis showed that higher PRAL was associated with a higher prevalence of hypertension among male adults; however, among all participants combined, among females, and in both age groups (≤55 years and > 55 years), the associations were not significant. No significant association was identified between NEAP and hypertension. Previous studies have reported conflicting findings on the association between DAL and hypertension. Two cohort studies, from the United States (comprising 87,293 participants) [27] and Hong Kong, China (comprising 3956 participants) [28], reported an increased risk of hypertension in those with elevated DAL. However, two other cohort studies from the Netherlands (comprising 3411 and 2241 participants, respectively) [22, 29] found no significant association. The controversy also exists among cross-sectional studies conducted in Japan [14], Germany [20], Korea [30] and Sweden [31]. Differences in DAL levels across studies might be one contributor to the inconsistent results. Krupp et al. [20] found a 1.45-fold higher prevalence of hypertension in participants with higher PRAL (15.5 mEq/day) compared with lower PRAL values (− 30.8 mEq/day). Engberink et al. [22] did not identify a significant association between higher PRAL and the risk of incident hypertension (the medians of the lower and higher PRAL categories were − 14.6 and 19.9 mEq/d, respectively). The medians of the lower and higher PRAL groups in our study (13.81 and 28.19 mEq/d, respectively) also differed from those of other studies. Other possible explanations include differences in study design and in the inclusion criteria for participants. Our findings showed that higher PRAL was associated with a higher prevalence of hypertension among male adults, whereas no significant association was found among females. Previous studies have reported biological factors, such as sex hormones, immune-inflammatory factors and chromosomal differences [32], that are considered protective in females. Although the prevalence of hypertension in males (29.1%) was slightly lower than in females (32.7%), PRAL was significantly higher in males (25.3 ± 20.2 mEq/d) than in females (19.4 ± 16.7 mEq/d).
This suggests that blood pressure in males may be more susceptible to PRAL than in females. This sex difference in the effect of DAL needs to be verified, since it disappeared when the effect of NEAP was considered. The mechanism underlying the putative association between elevated DAL and the risk of hypertension remains uncertain; nevertheless, some previous studies have attempted to explain the potential relationship. Chronic, low-grade metabolic acidosis has been proposed as a main contributor to hypertension risk [22]. This chronic state is characterized by an increased proton load and a decreased blood pH resulting from a high DAL [20, 31]. Metabolic acidosis could indirectly raise blood pressure by increasing cortisol secretion, promoting calcium excretion or inhibiting citrate excretion [20, 31]. Another proposed mechanism involves the serum anion gap in metabolic acidosis. Previous studies have demonstrated elevated blood pressure in individuals with a high anion gap [33, 34], although the possible underlying pathway is not understood. Even though the evidence mentioned above has been supported by a set of cross-sectional and animal studies, the proposed mechanisms are still considered weak. This study has several strengths: (1) a large sample of participants was randomly selected with a well-designed sampling strategy; and (2) anthropometry was measured by well-trained staff rather than self-reported. However, some potential limitations should also be discussed. First, this was a cross-sectional study, so the chronological order of exposure and outcome cannot be established; a causal link between dietary acid load and the risk of hypertension therefore cannot be confirmed, and residual confounding may have been introduced. Although we included as broad a set of confounding factors as we could, such as sex, age, smoking, drinking, body mass index, sedentary leisure time, physical activity time, sodium intake, education and marital status, in the generalized linear mixed-effects model, confounding by other potential factors, such as family history, might exist as well. Moreover, information bias may not have been eliminated because dietary data were collected over a short period (3 consecutive days); such data cannot comprehensively reflect the long-term dietary habits of local residents, and seasonal factors were not taken into consideration. In addition, although dietary acid load (PRAL and NEAP) has been widely employed in previous studies, it was not measured directly but was computed from dietary intake. Two aspects should be noted for future studies. The first is that a comprehensive review of the putative association between elevated dietary acid load and the risk of hypertension should be conducted; a quantitative synthesis of the available studies could provide a more comprehensive answer on this issue. The second is that the pathway by which dietary acid load affects blood pressure should be uncovered if the putative association is actually confirmed. In summary, an association between elevated PRAL and a higher prevalence of hypertension among male adults was identified, whereas for all participants combined, for females, and for participants aged ≤55 years or > 55 years, the associations were not significant.
No significant association was identified between NEAP and hypertension. Given the cross-sectional design of this study, further studies are warranted to elucidate the role of DAL in hypertension risk in the two sexes, and the pathway by which DAL affects blood pressure should be verified and uncovered. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. DAL: Dietary acid load; DBP: Diastolic blood pressure; NEAP: Net endogenous acid production; NHS: Nutrition and Health Survey; PRAL: Potential renal acid load; Q1: The first quartile point; Q2: The second quartile point; Q3: The third quartile point; Q4: The fourth quartile point; SBP: Systolic blood pressure Roth GA, Nguyen G, Forouzanfar MH, Mokdad AH, Naghavi M, Murray CJ. Estimates of global and regional premature cardiovascular mortality in 2025. Circulation. 2015;132(13):1270–82. Forouzanfar MH, Liu P, Roth GA, Ng M, Biryukov S, Marczak L, Alexander L, Estep K, Hassen AK, Akinyemiju TF, et al. Global burden of hypertension and systolic blood pressure of at least 110 to 115 mm Hg, 1990-2015. JAMA. 2017;317(2):165–82. Chugh SS, Roth GA, Gillum RF, Mensah GA. Global burden of atrial fibrillation in developed and developing nations. Glob Heart. 2014;9(1):113–9. Lu J, Lu Y, Wang X, Li X, Linderman GC, Wu C, Cheng X, Mu L, Zhang H, Liu J, et al. Prevalence, awareness, treatment, and control of hypertension in China: data from 1.7 million adults in a population-based screening study (China PEACE million persons project). Lancet. 2017;390(10112):2549–58. Lewington S, Lacey B, Clarke R, Guo Y, Kong XL, Yang L, Chen Y, Bian Z, Chen J, Meng J, et al. The burden of hypertension and associated risk for cardiovascular mortality in China. JAMA Intern Med. 2016;176(4):524–32. Han H, Fang X, Wei X, Liu Y, Jin Z, Chen Q, Fan Z, Aaseth J, Hiyoshi A, He J, et al. Dose-response relationship between dietary magnesium intake, serum magnesium concentration and risk of hypertension: a systematic review and meta-analysis of prospective cohort studies. Nutr J. 2017;16(1):26. Qi D, Nie XL, Wu S, Cai J. Vitamin D and hypertension: prospective study and meta-analysis. PLoS One. 2017;12(3):e174298. Xi B, Huang Y, Reilly KH, Li S, Zheng R, Barrio-Lopez MT, Martinez-Gonzalez MA, Zhou D. Sugar-sweetened beverages and risk of hypertension and CVD: a dose-response meta-analysis. Br J Nutr. 2015;113(5):709–17. Cai Y, Zhang B, Ke W, Feng B, Lin H, Xiao J, Zeng W, Li X, Tao J, Yang Z, et al. Associations of short-term and long-term exposure to ambient air pollutants with hypertension: a systematic review and meta-analysis. Hypertension. 2016;68(1):62–70. Park SH, Lim JE, Park H, Jee SH. Body burden of persistent organic pollutants on hypertension: a meta-analysis. Environ Sci Pollut Res Int. 2016;23(14):14284–93. Wu L, Sun D, He Y. Fruit and vegetables consumption and incident hypertension: dose-response meta-analysis of prospective cohort studies. J Hum Hypertens. 2016;30(10):573–80. Nissensohn M, Roman-Vinas B, Sanchez-Villegas A, Piscopo S, Serra-Majem L. The effect of the Mediterranean diet on hypertension: a systematic review and meta-analysis. J Nutr Educ Behav. 2016;48(1):42–53. Chuang SY, Chiu TH, Lee CY, Liu TT, Tsao CK, Hsiung CA, Chiu YF. Vegetarian diet reduces the risk of hypertension independent of abdominal obesity and inflammation: a prospective study. J Hypertens. 2016;34(11):2164–71.
Akter S, Eguchi M, Kurotani K, Kochi T, Pham NM, Ito R, Kuwahara K, Tsuruoka H, Mizoue T, Kabe I, et al. High dietary acid load is associated with increased prevalence of hypertension: the Furukawa nutrition and health study. Nutrition. 2015;31(2):298–303. Ball D, Maughan RJ. Blood and urine acid-base status of premenopausal omnivorous and vegetarian women. Br J Nutr. 1997;78(5):683–93. Remer T. Influence of nutrition on acid-base balance--metabolic aspects. Eur J Nutr. 2001;40(5):214–20. Remer T, Dimitriou T, Manz F. Dietary potential renal acid load and renal net acid excretion in healthy, free-living children and adolescents. Am J Clin Nutr. 2003;77(5):1255–60. Remer T, Manz F. Estimation of the renal net acid excretion by adults consuming diets containing variable amounts of protein. Am J Clin Nutr. 1994;59(6):1356–61. Frassetto LA, Todd KM, Morris RJ, Sebastian A. Estimation of net endogenous noncarbonic acid production in humans from diet potassium and protein contents. Am J Clin Nutr. 1998;68(3):576–83. Krupp D, Esche J, Mensink G, Klenow S, Thamm M, Remer T. Dietary acid load and potassium intake associate with blood pressure and hypertension prevalence in a representative sample of the German Adult Population. Nutrients. 2018;10(1):103. Murakami K, Livingstone M, Okubo H, Sasaki S. Higher dietary acid load is weakly associated with higher adiposity measures and blood pressure in Japanese adults: the National Health and nutrition survey. Nutr Res. 2017;44:67–75. Engberink MF, Bakker SJ, Brink EJ, van Baak MA, van Rooij FJ, Hofman A, Witteman JC, Geleijnse JM. Dietary acid load and risk of hypertension: the Rotterdam study. Am J Clin Nutr. 2012;95(6):1438–44. Lao XQ, Xu YJ, Wong MC, Zhang YH, Ma WJ, Xu XJ, Cai QM, Xu HF, Wei XL, Tang JL, et al. Hypertension prevalence, awareness, treatment, control and associated factors in a developing southern Chinese population: analysis of serial cross-sectional health survey data 2002-2010. Am J Hypertens. 2013;26(11):1335–45. Zhao J, Su C, Wang H, Wang Z, Wang Y, Zhang B. Secular trends in energy and macronutrient intakes and distribution among adult females (1991-2015), results from the China health and nutrition survey. Nutrients. 2018;10(2):115. Lao XQ, Ma WJ, Sobko T, Zhang YH, Xu YJ, Xu XJ, Yu DM, Nie SP, Cai QM, Wei XL, et al. Dramatic escalation in metabolic syndrome and cardiovascular risk in a Chinese population experiencing rapid economic development. BMC Public Health. 2014;14:983. World Health Organization. A Global Brief on Hypertension: Silent Killer, Global Public Health Crisis. Available online: http://www.who.int/cardiovascular_diseases/publications/global_brief_hypertension/en/. Accessed 20 June 2018. Zhang L, Curhan GC, Forman JP. Diet-dependent net acid load and risk of incident hypertension in United States women. Hypertension (Dallas, Tex. : 1979). 2009;54(4):751–5. Chan R, Leung J, Woo J. Estimated net endogenous acid production and risk of prevalent and incident hypertension in community-dwelling older people. World J Hypertens. 2015;5(4):129. Tielemans MJ, Erler NS, Franco OH, Jaddoe VW, Steegers EA, Kiefte-de JJ. Dietary acid load and blood pressure development in pregnancy: the generation R study. Clin Nutr. 2018;37(2):597-603. Han E, Kim G, Hong N, Lee YH, Kim DW, Shin HJ, Lee BW, Kang ES, Lee IK, Cha BS. Association between dietary acid load and the risk of cardiovascular disease: nationwide surveys (KNHANES 2008-2011). Cardiovasc Diabetol. 2016;15(1):122. 
Luis D, Huang X, Riserus U, Sjogren P, Lindholm B, Arnlov J, Cederholm T, Carrero JJ. Estimated dietary acid load is not associated with blood pressure or hypertension incidence in men who are approximately 70 years old. J Nutr. 2015;145(2):315–21. Vitale C, Mendelsohn ME, Rosano GM. Gender differences in the cardiovascular effect of sex hormones. Nat Rev Cardiol. 2009;6(8):532–42. Taylor EN, Forman JP, Farwell WR. Serum anion gap and blood pressure in the national health and nutrition examination survey. Hypertension. 2007;50(2):320–4. Pasarikovski CR, Granton JT, Roos AM, Sadeghi S, Kron AT, Thenganatt J, Moric J, Chau C, Johnson SR. Sex disparities in systemic sclerosis-associated pulmonary arterial hypertension: a cohort study. Arthritis Res Ther. 2016;18:30. We thank all team members and participants of the Guangdong Provincial Nutrition and Health Survey (2015–2017). This study was supported by the Guangdong key research and development program (Nos. 2019B020230001 and 2019B020210001) in the interpretation of data and in writing the manuscript. Department of Health Risk Assessment Research Center, Guangdong Provincial Institute of Public Health, Guangdong Provincial Center for Disease Control and Prevention, No. 160 Qunxian Road, Panyu District, Guangzhou, 511430, China: Shao-wei Chen, Gui-yuan Ji, Qi Jiang, Ping Wang, Rui Huang, Wen-jun Ma, Zi-hui Chen & Jie-wen Peng. JWP, ZHC and SWC conceived and designed the experiments; ZHC and SWC performed the analyses; SWC wrote the first draft of the manuscript; GYJ, QJ, PW, RH and WJM contributed significantly to the development of the final draft of the manuscript. All authors approved the final version of the manuscript. Correspondence to Zi-hui Chen or Jie-wen Peng. This study was performed in accordance with the Declaration of Helsinki. The study protocol was approved by the Ethical Committee of the Guangdong Provincial Center for Disease Control and Prevention. Eligible participants provided signed informed consent. Chen, S., Ji, G., Jiang, Q. et al. Association between dietary acid load and the risk of hypertension among adults from South China: result from nutrition and health survey (2015–2017). BMC Public Health 19, 1599 (2019). doi:10.1186/s12889-019-7985-5
Comparison of malaria incidence rates and socioeconomic-environmental factors between the states of Acre and Rondônia: a spatio-temporal modelling study Meyrecler Aglair de Oliveira Padilha1, Janille de Oliveira Melo1, Guilherme Romano1, Marcos Vinicius Malveira de Lima1,2, Wladimir J. Alonso5, Maria Anice Mureb Sallum3 & Gabriel Zorello Laporta ORCID: orcid.org/0000-0001-7412-93901,4 Plasmodium falciparum malaria is a threat to public health, but Plasmodium vivax malaria is most prevalent in Latin America, where the incidence rate has been increasing since 2016, particularly in Venezuela and Brazil. The Brazilian Amazon reported 193,000 cases in 2017, which were mostly confirmed as P. vivax (~ 90%). Herein, the relationship between malaria incidence rates and the proportion of accumulated deforestation was examined using data from the states of Acre and Rondônia in the south-western Brazilian Amazon. The main purpose is to test the hypothesis that the observed difference in incidence rates is associated with the proportion of accumulated deforestation. An ecological study using spatial and temporal models for mapping and modelling malaria risk was performed. The municipalities of Acre and Rondônia were the spatial units of analysis, whereas month and year were the temporal units. The number of reported malaria cases from 2009 until 2015 was used to calculate the incidence rate per 1000 people at risk. Accumulated deforestation was calculated using publicly available satellite images. Geographically weighted regression was applied to provide a local model of the spatial heterogeneity of incidence rates. Time-series dynamic regression was applied to test the correlation of incidence rates and accumulated deforestation, adjusted by climate and socioeconomic factors. The malaria incidence rate declined in Rondônia but remained stable in Acre. There was a high and positive correlation between the decline in malaria and higher proportions of accumulated deforestation in Rondônia. Geographically weighted regression showed a complex relationship: as deforestation increased, malaria incidence increased in Acre but decreased in Rondônia. Time-series dynamic regression showed a positive association between malaria incidence and precipitation and accumulated deforestation, whereas the association was negative with the human development index in the westernmost areas of Acre. Landscape modification caused by accumulated deforestation is an important driver of malaria incidence in the Brazilian Amazon. However, the relationship is not linear, because it depends on the overall proportion of the land covered by forest. For regions that are partially degraded, forest cover becomes a less representative component in the landscape, causing the abovementioned non-linear relationship. Human malaria emerged from the tropical forest of Africa, propagated globally and became a tropical and subtropical disease in the second half of last century [1,2,3]. Six species of Plasmodium parasites can cause disease in humans: Plasmodium falciparum, Plasmodium vivax, Plasmodium malariae, Plasmodium ovale curtisi, Plasmodium ovale wallikeri and Plasmodium knowlesi [1, 4,5,6,7]. Recently, Plasmodium simium emerged as another species with the potential to infect humans [8]. The malaria transmission cycle includes Plasmodium spp., anopheline and human components [9].
In 2017, 91 countries reported a total of 219 million cases of malaria, with 435,000 deaths [10]. Worldwide, P. falciparum malaria is more prevalent than P. vivax malaria. Plasmodium falciparum malaria accounted for 99.7% of the cases in areas across sub-Saharan Africa [10]. In the Americas, P. vivax malaria occurs more frequently than P. falciparum malaria [11,12,13], with 723,000 (74%) infections reported in 2017 [10]. Whereas P. falciparum causes higher levels of morbidity and mortality than P. vivax [14, 15], the latter is gaining attention as a major hurdle in the era of malaria elimination [16, 17]. One reason may be that current malaria commodities, including the available anti-malarial drugs, are not very effective against P. vivax, leading to a high proportion of P. vivax asymptomatic reservoirs that can infect anopheline vectors [13, 16], further propagating the parasites in environments where competent mosquito vectors occur. The global malaria incidence rate declined from 76 to 59 cases per 1000 population at risk from 2010 to 2017 [10]. However, the rate of decrease has either slowed or reversed in some regions since 2015 [10]. In the Americas, malaria incidence has been increasing since 2013, mainly because of the Bolivarian Republic of Venezuela, Brazil and Nicaragua [10, 13]. Between 2016 and 2017, malaria incidence increased approximately 100% in Nicaragua and Venezuela. In 2017, Venezuela accounted for 53% of reported cases, followed by Brazil (22%) [10]. Malaria distribution is spatially clustered, with hotspots of transmission in Choco (in Colombia), Loreto (in Peru) and Bolivar (in Venezuela) [10, 18]. In Brazil, approximately 45% of reported cases are from 15 municipalities in the states of Acre and Amazonas [18]. In Brazil, malaria decreased by 65% from 2010 (384,655 cases) to 2016 (133,591 cases). However, the disease then increased by 63% between 2016 and 2017 (217,928 cases in 2017) [10]. Most malaria cases occur in the Amazon River Basin. In 2017, 193,000 cases occurred in the Amazonian Region (99.95%), which were mostly P. vivax malaria (174,000; ~ 90%). Consequently, the majority of the studies have focused on hotspots of malaria transmission in areas across the Brazilian Amazon (e.g., [19,20,21,22,23,24,25,26,27,28,29]). Malaria transmission has been associated with several scenarios: (1) legal and illegal mining with high human exposure to mosquito bites, human movement and extensive environmental changes [16]; (2) expansion of agricultural frontiers, leading to deforestation, land-use changes and human encroachment in forested areas [30]; (3) discontinuity of malaria control programmes in poorly accessed remote areas [21]; and (4) ecological factors, which can drastically increase vector abundance, such as fish ponds in rural areas and towns [16, 25, 31]. These transmission settings can represent transmission hotspots, and they were employed to construct a flexible model for predicting malaria emergence in similar scenarios [28, 32]. Frontier malaria is a concept offering an explanation for the trajectory of malaria incidence with deforestation and was applied for predicting the emergence of malaria in the Brazilian Amazon region [28]. This concept model predicts high malaria transmission risk in the first years of a human settlement in the Amazon forest.
The main mechanisms are (1) a high number of immunologically naïve immigrants intermixed with asymptomatic human reservoirs, (2) a high contact rate between the main malarial vector and human hosts, and (3) a precarious socio-environmental matrix [28]. After 10 years of colonization and development in the settlement, the frontier malaria concept predicts a steep decline in malaria. The mechanisms for the decline are related to overall improvements in the settlement with economic gains from agriculture, ranching and urban development [28]. A mathematical model was developed to address the possible dynamical trajectories of malaria with land-use change in frontier regions [33]. This work is a theoretical generalization of the frontier malaria concept, with a mathematical model coupling land-use change, malaria transmission and economic development. Across most of the plausible parameter space, the numerical simulations showed malaria population dynamics with an initial increase in incidence followed by a later decrease [33]. The initial state of high malaria risk in the early stages of land-use change is driven by environmental conditions. Malaria risk decreases over time because these environmental conditions interact with the socioeconomic factors that tend to reduce risk on slower and longer timescales [33]. The tension between environmental and socioeconomic forces supports the pattern of the rise and fall of malaria population dynamics under land transformation (Fig. 1). Theoretical background. The convex curve supports the convex trajectory observed in the generalization model [33] of the frontier malaria concept [28]. Environmental conditions (1–3) and socioeconomic factors (4–6) are processes estimated with parameters in the model by Baeza et al. [33]. Environmental conditions (1–3) are driving forces in the high-risk scenario of malaria transmission in the first years of colonization. Socioeconomic factors (4–6) counterbalance and eventually surpass the effects of environmental conditions, decreasing malaria incidence in the long term. (1) Carrying capacity: the maximum abundance of adult mosquitoes per unit of land area. (2) Ecological differences: the magnitude of land-use changes. (3) Human Blood Index: the proportion of blood meals from humans by a mosquito. (4) Investment in malaria: the effect of investment in malaria medication. (5) Gain economic protection: the rate at which people gain protection against malaria due to the overall economic improvements. (6) Treatment effectiveness: the cost-effectiveness of the treatment. Fragmented landscapes with approximately 30–70% forest cover have the forest fringe effect, which maximizes the abundance of the main malarial vector (Anopheles darlingi) in the Amazon [24]. Malaria transmission can be sustained in any given landscape with the forest fringe effect [34]. Landscapes near natural conservation units (e.g., federal forests and indigenous reserves) are generally represented by settlements with intermediate forest cover and present a high risk of malaria incidence [35]. The variation in malaria incidence associated with changes in forest cover (100–0%) can be depicted by a convex curve [36]. Herein, a test of the unimodal (i.e., convex curve) relationship between malaria incidence and forest cover on a large scale (Amazonian states) is proposed. The importance of this work lies in depicting the big picture of malaria transmission, which is needed for tailoring interventions.
Determinants of the disease were addressed in two Amazonian states (Rondônia and Acre) that share a common historical root and started colonization at the same time in the 1900s. The dissimilarity between the two is that Rondônia represents a deforested Amazonian state, while Acre represents a forest-conserved Amazonian state. In addition, the state of Rondônia was the epicentre of the malaria burden in the 1980s to 1990s, but this state has recently seen a strong decrease in its incidence rate [23, 28]. In contrast, in the state of Acre, transmission is stable, with some areas defined as hotspots of malaria in Brazil [25, 29]. The hypothesis is that this difference in the transmission level is related to the following: (1) most of the area of Rondônia previously covered by forest has been deforested [30]; and (2) the state of Acre, which has larger areas of preserved forest, is undergoing anthropogenic changes to its natural environment, and forest fragmentation is increasing in some regions [30]. The specific aims are as follows: (1) to analyse the spatio-temporal distribution of the incidence rates and compare them between the states of Acre and Rondônia (in the western Brazilian Amazon); and (2) to address potential determinants of the disease. Acre and Rondônia states (Fig. 2) have a common historical root. The creation of both states is rooted, in part, in the Treaty of Petropolis signed in 1903 between Brazil and Bolivia. This agreement resulted in the end of a deadlock with respect to a Bolivian territory, which is now the geographical seat of Acre in Brazil, and allowed for the construction of the Madeira Mamoré Railroad, which gave rise to the city of Porto Velho, the capital of Rondônia. Study region. The Brazilian states of Acre (AC) and Rondônia (RO) are located in the Southwestern Amazon, bordering neighbouring Peru and Bolivia. Forest cover and fragmentation of these states are represented as dark/light green (forest), dark brown (deforested area) or light brown (rocky soil). The two states, however, followed different colonization processes. The pride of the people of Acre is latent in its history, which is the sum of the struggles of rubber workers, indigenous people, pioneers and descendants of individuals with these origins. Porto Velho, however, seems neither to draw on the cradle of its Amazonian history nor to seek the past glory of the pioneers who came before. The state of Rondônia (Fig. 2; 237,765 km2) shows a diversified phytogeography that reflects the heterogeneity of physical aspects such as relief, lithology, soil and climate. With the growing population, the tropical rain forest has been gradually decreasing since the late 1970s. Currently, natural forest is restricted to reserves, indigenous lands and parks. The mapping of the state of Acre (Fig. 2; 164,123 km2) shows the occurrence of highly preserved vegetation types with ombrophylous forest and campinarana (Amazonian plain forest). The climate in both states is humid tropical, with two major seasons: the rainy season from November to April and the dry season from May to October. Malaria incidence is higher in the rainy season because of the increase in available larval habitats for the mosquito vector. The state of Rondônia has a mostly rural population (70%) out of a total of 1.7 million people estimated in 2018, who own approximately 905,000 vehicles (e.g., cars, trucks, buses and motorcycles). The average monthly income is US$251 per capita, and the human development index (HDI) is 0.69.
In contrast, in Acre, the estimated population in 2018 was 800,000, with 70% living in rural areas. The number of vehicles in the whole state was 251,000, the income was lower (US$202 per capita), and the HDI was 0.66 (http://www.ibge.gov.br). Study design and rationale This is an ecological study in epidemiology that employs aggregate malaria, environmental and socioeconomic data from January 2009 to December 2015 of all municipalities in the states of Acre and Rondônia, Amazon Region, Brazil. Malaria time-series data were first analysed, and monthly incidence rates were compared between Rondônia and Acre with EPIPOI v. 15 (Alonso and McCormick, Oxford, UK) [37]. The stationarity of the time-series malaria data was verified using an augmented Dickey-Fuller test with the package tseries in the R programming environment v. 3.5.1 (The R Foundation, Vienna, Austria) [38]. A second round of analysis was performed to correlate annual malaria incidence rates with annual accumulated deforestation from 2009 to 2015 for each state. To reduce the spatial dimension of 22 municipalities in Acre and 52 municipalities in Rondônia, the first axes of principal component analyses were utilized. These axes represent variations of malaria incidence and accumulated deforestation in each state. A Pearson's product-moment correlation in R v.3.5.1 was applied to test the relationship between these variables. A standard protocol of spatial analysis with geographically weighted regression (GWR) was employed for assessing the local correlation between annual malaria incidence rates and annual accumulated deforestation in each municipality of both states, using overall data from 2009 to 2015. A time-series modelling analysis was employed to verify the association between variations in monthly malaria incidence rates and climate, landscape, and social factors. This analysis was applied to those localities with the highest incidence rates in Acre. Malaria incidence rate The malaria incidence rate was estimated as the number of malaria cases per 1000 population at risk. Data from each municipality in the states of Acre and Rondônia were downloaded from the SIVEP-Malaria database, available at http://portalms.saude.gov.br/saude-de-a-z/malaria/notificacao. The raw data were concatenated in a database for the analyses. The estimated population of each municipality was available in the SIVEP-Malaria database. Because monthly based data were also needed, linear interpolation between subsequent years was performed using the following equation: $$\frac{{y - y_{0} }}{{x - x_{0} }} = \frac{{y_{1} - y_{0} }}{{x_{1} - x_{0} }}$$ where y0 and y1 were the available population data in x0 and x1 months, respectively. The coordinate (x, y) was estimated, and the population data (y) were linearly interpolated in each month (x). Annual accumulated deforestation To calculate the overall accumulated deforestation in km2 that occurred in a certain year per municipality in both Amazonian states, we employed publicly available information from the Instituto Nacional de Pesquisas Espaciais (INPE) (INPE/PRODES Project website, http://www.dpi.inpe.br/prodesdigital). Spatial regression analysis As a first step, an ordinary least square model (a non-spatial model) was fitted in R v.3.5.1: $$Y = \beta_{0} + \beta_{1} X + \varepsilon$$ where Y = annual malaria incidence rate (cases/pop*1000) and X = annual accumulated deforestation (%). 
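To make the data-preparation and model-fitting steps described above concrete, the following is a minimal R sketch (not the authors' script): it interpolates the yearly population to months as in the equation above, computes the incidence rate per 1000 people at risk, and fits the non-spatial OLS model. All values, data frames and column names are illustrative placeholders.

```r
# Linear interpolation of population between two consecutive years (base R)
pop_2014 <- 80000; pop_2015 <- 82000              # illustrative populations (y0, y1)
pop_monthly <- approx(x = c(0, 12), y = c(pop_2014, pop_2015), xout = 1:12)$y

# Monthly malaria incidence rate per 1,000 population at risk
cases_monthly <- c(120, 95, 88, 70, 60, 55, 40, 45, 52, 75, 90, 110)  # illustrative counts
mir_monthly <- cases_monthly / pop_monthly * 1000

# Non-spatial OLS model: annual incidence rate vs. accumulated deforestation (%)
muni <- data.frame(                               # hypothetical municipality-level data
  mir_annual = c(3.1, 0.8, 12.4, 5.0, 2.2),
  aad_pct    = c(22,  48,   9,  15,  35)
)
ols <- lm(mir_annual ~ aad_pct, data = muni)
summary(ols)           # beta1 = linear effect of accumulated deforestation
ols_res <- resid(ols)  # residuals, which are then checked for spatial autocorrelation
```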
Parameters \(\beta_{0}\) = Y value when X equals zero, \(\beta_{1}\) = linear effect of annual accumulated deforestation on annual malaria incidence rate, and \(\varepsilon\) = model residuals. The statistical significance level was 5%. To check whether the linear relationship between Y and X was not biased by the spatial dimension, residuals of the aforementioned linear model were tested for spatial autocorrelation with the Moran index calculation in GeoDa v. 1.12 (The University of Chicago, Chicago, Illinois, US) $$I = \frac{n}{W}\frac{{\sum_{i} \sum_{j} w_{ij} z_{i} z_{j} }}{{\sum_{i} z_{i}^{2} }}$$ where I = Moran index (equivalent to the product \(\frac{n}{W}\) \(\frac{{\sum_{i} \sum_{j} w_{ij} z_{i} z_{j} }}{{\sum_{i} z_{i}^{2} }}\)), n = number of municipalities, W = first-order Queen-type spatial weight matrix, \(w_{ij}\) = element in spatial weights matrix, and \(z_{i}\) and \(z_{j}\) = deviations from the mean z. The statistical significance level was 5%. When the non-spatial model was not adequate, the GWR was applied to model spatially heterogeneous relationships between Y and X in GWR v. 4.09 (Arizona State University, Tempe, Arizona, US). $$Y\left( s \right) = \beta \left( s \right)X$$ where Y(s) = annual malaria incidence rate in each municipality and β(s)X = linear effect of annual accumulated deforestation on annual malaria incidence rate in each municipality. Time-series modelling To verify the presence of stable foci of transmission in the state of Acre, a dynamic regression modelling analysis was performed. Socioeconomic, climate and landscape data were employed to verify the potential association of each factor to the incidence rate of malaria in the westernmost areas of Acre. The time-series of monthly malaria incidence data were modelled with the available socioeconomic-environmental data of the Cruzeiro do Sul (CZS), Mancio Lima (ML), Rodrigues Alves (RA), Porto Walter (PW) and Tarauaca (TA) municipalities from 2009 to 2015. These municipalities represent the current frontier malaria in the western Amazon. Specifically, an autoregressive integrated moving average (ARIMA) model was utilized using the following equation: $$y_{t} = \beta_{0} + \beta_{1} x_{1,t} + \cdots + \beta_{k} x_{k,t} + rY_{t - 1} + e_{t} + ae_{t - 1}$$ With the monthly malaria incidence rates as the response variables (yt), the socioeconomic-environmental factors (variables x1, x2,…, xk) were divided into three sets: (1) climate (2 variables); (2) landscape (2 variables); and (3) socioeconomic (5 variables). The implementation of ARIMA in the package forecast in R v. 3.5.1 [39] was utilized. Accordingly, the equation of the regression model was estimated using a stepwise approach with forward selection. The 95% confidence interval of each intercept (β1, …, βk) was estimated. The autoregressive parameter (r), the pure error (e) and the moving average (a) were also estimated. No assumptions on the lags for the socioeconomic-environmental factors were made. The covariate lags were selected based on the model's best prediction. The ARIMA algorithm in the R forecast package automatically took seasonal differences (i.e., interannual variation) into account when they were relevant in improving model prediction. The time-series analysis protocol is available in Additional file 1. Total precipitation (mm) and average maximum temperature (°C) were selected because of their well-known importance for standing water as habitats of the mosquito vector. 
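The ARIMA-with-regressors step described above can be sketched with the forecast package named in the Methods. The example below is only an illustrative analogue on simulated placeholder series standing in for the SIVEP, INMET and INPE/PRODES inputs, so all numbers, seeds and covariate values are arbitrary.

```r
library(forecast)

set.seed(1)
n <- 84                                           # Jan 2009 to Dec 2015 (84 months)
y <- ts(rpois(n, lambda = 30) / 10,               # placeholder monthly incidence rate
        frequency = 12, start = c(2009, 1))
xreg <- cbind(precip = runif(n, 50, 400),         # total precipitation (mm), simulated
              defor  = cumsum(runif(n, 0, 2)))    # accumulated deforestation (km2), simulated

# Regression with ARIMA errors; orders, seasonal terms and differencing are
# selected automatically, mirroring the stepwise selection described above
fit <- auto.arima(y, xreg = xreg)
summary(fit)                                      # regression coefficients for precip and defor
```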
Precipitation and temperature data are available in the Instituto Nacional de Meteorologia (INMET; http://www.inmet.gov.br). Total precipitation and average maximum temperature in the rainy (Nov.–Apr.) and dry (May–Oct.) seasons were interpolated using data from the following meteorological stations: Uruguaiana (− 29.75, − 57.08), Corumba (− 57.67, − 19.02), Ponta Pora (− 55.71, − 22.55), Eirunepe (− 69.86, − 6.66), Labrea (− 64.83, − 7.25), Benjamin Constant (− 70.03, − 4.38), Cruzeiro do Sul (− 72.66, − 7.6), and Rio Branco (− 70.76, − 8.16) and Tarauaca (− 67.8, − 9.96). More information on how temperature and precipitation were interpolated is provided in Additional file 2. Two landscape parameters were chosen because they represented a proxy for the presence of mosquito vector larval habitats: (1) annual forest cover (km2) and (2) annual accumulated deforestation (km2) per municipal area. These land-use land-cover variables were obtained from the aforementioned INPE/PRODES Project website. Annual socioeconomic data were obtained from the PNUD/Atlas Project website (https://popp.undp.org), including infant mortality rate (per 1000 live births), proportion of people living in extreme poverty (% of people living on less than US$1.90 per day), proportion of people living in poverty (% of people earning less than US$3.75 a day), a measure of inequality of income (GINI index, 0–1, the most inequality = 1) and municipal HDI (MHDI) (0, minimum; 1, maximum). These parameters were selected because they can represent risk factors for human exposure to mosquito vector bites and malaria. Regarding the Brazilian Institutional Review Board for protection of human subjects, the present study does not require approval for access to data. Any patient information was not publicly available in the SIVEP-Malaria platform. In addition, malaria data are part of the public domain according to the Brazilian Law of Information Access (12.527/2011). The malaria incidence rate ranged from 0.2 to 3.5 cases per 1000 population from 2009 to 2015, showing a decreasing linear trend (− 36%, P < 0.001) in Rondônia, whereas it ranged from 1.5 to 6 cases per 1000 population, without evidence of a linear trend (− 5%, P = 0.27) in Acre (Fig. 3). The results of the Dickey-Fuller test showed that the time-series of the malaria incidence rate in Rondônia had a stationary process (P < 0.01), whereas Acre had a non-stationary process (P = 0.11). Time series. Malaria incidence rates in Acre and Rondônia The correlation between the malaria incidence rate and annual accumulated deforestation was strongly negative (i.e., more deforestation, less malaria) in Rondônia (r = − 0.96, P < 0.001), whereas it was not significant in Acre (r = 0.13, P = 0.79) (Fig. 4). In 2015, forest cover (85%) in Acre was 1.7 times higher than that estimated for Rondônia (51%), while the 2015 total deforested area (37%) in Rondônia was 2.84 times higher than that (13%) in Acre. The temporal processes of forest cover loss or gain per municipality in both states are depicted in Additional file 3. The full results of principal component analysis are in Additional file 4. Correlation testing. Scatterplot of malaria incidence rate (MIR) vs. annual accumulated deforestation (AAD) in Rondônia and Acre. 
PCA1 = first axis of the principal component analysis that reduced all the municipality-based data into state-based data The non-spatial model comparing municipalities in Acre (22) and Rondônia (52) was not adequate because its residuals showed a strong spatial dependence (Moran's I = 0.74, highly clustered). GWR showed that the relationship of malaria incidence (Fig. 5a) and annual accumulated deforestation (Fig. 5b) is complex because it could be either positive (i.e., more deforestation, more malaria; red cluster in Fig. 5c) or negative (i.e., more deforestation, less malaria; blue cluster in Fig. 5c), depending on the amount of remaining forest. Deforestation in areas with high forest cover, such as in Acre, showed a positive relationship with malaria incidence, whereas in areas with low forest cover (in Rondônia), additional deforestation decreased malaria incidence. The GWR model had better performance than the non-spatial model, with the coefficient of determination (R2) of 0.82 vs. 0.09 (non-spatial model) and Akaike information criteria (AIC) of 709 vs. 805 (non-spatial model). Spatial analysis. a Average malaria incidence rate 2009–2015 in each municipality (per 1000 inhabitants). b Accumulated deforestation in 2015 proportional to each municipality area. c Results of t-distribution from the geographically weighted regression model for each municipality. Acre municipalities: ML Mancio Lima, RA Rodrigues Alves, CS Cruzeiro do Sul, PW Porto Walter and TA Tarauaca; Rondônia municipalities, PV Porto Velho, CJ Candeias do Jamari, CB Cujubim, RC Rio Crespo, MO Machadinho d'Oeste Malaria incidence rates decreased in the municipalities of Porto Velho, Candeias do Jamari, Itabua do Oeste, Cujubim, Machadinho d'Oeste and Rio Crespo in north-western Rondônia between 2009 and 2015 (Fig. 6a). However, in Mancio Lima, Cruzeiro do Sul, Rodrigues Alves, Porto Walter and Tarauaca, the monthly incidence ranged from 10 to 60 (per 1000 population) (Fig. 6b). Heat-grid time-series. a Monthly malaria incidence rate per municipality 2009–2015 in Rondônia and b Acre. MIR = malaria incidence rate (cases/1000 people) In the simple time-series regression analysis for the municipalities highlighted in Figs. 5c and 6 (ML, CZS, RA, PW, and TA), all socioeconomic-environmental factors were important predictors in the monthly variation in malaria incidence rates. Additionally, precipitation and temperature were seasonally correlated (i.e., more precipitation, lower temperature and vice versa), and accumulated deforestation and forest cover were positively correlated (which may reflect initial stages of colonization, as expected by the frontier malaria concept). All socioeconomic variables were correlated with each other but were only available in the Cruzeiro do Sul municipality. In the following analysis, precipitation and deforestation were selected to represent the environmental factors, while poverty and MHDI were selected to represent the socioeconomic factors. Complete results from the time-series modelling are in Additional file 5. Multiple time-series regression analysis showed monthly malaria incidence rates as a function of precipitation, deforestation and MHDI or poverty in Cruzeiro do Sul (Table 1). In Cruzeiro do Sul, precipitation was positively but not statistically significantly correlated with malaria incidence, whereas deforestation and socioeconomic factors were statistically significant in the two models (Table 1). 
An increase of 0.01 in the MHDI meant 361 fewer malaria cases per 1000, whereas an increase of one unit in the proportion (%) of people in poverty meant 346 more malaria cases per 1000. An increase of 10 km2 in deforestation meant ~ 400 more malaria cases per 1000. Table 1 Results from the multiple time-series regression analysis of monthly malaria incidence rate, Cruzeiro do Sul-Acre, 2009–2015 In Mancio Lima, Rodrigues Alves, Tarauaca and Porto Walter, deforestation is positively correlated with malaria incidence. These positive correlations are statistically significant in all cases, except in Tarauaca, where they are slightly non-significant (Table 2). An increase of 10 km2 in deforestation meant 2–54 more malaria cases per 1000. Table 2 Results from the multiple time-series regression analysis of monthly malaria incidence rate, Mancio Lima, Rodrigues Alves, Tarauaca and Porto Walter, Acre, 2009–2015 The results of this study showed that the correlation between accumulated deforestation and malaria incidence can be discordant, showing either a positive or a negative statistical association. In Rondônia, the accumulated deforestation was three times higher than in Acre, and consequently, the trend in malaria incidence declined with increased deforestation. In contrast, the correlation was positive and statistically significant in Acre. Mechanistically, this pattern can be related to the frontier malaria concept [28] and the extension of this concept model by Baeza et al. [33], but it is also related to other works that state the importance of forest cover in malaria incidence in the Amazon [24, 35, 36]. In the late 1970s, 2% of the state of Rondônia was deforested. Deforestation was intensified during the 1980s–1990s, affecting larger areas because of intensive migration. Malaria increased at very high rates during that time [16, 17]. However, starting in the late 1990s, Rondônia has gone through a turning point in its economic growth [40]. Mid-sized cities, which were merely a flow trail of natural resources to the urban centres of the capital (Porto Velho) or to southern Brazil in the 1980s, emerged as a central nerve in the production chain due to urban growth in the 2000s [40]. The five most important local hubs in Rondônia (Ji-Parana, Ariquemes, Vilhena, Cacoal, and Rolim de Moura) underwent population increases of 15–43% from 2000 to 2010 [40]. Capital investments that come to these urban centres in exchange for the region's rich reserves of natural resources remain in the form of economic growth, rising socioeconomic indicators and public investments [40]. In addition, north-western Rondônia, which includes the capital (Porto Velho) and its adjacent municipalities (Fig. 5c), is considered a logging zone and a traditional wood transportation route in Brazil [30]. The fall of malaria observed in Rondônia can be related to both (1) socioeconomic factors that surpassed environmental forces on malaria transmission [28, 33] and (2) the loss of available habitats for the malarial vector due to deforestation [36]. Economic development in Acre is historically dependent on forest conservation for rubber exploitation and other extractivist activities, as well as fish farming [41]. Fish farming is not associated with deforestation [42] but can increase the risk of malaria [25, 31]. Cruzeiro do Sul, a former rubber town on the Jurua River, is now a local hub of economic growth and public investment in the westernmost area of Acre [40].
Additionally, Cruzeiro do Sul is also considered a local hub for the new frontier of logging zones [30]. The rise of malaria in the Jurua Valley Region may be related to environmental factors that tend to increase malaria risk in the early stages of colonization and to the lack of or still-incipient socioeconomic forces that tend to reduce malaria risk in the long term [33]. Parallel with the use of the frontier malaria concept [28] to predict malaria emergence in the Amazon is the debate regarding the association between deforestation in newly colonized sites and malaria emergence [43, 44]. The generality of the relationship between deforestation and malaria emergence was challenged [35] because the authors found higher malaria incidence in human settlements near priority areas for nature conservation. The controversy between the deforestation-malaria hypothesis [35] stimulated intensive debates [45, 46]. An alternative was proposed: deforestation may benefit or be harmful to the malarial vector population, depending on the pattern and proportion of forest cover [24]. The proposed unimodal relationship between forest cover and malaria emergence indicates that 30% to 70% of the remaining forest cover represents a landscape scenario that can encompass the ecological and environmental conditions that can favour peak transmission of malaria [24, 36, 47]. This risky scenario can occur either in newly colonized or old settlements [34]. For instance, the landscapes shown in Fig. 7 started colonization in the 1970s [34] and currently have high levels of transmission, with an estimated malaria incidence of 45–100 cases per day and a P. vivax reproduction number of 3.3–16.8 [48]. Satellite imagery composite. Landscape (5-km2) in where malaria transmission level [48] and the deforestation timeline [34] were estimated. CZS Cruzeiro do Sul, ML Mancio Lima, GUA Guarani-landscape studied by Lana et al. [49]. The satellite imagery composite was made by using the protocol developed by Ilacqua et al. [34] with QGis v. 2.18.14 (QGis Community, https://qgis.org) and SCP plugin v. 5.4.2 (Luca Congedo, Italy). Legend: blue, ground waters; dark green, forest vegetation; light green, crops, shrubs or secondary vegetation; pink, exposed or urban soil. Source: USGS/Landsat 8 The satellite imagery composite shows Cruzeiro do Sul and Mancio Lima divided by a natural barrier: the hydrographic basin of the Moa River (Fig. 7). The configuration of the land use land cover shown in Fig. 7 can support an increase in malaria incidence [35] because of the availability of larval habitats for the malarial vector [24]. In addition, Lana et al. [49] identified improvements in socioeconomic factors in the landscape GUA (Fig. 7) at the same time as a high risk of malaria transmission due to (1) the abundance of malarial vectors and (2) the mobility of people in this urban centre of Mancio Lima. The pattern depicted in Fig. 7 seems supported by the frontier malaria concept [28] and Baeza et al. [33], thus representing the increasing phase of malaria population dynamics. Malaria decline may occur later in this real scenario (Fig. 7), when socioeconomic development can reduce transmission risk and accumulated deforestation can decrease larval habitat availability for the mosquito vectors. The main malarial vector in the Amazon is Nyssorhynchus darlingi, formerly known as Anopheles darlingi [30, 48]. Foster et al. 
[50] built a globally based phylogeny of Anophelinae and concluded that Neotropical subgenera (including Nyssorhynchus) can be elevated to the genus level. In frontier malaria, Ny. darlingi is abundant, and its contact rate with humans is high [48]. On the one hand, other anopheline species known to be malarial vectors are not as well adapted as Ny. darlingi to the anthropogenic matrix [51]. On the other hand, anopheline diversity continues to be underestimated in frontier malaria, with several species thought to be unknown [52]. Additionally, in specific scenarios, other species (e.g., Nyssorhynchus albitarsis sensu lato) can emerge as the primary vectors [53, 54]. A proposition for future research is herein made. The best study design for testing a temporal phenomenon such as frontier malaria is a long-term prospective study. In the 1970s, a long-term prospective study was conceived for testing ecological theories (e.g., island biogeography) in the Amazon: the Forest Fragments Project (http://pdbff.inpa.gov.br/), e.g., [55]. Considering malaria elimination as a global target [56], the timing might be optimal for a bold proposal, such as a long-term prospective study on land transformation and its impact on socioeconomic and environmental determinants of malaria transmission. Spatial and temporal variations in malaria incidence were not assessed by a statistical autoregressive model that considers time and space [57]. Landscape modification caused by accumulated deforestation is an important driver of malaria population dynamics in Amazonia. In the initial phase of human settlement development, accumulated deforestation transforms a landscape with high forest cover into a landscape with intermediate levels of forest cover, increasing the odds of malaria emergence. In a later phase of development, when forest cover is reduced to low levels and its capacity to sustain malarial vectors' larval habitats is decreased, the on-going accumulated deforestation only decreases the risk of malaria transmission. The westernmost area of the state of Acre currently has stable malaria foci because it represents an initial phase of development, whereas the north-western area of the state of Rondônia, which had been considered the main hub for malaria in the 1980s and 1990s, is now seeing its malaria burden decline, which thus represents the later phase of development. The datasets used and analysed are in the public domain, as detailed in the Methods section. They are available in the Additional files 1–5. AAD: annual accumulated deforestation; AC: Acre state; AIC: Akaike information criteria; ARIMA: autoregressive integrated moving average; CZS: Cruzeiro do Sul municipality; GINI: Gini index of income inequality; GWR: geographically weighted regression; HDI: human development index; INMET: Instituto Nacional de Meteorologia; INPE: Instituto Nacional de Pesquisas Espaciais; MHDI: municipal human development index; MIR: malaria incidence rate; ML: Mancio Lima municipality; PNUD/UNDP: United Nations Development Programme; PRODES: Projeto de Monitoramento do Desmatamento na Amazônia; PW: Porto Walter municipality; RO: Rondônia state; RA: Rodrigues Alves municipality; SIVEP: Sistema de Informações de Vigilância Epidemiológica; TA: Tarauaca municipality Prugnolle F, Durand P, Neel C, Ollomo B, Ayala FJ, Arnathau C, et al. African great apes are natural hosts of multiple related malaria species, including Plasmodium falciparum. Proc Natl Acad Sci USA. 2010;107:1458–63. Webb JLA. The long struggle against malaria in tropical Africa. New York: Cambridge Univ. Press; 2014. p. 219. Hay SI, Guerra CA, Tatem AJ, Noor AM, Snow RW.
The global distribution and population at risk of malaria: past, present, and future. Lancet Infect Dis. 2004;4:327–36. Fuehrer H-P, Habler VE, Fally MA, Harl J, Starzengruber P, Swoboda P, et al. Plasmodium ovale in Bangladesh: genetic diversity and the first known evidence of the sympatric distribution of Plasmodium ovale curtisi and Plasmodium ovale wallikeri in southern Asia. Int J Parasitol. 2012;42:693–9. Bronner U, Divis PC, Farnert A, Singh B. Swedish traveller with Plasmodium knowlesi malaria after visiting Malaysian Borneo: a case report. Malar J. 2009;8:15. Mueller I, Zimmerman PA, Reeder JC. Plasmodium malariae and Plasmodium ovale—the 'bashful' malaria parasites. Trends Parasitol. 2007;23:278–83. Marchesini P, Carter R, Mendis K, Sina B. The neglected burden of Plasmodium vivax malaria. Am J Trop Med Hyg. 2001;64:97–106. Brasil P, Zalis MG, de Pina-Costa A, Siqueira AM, Júnior CB, Silva S, et al. Outbreak of human malaria caused by Plasmodium simium in the Atlantic Forest in Rio de Janeiro: a molecular epidemiological investigation. Lancet Glob Health. 2017;5:e1038–46. Laporta GZ, de Prado PIKL, Kraenkel RA, Coutinho RM, Sallum MAM. Biodiversity can help prevent malaria outbreaks in tropical forests. PLoS Negl Trop Dis. 2013;7:e2139. WHO. World malaria report 2018. Geneva: World Health Organization; 2018. Bardach A, Ciapponi A, Rey-Ares L, Rojas JI, Mazzoni A, Glujovsky D, et al. Epidemiology of malaria in Latin America and the Caribbean from 1990 to 2009: systematic review and meta-analysis. Value Health Reg Issues. 2015;8:69–79. Carter KH, Escalada RP, Ade MP, Singh P, Espinal MA, Mujica OJ. Malaria in the Americas: trends from 1959 to 2011. Am J Trop Med Hyg. 2015;92:302–16. Conn JE, Grillet ME, Correa M, Sallum MAM. Malaria Transmission in South America—Present Status and Prospects for Elimination. In: Manguin S, Dev V, editors. Towards malaria elimination—a leap forward. London: InTech; 2018. p. 281–313. Nájera JA, González-Silva M, Alonso PL. Some lessons for the future from the global malaria eradication programme (1955–1969). PLoS Med. 2011;8:e1000412. Ferreira MU, Castro MC. Challenges for malaria elimination in Brazil. Malar J. 2016;15:284. Oliveira-Ferreira J, Lacerda MVG, Brasil P, Ladislau JLB, Tauil PL, Daniel-Ribeiro CT. Malaria in Brazil: an overview. Malar J. 2010;9:115. World Health Organization. World Malaria Report 2017. Geneva: WHO; 2017. Confalonieri UEC, Margonari C, Quintão AF. Environmental change and the dynamics of parasitic diseases in the Amazon. Acta Trop. 2014;129:33–41. Morais SA, Urbinatti PR, Sallum MAM, Kuniy AA, Moresco GG, Fernandes A, et al. Brazilian mosquito (Diptera: Culicidae) fauna: I. Anopheles species from Porto Velho, Rondônia state, western Amazon, Brazil. Rev Inst Med Trop Sao Paulo. 2012;54:331–5. Terrazas WCM, Sampaio V, de Castro DB, Pinto RC, de Albuquerque BC, Sadahiro M, et al. Deforestation, drainage network, indigenous status, and geographical differences of malaria in the State of Amazonas. Malar J. 2015;14:379. Vieira G, Gim KNM, Zaqueo GM, Alves T, Katsuragawa TH, Basano S, et al. Reduction of incidence and relapse or recrudescence cases of malaria in the western region of the Brazilian Amazon. J Infect Dev Ctries. 2014;8:1181–7. Angelo JR, Katsuragawa TH, Sabroza PC, de Carvalho LAS, da Silva LHP, Nobre CA. The role of spatial mobility in malaria transmission in the Brazilian Amazon: the case of Porto Velho municipality, Rondônia, Brazil (2010–2012). PLoS ONE. 2017;12:e0172330. Barros FSM, Honório NA. 
Deforestation and malaria on the Amazon frontier: larval clustering of Anopheles darlingi (Diptera: Culicidae) determines focal distribution of malaria. Am J Trop Med Hyg. 2015;93:939–53. Reis IC, Honório NA, de Barros FSM, Barcellos C, Kitron U, Camara DCP, et al. Epidemic and endemic malaria transmission related to fish farming ponds in the Amazon Frontier. PLoS ONE. 2015;10:e0137521. Barros FSM, Honório NA, Arruda ME. Temporal and spatial distribution of malaria within an agricultural settlement of the Brazilian Amazon. J Vector Ecol. 2011;36:159–69. Barros FSM, Arruda ME, Gurgel HC, Honório NA. Spatial clustering and longitudinal variation of Anopheles darlingi (Diptera: Culicidae) larvae in a river of the Amazon: the importance of the forest fringe and of obstructions to flow in frontier malaria. Bull Entomol Res. 2011;101:643–58. Castro MC, Monte-Mór RL, Sawyer DO, Singer BH. Malaria risk on the Amazon frontier. Proc Natl Acad Sci USA. 2006;103:2452–7. Olson SH, Gangnon R, Silveira GA, Patz JA. Deforestation and malaria in Mâncio Lima County, Brazil. Emerg Infect Dis. 2010;16:1108–15. Chaves LSM, Conn JE, López RVM, Sallum MAM. Abundance of impacted forest patches less than 5 km2 is a key driver of the incidence of malaria in Amazonian Brazil. Sci Rep. 2018;8:7077. Reis IC, Codeço CT, Degener CM, Keppeler EC, Muniz MM, de Oliveira FGS, et al. Contribution of fish farming ponds to the production of immature Anopheles spp. in a malaria-endemic Amazonian town. Malar J. 2015;14:452. Castro MC. Malaria transmission and prospects for malaria eradication: the role of the environment. Cold Spring Harb Perspect Med. 2017;7:a025601. Baeza A, Santos-Vega M, Dobson AP, Pascual M. The rise and fall of malaria under land-use change in frontier regions. Nat Ecol Evol. 2017;1:0108. Ilacqua RC, Chaves LSM, Bergo ES, Conn JE, Sallum MAM, Laporta GZ. A method for estimating the deforestation timeline in rural settlements in a scenario of malaria transmission in frontier expansion in the Amazon Region. Mem Inst Oswaldo Cruz. 2018;113:e170522. Valle D, Clark J. Conservation efforts may increase malaria burden in the Brazilian Amazon. PLoS ONE. 2013;8:e57519. Laporta GZ. Amazonian rainforest loss and declining malaria burden in Brazil. Lancet Planet Health. 2019;3:e4–5. Alonso WJ, McCormick BJJ. EPIPOI: a user-friendly analytical tool for the extraction and visualization of temporal parameters from epidemiological time series. BMC Public Health. 2012;12:982. Said SE, Dickey DA. Testing for unit roots in autoregressive-moving average models of unknown order. Biometrika. 1984;71:599. Hyndman RJ, Khandakar Y. Automatic time series forecasting: the forecast package for R. J Stat Soft. 2008;27:1–22. Richards P, VanWey L. Where deforestation leads to urbanization: how resource extraction is leading to urban growth in the Brazilian Amazon. Ann Assoc Am Geogr. 2015;105:806–23. ACRE. Acre em números 2017. Rio Branco: Governo do Estado do Acre; 2017. Barlow J, Lennox GD, Ferreira J, Berenguer E, Lees AC, Mac Nally R, et al. Anthropogenic disturbance in tropical forests can double biodiversity loss from deforestation. Nature. 2016;535:144–7. Vittor AY, Pan W, Gilman RH, Tielsch J, Glass G, Shields T, et al. Linking deforestation to malaria in the Amazon: characterization of the breeding habitat of the principal malaria vector, Anopheles darlingi. Am J Trop Med Hyg. 2009;81:5–12. Vittor AY, Gilman RH, Tielsch J, Glass G, Shields T, Lozano WS, et al. 
The effect of deforestation on the human-biting rate of Anopheles darlingi, the primary vector of Falciparum malaria in the Peruvian Amazon. Am J Trop Med Hyg. 2006;74:3–11. Hahn MB, Olson SH, Vittor AY, Barcellos C, Patz JA, Pan W. Conservation efforts and malaria in the Brazilian Amazon. Am J Trop Med Hyg. 2014;90:591–4. Valle D. Response to the critique by Hahn and others entitled "Conservation and malaria in the Brazilian Amazon". Am J Trop Med Hyg. 2014;90:595–6. Hiwat H, Bretas G. Ecology of Anopheles darlingi Root with respect to vector importance: a review. Parasit Vectors. 2011;4:177. Sallum MAM, Conn JE, Bergo ES, Laporta GZ, Chaves LSM, Bickersmith SA, et al. Vector competence, vectorial capacity of Nyssorhynchus darlingi and the basic reproduction number of Plasmodium vivax in agricultural settlements in the Amazonian Region of Brazil. Malar J. 2019;18:117. Lana RM, Riback TIS, Lima TFM, da Silva-Nunes M, Cruz OG, Oliveira FGS, et al. Socioeconomic and demographic characterization of an endemic malaria region in Brazil by multiple correspondence analysis. Malar J. 2017;16:397. Foster PG, de Oliveira TMP, Bergo ES, Conn JE, Sant'Ana DC, Nagaki SS, et al. Phylogeny of Anophelinae using mitochondrial protein coding genes. R Soc Open Sci. 2017;4:170758. Valle D, Ben Toh K, Laporta GZ, Zhao Q. Ordinal regression models for zero-inflated and/or over-dispersed count data. Sci Rep. 2019;9:3046. Bourke BP, Conn JE, de Oliveira TMP, Chaves LSM, Bergo ES, Laporta GZ, et al. Exploring malaria vector diversity on the Amazon Frontier. Malar J. 2018;17:342. Laporta GZ, Linton Y-M, Wilkerson RC, Bergo ES, Nagaki SS, Sant'Ana DC, et al. Malaria vectors in South America: current and future scenarios. Parasit Vectors. 2015;8:426. Conn JE, Wilkerson RC, Segura MNO, de Souza RTL, Schlichting CD, Wirtz RA, et al. Emergence of a new neotropical malaria vector facilitated by human migration and changes in land use. Am J Trop Med Hyg. 2002;66:18–22. Lenz BB, Jack KM, Spironello WR. Edge effects in the primate community of the biological dynamics of forest fragments project, Amazonas, Brazil: primate edge effects at the BDFFP. Am J Phys Anthropol. 2014;155:436–46. Hommel M. Towards a research agenda for global malaria elimination. Malar J. 2008;7:S1. Lowe R, Bailey TC, Stephenson DB, Graham RJ, Coelho CAS, Carvalho MS, et al. Spatio-temporal modelling of climate-sensitive disease risk: towards an early warning system for dengue in Brazil. Comput Geosci. 2011;37:371–81. To the three reviewers who promoted opportunity for a more comprehensive study case. MAOP, JOM, and MVML were supported by the Secretaria de Estado de Saúde do Acre (SESACRE) Process n. 007/2015. GR was a recipient of a National Council for Scientific and Technological Development (CNPq) scholarship (Process n. 162253/2017-6). GZL was supported by the São Paulo Research Foundation (FAPESP) and Biota-FAPESP Program 2014/09774-1 and 2015/09669-6. MAMS was supported by the FAPESP Grant Number 2014/26229-7 and the CNPq Grant Number 301877/2016-5. This work was partially funded by the National Institutes of Health (NIH) 1 R01 AI110112-01A1 (to Jan Conn and MAMS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. 
Meyrecler Aglair de Oliveira Padilha and Janille de Oliveira Melo contributed equally to this work.

Author affiliations
Setor de Pós-graduação, Pesquisa e Inovação, Centro Universitário Saúde ABC, Fundação do ABC, Santo André, SP, Brazil: Meyrecler Aglair de Oliveira Padilha, Janille de Oliveira Melo, Guilherme Romano, Marcos Vinicius Malveira de Lima & Gabriel Zorello Laporta
Gerência Estadual de Controle de Endemias, Rio Branco, AC, Brazil: Marcos Vinicius Malveira de Lima
Departamento de Epidemiologia, Faculdade de Saúde Pública, Universidade de São Paulo, São Paulo, SP, Brazil: Maria Anice Mureb Sallum
School of Forest Resources and Conservation, University of Florida, Gainesville, FL, USA: Gabriel Zorello Laporta
Cartagena, Spain: Wladimir J. Alonso

Author contributions
Original idea and study design: MAOP, JOM, GZL. Organization of datasets: MAOP, JOM, GR, MVML, WJA. Data analysis: WJA, MVML, GR, GZL. Production of figures and tables: GZL, WJA, GR. First manuscript draft and further revisions: GZL, MAMS, and WJA. All authors read and approved the final manuscript.

Correspondence to Maria Anice Mureb Sallum or Gabriel Zorello Laporta.

Additional files
Time-series analysis protocol in the R programming environment. Interpolation of total precipitation and average maximum temperature. Forest cover variations in municipalities of the states of Acre and Rondônia. Results from the principal component analysis. Results from the time-series modelling.

Cite this article
de Oliveira Padilha, M.A., de Oliveira Melo, J., Romano, G. et al. Comparison of malaria incidence rates and socioeconomic-environmental factors between the states of Acre and Rondônia: a spatio-temporal modelling study. Malar J 18, 306 (2019). https://doi.org/10.1186/s12936-019-2938-0

Keywords: Tropical forest; Spatio-temporal models; Dynamics models; Malaria distribution
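The additional files above include a time-series analysis protocol in R, and the reference list cites automatic ARIMA order selection and unit-root testing. The sketch below is a generic, minimal illustration of that kind of workflow, written in Python rather than R and run on a synthetic monthly incidence series; the series, the municipality, and the ARIMA order are invented stand-ins, not the study's data or protocol.

# Illustrative sketch: stationarity check (ADF test) and ARIMA fit for a
# monthly malaria-incidence series. Synthetic data only; not the study's data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)

# Hypothetical monthly malaria incidence rate (cases per 1 000 inhabitants)
# for one municipality, with annual seasonality and a slow decline.
months = pd.date_range("2004-01", "2017-12", freq="MS")
t = np.arange(len(months))
incidence = (30 - 0.08 * t                      # slow decline over the period
             + 8 * np.sin(2 * np.pi * t / 12)   # annual seasonality
             + rng.normal(0, 2, len(months)))   # noise
series = pd.Series(incidence.clip(min=0), index=months, name="incidence")

# Augmented Dickey-Fuller test: H0 = unit root (non-stationary).
adf_stat, p_value, *_ = adfuller(series, autolag="AIC")
print(f"ADF statistic = {adf_stat:.2f}, p-value = {p_value:.3f}")
d = 0 if p_value < 0.05 else 1   # difference once if H0 cannot be rejected

# Fit a small ARIMA model; the (p, d, q) order here is an assumption,
# standing in for an automatic order search such as auto.arima in R.
model = ARIMA(series, order=(2, d, 1)).fit()
print(model.summary().tables[1])

# 12-month-ahead forecast of the incidence rate.
print(model.forecast(steps=12).round(1))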
Discrete & Continuous Dynamical Systems - A, March 2015, 35(3): 793-806. doi: 10.3934/dcds.2015.35.793

Polynomial loss of memory for maps of the interval with a neutral fixed point

Romain Aimino 1, Huyi Hu 2, Matthew Nicol 3, Andrei Török 3 and Sandro Vaienti 1
1. Aix Marseille Université, CNRS, CPT, UMR 7332, 13288 Marseille, France
2. Department of Mathematics, Michigan State University, East Lansing, MI 48824
3. Department of Mathematics, University of Houston, Houston, TX 77204-3008

Received February 2014; Revised June 2014; Published October 2014

We give an example of a sequential dynamical system consisting of intermittent-type maps which exhibits loss of memory with a polynomial rate of decay. A uniform bound holds for the upper rate of memory loss. The maps may be chosen in any sequence, and the bound holds for all compositions.

Keywords: non-stationary dynamics, sequential systems, distortion, loss of memory, neutral fixed point, intermittency, polynomial decorrelation.

Mathematics Subject Classification: 37E05, 37A25, 37H99, 37M9.

Citation: Romain Aimino, Huyi Hu, Matthew Nicol, Andrei Török, Sandro Vaienti. Polynomial loss of memory for maps of the interval with a neutral fixed point. Discrete & Continuous Dynamical Systems - A, 2015, 35 (3) : 793-806. doi: 10.3934/dcds.2015.35.793
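The abstract above concerns compositions of different intermittent interval maps applied in sequence (a sequential, non-autonomous system) and the rate at which two initial distributions pushed through the same composition approach each other, which is what loss of memory means here. As a purely numerical illustration, and not the paper's construction or proof, the sketch below composes maps from the standard Liverani-Saussol-Vaienti family, which has a neutral fixed point at 0; the exponents, ensemble sizes and the histogram-based distance are choices made here only for illustration.

import numpy as np

def lsv(x, alpha):
    """LSV-type map with a neutral fixed point at 0:
    x * (1 + (2x)**alpha) on [0, 1/2), and 2x - 1 on [1/2, 1]."""
    left = x < 0.5
    out = np.empty_like(x)
    out[left] = x[left] * (1.0 + (2.0 * x[left]) ** alpha)
    out[~left] = 2.0 * x[~left] - 1.0
    return out

rng = np.random.default_rng(0)
n_steps, n_points = 200, 200_000

# A fixed, arbitrarily chosen sequence of exponents alpha_k in (0, 1): the
# sequential system is the composition T_n o ... o T_1.
alphas = rng.uniform(0.1, 0.4, size=n_steps)

# Two ensembles of points drawn from different initial densities on [0, 1].
x = rng.uniform(0.0, 1.0, n_points)     # uniform initial density
y = rng.beta(2.0, 5.0, n_points)        # a different, skewed initial density

bins = np.linspace(0.0, 1.0, 51)
for n, a in enumerate(alphas, start=1):
    x, y = lsv(x, a), lsv(y, a)
    if n in (1, 10, 50, 100, 200):
        px, _ = np.histogram(x, bins=bins, density=True)
        py, _ = np.histogram(y, bins=bins, density=True)
        # Mean absolute difference of the two empirical densities: a crude
        # proxy for the distance between the two push-forward measures.
        print(f"n = {n:3d}   density distance ~ {np.abs(px - py).mean():.4f}")

With the parameters chosen above, the printed distance should shrink towards the Monte Carlo noise floor as n grows, which is the qualitative content of loss of memory for such compositions.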
> astro-ph > arXiv:1707.00132 astro-ph.CO (refers to | cited by ) Astrophysics > Cosmology and Nongalactic Astrophysics Title: Planck intermediate results. LIII. Detection of velocity dispersion from the kinetic Sunyaev-Zeldovich effect Authors: Planck Collaboration: N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi, M. Ballardini, A. J. Banday, R. B. Barreiro, N. Bartolo, S. Basak, R. Battye, K. Benabed, J.-P. Bernard, M. Bersanelli, P. Bielewicz, J. R. Bond, J. Borrill, F. R. Bouchet, C. Burigana, E. Calabrese, J. Carron, H. C. Chiang, B. Comis, D. Contreras, B. P. Crill, A. Curto, F. Cuttaia, P. de Bernardis, A. de Rosa, G. de Zotti, J. Delabrouille, E. Di Valentino, C. Dickinson, J. M. Diego, O. Doré, A. Ducout, X. Dupac, F. Elsner, T. A. Enßlin, H. K. Eriksen, E. Falgarone, Y. Fantaye, F. Finelli, F. Forastieri, M. Frailis, A. A. Fraisse, E. Franceschi, A. Frolov, S. Galeotta, S. Galli, K. Ganga, M. Gerbino, K. M. Górski, A. Gruppuso, J. E. Gudmundsson, W. Handley, F. K. Hansen, D. Herranz, E. Hivon, Z. Huang, A. H. Jaffe, E. Keihänen, R. Keskitalo, K. Kiiveri, J. Kim, T. S. Kisner, N. Krachmalnicoff, M. Kunz, H. Kurki-Suonio, J.-M. Lamarre, A. Lasenby, M. Lattanzi, C. R. Lawrence, M. Le Jeune, F. Levrier, M. Liguori, P. B. Lilje, V. Lindholm, M. López-Caniego, P. M. Lubin, Y.-Z. Ma, J. F. Macías-Pérez, G. Maggio, D. Maino, N. Mandolesi, A. Mangilli, P. G. Martin, E. Martínez-González, S. Matarrese, N. Mauri, J. D. McEwen, A. Melchiorri, A. Mennella, M. Migliaccio, M.-A. Miville-Deschênes, D. Molinari, A. Moneti, L. Montier, G. Morgante, P. Natoli, C. A. Oxborrow, L. Pagano, D. Paoletti, B. Partridge, O. Perdereau, L. Perotto, V. Pettorino, F. Piacentini, S. Plaszczynski, L. Polastri, G. Polenta, J. P. Rachen, B. Racine, M. Reinecke, M. Remazeilles, A. Renzi, G. Rocha, G. Roudier, B. Ruiz-Granados, M. Sandri, M. Savelainen, D. Scott, C. Sirignano, G. Sirri, L. D. Spencer, L. Stanco, R. Sunyaev, J. A. Tauber, D. Tavagnacco, M. Tenti, L. Toffolatti, M. Tomasi, M. Tristram, T. Trombetti, J. Valiviita, F. Van Tent, P. Vielva, F. Villa, N. Vittorio, B. D. Wandelt, I. K. Wehus, A. Zacchei, A. Zonca et al. (84 additional authors not shown) (Submitted on 1 Jul 2017 (v1), last revised 23 Aug 2018 (this version, v3)) Abstract: Using the ${\it Planck}$ full-mission data, we present a detection of the temperature (and therefore velocity) dispersion due to the kinetic Sunyaev-Zeldovich (kSZ) effect from clusters of galaxies. To suppress the primary CMB and instrumental noise we derive a matched filter and then convolve it with the ${\it Planck}$ foreground-cleaned `${\tt 2D-ILC\,}$' maps. By using the Meta Catalogue of X-ray detected Clusters of galaxies (MCXC), we determine the normalized ${\it rms}$ dispersion of the temperature fluctuations at the positions of clusters, finding that this shows excess variance compared with the noise expectation. We then build an unbiased statistical estimator of the signal, determining that the normalized mean temperature dispersion of $1526$ clusters is $\langle \left(\Delta T/T \right)^{2} \rangle = (1.64 \pm 0.48) \times 10^{-11}$. However, comparison with analytic calculations and simulations suggest that around $0.7\,\sigma$ of this result is due to cluster lensing rather than the kSZ effect. By correcting this, the temperature dispersion is measured to be $\langle \left(\Delta T/T \right)^{2} \rangle = (1.35 \pm 0.48) \times 10^{-11}$, which gives a detection at the $2.8\,\sigma$ level. 
We further convert uniform-weight temperature dispersion into a measurement of the line-of-sight velocity dispersion, by using estimates of the optical depth of each cluster (which introduces additional uncertainty into the estimate). We find that the velocity dispersion is $\langle v^{2} \rangle =(123\,000 \pm 71\,000)\,({\rm km}\,{\rm s}^{-1})^{2}$, which is consistent with findings from other large-scale structure studies, and provides direct evidence of statistical homogeneity on scales of $600\,h^{-1}{\rm Mpc}$. Our study shows the promise of using cross-correlations of the kSZ effect with large-scale structure in order to constrain the growth of structure.
Comments: 20 pages, 12 figures and 8 tables, A&A in press
Journal reference: A&A 617, A48 (2018)
Cite as: arXiv:1707.00132 [astro-ph.CO] (or arXiv:1707.00132v3 [astro-ph.CO] for this version)
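The pipeline summarized in this abstract (derive a matched filter, convolve it with the cleaned maps, and measure the excess variance of the filtered map at cluster positions against a noise expectation) can be illustrated with a toy flat-sky version. Everything below is an invented, single-component simulation with white noise and a Gaussian cluster profile; the map size, noise level, profile width and cluster amplitudes are arbitrary choices and bear no relation to the Planck data or the actual 2D-ILC analysis.

import numpy as np

rng = np.random.default_rng(1)
npix, sigma_noise = 512, 1.0          # map size (pixels) and white-noise rms per pixel
n_clus, sigma_prof = 300, 3.0         # number of clusters and profile width (pixels)
margin = int(5 * sigma_prof)

# Unit-amplitude Gaussian cluster profile, centred, and its Fourier transform.
yy, xx = np.indices((npix, npix))
r2 = (xx - npix // 2) ** 2 + (yy - npix // 2) ** 2
profile = np.exp(-0.5 * r2 / sigma_prof ** 2)
prof_k = np.fft.fft2(np.fft.ifftshift(profile))

# Simulated map: white noise plus clusters with random, zero-mean "kSZ" amplitudes.
amp_true = rng.normal(0.0, 2.0, n_clus)
pos = rng.integers(margin, npix - margin, size=(n_clus, 2))
sky = np.zeros((npix, npix))
for (py, px), a in zip(pos, amp_true):
    sky += a * np.roll(np.roll(profile, py - npix // 2, axis=0), px - npix // 2, axis=1)
m = sky + rng.normal(0.0, sigma_noise, (npix, npix))

# Matched filter in Fourier space: psi_k proportional to conj(prof_k) / P_noise,
# normalised so the filtered map gives an unbiased amplitude estimate at a cluster centre.
p_noise = sigma_noise ** 2 * np.ones((npix, npix))
psi_k = np.conj(prof_k) / p_noise
psi_k /= (np.abs(prof_k) ** 2 / p_noise).sum() / npix ** 2
filtered = np.fft.ifft2(np.fft.fft2(m) * psi_k).real

# Variance at cluster positions minus the expectation at random positions
# (the analogue of the unbiased statistical estimator described in the abstract).
at_clusters = filtered[pos[:, 0], pos[:, 1]]
rand = rng.integers(0, npix, size=(5000, 2))
noise_var = filtered[rand[:, 0], rand[:, 1]].var()
print(f"recovered excess variance: {at_clusters.var() - noise_var:.2f}")
print(f"true amplitude variance  : {amp_true.var():.2f}")

The point of the final subtraction is the same as in the abstract: the raw variance at cluster positions contains a noise contribution (and, in this toy, leakage from neighbouring clusters) that has to be estimated and removed to leave the signal variance.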
Multiple-pulse damage thresholds of retinal explants in the ns-time regime

Scarlett Lipp,1,2,* Sebastian Kotzur,1,3 Philipp Elmlinger,1 and Wilhelm Stork2
1Robert Bosch GmbH, Chassis Systems Control, Herrenwiesenweg 24, Schwieberdingen, 71701, Germany
2Karlsruher Institute of Technology, Institute for Information Processing Technologies, Department of Electrical Engineering and Information Technology, Engesserstrasse 5, Karlsruhe, 76131, Germany
3Institute for Ophthalmic Research, Eberhard Karls University Tübingen, Elfriede-Aulhorn-Strasse 7, Tübingen, 72076, Germany
*Corresponding author: [email protected]

https://doi.org/10.1364/BOE.412012
Scarlett Lipp, Sebastian Kotzur, Philipp Elmlinger, and Wilhelm Stork, "Multiple-pulse damage thresholds of retinal explants in the ns-time regime," Biomed. Opt. Express 11, 7299-7310 (2020)
Original Manuscript: October 7, 2020; Revised Manuscript: November 13, 2020; Manuscript Accepted: November 13, 2020

Abstract: The data situation of laser-induced damage measurements after multiple-pulse irradiation in the ns-time regime is limited. Since the laser safety standard is based on damage experiments, it is crucial to determine damage thresholds. For a better understanding of the underlying damage mechanism after repetitive irradiation, we generate damage thresholds for pulse sequences up to N = 20 000 with 1.8 ns-pulses using a square-core fiber and a pulsed Nd:YAG laser. Porcine retinal pigment epithelial layers were used as tissue samples, irradiated with six pulse sequences and evaluated for damage by fluorescence microscopy. The damage thresholds decreased from 31.16 µJ for N = 1 to 11.56 µJ for N = 20 000. The reduction indicates photo-chemical damage mechanisms after reaching a critical energy dose. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction
Lasers are classified for safety reasons based on their potential to cause injury to the human eye and skin. For this purpose, the accessible emission limits (AELs) per laser safety class are defined for different pulse durations and wavelengths and are specified by application-related correction factors [1]. These AELs are correlated to the maximum permissible exposures (MPEs), which indicate the human exposure limits to prevent injury to the human eye and skin. For the determination of the AELs for repeated pulsed exposures, a correction factor was introduced in the laser safety standard IEC 60825-1, which was empirically derived from damage experiments on non-human primates (NHPs) and on explants [2]. The damage experiments showed that the energy of a single pulse of a pulse train needed to induce damage decreases, owing to the underlying mechanism of cell damage.
In the thermal damage regime (sub-microsecond range until a few seconds), a pulse additivity could be observed [3] and showed a good agreement with the empiric derived correction factor for multiple-pulses. This kind of damage mechanism could be explained and modeled using the Arrhenius integral based on the thermo-kinetic relationship [4,5]. Since a damage-causing energy decrease was also observed in the thermo-mechanical regime [6–9] (nanoseconds until the sub-microsecond range), this correction factor is also used for these pulse durations, although the damage processes regarding the additive behavior in the thermo-mechanical regime [8,9] are not yet fully understood. Since the data set for short pulse laser irradiation is incomplete, further damage thresholds are generated and discussed for repeated irradiation and briefly possible explanations of additive behavior are presented in this paper. The aim of this work is to generate data of high pulse numbers that can later be used to describe the trend of damage thresholds and deduce reduction trends based on the underlying cell death mechanism. 2. Interactions of multiple-pulse irradiation in the thermo-mechanical damage regime Thermo-mechanical damage is caused by the appearance of small microbubbles on the melanosome surface of retinal pigment epithelium (RPE) cells, which inevitably lead to cell death [10,11]. The question as to how the microbubbles induce cell death has not yet been fully clarified. It has been demonstrated that the cell membrane was destroyed but the damage procedure remains still unclear, although this mechanism is absolutely crucial for understanding the intercellular processes of repetitive sub-threshold irradiation (compared to single pulse thresholds). For a better understanding of the occurring effects, the process or combination of processes causing the lethal response of the cell need to be recognized. For this reason, possible interactions [12] were identified that can be induced by repeated sub-threshold irradiation. These interactions relate to cumulative processes and the description of the occurrence of a statistically independent event and are explained in more detail in the following sections. 2.1 Accumulation effects This hypothesis is based on the assumption that the tissue is sustainably sensitized by irradiation. The irradiation thus affects the tissue in terms of an accumulation, since the influence is increased by the frequency of occurrence (number of pulses of a pulse sequence). In this section, accumulations are presented, which can be (a) reversible as well as (b) non-reversible. With (a) reversible accumulation, it is possible for the tissue to return to its previous condition between the pulses. An example could be background heating in the tissue at high repetition rates. An (b) irreversible accumulation effect, would indicate a permanent change in environmental conditions. This would mean that the tissue would progressively change and the pulse pause would not contribute to regeneration. Studies by Qiang et al. [13] have shown this fatigue in other cell types. The main difference between the two theses is the assumption of an ability to "reset" the cell conditions in the reversible case. Assuming a non-reversible effect, it can be concluded that even time pauses between the pulses are not able to restore the cell to its initial position. 
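As noted in the introduction, thermal damage at longer exposure durations is commonly modelled with the Arrhenius integral [4,5], and its additivity over a pulse train is what makes the empirical multiple-pulse correction factor plausible in the thermal regime. Purely as a numerical aside (the ns-regime data of this paper concern thermo-mechanical, not thermal, damage), the sketch below integrates Omega as the time integral of A*exp(-Ea/(R*T(t))) for an assumed temperature history; the rate coefficients and the pulse and cooling times are placeholder values of the kind used in soft-tissue coagulation models, not parameters of this study.

import numpy as np

A = 3.1e98        # frequency factor [1/s]             (placeholder value)
Ea = 6.28e5       # activation energy [J/mol]          (placeholder value)
R = 8.314         # universal gas constant [J/(mol K)]
T_base = 310.0    # baseline tissue temperature [K]

def omega_one_pulse(T_peak, tau_pulse=5e-6, tau_cool=20e-6, t_end=2e-4, n_t=200_000):
    """Arrhenius damage integral for one pulse: temperature held at T_peak for
    tau_pulse, then exponential cooling back towards T_base (time constant tau_cool)."""
    t, dt = np.linspace(0.0, t_end, n_t, retstep=True)
    T = np.where(t < tau_pulse, T_peak,
                 T_base + (T_peak - T_base) * np.exp(-(t - tau_pulse) / tau_cool))
    return float(np.sum(A * np.exp(-Ea / (R * T))) * dt)

# In the purely thermal picture the integral is additive over a pulse train, so
# the number of identical sub-threshold pulses needed to reach the usual damage
# criterion Omega = 1 is roughly 1 / Omega(single pulse).
for T_peak in (330.0, 340.0, 350.0):
    w1 = omega_one_pulse(T_peak)
    print(f"T_peak = {T_peak:.0f} K: Omega per pulse = {w1:.2e}, "
          f"pulses to reach Omega = 1: {max(1, int(np.ceil(1.0 / w1)))}")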
2.2 Probability summation model The probability summation model (PSM) is a statistical method for describing the occurrence of an event by increasing probability. Here, the PSM serves as a methodical approach for the prediction of a damage after multiple laser irradiation with low-dose energy. This model is based on the probit analysis which was introduced by Finney [14] for dose-response curves in other fields of application back in the 1970s, and later on, the PSM was used by Menendez [15] for laser irradiation damage prediction. This approach is partly still used today to predict damage and compared with other established methods such as the correction factor from the laser safety standard [16,17] . The PSM is based on the assumption that the response to any exposure of a pulse train can be considered as an independent event to previous pulses. All previous exposures of a pulse sequence have no effect on the retina in kind of a sensitization or a desensitization. Assuming that the probability $P$ of a retinal response to each single pulse $p_{\mathrm {Single}}$ of a pulse train is identical, the probability $P(N)$ of inducing a retinal response after $N$ pulses can be calculated according Eq. (1) [15,16]: (1)$$P(N)=1-(1-p_{\mathrm{Single}})^{N}$$ Consequently, the exposure that represents a 50% probability of injury ($\mathrm {ED_{50}}$) concludes a value of $P(N)$ = 0.5. The answer probability for each pulse can then be determined by solving Eq. (1) for $p_{\mathrm {Single}}$: (2)$$p_{\mathrm{Single}}=1-(0.5)^{1/N}$$ In order to determine $p_{\mathrm {Single}}$, the ProbitFit developed by Lund [16] was used. This analysis yields a dose-response curve (based on the assumption of a log-normal distribution) which is characterized by the $\mathrm {ED_{50}}$ as inflection point. The resulting curve indicates the probability of occurrence of an event (damage) with increasing energy doses. This means that the individual probability sought from Eq. (2) in the ProbitFit indicates an energy dose at which a damage occurs in the respective repetitive irradiation. For a prediction of the $\mathrm {ED_{50}}$ for $N$ = 10, the single pulse $\mathrm {ED_{50}}$ data from previous reports [12] is used: For the calculation of the $\mathrm {ED_{50}}$ according to Eq. (1), Eq. (2) needs to be solved for 10 pulses which results in a $p_{\mathrm {Single}}$ = 0.067. In Fig. 1, this probability point is marked in green colour and thus indicates which dose is necessary after a probabilistic summation to produce damage at a tenfold irradiation. Fig. 1. Dose-response curve for the single exposure evaluation. The probability $p_{\mathrm {Single}}$ of 0.067 is marked schematically in green and indicates the probabilistic summation to produce damage after 10 pulses. The PSM predicts an according pulse energy of 30.63 µJ to be necessary to provoke damage at a tenfold irradiation. 3. Methods and materials 3.1 Use of porcine explants The experiments were carried out on explants, as the dosimetry control of diameter, beam profile and real interaction energy during irradiation is easier compared to NHP. The ex vivo samples were obtained from fresh porcine eyes purchased from a local slaughterhouse. During transport, the unopened eyeballs were stored in a dark and insulated cool box. For sample preparation (see Fig. 2) the fresh eyes were then rinsed with Hanks balanced saline solution (HBSS). The surrounding tissue was removed and the bulbus was opened with a needle (see Fig. 2(a)). 
The bulbus was cut concentrically at the equator (see Fig. 2(b)). The anterior part of the eye and the vitreous body were removed. The sensory retina was removed by gentle withdrawal or rinsing with HBSS to expose the underlying RPE in a trefoil shape (see Fig. 2(c)). The sclera and the underlying choroid were not removed. The black pigmented parts of the ocular fundus were cut into rectangular pieces and clamped in holding devices (see Fig. 2(d)) and inserted into the HBSS and into a nutrient medium. Fig. 2. Sample preparation of porcine eyes: (a) Opening (b) Cut (c) Trefoil shape (d) Holding constructions to flatten the sample. After irradiation, RPE cell vitality was checked using the fluorescent dyes Calcein-Acetoxy-methylester (Calcein-AM) (stock 1 µg/µl) and Propidium iodide (PI) (stock 1 mg/ml). HBSS was mixed (a) with Calcein-AM (1:200 to 5 µM) and independently (b) with PI (1:100 to 15 µM) and incubated for 20 minutes. Calcein-AM is transformed to Calcein in living cells by esterases which can be excited by blue light (excitation maximum at 494 nm, emission at 517 nm) and appears bright under the microscope. Non-vital cells appear dark. PI is used as a dye which penetrates only damaged cellular membranes. Intercalation complexes are formed by PI with double-stranded deoxyribonucleic acid (DNA), which causes an amplification of fluorescence (excitation maximum at 488 nm, emission at 590 nm - 617 nm). Thus, damaged cells appear bright, and vital cells dark [18]. 3.2 Procedure In Fig. 3 the setup for the determination of the laser-induced damage thresholds is shown schematically. A Q-Switched frequency-doubled Nd:YAG laser (Crylas, FDSS 532-1000, Germany) produced temporal Gaussian pulses with a full width at half maximum (FWHM) of 1.8 ns at a wavelength of 532 nm. The spatial mode was $\mathrm {TEM_{00}}$. Furthermore, the long term pulse energy stability (regarding 6 h) was less than $\pm \,5\%$ and the pulse-to-pulse-stability was less than 5% root mean square (rms). Since the maximum exposure time in our experiments for a spot remained 800 s, we have neglected the deviation (less than 3% in our application) for the calculation of systematic uncertainties. Fig. 3. Optical setup for laser-induced measurements with top hat profile. The pulsed Nd:YAG is coupled into a multimode fiber to excite mode mixing. The squared beam profile at the distal tip of the fiber is imaged onto the samples. By using a beam splitter, the applied energy can be measured on the samples during irradiation. A camera is used to secure the top hat on the sample by recognition of the squared shape of the image. (a) Schematic illustration of the setup (b) Spatial beam profile at the sample position. The beam was coupled into a square-core-fiber (Thorlabs, FP150QMT, Germany) with a numerical aperture (NA) of 0.39 and a length of 20 m to excite mode mixing for the generation of a top hat beam profile at the distal fiber tip. The square-core-shape has the advantage that the imaging of a square shaped beam profile (see Fig. 3(b)) projected onto the plane of the sample position can be controlled with high precision with the attached camera (IDS, UI-1540SE-M-GL, 1280 x 1024 pixel, Germany). The camera as well as the sample position were readjusted by means of an adjustment laser (Thorlabs, CPS532, Germany). In the imaging position, the top hat beam profile was thus ensured at all times by the squared shape recognition. 
The image position of the adjustment laser had a deviation $\leq$ 2 % and was neglected at this point. Subsequently, the fiber tip was imaged by an asphere (Thorlabs, ACL3026U-A, Germany) and an objective (Thorlabs, LMH-5X-532, Germany) onto the plane of the sample position. A beam splitter (Thorlabs, BS013, Germany) was used to deflect portions of the beam into an energy meter (Coherent, LabMax-TOP, Model No. 1104622, USA) to measure pulse energy and to count the number of pulses. Before and after the experiments the pulse length was measured by a high-speed free-space detector (Thorlabs, DET025AL/M, Germany) at the position of the sample. The detector was coupled to an oscilloscope (Teledyne LeCroy, waverunner 6100A, 1 GHz, 10 GS/s). Prior to the experiments, a BeamViewer (Coherent, LaserCam-HR II, USA) was used to examine the spatial beam profile (see Fig. 3(b)). The setup was used to apply a square beam profile with an edge length of 319 µm $\pm \,3\,$µm to the samples. In the experiment, the samples were irradiated with pulse trains of N = 1, 10, 100, 1 000, 10 000 and 20 000 to determine the damage threshold in terms of the $\mathrm {ED_{50}}$. The pulse repetition frequency (PRF) was set to 25 Hz for each experiment to prevent background heating [19]. The room temperature was set to 21$^{\circ }$C. In a previous study [12], the samples were immersed and moistened in a medium, but not completely, so that the irradiated area was also in contact with air. To minimise oxidative stress on the cells, we fully immersed the samples this time. In order to take into account the biological variability between and within individuals, we have indicated the number of eyes studied in Table 1. In addition, each series of experiments (related to the pulse sequence) was carried out on several dates to take environmental influences into consideration. Table 1. $\mathrm {ED_{50}}$ measured for exposure to a 319 µm squared top hat of 1.8-ns-duration pulses at $\mathrm {\lambda }$ = 532 nm. 95% confidence intervals on the $\mathrm {ED_{50}}$ are given in parentheses. A $"-"$ indicates data insufficient to obtain confidence intervals. The slope is defined by the ratio of $\mathrm {ED_{84}}$ to $\mathrm {ED_{50}}$. Intensity modulation factor was 1.15 $\pm$ 0.1. The thresholds obtained for the explants are listed in Table 1 for the 319 µm squared top hat exposures. The $\mathrm {ED_{50}}$ is expressed as the energy per pulse. Table 1 includes 95% confidence intervals on the $\mathrm {ED_{50}}$, the slope $b$ of the ProbitFit, the number of total exposures as well as the exposures corrected by the intensity modulation factor (IMF) [20]. The slope of the ProbitFit and the standard deviation $\mathrm {\sigma }$ of the log-normal dose-response probability distribution are related through $b$ = $\frac {1}{\mathrm {\sigma }}$ [14,16]. The results from Fig. 4 show that the damage-induced energy of a single pulse of a pulse sequence decreases with increasing number of pulses. The slope defines the ratio between $\mathrm {ED_{84}}$ to $\mathrm {ED_{50}}$ and is generally used as a quality feature, thus a step function is ideal for describing the transition from "no damage" to "damage" in dependence of pulse energy [2]. The slope $b$ in this study was never higher than 1.14 indicating an obvious transition for damage definition. Fig. 4. Damage thresholds of multiple pulse irradiation with spot edge length d = 319 µm and pulse duration of $\mathrm {\tau }$ = 1.8 ns with PRF = 25 Hz. 
Error bars indicate 95% confidence limits. A decrease of the individual pulse energy of a pulse sequence can be observed with increasing number of pulses. At pulse sequences between N = 10, N = 100, and N = 20 000 the overlap between "damage" and "no damage" was too low, which meant that no confidence limits could be defined. In addition, the calculated exposures (incident energy of the top hat beam profile on the sample) were corrected by the IMF, assuming that the peaks in this profile caused damage to the sample. The IMF was calculated by evaluating the beam profile in advance. For this, the profil was investigated for 30 transversal cross-sections of several recordings concerning peak occurrence. The IMF is defined by the ratio of maximum occurred peaks to the averaged measured level using a top hat fit. The detailed evaluation of the IMF can be found in a previous report [20]. In order to quantify the uncertainties precisely, we have shown the 95% confidence intervals ($1.96\mathrm {\sigma }-$interval) in brackets in Table 1. The exposure columns, which can be calculated by $\mathrm {ED_{50}}$, diameter and IMF correction, were examined in more detail by means of error propagation to indicate the uncertainty in the $1\mathrm {\sigma }-$interval. The uncertainty of the $\mathrm {ED_{50}}$ can be indicated by the above mentioned relation to the slope. The diameter was determined with a measurement uncertainty of 319 $\pm$ 3 microns. The IMF was determined with a value of 1.15 $\pm$ 0.1. These initial parameters were used to calculate the exposure in the last two columns of Table 1. As expected, the damage threshold of the pulse sequences for the individual pulse decreases indicating an interaction between the pulses (see Sec. 2). Above all, it is very noticeable that especially in the range of higher pulse numbers the decrease is particularly strong. 5.1 Application of the probability summation model The PSM is an established method to predict the probability of laser-induced damages based on an increasing pulse number. Therefore, we compared the measured $\mathrm {ED_{50}}$s with the PSM (cf. Figure 5). Based on the slope of a measurement series, it can be deduced how the $\mathrm {ED_{50}}$s changes and decreases for multiple pulses. This model can thus be generated not only from the single pulse value (N = 1) but also from higher pulse numbers and predict how the damage threshold $\mathrm {ED_{50}}$ will decrease. [17] Fig. 5. Damage thresholds of multiple pulse irradiation with spot edge length d = 319 µm and pulse duration of $\mathrm {\tau }$ = 1.8 ns with PRF = 25 Hz. Error bars indicate 95% confidence limits. A decrease of the individual pulse energy of a pulse sequence can be observed with increasing number of pulses. PSM models (dashed) of the individual multiple-pulse thresholds were generated, but are independent of the starting point and in any case able to describe sufficiently restrictive the reduction. It can be concluded from Fig. 5 that the models do not appear adequate for describing the respective damage $\mathrm {ED_{50}}$s. The predictions of the PSM underestimate in any case the risk of damage. This result indicates that the probability summation cannot be used for low pulse numbers (N < 1 000) due to underestimation of the hazard. The applicability and thus qualification of the PSM for higher number of pulses (N > 1 000) can be examined by further investigations in the higher pulse number range. 
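The PSM curves in Fig. 5 follow directly from Eqs. (1) and (2) together with a log-normal dose-response curve. The snippet below re-implements that prediction step; the single-pulse ED50 is the 31.16 µJ value from Table 1, but the slope (ED84/ED50 ratio) is an assumed stand-in rather than the ProbitFit output, so the printed values only indicate the shape of the prediction, not the curves actually plotted in Fig. 5.

# Illustrative re-implementation of the probability-summation prediction of
# Eqs. (1)-(2): the per-pulse dose at which p_Single = 1 - 0.5**(1/N) on a
# log-normal dose-response curve.
from math import log10
from statistics import NormalDist

ed50_single = 31.16        # single-pulse ED50 in uJ (Table 1, N = 1)
slope_ratio = 1.10         # assumed ED84/ED50 ratio of the dose-response curve
sigma_log = log10(slope_ratio)     # SD of the response distribution in log10 dose

def psm_ed50(n_pulses: int) -> float:
    """Per-pulse ED50 predicted by probability summation for n_pulses."""
    p_single = 1.0 - 0.5 ** (1.0 / n_pulses)              # Eq. (2)
    # Dose whose response probability equals p_single on the log-normal curve.
    return ed50_single * 10.0 ** (sigma_log * NormalDist().inv_cdf(p_single))

for n in (1, 10, 100, 1_000, 10_000, 20_000):
    print(f"N = {n:6d}: PSM-predicted per-pulse ED50 ~ {psm_ed50(n):5.2f} uJ")

With this assumed slope, even at N = 20 000 the predicted per-pulse ED50 stays well above the measured 11.56 µJ, consistent with the conclusion above that probability summation underestimates the hazard.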
Since the course of the PSM is based on the slope of the underlying measurement, this is also the reason for the deviation from the real measured values: Although a low slope indicates a clear transition between the binary response (damage or no damage), it also influences the trend of probability summation predictions. Applying the PSM to our data leads to the conclusion that this description of multiple-pulse data is not appropriate. Another issue about the PSM is that for unknown input parameters such as pulse duration and damage range, no prediction can be made for individual pulses, since PSM is based on the injury thresholds of individual pulse exposure. This assumption is also one of the major weaknesses of the model approach, since the course and prediction are based on the quality and biological variability of a single pulse exposure and its slope [17,21]. 5.2 Investigation of accumulation effects From the previous section we concluded that the statistical PSM is unsuitable for describing our data. Therefore, in this section we deal with accumulating effects from Sec. 2. Assuming a stimulating effect, this would suggest that the tissue is sensitized after each pulse. The use of the explants had the advantage that the fluorescence images (see Fig. 6) provide indirect information on the metabolic activity of the irradiated cells: By using the fluorescent dye Calcein-AM (see Fig. 6(a)), the increased metabolic activity in the irradiated cells was detected. The cells were particularly stimulated shortly before the damage (especially bright spots). This behaviour indicates biochemical processes that support the theory of resulting free radicals with cytotoxic effects. Below the injury threshold the irradiated cells light up brightly upon stimulation, suggesting that these cells have converted more Calcein-AM into Calcein, which can be excited after conversion. Fig. 6. Fluorescence microscope images (a) Activation and excitation of the converted calcein after sub-threshold irradiation: (1) Regular excited, vital hexagonal RPE structures can be recognized. (2) Destroyed cell membranes by intensity modulation peaks in the beam profile. (3) Cells with a higher brightness level, which indicates an increased metabolic activity. (b) Fluorescence microscope image of exposures with an edge length of 319 µm and a pulse duration of 1.8 ns. Hexagonal structure of single RPE cells are visible. Green bright cells represent living cells. Red bright cell nuclei are excited by the PI through destroyed cell membrane. In Fig. 6(b), we show an exemplary photograph on the samples with the irradiated square area. Bright green cells indicate vitality due to calcein. The red dots on the other hand are assigned to cell nuclei that could only be excited by PI after the cell membrane had been destroyed. Another investigation concerns the temperature increase in the tissue through multiple-pulse irradiation. This background heating can hypothetically lead to thermal damage or promote photo-chemical processes within the cell. In the following sections we will discuss the evidence for photo-chemical processes. At this point, however, it cannot be excluded that a combination of interactions can also occur. Apart from these considerations regarding thermal or photo-chemical damage, photo-mechanical aspects still have to be considered. The stress fatigue of the cells can also contribute to the lethal response after irradiation. The most recent work by Qiang et al. 
[13], describes the stress and failure tendencies of red blood cells. These results clearly demonstrate the important role of mechanical fatigue in influencing the physical properties of biological cells. They provide further insights into accumulated membrane damage, which cannot be excluded in our experiments. 5.3 Indication of a photo-chemical damage The strong reduction of the damage thresholds (between N = 1 000 and N = 10 000) indicates a further damage mechanism in the retinal tissue. Figure 7 shows that from the high number of pulses (longer operating beam durations) a dose is reached which leads to damage. This damage can be described by photo-chemical effects, since critical limit doses are characteristic for this type of damage [22,23]. Photo-chemical effects typically occur at longer exposure times and shorter wavelengths of the visible spectrum [24,25]. Nevertheless, it is known that the subcellular reactions as a consequence of the photo-chemical effect also occur during shorter exposures, but are not the dominant damage mechanism [26]. Fig. 7. Photo-chemical damage occurs from a dose of about $200\,\mathrm {J/cm^{2}}$ (green line) or lower (considering the decreased slope from N = 1 000). Previous doses that caused the damage based on thermo-mechanical mechanism and have not been of photo-chemical origin. The further slight increase in the effective dose for damage may be due to repair mechanisms. The results from Fig. 4 indicate that at the lower pulse numbers in our studies, the photo-chemical effect is probably not decisive for the damage. Thus, for the shorter emission durations the thermo-mechanical damage dominates. This assumption can also be observed in a transition region (between N = 1 000 and N = 10 000), where the damage thresholds drop more strongly than before. This transition could be interpreted as a photo-chemical effect with regard to the higher dose. According to the laser safety standard, this total exposure time (400 s) and longer, might lead to both thermal and photo-chemical damage. In the case of single pulse exposures or low pulse numbers, the thermo-mechanical or thermal damage threshold is expected to be significantly lower than photo-chemical damage. Higher pulse numbers therefore require less single pulse energy as they have to apply the same dose (neglecting internal cell repair mechanisms [27]) to trigger oxidative stress (photo-chemical damage [25]). The studies of Ham et al. [24] have shown that photo-chemical damage was observed: The damage experiments were performed in vivo NHP. A wavelength of 488 nm was applied with cw-radiation. The results were evaluated after 48 h. Ham et al. concluded that for exposure times longer than 1 000 s (488 nm), the damage seems to be photo-chemical. The observation of photo-chemical effects (see Fig. 8) in terms of RPE disruption has already been observed in vivo in NHP for different wavelengths: In a study of Zhang et al. [28] in Rochester, the photo-chemical effects were demonstrated as primary damage. They determined the $\mathrm {ED_{50}}$ to be 82 µ$\mathrm {J/cm^{2}}$, the lethal dose $\mathrm {ED_{100}}$ was 140 µ$\mathrm {J/cm^{2}}$ for the cw-exposure using 594 nm. Despite their use of a cw-laser and a different spot size on the samples (which should be negligible under the assumption of photo-chemical effects), the data show a similar behaviour. Their threshold data fit well to the data of this work. 
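The dose of roughly 200 J/cm² quoted with Fig. 7 can be re-derived from the numbers reported in this paper. The few lines below do only that arithmetic, using the per-pulse ED50 values for N = 1 and N = 20 000 and the 319 µm square spot; the intensity-modulation correction factor of 1.15 reported with Table 1 is deliberately not applied here.

# Cumulative radiant exposure (dose) implied by the per-pulse ED50 values and
# the 319 um x 319 um top-hat spot used in this study.
edge_cm = 319e-4                    # spot edge length: 319 um in cm
area_cm2 = edge_cm ** 2             # irradiated area of the square top hat

for n_pulses, ed50_uj in ((1, 31.16), (20_000, 11.56)):   # values reported above
    per_pulse_dose = ed50_uj * 1e-6 / area_cm2             # J/cm^2 per pulse
    total_dose = n_pulses * per_pulse_dose                 # accumulated J/cm^2
    print(f"N = {n_pulses:6d}: {per_pulse_dose:.4f} J/cm2 per pulse, "
          f"total ~ {total_dose:.1f} J/cm2")

For N = 20 000 this gives an accumulated exposure of roughly 230 J/cm², of the same order as the critical dose indicated in Fig. 7.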
An even greater difference would also have been expected since the samples in this study were irradiated at controlled room temperature. However, it is also possible that the photo-chemical processes are only triggered once the body temperature of the NHP is exceeded. Fig. 8. Comparison of integrated damage thresholds with other studies on photo-chemical effects (a) For different emission durations. Experimentally determined damage thresholds of this study (squares) with data from Ham et al. [24] (circles) and NHP experiments from Zhang et al. [28] (triangles). The studies of Ham et al. [24] were examined for different wavelengths of different cw-irradiation durations. Saturation can be identified for the shorter wavelength (b) Wavelength dependence: The data of this work (squares) fit well with the study of Zhang et al. [28] (triangles), for different wavelengths. For longer pulse durations, the dose necessary to cause damage is only slightly higher (probably due to repair mechanisms). Figure 8 shows the comparison of the measured damaging exposure (for the entire pulse train) of this work with the two published comparable studies [24,28]: In each publication a kind of a "saturation" can be noticed, starting from a certain emission duration. The data from Ham et al. indicate this transition range only for the shorter wavelengths from emission durations of $<$ 10 s. In the experiment of this study, the "transition" can be observed from about 400 s (corresponding to N $=$ 10 000). Longer emission durations required more energy to cause damage to the samples, but this increase was low. The slight increase can possibly be explained by cell-internal repair processes. The work of Ham et al. shows a similar course of damaging exposure. In his study of different wavelengths, he examined the necessary damaging dose for several irradiation times. The curves from Ham et al. for wavelengths of 488 nm and 514.5 nm show a similar bending behaviour as observed in this study. The longer wavelengths, on the other hand, do not indicate this "saturation" trend. [24] The Zhang et al. [28] and Ham et al. [24] studies show a similar wavelength dependence for the necessary damaging exposure: A time frame was investigated for wavelengths from 476 nm to 594 nm indicating a wavelength dependence to the damage threshold similar to the values in the literature. Furthermore, Zhang underlines an underestimated photo-chemical RPE-disruption for longer wavelengths. At this point, it cannot be definitively concluded that photo-chemical damage occurs for the longer exposures. However, there is a lot of evidence for this, besides the good agreement with the data of Zhang et al. [28] and Ham et al. [24]. 6. Outlook The strongly decreasing damage thresholds at the high number of pulses suggest that either an accumulating effect occurs or photo-chemical processes are induced. In order to identify the dominant damage mechanism, the experiments could be repeated with a higher PRF (approx. 1 kHz). Since the time intervals are significantly shorter than in our experiments, it can be evaluated to what extent these "pauses" have an influence on the damage threshold. For further evaluation of photo-chemical processes, several approaches can be applied: Backscattering [29] or interferometric [30] measurements can be used to determine whether microbubbles have occurred at all in the pulse sequence. Furthermore, photo-chemical processes are very temperature-dependent. A study of the damage thresholds at higher temperatures (e.g. 
body temperature) seem to be useful. Prospective studies should investigate long-term damage. RPE cell cultures (primary cells) could be used to approach the model of an in vivo organism. Intercellular processes can be observed and understood over a longer period of time. The underlying damage mechanism could thus possibly be better understood. In this work, the laser damage thresholds of explants were determined for multiple pulses up to N = 20 000 in the ns-time damage range using a Q-switched, frequency-doubled Nd:YAG laser (532 nm wavelength, 1.8 ns pulse, 25 Hz, 319 µm edge length, squared top hat, porcine RPE). We observed how the damaging individual pulse energy of a pulse train decreases with increasing pulse train, contrary to the predictions of probability summation model. The degree of reduction is especially strong at pulse numbers above 1 000, which indicates further damage mechanisms such as photo-chemical effects which seem to dominate above an energy dose of almost 200 $\mathrm {J/cm^{2}}$. The authors would like to thank Sven Schnichels, Heidi Mühl and Agnes Fietz of the Eye Clinic in Tübingen, Germany for supporting sample preparation. Furthermore, we would like to thank Brian Lund for providing the ProbitFit software. The authors declare that there are no conflicts of interest related to this article. 1. IEC 60825-1, Safety of Laser Products - Part 1: Equipment Classification and Requirements (International Electrotechnical Commission, Geneva, 2014), 3rd ed. 2. D. H. Sliney, J. Mellerio, V.-P. Gabel, and K. Schulmeister, "What is the meaning of threshold in laser injury experiments? implications for human exposure limits," Health Phys. 82(3), 335–347 (2002). [CrossRef] 3. K. Schulmeister and M. Jean, "Manifestation of the strong non-linearity of thermal injury," in International Laser Safety Conference, vol. 2011 (LIA, 2011), pp. 201–204. 4. R. Birngruber, F. Hillenkamp, and V. Gabel, "Theoretical investigations of laser thermal retinal injury," Health Phys. 48(6), 781–796 (1985). 5. S. L. Jacques, "Ratio of entropy to enthalpy in thermal transitions in biological tissues," J. Biomed. Opt. 11(4), 041108 (2006). [CrossRef] 6. R. Brinkmann, G. Hüttmann, J. Rögener, J. Roider, R. Birngruber, and C. P. Lin, "Origin of retinal pigment epithelium cell damage by pulsed laser irradiance in the nanosecond to microsecond time regimen," Lasers in Surgery and Medicine: The Official Journal of the American Society for Laser Medicine and Surgery 27(5), 451–464 (2000). [CrossRef] 7. G. Schüle, M. Rumohr, G. Huettmann, and R. Brinkmann, "RPE damage thresholds and mechanisms for laser exposure in the microsecond-to-millisecond time regimen," Investigative ophthalmology & visual science 46(2), 714–719 (2005). [CrossRef] 8. B. J. Lund, D. J. Lund, P. R. Edsall, and V. D. Gaines, "Laser-induced retinal damage threshold for repetitive-pulse exposure to 100-µs pulses," J. Biomed. Opt. 19(10), 105006 (2014). [CrossRef] 9. B. J. Lund, D. J. Lund, and P. R. Edsall, "Damage threshold from large retinal spot size repetetive-pulse laser exposures," in International Laser Safety Conference, vol. 2009 (LIA, 2009), pp. 84–87. 10. B. S. Gerstman, C. R. Thompson, S. L. Jacques, and M. E. Rogers, "Laser induced bubble formation in the retina," Lasers in Surgery and Medicine: The Official Journal of the American Society for Laser Medicine and Surgery 18(1), 10–21 (1996). [CrossRef] 11. J. Neumann and R. 
Brinkmann, "Boiling nucleation on melanosomes and microbeads transiently heated by nanosecond and microsecond laser pulses," J. Biomed. Opt. 10(2), 024001 (2005). [CrossRef] 12. S. Ramos, W. Stork, and N. Heussner, "Multiple-pulse damage thresholds on the retinal pigment epithelium layer using top hat profiles," in Optical Interactions with Tissue and Cells XXXI, vol. 11238 (International Society for Optics and Photonics, 2020), p. 112380D. 13. Y. Qiang, J. Liu, M. Dao, S. Suresh, and E. Du, "Mechanical fatigue of human red blood cells," Proc. Natl. Acad. Sci. 116(40), 19828–19834 (2019). [CrossRef] 14. D. J. Finney, Probit Analysis: A Statistical Treatment of the Sigmoid Response Curve (Cambridge University Press, 1952). 15. A. R. Menendez, F. E. MCheney, J. A. MZuclich, and P. MCrump, "Probability-summation model of multiple laser-exposure effects," Health Phys. 65(5), 523–528 (1993). [CrossRef] 16. B. J. Lund, "The probitfit program to analyze data from laser damage threshold studies," Tech. rep., Northrop Grumman Corp., San Antonio, TX, infomation technology (2006). 17. C. D. Clark and G. D. Buffington, "On the probability summation model for laser-damage thresholds," J. Biomed. Opt. 21(1), 015006 (2016). [CrossRef] 18. W. A. Dengler, J. Schulte, D. P. Berger, R. Mertelsmann, and H. H. Fiebig, "Development of a propidium iodide fluorescence assay for proliferation and cytotoxicity assays," Anti-Cancer Drugs 6(4), 522–532 (1995). [CrossRef] 19. M. A. Mainster, "Decreasing retinal photocoagulation damage: principles and techniques," in Seminars in Ophthalmology, vol. 14 (Taylor & Francis, 1999), pp. 200–209. 20. S. Ramos, P. Elmlinger, W. Stork, and N. Heussner, "Influence of the beam profile on laser-induced thresholds using explants," in Tissue Optics and Photonics, vol. 11363 (International Society for Optics and Photonics, 2020), p. 113631D. 21. D. H. Sliney and D. J. Lund, "Do we over-state the risk of multiple pulsed exposures?" in International Laser Safety Conference, vol. 2009 (LIA, 2009), pp. 93–98. 22. D. J. Lund and B. E. Stuck, "Retinal injury thresholds for blue wavelength lasers," in International Laser Safety Conference, vol. 2003 (Laser Institute of America, 2003), pp. 50–56. 23. International Commission on Non-Ionizing Radiation Protection, ICNIRP guidelines on limits of exposure to laser radiation of wavelengths between 180 nm and 1,000 µm," Heal. Phys. 105(3), 271–295 (2013). 24. W. T. Ham, H. A. Mueller, J. J. Ruffolo, and A. Clarke, "Sensitivity of the retina to radiation damage as a function of wavelength," Photochem. Photobiol. 29(4), 735–743 (1979). [CrossRef] 25. M. B. Rozanowska, "Light-induced damage to the retina: current understanding of the mechanisms and unresolved questions: a symposium-in-print," Photochem. Photobiol. 88(6), 1303–1308 (2012). [CrossRef] 27. G. Griess and M. Blankenstein, "Additivity and repair of actinic retinal lesions," Investig. Ophthalmol. & Vis. Sci. 20, 803–807 (1981). 28. J. Zhang, R. Sabarinathan, T. Bubel, D. R. Williams, and J. J. Hunter, "Spectral dependence of light exposure on retinal pigment epithelium (RPE) disruption in living primate retina," Investig. Ophthalmol. & Vis. Sci. 57, 2220 (2016). 29. J. Rögener, R. Brinkmann, and C. P. Lin, "Pump-probe detection of laser-induced microbubble formation in retinal pigment epithelium cells," J. Biomed. Opt. 9(2), 367–372 (2004). [CrossRef] 30. J. Neumann, "Mikroskopische Untersuchungen zur laserinduzierten Blasenbildung und - dynamik an absorbierenden Mikropartikeln," Ph.D. 
thesis (2005). IEC 60825-1, Safety of Laser Products - Part 1: Equipment Classification and Requirements (International Electrotechnical Commission, Geneva, 2014), 3rd ed. D. H. Sliney, J. Mellerio, V.-P. Gabel, and K. Schulmeister, "What is the meaning of threshold in laser injury experiments? implications for human exposure limits," Health Phys. 82(3), 335–347 (2002). K. Schulmeister and M. Jean, "Manifestation of the strong non-linearity of thermal injury," in International Laser Safety Conference, vol. 2011 (LIA, 2011), pp. 201–204. R. Birngruber, F. Hillenkamp, and V. Gabel, "Theoretical investigations of laser thermal retinal injury," Health Phys. 48(6), 781–796 (1985). S. L. Jacques, "Ratio of entropy to enthalpy in thermal transitions in biological tissues," J. Biomed. Opt. 11(4), 041108 (2006). R. Brinkmann, G. Hüttmann, J. Rögener, J. Roider, R. Birngruber, and C. P. Lin, "Origin of retinal pigment epithelium cell damage by pulsed laser irradiance in the nanosecond to microsecond time regimen," Lasers in Surgery and Medicine: The Official Journal of the American Society for Laser Medicine and Surgery 27(5), 451–464 (2000). G. Schüle, M. Rumohr, G. Huettmann, and R. Brinkmann, "RPE damage thresholds and mechanisms for laser exposure in the microsecond-to-millisecond time regimen," Investigative ophthalmology & visual science 46(2), 714–719 (2005). B. J. Lund, D. J. Lund, P. R. Edsall, and V. D. Gaines, "Laser-induced retinal damage threshold for repetitive-pulse exposure to 100-µs pulses," J. Biomed. Opt. 19(10), 105006 (2014). B. J. Lund, D. J. Lund, and P. R. Edsall, "Damage threshold from large retinal spot size repetetive-pulse laser exposures," in International Laser Safety Conference, vol. 2009 (LIA, 2009), pp. 84–87. B. S. Gerstman, C. R. Thompson, S. L. Jacques, and M. E. Rogers, "Laser induced bubble formation in the retina," Lasers in Surgery and Medicine: The Official Journal of the American Society for Laser Medicine and Surgery 18(1), 10–21 (1996). J. Neumann and R. Brinkmann, "Boiling nucleation on melanosomes and microbeads transiently heated by nanosecond and microsecond laser pulses," J. Biomed. Opt. 10(2), 024001 (2005). S. Ramos, W. Stork, and N. Heussner, "Multiple-pulse damage thresholds on the retinal pigment epithelium layer using top hat profiles," in Optical Interactions with Tissue and Cells XXXI, vol. 11238 (International Society for Optics and Photonics, 2020), p. 112380D. Y. Qiang, J. Liu, M. Dao, S. Suresh, and E. Du, "Mechanical fatigue of human red blood cells," Proc. Natl. Acad. Sci. 116(40), 19828–19834 (2019). D. J. Finney, Probit Analysis: A Statistical Treatment of the Sigmoid Response Curve (Cambridge University Press, 1952). A. R. Menendez, F. E. MCheney, J. A. MZuclich, and P. MCrump, "Probability-summation model of multiple laser-exposure effects," Health Phys. 65(5), 523–528 (1993). B. J. Lund, "The probitfit program to analyze data from laser damage threshold studies," Tech. rep., Northrop Grumman Corp., San Antonio, TX, infomation technology (2006). C. D. Clark and G. D. Buffington, "On the probability summation model for laser-damage thresholds," J. Biomed. Opt. 21(1), 015006 (2016). W. A. Dengler, J. Schulte, D. P. Berger, R. Mertelsmann, and H. H. Fiebig, "Development of a propidium iodide fluorescence assay for proliferation and cytotoxicity assays," Anti-Cancer Drugs 6(4), 522–532 (1995). M. A. Mainster, "Decreasing retinal photocoagulation damage: principles and techniques," in Seminars in Ophthalmology, vol. 
(1) $P(N) = 1 - (1 - p_{\mathrm{single}})^{N}$
(2) $p_{\mathrm{single}} = 1 - (0.5)^{1/N}$

Table 1. ED50 measured for exposure to a 319 µm squared top hat of 1.8-ns-duration pulses at λ = 532 nm. 95% confidence intervals on the ED50 are given in parentheses. A "−" indicates data insufficient to obtain confidence intervals. The slope is defined by the ratio of ED84 to ED50. Intensity modulation factor was 1.15 ± 0.1.

Pulse train | ED50 [µJ]             | Slope | #Eyes/Samples/Exposures | Exposure H [mJ/cm²] | IMF-Corrected H [mJ/cm²]
1           | 31.16 (30.48 – 31.96) | 1.09  | 13/8/510                | 30.6 ± 2.76         | 35.2 ± 4.41
10          | 21.20 (−)             | 1.01  | 3/3/106                 | 20.8 ± 0.21         | 23.9 ± 2.09
100         | 19.37 (−)             | 1.01  | 4/4/76                  | 19.0 ± 0.19         | 27.9 ± 1.92
1 000       | 17.37 (15.93 – 18.40) | 1.10  | 6/6/109                 | 17.1 ± 1.71         | 21.9 ± 2.6
10 000      | 13.17 (11.61 – 14.58) | 1.14  | 6/13/31                 | 12.9 ± 1.81         | 14.9 ± 2.45
20 000      | 11.56 (−)             | 1.01  | 15/11/27                | 11.4 ± 0.11         | 13.1 ± 1.14
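For orientation, Eq. (2) can be evaluated directly at the pulse numbers listed in Table 1; the short Python snippet below is an illustrative addition, not part of the original paper, and simply restates the single-pulse damage probability that the probability summation model assigns when a train of N pulses sits at ED50.

# Single-pulse damage probability implied by Eq. (2): p_single = 1 - 0.5**(1/N)
for N in [1, 10, 100, 1000, 10000, 20000]:
    p_single = 1 - 0.5 ** (1 / N)
    print(f"N = {N:>6}: p_single = {p_single:.2e}")
# e.g. N = 1 gives 0.5, while N = 20 000 gives about 3.5e-05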
CommonCrawl
Nano chitosan–zinc complex improves the growth performance and antioxidant capacity of the small intestine in weaned piglets Minyang Zhang, Guojun Hou, Ping Hu, Dan Feng, Jing Wang, Weiyun Zhu Journal: British Journal of Nutrition, First View The present study was conducted to test the hypothesis that dietary supplementation with a nano chitosan–zinc complex (CP–Zn, 100 mg/kg Zn) could alleviate weaning stress in piglets challenged with enterotoxigenic Escherichia coli K88 by improving growth performance and intestinal antioxidant capacity. The in vivo effects of CP–Zn on growth performance variables (including gastrointestinal digestion and absorption functions and the levels of key proteins related to muscle growth) and the antioxidant capacity of the small intestine (SI) were evaluated in seventy-two weaned piglets. The porcine jejunal epithelial cell line IPEC-J2 was used to further investigate the antioxidant mechanism of CP–Zn in vitro. The results showed that CP–Zn supplementation increased the jejunal villus height and decreased the diarrhoea rate in weaned piglets. CP–Zn supplementation also improved growth performance (average daily gain and average daily feed intake), increased the activity of carbohydrate digestion-related enzymes (amylase, maltase, sucrase and lactase) and the mRNA expression levels of nutrient transporters (Na+-dependent glucose transporter 1, glucose transporter type 2, peptide transporter 1 and excitatory amino acid carrier 1) in the jejunum and up-regulated the expression levels of mammalian target of rapamycin (mTOR) pathway-related proteins (insulin receptor substrate 1, phospho-mTOR and phospho-p70S6K) in muscle. In addition, CP–Zn supplementation increased glutathione content, enhanced total superoxide dismutase (T-SOD) and glutathione peroxidase (GSH-px) activity, and reduced malondialdehyde (MDA) content in the jejunum. Furthermore, CP–Zn decreased the content of MDA and reactive oxygen species, enhanced the activity of T-SOD and GSH-px and up-regulated the expression levels of nuclear factor erythroid 2-related factor 2 (Nrf2) pathway-related proteins (Nrf2, NAD(P)H:quinone oxidoreductase 1 and haeme oxygenase 1) in lipopolysaccharide-stimulated IPEC-J2 cells. Collectively, these findings indicate that CP–Zn supplementation can improve growth performance and the antioxidant capacity of the SI in piglets, thus alleviating weaning stress. 
Class Consciousness of Rural Migrant Children in China Jiaxin Chen, Dan Wang Journal: The China Quarterly , First View The state of class consciousness of working-class children in China has received scant attention in the scholarly literature. This study examines the class consciousness of rural migrant children as they are about to join their migrant parents and become "China's new workers." Qualitative investigations were conducted in two primary schools in Beijing. Focus group and individual interviews were held with 87 fifth- and sixth-grade migrant children in the two case schools and 324 valid student questionnaires were collected. The findings reveal that migrant children are aware of the unequal class relationships suffered by migrant workers; however, their interpretations of class-based injustices exhibit false consciousness, shadowed by individualism, meritocracy and the duality of images. Family and school may play vital roles in shaping migrant children's class consciousness. Infection, screening, and psychological stress of health care workers with COVID-19 in a non-frontline clinical department Ge Wang, Jia-Lun Guan, Xiu-Qing Zhu, Mu-Ru Wang, Dan Fang, Yue Wen, Meng Xie, De-An Tian, Pei-Yuan Li Journal: Disaster Medicine and Public Health Preparedness / Accepted manuscript To investigate risk factors and psychological stress of health care workers (HCWs) with COVID-19 in a non-frontline clinical department. Data of 2 source patients and all HCWs with infection risk were obtained in a department in Wuhan from January to February 2020. A questionnaire was designed to evaluate psychological stress of COVID-19 on HCWs. The overall infection rate was 4.8% in HCWs. 10 of 25 HCWs who contacted with 2 source patients were diagnosed with confirmed COVID-19 (8/10) and suspected COVID-19 (2/10). Other 2 HCWs were transmitted by other patients or colleagues. Close care behaviours included physical examination (6/12), life nursing (4/12), ward rounds (4/12), endoscopic examination (2/12). Contacts fluctuated from 1 to 24 times and each contact was short (8.1 min ± 5.6 min). HCWs wore surgical masks (11/12), gloves (7/12), and isolation clothing (3/12) when providing medical care. Most HCWs experienced a mild course with 2 asymptomatic infections, taking 9.8 days and 20.9 days to obtain viral shedding and clinical cure, respectively. Psychological stress included worry (58.3%), anxiety (83.3%), depression (58.3%), and insomnia (58.3%). Close contact with COVID-19 patients and insufficient protection were key risk factors. Precaution measures and psychological support on COVID-19 is urgently required for HCWs. Effects of exogenous C18 unsaturated fatty acids on milk lipid synthesis in bovine mammary epithelial cells Hang Zhang, Ni Dan, Changjin Ao, Sizhen Wang, Khas Erdene, Mohammed Umair Ashraf We determined the effects of a combination of C18 unsaturated fatty acids (C18-UFAs) consisting of oleic, linoleic, and linolenic acids on milk lipogenesis in bovine mammary epithelial cells (BMECs). By orthogonal experiments to determine cellular triacylglycerol (TAG) accumulation, a combination of 200 μmol/l C18 : 1, 50 μmol/l C18 : 2, and 2 μmol/l C18 : 3 was selected as C18-UFAs combination treatment, and culture in medium containing fatty acid-free bovine serum albumin was used as the control. The expression of genes related to milk lipid synthesis and intracellular FA composition was measured. 
The results showed that cytosolic TAG formation was higher under C18-UFAs treatment than under control treatment. The mRNA expression of acetyl-CoA carboxylase-α (ACACA), fatty acid synthase (FASN), and peroxisome proliferator-activated receptor gamma (PPARG) did not differ between treatments. The abundance of stearoyl-CoA desaturase (SCD) and acyl-CoA synthetase long-chain family member 1 (ACSL1) was higher, whereas that of sterol regulatory element binding transcription factor 1 (SREBF-1) was lower after C18-UFAs treatment compared to control treatment. The C16 : 0 and SFA content was decreased following C18-UFAs treatment compared to control treatment, while the cis-9 C18 : 1 and UFA content was increased. In conclusion, C18-UFAs could stimulate triglyceride accumulation, increase the cellular UFA concentration, and regulate lipogenic genes in BMECs. Cost-effectiveness of WHO Problem Management Plus for adults with mood and anxiety disorders in a post-conflict area of Pakistan: randomised controlled trial Syed Usman Hamdani, Zill-e- Huma, Atif Rahman, Duolao Wang, Tao Chen, Mark van Ommeren, Dan Chisholm, Saeed Farooq Journal: The British Journal of Psychiatry / Volume 217 / Issue 5 / November 2020 With the development of evidence-based interventions for treatment of priority mental health conditions in humanitarian settings, it is important to establish the cost-effectiveness of such interventions to enable their scale-up. To evaluate the cost-effectiveness of the Problem Management Plus (PM+) intervention compared with enhanced usual care (EUC) for common mental disorders in primary healthcare in Peshawar, Pakistan. Trial registration ACTRN12614001235695 (anzctr.org.au). We randomly allocated 346 participants to either PM+ (n = 172) or EUC (n = 174). Effectiveness was measured using the Hospital Anxiety and Depression Scale (HADS) at 3 months post-intervention. Cost-effectiveness analysis was performed as incremental costs (measured in Pakistani rupees, PKR) per unit change in anxiety, depression and functioning scores. The total cost of delivering PM+ per participant was estimated at PKR 16 967 (US$163.14) using an international trainer and supervisor, and PKR 3645 (US$35.04) employing a local trainer. The mean cost per unit score improvement in anxiety and depression symptoms on the HADS was PKR 2957 (95% CI 2262–4029) (US$28) with an international trainer/supervisor and PKR 588 (95% CI 434–820) (US$6) with a local trainer/supervisor. The mean incremental cost-effectiveness ratio (ICER) to successfully treat a case of depression (PHQ-9 ≥ 10) using an international supervisor was PKR 53 770 (95% CI 39 394–77 399) (US$517), compared with PKR 10 705 (95% CI 7731–15 627) (US$102.93) using a local supervisor. The PM+ intervention was more effective but also more costly than EUC in reducing symptoms of anxiety, depression and improving functioning in adults impaired by psychological distress in a post-conflict setting of Pakistan. Electrocatalytic water splitting using organic polymer materials-based hybrid catalysts Lijuan Niu, Lu Sun, Li An, Dan Qu, Xiayan Wang, Zaicheng Sun Journal: MRS Bulletin / Volume 45 / Issue 7 / July 2020 Sustainable and green energy sources are in high demand to meet the current human energy needs and environmental requirements. Hydrogen energy, with the highest energy density and zero carbon emission, is considered a potential solution. Hydrogen is primarily produced by splitting water. 
Rationally designed electrocatalysts are required to promote the cathodic hydrogen evolution reaction (HER) and the anodic oxygen evolution reaction (OER). Organic polymer matrices provide new opportunities for electrocatalytic water splitting due to their special physical and chemical characteristics and thermal stability. This article explains the role of organic polymers in electrocatalytic water decomposition from three aspects: ion-conductive polymers, conjugated conductive polymers, and carbon materials derived from organic polymers. We hope that this article will provide more rational ideas and promote the design of organic polymers for water-splitting electrocatalysis, and furnish more technical insights for the future of water electrolysis. Dynamic Liquidity Management by Corporate Bond Mutual Funds Hao Jiang, Dan Li, Ashley Wang Journal: Journal of Financial and Quantitative Analysis , First View Published online by Cambridge University Press: 22 June 2020, pp. 1-31 How do corporate bond mutual funds manage liquidity to meet investor redemptions? We show that during tranquil market conditions, these funds tend to reduce liquid asset holdings to meet redemptions, temporarily increasing relative exposures to illiquid asset classes. When aggregate uncertainty rises, however, they tend to scale down their liquid and illiquid assets proportionally to preserve portfolio liquidity. This fund-level dynamic management of liquidity appears to affect the broad financial market: Redemptions from the corporate bond fund sector lead to more corporate bond selling during high-uncertainty periods, which generates price pressures and predicts strong return reversals. Study on cytotoxicity of polyethylene glycol and albumin bovine serum molecule–modified quantum dots prepared by hydrothermal method Enlv Hong, Lumin Liu, Chen Li, Dan Shan, Hailong Cao, Baiqi Wang Journal: Journal of Materials Research / Volume 35 / Issue 9 / 14 May 2020 Fluorescent quantum dots (QDs) modified with polyethylene glycol (PEG) and albumin bovine serum (BSA) have profound application in the detection and treatment of hepatocellular carcinoma (HCC) cells. In the present study, the effects and mechanism of PEG and BSA modification on the cytotoxicity of QDs have been explored. It was found that the diameter of the as-prepared QDs, PEG@QDs, BSA@QDs is 3–5 nm, 4–5 nm, and 4–6 nm, respectively. With increase of the treatment time from 0 to 24 h, the HCC cell viability treated with QDs, PEG@QDs, and BSA@QDs obviously decreases, showing a certain time-dependent manner. When the concentration of several nanomaterials is increased from 10 to 90 nM, the cell viability decreases accordingly, exhibiting a certain concentration-dependent manner. Under the same concentration change conditions, the reactive oxygen species contents of cells treated by QDs, PEG@QDs, and BSA@QDs also rise from 7.9 × 103, 6.7 × 103, and 4.7 × 103 to 13.2 × 103, 14.3 × 103, and 12.3 × 103, respectively. In these processes, superoxide dismutase does not play a major role. This study provides strong foundation and useful guidance for QD applications in the diagnosis and treatment of HCC. 
Core principles for infection prevention in hemodialysis centers during the COVID-19 pandemic Gang Chen, Yangzhong Zhou, Lei Zhang, Ying Wang, Rong-rong Hu, Xue Zhao, Dan Song, Jing-hua Xia, Yan Qin, Li-meng Chen, Xue-mei Li Journal: Infection Control & Hospital Epidemiology / Volume 41 / Issue 7 / July 2020 Co-circulation of influenza A(H1N1), A(H3N2), B(Yamagata) and B(Victoria) during the 2017−2018 influenza season in Zhejiang Province, China Haiyan Mao, Yi Sun, Yin Chen, Xiuyu Lou, Zhao Yu, Xinying Wang, Zheyuan Ding, Wei Cheng, Dan Zhang, Yanjun Zhang, Jianmin Jiang Journal: Epidemiology & Infection / Volume 148 / 2020 Published online by Cambridge University Press: 14 February 2020, e296 Influenza is a major human respiratory pathogen. Due to the high levels of influenza-like illness (ILI) in Zhejiang, China, the control and prevention of influenza was challenging during the 2017–2018 season. To identify the clinical spectrum of illness related to influenza and characterise the circulating influenza virus strains during this period, the characteristics of ILI were studied. Viral sequencing and phylogenetic analyses were conducted to investigate the virus types, substitutions at the amino acid level and phylogenetic relationships between sequences. This study has shown that the 2017/18 influenza season was characterised by the co-circulation of influenza A (H1N1) pdm09, A (H3N2) and B viruses (both Yamagata and Victoria lineage). From week 36 of 2017 to week 12 of 2018, ILI cases accounted for 5.58% of the total number of outpatient and emergency patient visits at the surveillance sites. Several amino acid substitutions were detected. Vaccination mismatch may be a potential reason for the high percentage of ILI. Furthermore, it is likely that multiple viral introductions played a role in the endemic co-circulation of influenza in Zhejiang, China. More detailed information regarding the molecular epidemiology of influenza should be included in long-term influenza surveillance. LYZ Matrices and SL( $n$ ) Contravariant Valuations on Polytopes General convexity Polytopes and polyhedra Dan Ma, Wei Wang Journal: Canadian Journal of Mathematics , First View All SL( $n$ ) contravariant symmetric matrix valued valuations on convex polytopes in $\mathbb{R}^{n}$ are completely classified without any continuity assumptions. The general Lutwak–Yang–Zhang matrix is shown to be essentially the unique such valuation. Diet quality is associated with reduced risk of hypertension among Inner Mongolia adults in northern China Xuemei Wang, Aiping Liu, Maolin Du, Jing Wu, Wenrui Wang, Yonggang Qian, Huiqiu Zheng, Dan Liu, Xi Nan, Lu Jia, Ruier Song, Danyan Liang, Ruiqi Wang, Peiyu Wang Journal: Public Health Nutrition / Volume 23 / Issue 9 / June 2020 The present study investigated the association between dietary patterns and hypertension applying the Chinese Dietary Balance Index-07 (DBI-07). A cross-sectional study on adult nutrition and chronic disease in Inner Mongolia. Dietary data were collected using 24 h recall over three consecutive days and weighing method. Dietary patterns were identified using principal components analysis. Generalized linear models and multivariate logistic regression models were used to examine the associations between DBI-07 and dietary patterns, and between dietary patterns and hypertension. Inner Mongolia (n 1861). A representative sample of adults aged ≥18 years in Inner Mongolia. 
Four major dietary patterns were identified: 'high protein', 'traditional northern', 'modern' and 'condiments'. Generalized linear models showed higher factor scores in the 'high protein' pattern were associated with lower DBI-07 (βLBS = −1·993, βHBS = −0·206, βDQD = −2·199; all P < 0·001); the opposite in the 'condiments' pattern (βLBS = 0·967, βHBS = 0·751, βDQD = 1·718; all P < 0·001). OR for hypertension in the highest quartile of the 'high protein' pattern compared with the lowest was 0·374 (95 % CI 0·244, 0·573; Ptrend < 0·001) in males. OR for hypertension in the 'condiments' pattern was 1·663 (95 % CI 1·113, 2·483; Ptrend < 0·001) in males, 1·788 (95 % CI 1·155, 2·766; Ptrend < 0·001) in females. Our findings suggested a higher-quality dietary pattern evaluated by DBI-07 was related to decreased risk for hypertension, whereas a lower-quality dietary pattern was related to increased risk for hypertension in Inner Mongolia. Shotgun metagenomics reveals both taxonomic and tryptophan pathway differences of gut microbiota in major depressive disorder patients Wen-tao Lai, Wen-feng Deng, Shu-xian Xu, Jie Zhao, Dan Xu, Yang-hui Liu, Yuan-yuan Guo, Ming-bang Wang, Fu-sheng He, Shu-wei Ye, Qi-fan Yang, Tie-bang Liu, Ying-li Zhang, Sheng Wang, Min-zhi Li, Ying-jia Yang, Xin-hui Xie, Han Rong Journal: Psychological Medicine , First View The microbiota–gut–brain axis, especially the microbial tryptophan (Trp) biosynthesis and metabolism pathway (MiTBamp), may play a critical role in the pathogenesis of major depressive disorder (MDD). However, studies on the MiTBamp in MDD are lacking. The aim of the present study was to analyze the gut microbiota composition and the MiTBamp in MDD patients. We performed shotgun metagenomic sequencing of stool samples from 26 MDD patients and 29 healthy controls (HCs). In addition to the microbiota community and the MiTBamp analyses, we also built a classification based on the Random Forests (RF) and Boruta algorithm to identify the gut microbiota as biomarkers for MDD. The Bacteroidetes abundance was strongly reduced whereas that of Actinobacteria was significantly increased in the MDD patients compared with the abundance in the HCs. Most noteworthy, the MDD patients had increased levels of Bifidobacterium, which is commonly used as a probiotic. Four Kyoto Encyclopedia of Genes and Genomes (KEGG) orthologies (KOs) (K01817, K11358, K01626, K01667) abundances in the MiTBamp were significantly lower in the MDD group. Furthermore, we found a negative correlation between the K01626 abundance and the HAMD scores in the MDD group. Finally, RF classification at the genus level can achieve an area under the receiver operating characteristic curve of 0.890. The present findings enabled a better understanding of the changes in gut microbiota and the related Trp pathway in MDD. Alterations of the gut microbiota may have the potential as biomarkers for distinguishing MDD patients form HCs. Malnutrition screening and acute kidney injury in hospitalised patients: a retrospective study over a 5-year period from China Chenyu Li, Lingyu Xu, Chen Guan, Long Zhao, Congjuan Luo, Bin Zhou, Xiaosu Zhang, Jing Wang, Jun Zhao, Junyan Huang, Dan Li, Hong Luan, Xiaofei Man, Lin Che, Yanfei Wang, Hui Zhang, Yan Xu Malnutrition and acute kidney injury (AKI) are common complications in hospitalised patients, and both increase mortality; however, the relationship between them is unknown. 
This is a retrospective propensity score matching study enrolling 46 549 inpatients, aimed to investigate the association between Nutritional Risk Screening 2002 (NRS-2002) and AKI and to assess the ability of NRS-2002 and AKI in predicting prognosis. In total, 37 190 (80 %) and 9359 (20 %) patients had NRS-2002 scores <3 and ≥3, respectively. Patients with NRS-2002 scores ≥3 had longer lengths of stay (12·6 (sd 7·8) v. 10·4 (sd 6·2) d, P < 0·05), higher mortality rates (9·6 v. 2·5 %, P < 0·05) and higher incidence of AKI (28 v. 16 %, P < 0·05) than patients with normal nutritional status. The NRS-2002 showed a strong association with AKI, that is, the risk of AKI changed in parallel with the score of the NRS-2002. In short- and long-term survival, patients with a lower NRS-2002 score or who did not have AKI achieved a significantly lower risk of mortality than those with a high NRS-2002 score or AKI. Univariate Cox regression analyses indicated that both the NRS-2002 and AKI were strongly related to long-term survival (AUC 0·79 and 0·71) and that the combination of the two showed better accuracy (AUC 0·80) than the individual variables. In conclusion, malnutrition can increase the risk of AKI and both AKI and malnutrition can worsen the prognosis that the undernourished patients who develop AKI yield far worse prognosis than patients with normal nutritional status. Dynamics of arbuscular mycorrhizal fungi in relation to root colonization, spore density, and soil properties among different spreading stages of the exotic plant threeflower beggarweed (Desmodium triflorum) in a Zoysia tenuifolia lawn Xiaoge Han, Changchao Xu, Yutao Wang, Dan Huang, Qiang Fan, Guorong Xin, Christoph Müller Journal: Weed Science / Volume 67 / Issue 6 / November 2019 Weed invasion is a prevailing problem in modestly managed lawns. Less attention has been given to the exploration of the role of arbuscular mycorrhizal fungi (AMF) under different invasion pressures from lawn weeds. We conducted a four-season investigation into a Zoysia tenuifolia Willd. ex Thiele (native turfgrass)–threeflower beggarweed [Desmodium triflorum (L.) DC.] (invasive weed) co-occurring lawn. The root mycorrhizal colonizations of the two plants, the soil AM fungal communities and the spore densities under five different coverage levels of D. triflorum were investigated. Desmodium triflorum showed significantly higher root hyphal and vesicular colonizations than those of Z. tenuifolia, while the root colonizations of both species varied significantly among seasons. The increased coverage of D. triflorum resulted in the following effects: (1) the spore density initially correlated with mycorrhizal colonizations of Z. tenuifolia but gradually correlated with those of D. triflorum. (2) Correlations among soil properties, spore densities, and mycorrhizal colonizations were more pronounced in the higher coverage levels. (3) Soil AMF community compositions and relative abundances of AMF operational taxonomic units changed markedly in response to the increased invasion pressure. The results provide strong evidence that D. triflorum possessed a more intense AMF infection than Z. tenuifolia, thus giving rise to the altered host contributions to sporulation, soil AMF communities, relations of soil properties, spore densities, and root colonizations of the two plants, all of which are pivotal for the successful invasion of D. triflorum in lawns. 
Electrospinning of PAN/Ag NPs nanofiber membrane with antibacterial properties Chenrong Wang, Wei Wang, Lishan Zhang, Shan Zhong, Dan Yu Journal: Journal of Materials Research / Volume 34 / Issue 10 / 28 May 2019 Published online by Cambridge University Press: 04 March 2019, pp. 1669-1677 Durable antibacterial PAN/Ag NPs nanofiber membrane was prepared by electrospinning. In this study, Ag NPs were composed by applying polyvinyl pyrrolidone as a dispersant and sodium borohydride (NaBH4) as a reductant. The composite nanofiber films and silver nanoparticles were characterized and tested by transmission electron microscopy, scanning electron microscopy, energy dispersive spectroscopy, X-ray photoelectron spectroscopy, Fourier-transform infrared spectroscopy, X-ray diffraction, and Brunauer Emmett Teller (BET) and thermogravimetric analysis test. The specific surface area of PAN/Ag NPs (1%) and PAN/Ag NPs (3%) nanofiber membrane were about 25.00 m2/g calculated by the BET equation. It can be seen that the pore sizes of PAN, PAN/Ag NPs (1%), and PAN/Ag NPs (3%) nanofiber membranes were mainly distributed between 30 and 40 nm. The maximum removal rate of PM10, PM2.5, and PM1.0 was about 94%, 89%, and 82%, respectively, indicating it has a good filtering effect. The results also demonstrated that this membrane has bacterial reduction of over 99.9% for E. coli and S. aureus, respectively. In addition, the thermal stability of the fiber membrane with Ag NPs has no clear difference when compared to pure PAN nanofiber membrane and also has better moisture conductivity, indicating it is a potential candidate applied in biopharmaceutical antiseptic protection products. Fecal microbiota transplantation in an elderly patient with mental depression Ting Cai, Xiao Shi, Ling-zhi Yuan, Dan Tang, Fen Wang Determination of the barrier height of iridium with hydrogen-terminated single crystal diamond Yan-Feng Wang, Wei Wang, Xiaohui Chang, Juan Wang, Jiao Fu, Tianfei Zhu, Zongchen Liu, Yan Liang, Dan Zhao, Zhangcheng Liu, Minghui Zhang, Kaiyue Wang, Hong-Xing Wang, Ruozheng Wang Journal: MRS Communications / Volume 9 / Issue 1 / March 2019 Direct determination of barrier height (ΦBH) value between Ir and single crystal (001) hydrogen-terminated diamond with lightly boron doped has been performed using x-ray photoelectron spectroscopy technique. 70 nm Ir islands were formed on hydrogen-terminated diamond surface using anodic aluminum oxide. The ΦBH value for Ir/hydrogen-terminated diamond was −0.43 ± 0.14 eV, indicating that Ir was a suitable metal for ohmic contact with hydrogen-terminated diamond. The band diagram of Ir/hydrogen-terminated diamond was obtained. The experimental ΦBH was compared with the theoretical ΦBH in this work. Efficient visible light degradation of dyes in wastewater by nickel–phosphorus plating–titanium dioxide complex electroless plating fabric Xiaodong Ding, Wei Wang, Ao Zhang, Lishan Zhang, Dan Yu Journal: Journal of Materials Research / Volume 34 / Issue 6 / 28 March 2019 Published online by Cambridge University Press: 07 February 2019, pp. 999-1010 Dyeing wastewater has caused serious environmental problems nowadays. In this work, nickel–phosphorus plating–titanium dioxide (Ni-P-TiO2) electroless plating polyimide (PI) fabric was fabricated as an excellent visible light response composite. First, polyaniline (PANI) was in situ polymerized on the surface of the PI fabric. Second, PANI reduced palladium ions to be active seeds for initiating electroless plating of Ni-P-TiO2 layer. 
Finally, the Ni-P-TiO2/PANI/PI fabric with all-in-one structure was prepared, which can effectively overcome the drawbacks of poor loading fastness and insensitivity to visible light response. It was characterized by scanning electron microscopy, energy-dispersive spectroscopy, X-ray diffraction, X-ray photoelectron spectroscopy, thermogravimetric analysis, and ultraviolet–visible diffuse reflectance spectroscopy. The photocatalytic activity was evaluated by degrading reactive blue 19, methylene blue, and reactive red (M-3BE) under visible light irradiation. The results show that the degradation rates of the all three dyes were over 91% with robust cycle stability for repeated 5 cycles of use. The possible photocatalytic degradation mechanism of fabrics was also proposed based on free radical and hole removal experiments. Some congruences involving fourth powers of central q-binomial coefficients Sequences and sets Victor J. W. Guo, Su-Dan Wang Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 150 / Issue 3 / June 2020 We prove some congruences on sums involving fourth powers of central q-binomial coefficients. As a conclusion, we confirm the following supercongruence observed by Long [Pacific J. Math. 249 (2011), 405–418]: $$\sum\limits_{k = 0}^{((p^r-1)/(2))} {\displaystyle{{4k + 1} \over {{256}^k}}} \left( \matrix{2k \cr k} \right)^4\equiv p^r\quad \left( {\bmod p^{r + 3}} \right),$$ where p⩾5 is a prime and r is a positive integer. Our method is similar to but a little different from the WZ method used by Zudilin to prove Ramanujan-type supercongruences.
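As an aside (not part of the abstract above), the displayed congruence is easy to check numerically for small p and r by working modulo p^(r+3) and replacing 1/256^k with the modular inverse of 256, which exists for odd p. A minimal Python sketch of that check (requires Python 3.8+ for pow(256, -1, mod) and math.comb):

from math import comb

def lhs_mod(p, r):
    # sum_{k=0}^{(p^r - 1)/2} (4k+1)/256^k * binom(2k, k)^4, reduced modulo p^(r+3)
    mod = p ** (r + 3)
    inv256 = pow(256, -1, mod)  # 256 is invertible modulo p^(r+3) for odd p
    total = 0
    for k in range((p ** r - 1) // 2 + 1):
        total = (total + (4 * k + 1) * pow(inv256, k, mod) * pow(comb(2 * k, k), 4, mod)) % mod
    return total

# The stated supercongruence says lhs_mod(p, r) == p**r (mod p**(r+3)) for primes p >= 5
for p, r in [(5, 1), (7, 1), (5, 2)]:
    print((p, r), lhs_mod(p, r), p ** r)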
CommonCrawl
When did the first carbon nucleus in the Universe come into existence?
I am a chemist with a passion for astrophysics and particle physics, and one of the most marvellous things I have learned in my life is the process of stellar nucleosynthesis. It saddens me how my colleagues point to the periodic table and draw compound structures so often, yet never seem to be curious about where the elements on which they rely came from, and completely miss the beauty underneath. As far as I understand, there are several types of nucleosynthesis conditions (big bang, stellar, supernova, black hole accretion disk, artificial, pycnonuclear, and perhaps others), the first three of which are most important when discussing the composition of the Universe, but I wish to focus on an aspect of big bang nucleosynthesis. The Universe became extremely hot after inflation (or so it is thought), and while cooling down there was a very brief period in which the temperatures and densities were adequate for protons and neutrons to exist and fuse into heavier elements. However, this period of nucleosynthesis was so brief that most of the nucleons formed in baryogenesis didn't even have time to fuse, ending up as hydrogen. Of the amount that did manage to fuse, almost all of it stopped at helium, helped by the exceptional stability of the $^4\mathrm{He}$ nucleus relative to its neighbours. Only a tiny amount of lithium is said to have been made, and I've rarely heard anyone discussing elements heavier than that. It is often said that "metals" (in the astronomical sense) only really appeared after stellar nucleosynthesis began, but there was already a lot of matter around after baryogenesis ended (about the same amount as the $10^{80}$ baryons present now?), so even the tiniest atomic fraction of an element could correspond to galactic masses' worth of it. It would be really neat if it turned out that carbon, the very stuff of life as we know it, didn't exist at all until over a hundred million years after the big bang, when the first generation of stars began to light up. However, I don't know if this is the case, as confirmation or refutation relies on precise quantitative modelling of the big bang nucleosynthesis. Is it possible to figure out when the first carbon nuclei came into existence? I tried using a search engine, but my specific query is completely drowned in links covering the more general aspects of nucleosynthesis.
Note: I suppose that if one were to consider the matter outside the observable Universe, then there probably were carbon nuclei even the tiniest timestep after nucleosynthesis began, because the sheer ridiculous amount of space expected to be out there could well overwhelm the nigh-impossible odds of carbon formation. Thus, the calculation is more interesting if limited to our observable volume.
cosmology astrophysics nuclear-physics big-bang fusion
Nicolau Saker Neto
See physics.stackexchange.com/a/3837/37548 's first paragraph, it also answers your question. – Kvothe Mar 28 '14 at 2:08
That's the crux of my question. Everyone says that essentially all of the elements heavier than lithium come from stellar nucleosynthesis, but the problem is hidden in quantifying what "essentially all" means. Basically, have we performed any calculations which can resolve whether we should statistically expect at least one carbon atom from nucleosynthesis?
– Nicolau Saker Neto Mar 28 '14 at 2:17
If you mean atom, stick with atom, but if you mean nucleus then edit. The two have different answers. – ProfRob Jul 14 '15 at 22:15
@RobJeffries Good point, it never occurred to me, but you are right. I'll edit the title to avoid confusion. – Nicolau Saker Neto Jul 14 '15 at 23:04
The article The path to metallicity: synthesis of CNO elements in standard BBN attempts to quantify the amount of carbon produced during big bang nucleosynthesis. It concludes that the ratio of carbon-12 formed to hydrogen was $\sim 4 \times 10^{-16}$ with lesser amounts of carbon-13 and carbon-14.
DavePhD
Very interesting. The amount of carbon is actually far higher than I expected; I assumed the high energy barriers and limited intermediate abundances would have created a far sharper decline in production with increased nuclei mass. So even though the amount of carbon after big bang nucleosynthesis was approximately 10 orders of magnitude smaller than it is today, in absolute terms, there was still a hell of a lot of it in the observable Universe, some $10^{65}$ carbon nuclei. – Nicolau Saker Neto Apr 21 '14 at 21:00
The question asks when was the first carbon atom formed! These are the first carbon nuclei. I know, pedantry, but it means the answer is more like $10^5$ years after the BB. – ProfRob Jul 14 '15 at 11:44
@RobJeffries First we'd have to decide if "atom" means only neutral atoms or carbon with at least one electron. Then from density and temperature find the fraction of carbon in each ionization state like in table 4 here: adsabs.harvard.edu/full/1961ApJ...134..435R And finally consider the $10^{65}$ carbon nuclei and calculate the probability that one of these has been an atom. – DavePhD Jul 14 '15 at 12:30
Yes. I agree with your analysis - an atom would normally mean the full complement of electrons. – ProfRob Jul 14 '15 at 12:59
Problem solved! Now all you need to do is work out the when (in the first few minutes). – ProfRob Jul 14 '15 at 23:34
Carbon has to be produced by the triple-alpha process because there is no stable nucleus with 8 or 5 nucleons. The probability of this is very low, because it requires three different particles to be in the same place at the same time. You'll note that the Wikipedia article says: One consequence of this is that no significant amount of carbon was produced in the Big Bang because within minutes after the Big Bang, the temperature fell below that necessary for nuclear fusion. There would have been some carbon created in the Big Bang, if only because the universe is infinite (or at least very big) so even the small probability of the triple alpha reaction happening in a few minutes means the reaction must occur to some extent. I don't have figures for how much carbon was created, though with some head scratching and approximations about the conditions during nucleosynthesis it could be estimated. However it's clear that the vast majority of carbon has been created in stars.
John Rennie
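A rough back-of-the-envelope check of these numbers: taking the $\sim 10^{80}$ baryons mentioned in the question (the large majority of which end up as hydrogen) together with the $\mathrm{C/H} \approx 4 \times 10^{-16}$ ratio from the cited article gives $N_{\mathrm{C}} \approx 4 \times 10^{-16} \times 10^{80} \approx 4 \times 10^{64}$ carbon nuclei in the observable Universe, consistent with the $10^{65}$ figure quoted in the comments above.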
CommonCrawl
meiron.net
Just another researcher's homepage
Random sciency or techy thoughts.

Restaurant reviews revisited
Over a year ago I analyzed restaurant reviews in Budapest and came up with a complementary way to rank businesses. This was based on the 4- and 2-star reviews rather than the average score. I followed up by repeating this analysis for Toronto. Below are the results (particularly comparing the two cities) and some new thoughts.

Three-body encounter rate in one dimension
Walking long and mildly busy streets in Budapest, I noticed that 3-body encounters were surprisingly common. 2-body encounters are when two people walking in opposite directions meet, or they walk in the same direction and one passes the other. These seemed rare enough (only a few per minute on my daily route), so I got curious why a relatively large fraction of them involved a third person walking independently. The full mathematical description of the problem can be found here and a simple applet to calculate the encounter rate using a toy model can be found here.
In more detail, a 3-body encounter is defined as the moment in time at which the largest pairwise separation among the three bodies is the smallest, and such an encounter is considered "interesting" if this separation is below some threshold ε. In the paper, I calculated the rate of 3-body encounters undergone by a reference body with velocity v0, given that all bodies move at constant speeds in one dimension, their density is n per unit length, and the probability density function of their velocities is f(v). The result is that the rate depends linearly on ε and quadratically on n.
The figure shows the 2- and 3-body encounter rates as a function of the reference body's speed in a toy model representing pedestrian movement. The other bodies' velocity distribution is made up of two rectangular functions with widths of 2 km/h around ±5 km/h (also n = 120 per km, representing a mildly busy street in Budapest, and ε = 0.5 m, representing a lower limit on comfortable personal space).
Read more... (PDF, 475 kb)
Play with the applet...

Probability of a multivariate function
I frequently encounter the problem of having to find the probability density function (pdf) of some quantity which is a function of several independent random variables with known distributions. For some cases, such as the addition of two variables, there exists a relatively simple formula (the new pdf is just the convolution of the two), but there is no general magic formula. In a more mathematical language, I need a procedure to find the pdf of a function \(y\) where \(y : \mathbb{R}^n \rightarrow \mathbb{R}\). If \(n=1\) this is quite easy and only involves inverse functions and derivatives. To find an analytical solution in the case where \(n>1\), one would generally have to obtain an expression for the cumulative distribution of \(y\), and whether that is possible to do analytically depends on how complicated the function is.

Restaurant review analysis
I enjoy good food, and Budapest has a lot to offer in terms of eating out. Choosing a restaurant is a difficult task, and is often based on a single number, the score of a restaurant in an online list. This score is usually a weighted arithmetic mean of customer one to five star reviews, as is the case with TripAdvisor. But how reliable is this number? Let's find out. In this figure I plot the "standard" average score versus some alternative (complementary) definition.
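To make the comparison concrete, here is a tiny sketch of the kind of score a listing site computes from the star counts, next to a placeholder "complementary" number; the 4-star-minus-2-star share below is purely a hypothetical stand-in, not the metric actually used in the analysis, and the review counts are made up.

import numpy as np

# counts[i] = number of (i+1)-star reviews for one restaurant (hypothetical example data)
counts = np.array([3, 5, 12, 40, 25])
stars = np.arange(1, 6)

# The usual weighted arithmetic mean of the star ratings
mean_score = (stars * counts).sum() / counts.sum()

# Hypothetical complementary score: share of 4-star minus share of 2-star reviews
complementary = (counts[3] - counts[1]) / counts.sum()

print(f"weighted mean = {mean_score:.2f}, complementary = {complementary:.2f}")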
Miner's spiral
If you dig straight down, you reach the center of the Earth in 6400 km, but what happens if you dig at a shallow but constant angle? Intuitively the result would be a spiral, and it's easy to calculate: we just need to assume some constant digging "speed" and a constant ratio of the radial and tangential velocity components given by the angle's tangent. In radial coordinates \begin{align*} \dot{r} & =-v\sin\beta\\ r\dot{\phi} & =v\cos\beta. \end{align*} The first equation is trivial to integrate \[ r(t)=r_{0}-v\sin\beta t. \] The second is also pretty easy \begin{align*} \dot{\phi} & =\frac{v\cos\beta}{r_{0}-v\sin\beta t}\\ \phi(t) & =\phi_{0}-\cot\beta\log\left(1-\frac{v\sin\beta}{r_{0}}t\right). \end{align*} We can now also show that it's a logarithmic spiral by isolating \(t\) from \(r(t)\) and substituting in \(\phi(t)\). We get \begin{align*} \phi(r) & =\phi_{0}-\cot\beta\log\left(\frac{r}{r_{0}}\right)\\ r(\phi) & =r_{0}\exp\left[(\phi_{0}-\phi)\tan\beta\right]. \end{align*} The length element is just \(v\mathrm{d}t\) and from here it's clear that the length of the tunnel to the center of the Earth is \[ L=\frac{r_{0}}{\sin\beta} \]

Flight clock
Long haul flights can be confusing. You start at some time zone and hours later find yourself in another, but for the duration of the flight you exist in timezone limbo. Airlines serve breakfast, lunch or dinner at somewhat arbitrary times, and the Sun's position in the sky is not always helpful (flights between East Asia and North America pass over the polar circle, where the Sun can be up at midnight).

Error flower
Sometimes there's no deep meaning behind a beautiful picture. This image is just an illustration of numerical error. I used my code to calculate the gravitational force in a triaxial ellipsoid; the error distribution when using single vs. double precision is bimodal, and the blue and red dots are points around the two peaks.

Cool but meaningless simulation
This is part of a gravitational N-body simulation done with the NBODY6++ code, stored using the BTS scheme and visualized with Blender. The simulation has 16,000 point particles, but I only showed 50 of them as large spheres with random color (so the video is quite misleading). The effective gravitational force is toward the center because you have that many particles in a spherical configuration. The reason that at some point they all suddenly change speed and (mostly) move toward the center is that the rendering software has its own clock (the frame count) which may or may not match the data output from the simulation software. If in a certain frame the position of a particle is unknown, Blender guesses the position by interpolation (which here is linear but could be more sophisticated). To test that this interpolation works, I took data with a large gap, and the result is what you expect. If particles move only a little bit between snapshots, then this interpolation is fine, but after this large gap the positions are more or less randomized, so in the video all particles move in straight lines to their new and rather arbitrary positions.

3D model of a rib cage
I made this visualization using medical data. First, the CD-ROM from the radiology lab had the CT scan in this very weird data format called DICOM that is apparently a standard for medical imaging. I used a Python package called pydicom to read it into a 3D numpy array representing the density (or opacity to X-rays or something like that).
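A minimal sketch of that loading step (illustrative only: the folder name is hypothetical and ordering slices by InstanceNumber is an assumption, not a detail taken from the original script):

import glob
import numpy as np
import pydicom

# Read every DICOM slice in the scan folder (path is hypothetical)
slices = [pydicom.dcmread(f) for f in glob.glob("ct_scan/*.dcm")]
# Order the slices along the spinal axis; InstanceNumber is one common way to do this
slices.sort(key=lambda s: int(s.InstanceNumber))
# Stack the 2D pixel arrays into one 3D volume of raw CT values
volume = np.stack([s.pixel_array for s in slices], axis=-1)
print(volume.shape, volume.min(), volume.max())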
The array's shape was 512x512x465 and the values were integers from 0 to 4095. The scan resolution was about 30.2 dpi (which for a 2D document would be extremely poor). Strangely, along the z- (spinal) axis the resolution is slightly different, 33.9 dpi. I used MayaVi to plot an isosurface to my liking. I had to play a lot with the contour value: if too low, too many soft tissues show up; if too high, the costal cartilages don't show (that's the piece of cartilage that connects each rib to the sternum). Unfortunately, even with this careful choice, the 3D model had quite a lot of soft tissues which would have been very difficult to exclude in the Python script. So I exported the model and used Blender to manually remove the soft tissues. This was quite a hassle, but once it's done it's relatively easy to set up this rotation animation in Blender and render in high quality.

This is a simple script with two functions that translate between AQI and PM2.5 in μg/m3. See Gareth's investigation into pollution in Beijing and how it affects visibility.

import numpy as np

def PiecewiseLinear(X, Y, x):
    # linear interpolation of the breakpoints (X, Y) at the point x
    i = np.argwhere(x <= X)[0, 0]
    if X[i] == x:
        return Y[i]
    return Y[i-1] + (Y[i] - Y[i-1]) / (X[i] - X[i-1]) * (x - X[i-1])

C = np.array([0, 12, 35.5, 55.5, 150.5, 250.5, 350.5, 500])  # PM2.5 breakpoints [µg/m3]
I = np.array([0., 50, 100, 150, 200, 300, 400, 500])         # corresponding AQI breakpoints

AQI2ugmc = lambda x: PiecewiseLinear(I, C, x)
ugmc2AQI = lambda x: PiecewiseLinear(C, I, x)

from pylab import *
Palette = ['#00e400', '#ffff00', '#ff7e00', '#ff0000', '#99004c', '#7e0023', '#7e0023']
for i in range(len(I)-1):
    plot(I[i:i+2], C[i:i+2], color=Palette[i], lw=2)
xlabel(r'$\mathrm{AQI}$')  # $...$ for font consistency
ylabel(r'$\mathrm{PM_{2.5}\ [\mu g \, m^{-3}]}$')
savefig('AQI.svg')
show()

Water snow or frost on Mars as seen by Viking 2 (Credit: NASA/JPL)
I gave a KIAA pizza talk about Mars in January 2015. This was an overview of topics related to Mars that were being discussed at the time. I curated much of the information from various Wikipedia articles and NASA press releases. The focus is the Martian atmosphere, transient detection of methane, and the Martian meteorite ALH84001.

IAU Symposium #312
I was a core local organizing committee member of the 312th Symposium of the International Astronomical Union (IAU), Star Clusters and Black Holes in Galaxies across Cosmic Time (see link), that took place at the National Science Library of Chinese Academy of Sciences in Beijing in summer 2014. While I didn't choose this mouthful of a name for the conference, I designed the lovely poster and was chief editor of the conference proceedings (hardcover by Cambridge University Press, ISBN-13: 9781107078727). The poster design was done mostly in Inkscape. I ended up liking the result, but it was quite a hassle and I had to try to satisfy too many people in the committee; I will certainly think twice before volunteering for such a task again. The proceedings editing was done in LaTeX, with a lot of Python scripting to administer the many proceedings papers and form them into a single work.
CommonCrawl
A random effects meta-analysis model with Box-Cox transformation Yusuke Yamaguchi1, Kazushi Maruo2, Christopher Partlett3 & Richard D. Riley4 In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I 2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals compared with the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed. Meta-analysis is a useful statistical tool for combining results from independent studies, for example where estimates of a treatment effect (e.g. odds ratio, mean difference or standardised mean difference) from randomised controlled trials are pooled in order to make inferences about an overall summary effect. A random effects meta-analysis model that assumes different true treatment effects underlying different studies is often needed as it allows for unexplained heterogeneity across studies [1]. In the random effects model, the true treatment effects for each study are usually assumed to follow a normal distribution; thus, an overall mean (summary) effect is obtained by estimating the mean parameter of this distribution. In this article, we focus on problems caused by an inappropriate normality assumption of the random effects distribution, in particular in regard to the impact on the mean effect estimate, quantification of heterogeneity and prediction interval. Turner et al. [2] suggested that the misspecification of the random effects distribution seriously affected the estimates of the random effects variances.
Lee and Thompson [3] showed that the shape of the predictive distributions of the treatment effect was substantially affected by the shape of the assumed random effects distribution. The normality assumption may therefore be a restrictive assumption for meta-analysts who are interested in producing a summary treatment effect, quantifying heterogeneity and deriving a prediction interval, especially if the true random effects distribution is skewed. Alternative parametric distributions have been considered for the random effects distribution in mixed models; for example, t-distribution [4], gamma or mirrored gamma distribution [2], Laplace (double-exponential) distribution [5], skewed normal or skewed t-distribution [3], mixture distributions [6]. And also, as an approach to outliers in meta-analysis, Baker and Jackson [7] proposed a model that allows the random effects to be long-tailed, which provides a down-weighting of outliers and removes the necessity for an arbitrary decision to exclude the outliers. Gumedze and Jackson [8] used likelihood ratio test statistics to detect and down-weight outliers in the meta-analysis. However, each has disadvantages as discussed in Lee and Thompson [3]; for example, the mixture distributions can fail in situations where there are a few outliers. When assuming a skewed distribution for the random effects in a meta-analysis, the mean and the variance are not appropriate representatives for summarising the skewed true treatment effects. The overall mean for the skewed treatment effects would be pulled in the direction of the extreme observed estimates; hence, it could result in misleading conclusions from the meta-analysis. It is also not straightforward to quantify the impact of heterogeneity, such as I 2, if there is a non-normal random effects distribution. Indeed, Higgins et al. [9] mentioned that some alternative parametric distributions may not have parameters that naturally describe an overall effect, or the heterogeneity across studies. Here, we propose a novel random effects meta-analysis model, where a Box-Cox transformation [10] is applied to the observed treatment effect estimates. The aim of the Box-Cox transformation is to achieve approximate normality of the overall distribution of the observed treatment effect estimates after transformation. The use of the Box-Cox transformation in linear models has been studied extensively [11–14]. In particular, Gurka et al. [15] provided an extension of the Box-Cox transformation to linear mixed models and demonstrated that a single transformation parameter would simultaneously help achieve normality of both the random effects and the residual error. However, the Box-Cox transformation has not been used commonly in the context of meta-analysis. Indeed, a work by Kim et al. [16] is the only meta-analytic application of the Box-Cox transformation that we are aware of. They proposed a multivariate response Box-Cox regression model for modelling individual patient data (IPD). However, the approach by Kim et al. [16] cannot apply to the cases of more readily available aggregate data (such as observed estimates of the treatment effect and their standard errors), because their model just allows the individual patient responses to be transformed and thus requires IPD. 
We rather consider transforming the observed treatment effect estimates using the Box-Cox transformation and suggest summarising the overall effect by an overall median rather than the overall mean, and quantifying the impact of heterogeneity by an interquartile range rather than commonly used I 2. The method no longer requires the IPD. In this section, we introduce two motivating examples which will be used for illustrating the proposed model. In the "Methods" section, we introduce the standard normal random effects models, and describe how to make the Bayesian inference in the random effects meta-analysis from the following viewpoints: the overall mean effect, the heterogeneity and the prediction interval. And then, we describe our new random effects model with the Box-Cox transformation. In the "Results" section, we conduct a simulation study to examine the performance of our proposed model under some situations where true random effects follow non-normal distributions, and compare the results with those from the standard normal random effects model. Moreover, we illustrate our proposed model using the examples. Finally, we conclude this article with some discussion. Motivating examples Example 1: Teacher expectancy on pupil IQ Raudenbush [17] reviewed randomised experiments of the effects of teacher expectancy on pupil IQ (see also Raudenbush and Bryk [18] for the details). The research question was: do pupils have a better performance if their teacher expected them to perform well? In each of 19 experiments identified, after administering an intelligence test to a sample of students, a randomly selected portion of the students were identified to their teachers as "likely to experience substantial intellectual growth" (the treatment group). All students were tested again, and the standardised mean difference between the test scores of students in the treatment group and those of the other students was evaluated as a treatment effect. The data from the 19 experiments was obtained from Table 18.2 in Hartung et al. [19]. Figure 1a shows a forest plot and a histogram of the estimates of the standardised mean differences, with positive values indicating a higher mean score for the treatment (high-expectancy) group. Although the histogram is a slightly naive display because it ignores the different weighting (number of participants) in the studies, it does suggest the presence of positive skewness in the observed distribution of the estimates. Forest plot and histogram. a 19 experiments investigating teacher expectancy on pupil IQ, b 22 studies investigating antidepressants for reducing pain in fibromyalgia syndrome Example 2: Antidepressants for reducing pain in fibromyalgia syndrome Hauser et al. [20] reported a meta-analysis of randomised controlled trials to investigate the efficacy of antidepressants for fibromyalgia syndrome, which is a chronic pain disorder associated with multiple debilitating symptoms. 22 trials using different classes of antidepressants were involved in the analysis, and estimates of the standardised mean difference in pain (for the antidepressant group minus the control group) were combined using a random effects model. The data was obtained from Figure 3 in Riley et al. [21]. Figure 1b shows a forest plot and a simple histogram of estimates of the standardised mean differences, with negative values indicating a benefit for the antidepressants. The histogram suggests the presence of negative skewness on the estimates. 
Normal random effects model We first consider the standard normal random effects model for a meta-analysis of k studies. Let y i and \(\sigma _{i}^{2}\) be an estimate of a treatment effect and its variance observed from the ith study (i=1,…,k), respectively. Then the normal random effects model is given by $$\begin{array}{@{}rcl@{}} &y_{i}=\theta_{i}+\epsilon_{i},\\ &\theta_{i}=\theta+u_{i},\\ &\epsilon_{i}\sim N\left(0,\sigma_{i}^{2}\right),\quad u_{i}\sim N\left(0,\tau^{2}\right) \end{array} $$ where θ i is the true (but unknown) treatment effect for the ith study and is represented by the sum of θ and u i . The u i is assumed to follow a normal distribution with mean zero and variance τ 2, indicating that the true treatment effect for the ith study, θ i , is normally distributed about θ with the between-study variance τ 2. ε i is a sampling error within the ith study and is assumed to follow a normal distribution with mean zero and variance \(\sigma _{i}^{2}\), where the within-study variance \(\sigma _{i}^{2}\) is commonly considered to be known. Of key interest is an estimate of the mean parameter of the random effects distribution, θ, as this provides the mean treatment effect of the included studies. Also of interest is an estimate of τ 2, to quantify the amount of heterogeneity and to derive a 95 percent prediction interval [9]. Bayesian estimation of model parameters We here use a Bayesian approach for estimating parameters involved in the normal random effects model (1). Marginalising the true treatment effect (θ i ) from a joint distribution of y i and θ i , we have \(y_{i}\sim N\left (\theta,\tau ^{2}+\sigma _{i}^{2}\right)\). Given θ and τ 2, the conditional density function of y=(y 1,…,y k ) is written as $$\begin{array}{@{}rcl@{}} p(y|\theta,\tau^{2})=\prod\limits_{i=1}^{k}\frac{1}{\sqrt{2\pi}\left(\tau^{2}+\sigma_{i}^{2}\right)^{1/2}}\exp\left \{-\frac{(y_{i}-\theta)^{2}}{2\left(\tau^{2}+\sigma_{i}^{2}\right)}\right \}. \end{array} $$ Then a posterior distribution of θ and τ 2 can be given as $$\begin{array}{@{}rcl@{}} p\left(\theta,\tau^{2}|y\right)\propto p\left(y|\theta,\tau^{2}\right)p\left(\theta,\tau^{2}\right) \end{array} $$ where p(θ,τ 2) is a prior density for θ and τ 2. Since minimally informative prior distributions are appropriate in the absence of definite prior information, we here use the following vague priors: $$\begin{array}{@{}rcl@{}} \theta&\sim&N(0,10000),\\ \tau&\sim&U(0,b) \end{array} $$ where b is a constant value given by practitioners. It is well known that the results from Bayesian meta-analyses could be potentially sensitive to the choice of prior distributions, especially to the prior of the between-study variance τ 2 (e.g. see Lambert et al. [22] for the details). Various non-informative priors for τ 2 have been suggested in previous research; for example, a uniform prior on τ [23, 24], a uniform prior on log(τ 2) [25], an inverse-gamma prior on τ 2 [26] and a half-Cauchy prior on τ [24]. We consider the uniform prior on τ in the range of (0,b), where the upper limit, b, should be decided according to the individual situation. The uniform prior on the standard deviation is increasingly recognised as a reasonable alternative to the more general inverse-gamma prior on the variance (e.g. see Gelman [24] for the details).
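The marginal form of model (1) together with the priors (2) can be fitted directly with MCMC software. The following is a minimal rstan sketch, not the code provided in Additional file 1; the data values are hypothetical placeholders, and the N(0,10000) prior is read as having variance 10000 (standard deviation 100).

# Minimal rstan sketch of the normal random effects model (1) in its marginal form
# y_i ~ N(theta, tau^2 + sigma_i^2), with the vague priors (2). Not the authors' code;
# the data below are hypothetical placeholders.
library(rstan)

stan_code <- "
data {
  int<lower=1> k;             // number of studies
  vector[k] y;                // observed treatment effect estimates
  vector<lower=0>[k] sigma;   // known within-study standard deviations
  real<lower=0> b;            // upper limit of the uniform prior on tau
}
parameters {
  real theta;                 // overall mean effect
  real<lower=0, upper=b> tau; // between-study standard deviation, implicit U(0, b) prior
}
model {
  theta ~ normal(0, 100);     // sd 100, i.e. N(0, 10000) with 10000 read as a variance (assumption)
  y ~ normal(theta, sqrt(square(tau) + square(sigma)));   // marginal likelihood
}
generated quantities {
  real theta_new = normal_rng(theta, tau);                // draw from the predictive distribution
}
"

dat <- list(k = 5,
            y = c(0.10, -0.05, 0.30, 0.21, 0.02),
            sigma = sqrt(c(0.12, 0.08, 0.15, 0.10, 0.09)),
            b = 10)
# Settings loosely mirroring those described later for the simulation study
fit <- stan(model_code = stan_code, data = dat,
            chains = 3, iter = 20000, warmup = 2000, thin = 2)
print(fit, pars = c("theta", "tau", "theta_new"))

Posterior draws of θ and τ from such a fit are then summarised as described below, and theta_new provides draws from the predictive distribution used for the prediction interval.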
In practice the sensitivity of specified priors should be investigated by applying many other priors for the parameters or by using prior distribution based on empirical evidence [27, 28], though we in this article avoid the extensive discussion for the prior. In the Bayesian framework, a posterior mean and a 95 percent credible interval are commonly used for summarising the posterior distribution. We implement our Bayesian analysis by using Markov chain Monte Carlo (MCMC) methods, with a free R software and its rstan package (see the Stan Modelling Language User's Guide and Reference Manual [29] for the details). The source code for conducting meta-analyses with the normal random effects model (1) is shown in Additional file 1. Quantification of heterogeneity The magnitude of heterogeneity across studies can be quantified by the posterior estimate of the between-study variance τ 2 or its square root. In the Bayesian framework, we obtain the posterior distribution of τ 2 and its credible interval, which can be used for quantifying the magnitude of the between-study heterogeneity of the true treatment effects. However, the between-study variance may be sensitive to the metric of the treatment effect, and thus this is not necessarily appropriate for the purpose of comparing several meta-analyses in terms of the heterogeneity [30]. If we are interested in what proportion of the observed variance reflects real differences in the treatment effect, the I 2 proposed by Higgins and Thompson [30] is useful for this purpose. Under the normal random effects model (1), the I 2 is expressed as a function of τ 2, given by $$\begin{array}{@{}rcl@{}} I^{2}=\frac{\tau^{2}}{\tau^{2}+s^{2}} \end{array} $$ $$\begin{array}{@{}rcl@{}} s^{2}=\frac{(k-1)\sum_{i=1}^{k} 1/\sigma_{i}^{2}}{\left(\sum_{i=1}^{k} 1/\sigma_{i}^{2}\right)^{2}-\sum_{i=1}^{k} \left(1/\sigma_{i}^{2}\right)^{2}}. \end{array} $$ Here, s 2 is referred to as 'typical' within-study variance. In this article, we calculate I 2 based on the estimated τ 2 during each sample of the Bayesian estimation process; that is, we summarise the posterior distribution of I 2 derived by using samples of τ 2 drawn from its posterior distribution. Prediction interval In the Bayesian framework, a predictive distribution of the treatment effect in a new study is given by $$\begin{array}{@{}rcl@{}} p(\theta_{\text{new}}|y)=\int\int p\left(\theta_{\text{new}}|\theta,\tau^{2}\right)p\left(\theta,\tau^{2}|y\right)d\theta d\tau^{2}. \end{array} $$ Following estimation of model (1) using the MCMC, a (100−q) percent prediction interval is obtained by taking (q/2)th and (100−q/2)th quantiles of samples drawn from the predictive distribution (5). For example, lower and upper bounds of 95 percent prediction interval are given by 2.5th and 97.5th quantiles of samples from the predictive distribution, respectively. Note that this is just one option for obtaining the 95 percent prediction interval, and other ways of defining the interval can be chosen depending on where we want to take the lower and upper limits. When interest lies in predicting probability that the treatment is effective by more than a clinically important difference in a new study, we can find this by calculating the proportion of samples drawn from the predictive distribution which satisfy a specified criteria for the effectiveness of the treatment (e.g. odds ratio < 80 percent). 
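To make these quantities concrete, the sketch below computes the 'typical' within-study variance (4), the posterior distribution of I 2 via (3) and a 95 percent prediction interval from draws of θ and τ. The draws and within-study variances shown are hypothetical stand-ins; in practice they would be extracted from the fitted model, for example with rstan::extract.

# Minimal sketch: posterior summaries of I^2 per (3)-(4) and a 95 percent prediction
# interval, given posterior draws of theta and tau and the known within-study variances.
sigma2     <- c(0.12, 0.08, 0.15, 0.10, 0.09)   # within-study variances sigma_i^2 (hypothetical)
theta_draw <- rnorm(24000, 0.1, 0.05)           # stand-in for posterior draws of theta
tau_draw   <- abs(rnorm(24000, 0.2, 0.05))      # stand-in for posterior draws of tau

k  <- length(sigma2)
w  <- 1 / sigma2
s2 <- (k - 1) * sum(w) / (sum(w)^2 - sum(w^2))  # 'typical' within-study variance (4)

I2_draw <- tau_draw^2 / (tau_draw^2 + s2)       # I^2 computed for each MCMC draw, as in (3)
quantile(I2_draw, c(0.5, 0.025, 0.975))         # posterior median and 95 percent credible interval

theta_new <- rnorm(length(theta_draw), theta_draw, tau_draw)  # draws from the predictive distribution (5)
quantile(theta_new, c(0.025, 0.975))            # 95 percent prediction interval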
The sampling can be achieved by first drawing samples of parameters from the posterior distribution p(θ,τ 2|y) and then drawing samples from p(θ new|θ,τ 2) with fixed parameters obtained in the previous step [4]. In the second step, the drawing is performed by θ new∼N(θ,τ 2). In this manner, the prediction interval accounts for the heterogeneity in true treatment effects and naturally incorporates all parameter uncertainty (e.g. in θ and τ 2). It should be interpreted differently from the credible interval for the mean effect, which only indicates the uncertainty in the mean effect itself, not the entire distribution of true treatment effects across studies [21]. Random effects model with Box-Cox transformation Box-Cox transformation Before giving our proposed model, we first introduce the Box-Cox transformation for a standard consideration of a continuous variable. The aim of the Box-Cox transformation is to achieve approximate normality of a variable (say, y i ) after transformation [10]. Roughly saying, it can be used for changing scale of data so that the transformed data are distributed symmetrically. In particular, we consider a normalised shift transformation given by $$ y_{i}(\lambda,\alpha)= \left\{ \begin{aligned} &{\frac{(y_{i}+\alpha)^{\lambda}-1}{\lambda\dot{g}(\alpha)^{\lambda-1}}},\qquad \lambda \neq 0 \\ &\log(y_{i}+\alpha)\dot{g}(\alpha),\quad\, \lambda = 0 \\ \end{aligned} \right. $$ for y i +α>0 (i=1,…,k), where we keep y i for ease of notation, though y i could refer to any continuous measure (not just an effect size). λ and α denote a transformation and a shift parameter respectively, and these parameters are estimated from the observed data. \(\dot {g}(\alpha)\) is a geometric mean of y i +α for i=1,…,k. The normalisation using the geometric mean \(\dot {g}(\alpha)\) could lead a stable estimation of λ and α, in comparison with a standard Box-Cox transformation without the normalisation. To be exact, it is proper to assume that the transformed variable y i (λ,α) follows a truncated normal distribution except for the case of λ=0, because of the condition that y i +α must be a positive value. When interest lies in inference in original scale before transformation (not in the scale after transformation), we need to specify the distribution of the observed values before the Box-Cox transformation and deal with the truncation precisely [31–34]. However, these are beyond the scope of this article. For mathematical convenience, we below assume that the transformed variable y i (λ,α) follows a normal distribution with no consideration of the truncation. Proposed meta-analysis model and its estimation Let y i 's be the treatment effect estimates (e.g. log odds ratio or mean difference) from the available studies in a meta-analysis. We propose the following random effects model for the Box-Cox transformed y i : $$\begin{array}{@{}rcl@{}} y_{i}(\lambda,\alpha)&=&\mu_{i}+\epsilon_{i},\\ \mu_{i}&=&\mu+u_{i},\\ \epsilon_{i}\sim N\left(0,\phi_{i}^{2}(\lambda,\alpha)\right),&&u_{i}\sim N(0,\tau^{2}). \end{array} $$ The model structure is basically same as the normal random effects model (1), though now the Box-Cox transformation (6) is applied to the observed treatment effect estimates for each study and μ i denotes a true effect of the Box-Cox transformed variable y i (λ,α) which has a 'known' variance of \(\phi _{i}^{2}(\lambda,\alpha)\) (see section below). 
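As an illustration, the normalised shift transformation (6) can be written as a small R helper; bc_transform below is an illustrative function (not from any package) and the example values are hypothetical.

# Minimal sketch of the normalised shift Box-Cox transformation (6), applied to a vector y
# of treatment effect estimates; lambda and alpha are the transformation and shift parameters,
# and y + alpha must be positive.
bc_transform <- function(y, lambda, alpha) {
  g <- exp(mean(log(y + alpha)))                          # geometric mean of y_i + alpha
  if (abs(lambda) > 1e-8) {
    ((y + alpha)^lambda - 1) / (lambda * g^(lambda - 1))  # lambda != 0 case
  } else {
    log(y + alpha) * g                                    # lambda = 0 case
  }
}

# Hypothetical example
y <- c(0.10, -0.05, 0.30, 0.21, 0.02)
bc_transform(y, lambda = 0.5, alpha = 1)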
The proposed model aims to improve the overall normality of the observed treatment effects estimates (y i ) across studies; their overall distribution is the sum of the random effects distribution of true effects and the within-study sampling distribution of estimates. As long as the studies have reasonable sample size, the within-study sampling distribution of y i will be approximately normal due to the central limit theorem. However, there is no such guarantee for the random effects distribution [7], and thus any asymmetry in the random effects distribution will consequently cause asymmetry in the overall distribution for y i . The following processes are required to implement the proposed random effects model (7). Definition of variance of the Box-Cox transformed treatment effect estimate In the proposed model (7), the variance of the Box-Cox transformed treatment effect estimate, \(\phi _{i}^{2}(\lambda,\alpha)\); i.e. the variance of y i (λ,α) given μ i , must be defined. Since the variance needs to be assigned for each study separately, we here approximate the variance of y i by a first order Taylor series about y i (λ,α)=E[y i (λ,α)] as follows: $$\begin{array}{@{}rcl@{}} {{} \begin{aligned} V[y_{i}]&\approx V[y_{i}(\lambda,\alpha)]\left\{\left.\frac{\partial y_{i}}{\partial y_{i}(\lambda,\alpha)}\right|_{y_{i}(\lambda,\alpha)=E[y_{i}(\lambda,\alpha)]}\right\}^{2}\\ &=\left\{ \begin{array}{ll} V[y_{i}(\lambda,\alpha)]\dot{g}(\alpha)^{2\lambda-2}\left\{\lambda\dot{g}(\alpha)^{\lambda-1}E[y_{i}(\lambda,\alpha)]+1\right\}^{2/\lambda-2}, & \lambda \neq 0 \\ {\frac{V[y_{i}(\lambda,\alpha)]}{\dot{g}(\alpha)^{2}}}\exp\left\{ {\frac{2E[y_{i}(\lambda,\alpha)]}{\dot{g}(\alpha)}}\right\}, & \lambda = 0 \\ \end{array} \right.. \end{aligned}} \end{array} $$ For \({V\,[\!y_{i}]=\sigma _{i}^{2}}\), E [ y i (λ,α)]=μ and \({V\,[\!y_{i}(\lambda,\alpha)]=\phi _{i}^{2}} (\lambda,\alpha)\), we have an approximation of the variance of the Box-Cox transformed treatment effect estimate, written by $$ \phi_{i}^{2}(\lambda,\alpha)\approx \left\{ \begin{aligned} &{\frac{\sigma_{i}^{2}}{\dot{g}(\alpha)^{2\lambda-2}}}\left\{\lambda\dot{g}(\alpha)^{\lambda-1}\mu+1\right\}^{2-2/\lambda},\quad \lambda \neq 0 \\ &\sigma_{i}^{2}\dot{g}(\alpha)^{2}\exp\left\{ {-\frac{2\mu}{\dot{g}(\alpha)}}\right\}, \qquad\qquad\quad\, \lambda = 0 \\ \end{aligned} \right. $$ where recall α is the shift parameter, λ is the transformation parameter, \(\dot {g}(\alpha)\) is the geometric mean of y i +α for i=1,…,k, μ is the mean parameter of the random effects distribution in the transformed scale and \(\sigma _{i}^{2}\) is the within-study variance from the ith study. The relationship between variances before and after transformation has been applied for stabilising variance [35, 36] or representing inhomogeneity variances in linear models with Box-Cox transformation weighting [37]. Frequentist estimation of λ and α We treat the transformation parameter λ and the shift parameter α as non-stochastics; i.e. we first estimate these parameters by a maximum likelihood estimation, and then make inference about the other parameters μ and τ 2 conditioning on \(\lambda =\hat {\lambda }\) and \(\alpha =\hat {\alpha }\), where \(\hat {\lambda }\) and \(\hat {\alpha }\) are maximum likelihood estimates of λ and α respectively. 
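For concreteness, the approximation (8) can be evaluated with a small helper; bc_variance below is an illustrative function rather than part of the authors' code, and the inputs shown are hypothetical.

# Minimal sketch of the first-order approximation (8) to the variance of the Box-Cox
# transformed estimates, phi_i^2(lambda, alpha), given the within-study variances sigma2,
# the mean mu on the transformed scale, and the observed estimates y used to form the
# geometric mean of y_i + alpha.
bc_variance <- function(sigma2, mu, lambda, alpha, y) {
  g <- exp(mean(log(y + alpha)))                          # geometric mean of y_i + alpha
  if (abs(lambda) > 1e-8) {
    sigma2 / g^(2 * lambda - 2) * (lambda * g^(lambda - 1) * mu + 1)^(2 - 2 / lambda)
  } else {
    sigma2 * g^2 * exp(-2 * mu / g)
  }
}

# Hypothetical example
y      <- c(0.10, -0.05, 0.30, 0.21, 0.02)
sigma2 <- c(0.12, 0.08, 0.15, 0.10, 0.09)
bc_variance(sigma2, mu = 0.05, lambda = 0.5, alpha = 1, y = y)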
Maruo and Goto [34] has investigated the influence of not considering the uncertainty associated with estimation of λ, and showed the confidence interval around the median from an univariate analysis with the Box-Cox transformation was slightly liberal (from two to three percent). A log likelihood function for (μ,τ 2,λ,α) is given by $$ \begin{aligned} l(\mu,\tau^{2},\lambda,\alpha)&=\sum\limits_{i=1}^{k}\left[\vphantom{\frac{\left(y_{i}(\lambda,\alpha)-\mu\right)^{2}}{2(\tau^{2}+\phi^{2}_{i}(\lambda,\alpha))}}-\frac{1}{2}\log\left(\tau^{2}+\phi^{2}_{i}(\lambda,\alpha)\right)\right.\\ &\quad-\left.\frac{\left(y_{i}(\lambda,\alpha)-\mu\right)^{2}}{2(\tau^{2}+\phi^{2}_{i}(\lambda,\alpha))}\right]. \end{aligned} $$ A grid search procedure is one simple approach for finding \(\hat {\lambda }\) and \(\hat {\alpha }\) which maximises the log likelihood (9) with respect to λ and α. For a large set of values for (λ,α), the log likelihood can be rewritten as l(μ,τ 2,λ,α)=l λ,α (μ,τ 2) where μ and τ 2 vary but λ and α are fixed. Maximising l λ,α (μ,τ 2) with respect to μ and τ 2, we obtain their estimates for the fixed λ and α as $$\begin{array}{@{}rcl@{}} \left(\hat{\mu}(\lambda,\alpha),\hat{\tau}^{2}(\lambda,\alpha)\right)=\underset{\mu,\,\tau^{2}}{\mathrm{arg\,max}}\ l_{\lambda,\alpha}\left(\mu,\tau^{2}\right). \end{array} $$ Substituting the estimates \({\hat {\mu }(\lambda,\alpha)}\) and \({\hat {\tau }^{2}(\lambda,\alpha)}\) into l λ,α (μ,τ 2), then we have a log likelihood \(l_{\lambda,\alpha }(\hat {\mu }(\lambda,\alpha)\), \(\hat {\tau }^{2}(\lambda,\alpha))\) for the fixed λ and α. Then, we obtain a set of (λ,α) for which the log likelihood takes the largest value as the approximate values of \(\hat {\lambda }\) and \(\hat {\alpha }\). An issue known as non-regular problem is caused in the maximum likelihood estimation of α because the range of the distribution is determined by the unknown shift parameter α [38, 39]. For example, it is argued that the likelihood function of α fails to have a local maximum [38]. In this article, we focus on the inference in the original scale before transformation; hence, we assume the concern about the estimation of α would not have substantial impact than if we were interested in the exact estimation of the transformation and the shift parameter (λ and α) themselves. This could be an area of further research. Given \(\hat {\lambda }\) and \(\hat {\alpha }\) (i.e. optimum transformation of the treatment effect, \(y_{i}(\hat {\lambda },\hat {\alpha })\) for i=1,…,k and their variances), we take a Bayesian approach to estimation of the unknown parameters from the Box-Cox meta-analysis model (7), μ and τ 2. Marginalising the true treatment effect (μ i ) from a joint distribution of \(y_{i}(\hat {\lambda },\hat {\alpha })\) and μ i , we have \(y_{i}(\hat {\lambda },\hat {\alpha })\sim N(\mu,\tau ^{2}+\phi _{i}^{2}(\hat {\lambda },\hat {\alpha }))\). The posterior distribution of μ and τ 2 is given by $$\begin{array}{@{}rcl@{}} p(\mu,\tau^{2}|y;\hat{\lambda},\hat{\alpha})\propto p(y|\mu,\tau^{2};\hat{\lambda},\hat{\alpha})p(\mu,\tau^{2}) \end{array} $$ $$\begin{array}{@{}rcl@{}} {\begin{aligned} p(y|\mu,\tau^{2};\hat{\lambda},\hat{\alpha})=\prod_{i=1}^{k}\left [\frac{1}{\sqrt{2\pi}(\tau^{2}+\phi_{i}^{2}(\hat{\lambda},\hat{\alpha}))^{1/2}}\right.\\ \left.\exp\left \{-\frac{(y_{i}(\hat{\lambda},\hat{\alpha})-\mu)^{2}}{2(\tau^{2}+\phi_{i}^{2}(\hat{\lambda},\hat{\alpha}))}\right \}\right ]. 
\end{aligned}} \end{array} $$ We assume the vague priors for μ and τ 2 in the same way as (2); i.e. μ∼N(0,10000) and τ∼U(0,b), where b is a constant value given by practitioners. It is straightforward to draw samples from the posterior distribution (10) by MCMC. The source code for conducting meta-analyses with the proposed model (7) is shown in Additional file 1, which includes the step of finding the maximum likelihood estimates of λ and α. Interpretation of results A median overall treatment effect We first define a true effect of the untransformed variable y i as $$\begin{array}{@{}rcl@{}} \theta^{\ast}_{i}\equiv \left\{ \begin{array}{ll} \left\{\lambda\dot{g}(\alpha)^{\lambda-1}\mu_{i}+1\right\}^{1/\lambda}-\alpha, & \lambda \neq 0 \\ \exp\left\{{\frac{\mu_{i}}{\dot{g}(\alpha)}}\right\}-\alpha, & \lambda = 0 \\ \end{array} \right. \end{array} $$ which is derived by back-transforming the μ i . Since we are interested in estimating an overall effect in original scale before transformation (i.e. a centre of the distribution of \(\theta _{i}^{\ast }\), not of μ i ), it is useful to consider statistical measures induced from the distribution of \(\theta _{i}^{\ast }\). Note that μ i ∼N(μ,τ 2), then the pth percentile of the distribution of μ i is given by μ+τ z p , where z p denotes the pth percentile of a standard normal distribution. Thus, substituting μ+τ z p into (11), we obtain the pth percentile of the distribution of \(\theta _{i}^{\ast }\) as $$\begin{array}{@{}rcl@{}} \xi_{p}= \left\{ \begin{array}{ll} \left\{\lambda\dot{g}(\alpha)^{\lambda-1}(\mu+\tau z_{p})+1\right\}^{1/\lambda}-\alpha, & \lambda \neq 0 \\ \exp\left\{{\frac{\mu+\tau z_{p}}{\dot{g}(\alpha)}}\right\}-\alpha, & \lambda = 0 \\ \end{array} \right.. \end{array} $$ And also, the median of the distribution of \(\theta _{i}^{\ast }\) is given by $$\begin{array}{@{}rcl@{}} \xi_{50}= \left\{ \begin{array}{ll} \left\{\lambda\dot{g}(\alpha)^{\lambda-1}\mu+1\right\}^{1/\lambda}-\alpha, & \lambda \neq 0 \\ \exp\left\{{\frac{\mu}{\dot{g}(\alpha)}}\right\}-\alpha, & \lambda = 0 \\ \end{array} \right.. \end{array} $$ The median (13) can now be used for the inference of an overall (summary) treatment effect on the original scale. We recommend using the median as a representative of centre of skewed distributions, which is more robust than the mean against the skewness and the outliers on the observed treatment effect estimates. Quantification of heterogeneity using the ratio of IQR squares Under the normal random effects model (1), the between-study variance τ 2 and the I 2 can be used for quantifying the magnitude and the impact of the heterogeneity across studies, respectively. However, when considering skewed distributions, variance is not the most appropriate measure for describing the spread of the distributions. In general, the variance is defined as an expected value of the squared deviation from the mean, though in the skewed-data situation the data is no longer distributed symmetrically around the mean. Due to the skewness or the heavy-tailedness of the data, the variance may lead a wrongly large spread of the distribution. That is, under the proposed model (7), the variance of the distribution of \(\theta _{i}^{\ast }\) does not provide appropriate information about the heterogeneity across studies. For this reason, we here use an interquartile range (IQR) instead of the variance, which is defined as the difference between 75th and 25th quantiles for the distribution of \(\theta _{i}^{\ast }\); i.e. ξ 75−ξ 25 from (12). 
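To illustrate the back-transformation, the sketch below evaluates the percentile ξ p of (12) for a single draw of (μ,τ); the overall median (13) corresponds to p=50, and the 25th and 75th percentiles feed the IQR-based measures introduced below. The helper and the input values are hypothetical, and in practice the calculation would be repeated for every MCMC draw.

# Minimal sketch of the back-transformed percentile xi_p in (12); g is the geometric mean
# of y_i + alpha from the transformation step, and mu, tau are values on the transformed scale.
bc_xi <- function(p, mu, tau, lambda, alpha, g) {
  z <- qnorm(p / 100)
  if (abs(lambda) > 1e-8) {
    (lambda * g^(lambda - 1) * (mu + tau * z) + 1)^(1 / lambda) - alpha
  } else {
    exp((mu + tau * z) / g) - alpha
  }
}

# Hypothetical example: overall median (13) and a normalised IQR for one draw
g <- 1.1; lambda <- 0.5; alpha <- 1; mu <- 0.05; tau <- 0.2
bc_xi(50, mu, tau, lambda, alpha, g)                                   # overall median, xi_50
(bc_xi(75, mu, tau, lambda, alpha, g) -
 bc_xi(25, mu, tau, lambda, alpha, g)) / (qnorm(0.75) - qnorm(0.25))   # normalised IQR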
Against the skewness of the data, the IQR is known as a more robust measure of spread than the variance. Note that the IQR of a normal distribution is exactly equal to the product of its standard deviation and z 75−z 25. Therefore, if we observe normally distributed treatment effect estimates, a measure of $$\begin{array}{@{}rcl@{}} \frac{\xi_{75}-\xi_{25}}{z_{75}-z_{25}} \end{array} $$ from the proposed model (7) would be close to the square root of between-study variance from the normal random effects model (1). For this comparability, we recommend using the measure of (14), which is known as normalised IQR, for quantifying the magnitude of the heterogeneity. We also define a criteria for quantifying the impact of the heterogeneity for the skewed treatment effects. Note that \(y_{i}(\lambda,\alpha)\sim N\left (\mu,\tau ^{2}+\phi _{i}^{2}(\lambda,\alpha)\right)\), then the pth percentile of the distribution of y i (λ,α) is given by \(\mu +(\tau ^{2}+\phi _{i}^{2}(\lambda,\alpha))^{1/2}z_{p}\). Substituting a 'typical' within-study variance like (4) into the \(\phi _{i}^{2}(\lambda,\alpha)\) and back-transforming the pth percentile into the original scale, we obtain the pth percentile of the distribution of y i as $$ \nu_{p}= \left\{\!\!\! \begin{aligned} &\left\{\lambda\dot{g}(\alpha)^{\lambda-1}(\mu\,+\,(\tau^{2}\,+\,d^{2})^{1/2}z_{p})\,+\,1\right \}^{1/\lambda}\,-\,\alpha,\quad \lambda \neq 0 \\ &\exp\left \{{\frac{\mu+(\tau^{2}+d^{2})^{1/2}z_{p}}{\dot{g}(\alpha)}}\right \}-\alpha, \qquad\qquad\quad\!\lambda = 0 \\ \end{aligned} \right. $$ $$\begin{array}{@{}rcl@{}} d^{2}=\frac{(k-1)\sum_{i=1}^{k} 1/\phi_{i}^{2}(\lambda,\alpha)}{\left(\sum_{i=1}^{k} 1/\phi_{i}^{2}(\lambda,\alpha)\right)^{2}-\sum_{i=1}^{k} \left(1/\phi_{i}^{2}(\lambda,\alpha)\right)^{2}} \end{array} $$ denotes the 'typical' within-study variance of the Box-Cox transformed variables. Obviously from the definition by (3), the I 2 has an aspect of the proportion of the between-study variation that is due to the heterogeneity across studies (variance of \(\theta _{i}^{\ast }\)) to the total variation in the treatment effect estimates (total variance of y i ). In the similar concept, we now consider using a ratio of IQR squares alternative to the I 2, which is defined as $$ \frac{(\text{IQR}\ \text{of~the~distribution~of} ~\theta_{i}^{\ast})^{2}}{(\text{IQR}\ \text{of~the}\ \text{distribution}\ \text{of}\ y_{i})^{2}}=\frac{(\xi_{75}-\xi_{25})^{2}}{(\nu_{75}-\nu_{25})^{2}}. $$ The ratio of IQR squares would be comparable with the I 2 when the treatment effect estimates are normally distributed, because of the comparability between the IQR and the between-study variance. Under the proposed model (7), we first consider a predictive distribution of the Box-Cox transformed treatment effect which is given by $$\begin{array}{@{}rcl@{}} p(\mu_{\text{new}}|y;\lambda,\alpha)=\int\!\!\!\int p(\mu_{\text{new}}|\mu,\tau^{2})p(\mu,\tau^{2}|y;\lambda,\alpha)d\mu d\tau^{2}. \end{array} $$ As described in the previous section, the sampling from p(μ new|y;λ,α) can be achieved by first drawing samples of parameters from the posterior distribution p(μ,τ 2|y;λ,α) and then drawing samples from p(μ new|μ,τ 2) with fixed parameters obtained in the previous step. In the second step, the drawing is performed by μ new∼N(μ,τ 2). 
Then, we obtain the samples from the predictive distribution of the treatment effect by back-transforming the samples of μ new as $$\begin{array}{@{}rcl@{}} \theta^{\ast}_{\text{new}}= \left\{ \begin{array}{ll} \left\{\lambda\dot{g}(\alpha)^{\lambda-1}\mu_{\text{new}}+1\right\}^{1/\lambda}-\alpha, & \lambda \neq 0 \\ \exp\left\{{\frac{\mu_{\text{new}}}{\dot{g}(\alpha)}}\right\}-\alpha, & \lambda = 0 \\ \end{array} \right.. \end{array} $$ A (100−q) percent prediction interval can be obtained by taking (q/2)th and (100−q/2)th quantiles of samples drawn from the predictive distribution (17). Again, note that this is just one option for obtaining the 95 percent prediction interval as mentioned in the previous section. Another transformation for dealing with the negative skewness As described in the previous section, the Box-Cox transformation (6) requires the condition that y i +α must be a positive value for i=1,…,k, which can cause difficulty in estimating the model parameters. This may also be more problematic when the treatment effect estimates have negative skewness, because the shift parameter is subject to inflation in such situation. In order to avoid the negative skewness on the treatment effect estimates, we here consider another transformation using a sign inversion. The transformation with the sign inversion described below will be applied only when the observed treatment effect estimates are negatively skewed. We first distinguish which direction the skewness is in on the treatment effect estimates. A sample skewness with inverse-variance weightings defined as $$\begin{array}{@{}rcl@{}} \frac{\sum_{i=1}^{k}\left.\left ({\frac{y_{i}-\bar{y}_{w}}{s_{w}}}\right)^{3}\right/\sigma_{i}^{2}}{\sum_{i=1}^{k} 1/\sigma_{i}^{2}} \end{array} $$ can be used for this, where $$\begin{array}{@{}rcl@{}} \bar{y}_{w}=\frac{\sum_{i=1}^{k} y_{i}/\sigma_{i}^{2}}{\sum_{i=1}^{k} 1/\sigma_{i}^{2}},\quad s_{w}^{2}=\frac{\sum_{i=1}^{k} (y_{i}-\bar{y}_{w})^{2}/\sigma_{i}^{2}}{\sum_{i=1}^{k} 1/\sigma_{i}^{2}}. \end{array} $$ If the weighted sample skewness (18) take a negative value, we invert the sign of the treatment effect estimates (i.e. multiply the estimates by −1) and then apply the Box-Cox transformation to the inverted estimates. That is, we use the following transformation for the negatively skewed data: $$\begin{array}{@{}rcl@{}} y_{i}(\lambda,\alpha)= \left\{ \begin{array}{ll} {\frac{(-y_{i}+\alpha)^{\lambda}-1}{\lambda\dot{h}(\alpha)^{\lambda-1}}}, & \lambda \neq 0 \\ \log(-y_{i}+\alpha)\dot{h}(\alpha), & \lambda = 0 \\ \end{array} \right. \end{array} $$ where \(\dot {h}(\alpha)\) is now a geometric mean of −y i +α for i=1,…,k. For each study, the same within-study variances can be assigned to the inverted treatment effect estimates. The random effects model (7) with the transformation (19) is applied in the same manner as the implementing procedures described in the previous section. And also, instead of (11), the true effect of the untransformed variable y i is now defined as $$\begin{array}{@{}rcl@{}} \theta^{\ast}_{i}\equiv \left\{ \begin{array}{ll} -\left\{\lambda\dot{h}(\alpha)^{\lambda-1}\mu_{i}+1\right\}^{1/\lambda}+\alpha, & \lambda \neq 0 \\ -\exp\left\{{\frac{\mu_{i}}{\dot{h}(\alpha)}}\right\}+\alpha, & \lambda = 0 \\ \end{array} \right.. 
\end{array} $$ Then, instead of (12) and (15), we have the pth percentiles of the distribution of \(\theta _{i}^{\ast }\) and y i as follows: $$\begin{array}{@{}rcl@{}} \xi_{p}= \left\{ \begin{array}{ll} -\left\{\lambda\dot{h}(\alpha)^{\lambda-1}(\mu+\tau z_{p})+1\right\}^{1/\lambda}+\alpha, & \lambda \neq 0 \\ -\exp\left\{{\frac{\mu+\tau z_{p}}{\dot{h}(\alpha)}}\right\}+\alpha, & \lambda = 0 \\ \end{array} \right. \end{array} $$ $$ \nu_{p}\,=\,\left\{\!\!\! \begin{aligned} &-\left\{\lambda\dot{h}(\alpha)^{\lambda-1}(\mu\,+\,(\tau^{2}\,+\,d^{2})^{1/2}z_{p})\,+\,1\!\right\}^{1/\lambda}\!+\alpha,\,\,\, \lambda \neq 0\\ &-\exp\!\left\{{\frac{\mu+(\tau^{2}+d^{2})^{1/2}z_{p}}{\dot{h}(\alpha)}}\right \}+\alpha,\qquad\qquad\quad\!\! \lambda = 0 \\ \end{aligned} \right. $$ which can be used for estimating the overall median effect and the ratio of IQR squares. The prediction interval is also obtained by the same procedure described in the previous section, except for the step of back-transforming the samples of μ new. Instead of (17), we here use $$\begin{array}{@{}rcl@{}} \theta^{\ast}_{\text{new}}= \left\{ \begin{array}{ll} -\left\{\lambda\dot{h}(\alpha)^{\lambda-1}\mu_{\text{new}}+1\right\}^{1/\lambda}+\alpha, & \lambda \neq 0 \\ -\exp\left\{{\frac{\mu_{\text{new}}}{\dot{h}(\alpha)}}\right\}+\alpha, & \lambda = 0 \\ \end{array} \right. \end{array} $$ for obtaining samples from the predictive distribution. Implementation of our proposed model We here summarise an implementation procedure of our proposed model using the Box-Cox transformation with the sign inversion for negatively skewed data: Calculate the weighted sample skewness (18). If the weighted sample skewness calculated in Step 1 takes a negative value, invert the sign of observed treatment effect estimates and then move to Step 3; otherwise just move to Step 3. Calculate the maximum likelihood estimates of the transformation (λ) and the shift (α) parameter using the log-likelihood function (9). Perform the Bayesian estimation (MCMC sampling) for the other parameters given the maximum likelihood estimates of the transformation and the shift parameter. Calculate the measures of interest (overall median, normalised IQR and ratio of IQR squares) using the MCMC samples obtained in Step 4. Draw samples from the predictive distribution using the MCMC samples obtained in Step 4, and calculate the prediction interval. Steps 1 and 2 are needed only when applying the sign inversion. Without the sign inversion, we will begin the procedure from Step 3. We conducted a simulation study to examine the comparative performance of the standard normal random effects model (1) and the proposed model (7). Since the proposed model allows the presence of skewness on the treatment effect estimates, we supposed some situations where the true treatment effects had a skewed distribution, and compared results from the two models in terms of the estimation of overall effect and the quantification of heterogeneity. We also supposed another situation where the treatment effect estimates were normally distributed. In such situation, the two models are expected to provide similar results. Table 1 shows an overview of the simulation study. Under several scenarios of random effects distributions, we considered simulating 10,000 meta-analyses of k studies, where the number of studies was fixed in each simulation with k∈{5,10,20,40}. 
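Steps 1 and 2 of the implementation procedure above can be written compactly; the sketch below computes the inverse-variance weighted sample skewness (18) and applies the sign inversion when it is negative, using hypothetical data.

# Minimal sketch of Steps 1-2: weighted sample skewness (18) and the sign-inversion rule.
weighted_skewness <- function(y, sigma2) {
  w     <- 1 / sigma2
  y_bar <- sum(w * y) / sum(w)                        # inverse-variance weighted mean
  s2_w  <- sum(w * (y - y_bar)^2) / sum(w)            # weighted variance
  sum(w * ((y - y_bar) / sqrt(s2_w))^3) / sum(w)      # weighted skewness (18)
}

# Hypothetical example
y      <- c(-0.8, -0.3, -0.25, -0.2, -0.1)
sigma2 <- c(0.12, 0.08, 0.15, 0.10, 0.09)
skew <- weighted_skewness(y, sigma2)
if (skew < 0) y <- -y    # Step 2: analyse -y_i when the weighted skewness is negative
skew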
A treatment effect estimate y i and a within-study variance \(\sigma _{i}^{2}\) for the ith study (i=1,…,k) were randomly generated with the procedures of Steps 1-6 in Table 1. We below describe each step in detail. Table 1 Overview of the simulation study In Step 1, a random effects distribution was chosen from candidates. We considered a variety of random effects distributions (normal distribution, skew-normal distribution [40, 41], shifted exponential distribution and shifted log-normal distribution) which a true treatment effect θ i for the ith study was drawn from. The normal distributions were chosen for examining how the proposed model worked in the case of symmetrically distributed data that could be precisely fit by the normal random effects model. The skew-normal distributions were chosen for imitating situations with moderate to large skewness in a positive and a negative directions. The shifted exponential and the shifted log-normal distributions were chosen for imitating situation with heavy-tailed data as well as positive skewness. True parameters in the random effects distributions were specified so that the median of the distribution became equal to zero, and the normalised IQR square of the distribution became one of either (0.025, 0.067, 0.400). The setting of zero overall median means a null hypothesis of no treatment effect. The scenario of the true normalised IQR square is equivalent to setting the true ratio of the IQR squares as (20.0%, 40.0%, 80.0%) which are obtained by plugging in the true normalised IQR squares under the 'typical' within-study variance of 0.100, such as 0.200=0.025/(0.025+0.100). Table 2 shows the values of true parameters included in each random effects distribution. And also, the random effects distributions are graphically illustrated for each scenario in Additional file 2: Figure S1, Figure S2, Figure S3, Figure S4, Figure S5, Figure S6 and Figure S7 show density functions of the random effects distribution for the scenarios 1-3, 4-6, 7-9, 10-12, 13-15, 16-18 and 19-21, respectively. Table 2 Scenarios of random effects distributions and their true parameters In Step 2, we set the number of studies, mean of the distribution for the within study variance and true parameters of the random effects distribution. In Steps 3-6, we obtained treatment effect estimates for each study. In particular, the within-study variance \(\sigma _{i}^{2}\) was drawn from a normal distribution with mean σ 2 and variance 0.040 conditioned on \(0.010<\sigma _{i}^{2}<(2\sigma ^{2}-0.010)\). The mean of the normal distribution, σ 2, was chosen so that the 'typical' within-study variance (4), which depended on the number of studies involved in the meta-analysis, became 0.100 on average. We set the value of σ 2 to either 0.1089, 0.1122, 0.1147, 0.1158 in each simulation with k=5,10,20,40, respectively. In Step 7, using the generated meta-analysis data, we fit the normal random effects model (1) and the proposed model (7) separately. In the proposed model, we also applied the transformation with the sign inversion for the negatively skewed data. Note that the transformation with the sign inversion is applied only when the observed treatment effect estimates are negatively skewed. And then, in Steps 8-9, we computed the posterior medians and the 95 percent credible intervals of: the overall mean and the I 2 from the normal random effects model (1), the overall median and the ratio of IQR squares from the proposed model (7). 
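As an illustration of the data-generating Steps 3-6, the sketch below simulates one meta-analysis under the normal random effects scenario; simulate_meta is an illustrative helper rather than the authors' simulation code. The defaults assume the smallest heterogeneity scenario (for a normal distribution the normalised IQR equals the standard deviation, so a normalised IQR square of 0.025 corresponds to τ 2=0.025) together with σ 2=0.1089 as stated for k=5.

# Minimal sketch of Steps 3-6 for the normal random effects scenario (assumed parameter values).
simulate_meta <- function(k = 5, tau2 = 0.025, sigma2_mean = 0.1089) {
  theta_i  <- rnorm(k, mean = 0, sd = sqrt(tau2))        # true study effects with median zero
  sigma2_i <- numeric(k)
  for (i in 1:k) {                                       # sigma_i^2 from the truncated normal described above
    repeat {
      s <- rnorm(1, mean = sigma2_mean, sd = sqrt(0.040))
      if (s > 0.010 && s < 2 * sigma2_mean - 0.010) break
    }
    sigma2_i[i] <- s
  }
  y_i <- rnorm(k, mean = theta_i, sd = sqrt(sigma2_i))   # observed treatment effect estimates
  data.frame(y = y_i, sigma2 = sigma2_i)
}

set.seed(1)
simulate_meta()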
In Steps 11-14, we calculated the following quantities for comparing the two models (normal random effects model/proposed model):
Bias around the true overall median: (mean of the posterior medians of the overall mean/the overall median) − (true overall median of 0.000)
Root mean square error (RMSE) around the true overall median: ((standard deviation of the posterior medians of the overall mean/the overall median)^2 + (bias around the true overall median)^2)^(1/2)
Coverage probability of the true overall median (%): the proportion of the time that the 95 percent credible intervals of the overall mean/the overall median contained the true overall median of 0.000
Bias around the true ratio of IQR squares: (mean of the posterior medians of the I 2/the ratio of IQR squares) − (true ratio of IQR squares given by one of either (20.0%, 40.0%, 80.0%))
RMSE around the true ratio of IQR squares: ((standard deviation of the posterior medians of the I 2/the ratio of IQR squares)^2 + (bias around the true ratio of IQR squares)^2)^(1/2)
Coverage probability of the true ratio of IQR squares (%): the proportion of the time that the 95 percent credible intervals of the I 2/the ratio of IQR squares contained the true ratio of IQR squares given by one of either (20.0%, 40.0%, 80.0%)
We note that using the terms bias, RMSE and coverage probability for the results from the normal random effects model is not necessarily correct. This is because the normal random effects model provided the results of the overall mean and I 2, which were not the targeted true values (or the reference values). However, in this article, the overall median and the ratio of IQR squares are highly recommended for representing the overall effect and quantifying the heterogeneity in the meta-analysis of skewed data, respectively. The above quantities are therefore useful for assessing how the findings under the two models can differ from the recommended inferential measures in skewed-data situations. Before estimation of model (7) for a particular simulated dataset, the grid search procedure was performed for estimating λ and α for the dataset. The candidate values of λ were specified in a range of −3.00≤λ≤6.00 with a step size of 0.01. We parameterised α through the minimum value of {(y i +α):i=1,…,k}; i.e. α ∗=α+ min{y i :i=1,…,k}. The candidate values of α ∗ were specified in a range of 0.01≤α ∗≤2.01 with a step size of 0.10. We used the normal and the uniform prior for the mean and the variance parameter respectively, as described in the previous section. The upper limit of the uniform prior distribution on τ was given by b=10 for each model. For the Bayesian estimation of models (1) and (7), the iterative process of the MCMC algorithm produced three chains each with 20,000 samples of parameters. We discarded the first 2,000 samples (so-called burn-in samples) in order to prevent dependence on the starting values. And also, we took a sample at only every 2nd iteration (thinning) in order to avoid autocorrelation between the samples taken. Therefore in total, 24,000 samples of parameters were drawn. We graphically checked the convergence of the MCMC sampling using the first 5 simulations for each scenario, without formal diagnostic methods. Additional file 2: Table S1, Table S2, Table S3 and Table S4 show results from the two models, for each scenario of the number of studies k = 5, 10, 20 and 40, respectively.
And also, Additional file 2: Table S5 and Table S6 show summary statistics of estimates for the transformation (λ) and the shift (α) parameter, for the scenario of the number of studies k=40. Note that the summary statistics were calculated using 10,000 estimates of the parameters for each scenario of random effects distribution. To make clear the differences between the two models, we depicted the bias, the RMSE and the coverage probability in the following figures: Figure 2 plots the results for the overall mean or the overall median, with the between-study variation (the true ratio of IQR squares: Small = 20.0%, Moderate = 40.0%, Large = 80.0%) on the horizontal axis. The number of studies was fixed as k = 20. Figure 3 plots the results for the I 2 or the ratio of IQR squares, with the between-study variation (the true ratio of IQR squares: Small = 20.0%, Moderate = 40.0%, Large = 80.0%) on the horizontal axis. The number of studies was fixed as k = 20. Figure 4 plots the results for overall mean or the overall median, with the number of studies (k = 10, 20 and 40) on the horizontal axis. The true ratio of IQR squares was fixed as 80.0% (i.e. the scenario of large between-study variation). Figure 5 plots the results for the I 2 or the ratio of IQR squares, with the number of studies (k = 10, 20 and 40) on the horizontal axis. The true ratio of IQR squares was fixed as 80.0% (i.e. the scenario of large between-study variation). Bias, RMSE and coverage probability of the overall mean or the overall median for the scenario of the number of studies k=20. The overall mean from the normal random effects model (cross/solid line), and those of the overall median from the proposed model (black circle/broken line: Box-Cox transformation, black triangle/dotted line: Box-Cox transformation with the sign inversion) Bias, RMSE and coverage probability of the I 2 or the ratio of IQR squares for the scenario of the number of studies k=20. The I 2 from the normal random effects model (cross/solid line), and those of the ratio of IQR squares from the proposed model (black circle/broken line: Box-Cox transformation, black triangle/dotted line: Box-Cox transformation with the sign inversion) Bias, RMSE and coverage probability of the overall mean or the overall median for the scenario of true ratio of IQR squares = 80.0% (large between-study variation). The overall mean from the normal random effects model (cross/solid line), and those of the overall median from the proposed model (black circle/broken line: Box-Cox transformation, black triangle/dotted line: Box-Cox transformation with the sign inversion) Bias, RMSE and coverage probability of the I 2 or the ratio of IQR squares for the scenario of true ratio of IQR squares = 80.0% (large between-study variation). The I 2 from the normal random effects model (cross/solid line), and those of the ratio of IQR squares from the proposed model (black circle/broken line: Box-Cox transformation, black triangle/dotted line: Box-Cox transformation with the sign inversion) The nominal level of coverage probability is 95 percent. All the scenarios of the random effects distributions are displayed in the same panels in order of the normal (N), the skew-normal with moderate positive skewness (pSN1), the skew-normal with large positive skewness (pSN2), the skew-normal with moderate negative skewness (nSN1), the skew-normal with large negative skewness (nSN2), the shifted exponential (EXP) and the shifted log-normal (LN) from left to right. 
And also, in each figure, (i) cross marks and solid lines represent the normal random effects model, (ii) black circle marks and broken lines represent the proposed model using Box-Cox transformation (6), (iii) black triangle marks and dotted lines represent the proposed model using Box-Cox transformation with the sign inversion (19) for the negatively skewed data. We below refer to the normal random effects model, the proposed model without and with the sign inversion as NRE, BC and BC-SI respectively. Overall treatment effect When the normal distributions were assumed as the true random effects distribution, the NRE and the BC-SI provided unbiased estimations of the overall effect, regardless of the scenarios of the between-study variation and the number of studies. The overall median from the BC was subject to a negative bias in the scenario of the large between-study variation and the small number of studies, though this bias decreased as the number of studies increased. The NRE, the BC and the BC-SI provided similar RMSEs, except for the scenario of the small number of studies where the RMSEs from the NRE were smaller than those from the BC and the BC-SI. The coverage probabilities from the NRE were slightly larger than those from the BC and the BC-SI in all the scenarios, but all these coverage probabilities were close to the nominal level of 95 percent. When the skew-normal distributions were assumed as the true random effects distribution, the overall means from the NRE were pulled in the direction of skewness and substantially different from the true zero overall median, especially in the scenarios of the large between-study variation. In the scenarios of the positive skewness (pSN1 and pSN2), the biases of the overall mean from the NRE increased positively; conversely in the scenarios of negative skewness (nSN1 and nSN2), those increased negatively. The degree of bias was larger in the scenario of the large skewness (pSN2 and nSN2). And also, regarding the overall means from the NRE in the scenarios of the large between-study variation and the large number of studies, the RMSEs were inflated and the coverage probabilities were substantially below the nominal level of 95 percent. On the other hand, the overall medians from the BC and the BC-SI had the smaller biases regardless of the scenarios of the between-study variation and the number of studies. In the scenario of the negative skewness (nSN1 and nSN2) and the large between-study variation, the BC was subject to a negative bias, though this bias decreased as the number of studies increased. The BC and the BC-SI provided quite similar RMSEs and coverage probabilities in the scenarios of the positive skewness (pSN1 and pSN2); while, in the scenarios of the negative skewness (nSN1 and nSN2) and the large between-study variation, the coverage probabilities from the BC were below the nominal level of 95 percent. This indicates that the BC could have difficulty in dealing with the negatively skewed data as expected, but the BC-SI performs well. When the shifted exponential and the shifted log-normal distributions were assumed as the true random effects distribution, the overall means from the NRE were substantially different from the true zero overall median, especially in the scenarios of the large between-study variation. And also, in such situation, the RMSEs were seriously inflated and the coverage probabilities were below the nominal level of 95 percent, which became more noticeable as the number of studies increased. 
This is probably because the scenarios with a larger number of studies tended to generate more heavy-tailed data. On the other hand, the overall medians from the BC and the BC-SI were similar and had much smaller biases in comparison with the overall means from the NRE. The BC and the BC-SI also provided similar results for the RMSE and the coverage probability, which were much better than those from the NRE, especially in the scenarios of large between-study variation and a large number of studies. In summary, we found that the overall mean from the NRE could be substantially influenced by skewness of the random effects distribution. Taking into account that the overall mean from the NRE was pulled in the direction of skewness and had the lower coverage probability, the NRE might therefore produce overall effect estimates that do not reflect the median treatment effect if the overall distribution of treatment effect estimates is skewed or heavy-tailed. Moreover, the results indicate that the sign inversion in the Box-Cox transformation can be an effective way of precisely estimating the overall median of negatively skewed treatment effect estimates.
Quantification of heterogeneity
When the normal distributions were assumed as the true random effects distribution, the NRE, the BC and the BC-SI provided similar results for the bias, the RMSE and the coverage probability, regardless of the scenarios of the between-study variation and the number of studies. When the skew-normal distributions were assumed as the true random effects distribution, the NRE, the BC and the BC-SI provided similar results in almost all of the scenarios. In the scenarios of large between-study variation, the coverage probabilities of $I^2$ from the NRE were slightly lower than those of the ratios of IQR squares from the BC-SI. In the scenarios of negative skewness (nSN1 and nSN2), the ratios of IQR squares from the BC were subject to negative biases and had larger RMSEs in comparison with the NRE and the BC-SI. This again indicates that the BC can have difficulty in dealing with negatively skewed data. When the shifted exponential and the shifted log-normal distributions were assumed as the true random effects distribution, the $I^2$ values from the NRE were larger than the ratios of IQR squares from the BC and the BC-SI in the scenarios of large between-study variation. The RMSEs from the NRE, the BC and the BC-SI were quite similar, though the coverage probabilities of $I^2$ from the NRE were seriously below the nominal level of 95 percent in the scenarios of large between-study variation, compared with those of the ratios of IQR squares from the BC and the BC-SI. This became more noticeable as the number of studies increased, again probably because the scenarios with a larger number of studies tended to generate more heavy-tailed data. The BC and the BC-SI provided quite similar results for the bias, the RMSE and the coverage probability, regardless of the scenarios of the between-study variation and the number of studies. In summary, we found that $I^2$ from the NRE was influenced by skewness of the random effects distribution. In particular, heavy-tailed data seriously affected the estimation of $I^2$ in the NRE. Moreover, the results again indicate that the sign inversion in the Box-Cox transformation can be an effective way of precisely estimating the ratio of IQR squares for negatively skewed treatment effect estimates.
Performance when the number of studies is small
Additional file 2: Table S1 shows results for the scenario with k = 5 studies. In regard to the estimation of the overall treatment effect, having a small number of studies had a limited influence on bias. Indeed, the biases of the overall median from the BC and the BC-SI, as well as of the overall mean from the NRE, were similar to those for the scenario with k = 10 studies, except for the overall median from the BC in the scenarios of negative skewness (nSN1 and nSN2), where the negative biases increased. However, the coverage probabilities from the BC and the BC-SI were below the nominal level of 95 percent in almost all the scenarios. In particular, the BC-SI provided coverage probabilities of around 90 percent in the scenarios of small and moderate between-study variation. In contrast, the coverage probabilities from the NRE were substantially above the nominal level of 95 percent. These findings point to the difficulty of meta-analysing a small number of studies. In regard to the quantification of heterogeneity, the NRE, the BC and the BC-SI were subject to a large positive bias of $I^2$ or the ratio of IQR squares, which inflated their RMSEs. From these findings, we conclude that our proposed model is applicable even when the number of studies is 5, but it may have difficulty in ensuring sufficient accuracy in the estimation of the overall treatment effect and the quantification of heterogeneity.
Consider now the application to the examples described in the previous section. We applied the normal random effects model (1) and the proposed model (7) to each example, and estimated the posterior distributions of the parameters of interest in each model. The transformation with the sign inversion was also applied to example 2 (the weighted sample skewnesses were 2.123 and −1.847 in examples 1 and 2, respectively). Note that the transformation with the sign inversion is applied only when the observed treatment effect estimates are negatively skewed. Before estimating model (7) for each example, the grid search procedure was performed to estimate λ and α. The candidate values of λ were specified in a range of −3.00 ≤ λ ≤ 6.00 with a step size of 0.01. We specified the grid for α through the minimum of the shifted values $\{(y_i+\alpha): i=1,\ldots,k\}$; i.e. $\alpha^{*}=\alpha+\min\{y_i : i=1,\ldots,k\}$. The candidate values of $\alpha^{*}$ were specified in a range of 0.01 ≤ $\alpha^{*}$ ≤ 2.01 with a step size of 0.10. We used the normal and the uniform prior for the mean and the variance parameter, respectively, as described in the previous section. The upper limit of the uniform prior distribution on τ was given by b = 10 for each model. For the Bayesian estimation, the iterative process of the MCMC algorithm produced three chains, each with 2,000,000 samples of the parameters. We discarded the first 5000 samples (so-called burn-in samples) in order to prevent dependence on the starting values. We also took a sample at only every 5th iteration (thinning) in order to avoid autocorrelation between the samples taken. In total, therefore, 1,185,000 samples of the parameters were drawn. We again checked graphically whether the burn-in samples were sufficient and whether the MCMC chains converged, without using formal diagnostic methods.
Overall treatment effect and quantification of heterogeneity
Table 3 shows the posterior median and the 95 percent credible interval of: the overall mean and the square root of the between-study variance from the NRE, and the overall median and the normalised IQR from the BC and the BC-SI.
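Before turning to those results, here is a simplified sketch of the grid-search step just described. It is only a stand-in for the paper's profile-likelihood procedure: each (λ, α) pair is scored by the normal log-likelihood of the transformed estimates plus the Box-Cox Jacobian, and the within-study variances that the full model (7) accounts for are ignored here.

```python
import numpy as np

def boxcox(z: np.ndarray, lam: float) -> np.ndarray:
    """Box-Cox transform of already-shifted, strictly positive values z."""
    return np.log(z) if lam == 0 else (z**lam - 1.0) / lam

def profile_loglik(y: np.ndarray, lam: float, alpha: float) -> float:
    """Normal log-likelihood of the transformed estimates plus the log-Jacobian term."""
    z = y + alpha
    if np.any(z <= 0):
        return -np.inf
    w = boxcox(z, lam)
    n = len(w)
    sigma2 = np.var(w)  # ML variance of the transformed values
    ll = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return ll + (lam - 1.0) * np.sum(np.log(z))  # Jacobian of the Box-Cox transform

def grid_search(y: np.ndarray):
    """Grid over lambda in [-3, 6] (step 0.01) and alpha* in [0.01, 2.01] (step 0.10)."""
    lams = np.arange(-3.00, 6.00 + 1e-9, 0.01)
    alpha_stars = np.arange(0.01, 2.01 + 1e-9, 0.10)  # alpha* = alpha + min(y)
    best = (-np.inf, None, None)
    for a_star in alpha_stars:
        alpha = a_star - np.min(y)
        for lam in lams:
            ll = profile_loglik(y, lam, alpha)
            if ll > best[0]:
                best = (ll, lam, alpha)
    return best  # (log-likelihood, lambda_hat, alpha_hat)
```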
In example 1, the posterior median of the overall mean from the NRE was noticeably larger than that of the overall median from the BC. In example 2, the posterior medians of the overall median from the BC and the BC-SI were quite similar to each other, but noticeably larger than that of the overall mean from the NRE. Note that the observed treatment effect estimates in example 1 were subject to positive skewness, whereas we observed negatively skewed treatment effect estimates in example 2; this causes the overall means from the NRE to be pulled in the direction of skewness in each example.
Table 3 (caption): Posterior median and 95 percent credible interval of: overall mean and square root of between-study variance from the normal random effects model, overall median and normalised IQR from the proposed model.
In both examples, the 95 percent credible intervals of the overall median from the BC and the BC-SI were substantially narrower than those of the overall mean from the NRE, indicating that the misspecification of the random effects distribution led to an inflation of the between-study variance in the NRE. Indeed, in both examples we found larger posterior medians of the square root of the between-study variance from the NRE in comparison with the normalised IQRs from the BC and the BC-SI. Figure 6a shows the posterior distributions of the overall mean from the NRE and those of the overall median from the BC and the BC-SI for each example. The overall medians had sharper peaks of posterior density than the overall mean in both examples.
Figure 6 (caption): Posterior and predictive distribution. a Posterior distribution of the overall mean from the normal random effects model (solid line), and of the overall median from the proposed model (broken line: Box-Cox transformation, dotted line: Box-Cox transformation with the sign inversion). b Predictive distribution with 95 percent prediction interval from the normal random effects model (solid line), and those from the proposed model (black circle/broken line: Box-Cox transformation, black triangle/dotted line: Box-Cox transformation with the sign inversion).
Table 4 shows the posterior medians and the 95 percent credible intervals of $I^2$ from the NRE, and of the ratio of IQR squares from the BC and the BC-SI. In example 2, the results from the BC and the BC-SI were quite similar. The ratios of IQR squares from the BC and the BC-SI were substantially smaller than the $I^2$ values from the NRE in both examples. The NRE would conclude moderate heterogeneity for the meta-analyses of the examples; however, taking into account the inflation of the between-study variance from the NRE, the $I^2$ values are likely to be overestimated. On the other hand, the BC and the BC-SI would conclude low heterogeneity for the same examples.
Table 4 (caption): Posterior median and 95 percent credible interval of: $I^2$ from the normal random effects model, ratio of IQR squares from the proposed model; 95 percent prediction intervals from each model.
Prediction interval and predictive probability
Table 4 also shows the 95 percent prediction intervals from the two models. In example 2, the results from the BC and the BC-SI were quite similar. In both examples, the prediction intervals from the BC and the BC-SI were substantially narrower than those from the NRE. This is likely due to the inflation of the between-study variance from the NRE.
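All of the posterior, prediction-interval and predictive-probability summaries reported in this section are simple functions of the MCMC draws. A minimal sketch (assuming `draws` and `theta_new_draws` are 1-D NumPy arrays of posterior and predictive samples already back-transformed to the original scale; the article's own implementation is the R code provided in Additional file 1):

```python
import numpy as np

def posterior_summary(draws: np.ndarray):
    """Posterior median and 95 percent credible interval from MCMC draws."""
    lo, med, hi = np.percentile(draws, [2.5, 50.0, 97.5])
    return med, (lo, hi)

def prediction_interval(theta_new_draws: np.ndarray):
    """95 percent prediction interval: 2.5th and 97.5th quantiles of the predictive draws."""
    return tuple(np.percentile(theta_new_draws, [2.5, 97.5]))

def predictive_probability(theta_new_draws: np.ndarray, x: float, smaller_is_better: bool = False) -> float:
    """P(theta_new > x) (or P(theta_new < x)): the fraction of predictive draws beyond x."""
    return float(np.mean(theta_new_draws < x) if smaller_is_better else np.mean(theta_new_draws > x))
```

The last function is the counting procedure used for the predictive probabilities plotted in Fig. 7 below.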
Especially in example 2, the BC and the BC-SI provided much stronger evidence of efficacy of the treatment, with the upper bound of the 95 percent prediction interval now much further below 0. Figure 6b shows the predictive distributions from the NRE, the BC and the BC-SI for each example. The 95 percent prediction intervals are also depicted in the same panel, where the cross, the black circle and the black triangle represent the medians of the predictive distribution from the NRE, the BC and the BC-SI, respectively. We found that the BC and the BC-SI provided skewed prediction intervals, which reflects the asymmetry detected and the asymmetric predictive distribution, whereas the NRE gave symmetrical prediction intervals in both examples. We computed the predictive probability that the treatment is truly effective in a new study. Figure 7 shows the results from the NRE, the BC and the BC-SI. Note that the predictive probability is a kind of cumulative probability and is defined for each example as follows: $P(\theta_{\text{new}}>x)$ or $P(\theta^{\ast}_{\text{new}}>x)$ for example 1 (larger is more beneficial), and $P(\theta_{\text{new}}<x)$ or $P(\theta^{\ast}_{\text{new}}<x)$ for example 2 (smaller is more beneficial), where x is a specified value of the treatment effect and is represented on the horizontal axis in Fig. 7. Below we describe the details of the computation and the results for each example:
Figure 7 (caption): Predictive probability. The normal random effects model (solid line) and the proposed model (broken line: Box-Cox transformation, dotted line: Box-Cox transformation with the sign inversion).
Example 1. Since a positive value indicates a higher mean score for the treatment group, we obtained the predictive probability of a beneficial treatment by counting the number of samples drawn from the predictive distribution which were larger than the specified values on the horizontal axis. The probability curve from the NRE was located entirely above that from the BC. That is, the NRE predicted larger probabilities of the true effect being in favour of the treatment than the BC. For instance, the probability of the true treatment effect being above 0.1 was 0.428 for the NRE but 0.221 for the BC.
Example 2. Since a negative value indicates a benefit for the antidepressants, we obtained the predictive probability by counting the number of samples drawn from the predictive distribution which were smaller than the specified values on the horizontal axis. The results from the BC and the BC-SI were quite similar. When the size of the specified difference was small (e.g. from −0.3 to 0.0), the predictive probabilities from the NRE were slightly smaller than those from the BC and the BC-SI. In contrast, when the size of the specified difference was large (e.g. from −0.8 to −0.4), the predictive probabilities from the NRE were larger than those from the BC and the BC-SI.
Discussion
We proposed a new random effects model based on the Box-Cox transformation to deal with skewness in the overall distribution of the observed treatment effect estimates in meta-analysis. The simulation study shows that the proposed model has the potential to provide more appropriate inferences in the presence of skewness, especially in regard to the estimation of the overall treatment effect and the quantification of heterogeneity. The simulation study indicates that the normal random effects model gives an overall mean that is pulled in the direction of skewness, and is thus an inappropriate summary for representing the centre of skewed data.
Similarly, $I^2$ from the normal random effects model can be inflated given skewed treatment effect estimates, because the model overestimates the random effects variance. This also leads to prediction intervals that are too wide. Our proposed model substantially reduces these problems. It is flexible in that the observed data determine the shape of the distribution and thus the Box-Cox transformation required to ensure the normality of the transformed treatment effect estimates. We suggest using the overall median effect, back on the original scale of interest, to summarise the proposed meta-analysis model. The median is known to be a more robust summary measure than the mean in the presence of skewness and outliers in the observed data. We also defined the ratio of IQR squares under the proposed model for quantifying the impact of heterogeneity in the meta-analysis. For skewed data, the variance is no longer the best measure for describing the spread of the distribution. We recommend the (normalised) IQR of the true effects distribution as a measure for quantifying the extent of the heterogeneity. The ratio of IQR squares can be interpreted as the proportion of the total variation in the treatment effect estimates that is due to heterogeneity across studies, which is the same concept as $I^2$ from the normal random effects model. In the simulation study, the ratio of IQR squares reduced the inflation of $I^2$ when the treatment effect estimates were skewed or heavy-tailed. We note that our simulation assumes that sample sizes in each study are large enough for the central limit theorem to apply, such that (a) treatment effect estimates do have a normal distribution within studies, and (b) the variance of each estimate is well estimated (such that it can essentially be assumed known). Thus, situations with small studies are not considered, but this would be useful for further research. The application to the two examples illustrated that the two models can provide different conclusions about the summary effect and the amount of heterogeneity for the same meta-analysis data. In addition, given skewness, the applications indicate that the proposed model predicts the treatment effect in a new study better than the normal random effects model. The normal random effects model provided symmetric predictive distributions and 95 percent prediction intervals; the proposed model, on the other hand, provided skewed predictive distributions and asymmetric 95 percent prediction intervals, as expected. The difference in the shape of the predictive distributions had a significant impact on the predictive probabilities that the treatment is effective in a new study. Another limitation is that, although our simulations covered a wide range of scenarios and were computationally intensive, other scenarios still need to be investigated. In particular, we did not consider the case when sample sizes within studies are small, and we only considered the case when it could be correctly assumed that study estimates were normally distributed and their variances were known. This allowed any asymmetry to be due to the random effects distribution, rather than the within-study distributions. Further research in situations of small trials and/or rare events would be welcome. Note that the parameters included in our proposed model are estimated in two stages.
We first obtain the point estimates of the transformation and the shift parameters (λ and α) using the profile likelihood function, and then estimate the other parameters (μ and τ) using the Bayesian approach conditioned on \(\lambda =\hat {\lambda }\) and \(\alpha =\hat {\alpha }\). Although simultaneous estimation of all parameters (λ, α, μ and τ) within the Bayesian framework would be more straightforward, we encountered difficulties with the convergence of the MCMC sampling in this case. Therefore, we adopt a procedure of first finding a transformation that normalises the treatment effect estimates (i.e. the transformation and the shift parameters are treated as non-stochastic), and then making inferences conditioned on the maximum likelihood estimates of the transformation and the shift parameters. However, there are some limitations of the proposed model and further research is required. For interpretation and presentation of the meta-analysis results, the Bayesian approach is used for estimating the model parameters. We are interested in functions of the estimated parameters rather than the estimated parameters themselves; i.e. (i) the overall median (13), which represents a summary treatment effect, (ii) the normalised IQR (14), which quantifies the magnitude of heterogeneity, and (iii) the ratio of IQR squares (16), which quantifies the impact of heterogeneity. The Bayesian approach using the MCMC method makes it straightforward to estimate these measures together with their variability, because the uncertainty (i.e. variance estimation) of the estimated measures can be obtained from the MCMC samples directly (e.g. mean, median, standard deviation, 2.5th and 97.5th quantiles of the MCMC samples), without additional distributional assumptions or asymptotic approximations. The 95 percent prediction interval is also computed in a simple manner under the Bayesian approach, by taking the 2.5th and 97.5th quantiles of samples drawn from the predictive distribution (17). However, a frequentist approach may be another useful option in some situations. When considering frequentist estimation for the proposed model, it is not straightforward to derive asymptotic distributions (and hence 95 percent confidence intervals) of the maximum likelihood estimators of the measures of interest. We expect that a bootstrap method could be used to solve this issue. In addition, it is not entirely clear how the choice of prior for the between-study variance parameter affects the results of our proposed model. The uniform prior on the standard deviation scale is known to be a reasonable non-informative prior for the conventional normal random effects model, though this may not be the case for our proposed model. Further extensive simulation studies will be needed to assess this. Finally, we note that our proposed model assumes that the available meta-analysis data are representative of the populations of interest (as do all meta-analysis models). In particular, if asymmetry in the observed treatment effect estimates is due to bias, for example publication bias and selective reporting, then the summary result, the heterogeneity measures and the predictive inference may be inappropriate (as the random effects distribution is then inappropriately captured). A concern may therefore arise when it is difficult to distinguish the possible causes of skewness in the observed treatment effect estimates.
Several scenarios can be considered as reasons why the treatment effect estimates are skewed across studies; for example, (i) the treatment effect distribution suffers from publication bias and/or selective reporting, (ii) the treatment effect distribution is a mixture of two different distributions, (iii) the treatment effect distribution is truly skewed, or (iv) the treatment effect distribution is skewed simply due to estimation errors. The possible causes of skewness should therefore be explored first. When scenario (i) or (ii) is true, our proposed model may not be appropriate, but it is likely to be more robust than the normal random effects model. This is because extreme study results located on one side are expressed as the tail of the treatment effect distribution in our proposed model. Therefore, we conclude that our proposed model is applicable in all the scenarios and is likely to produce more suitable meta-analysis results than the conventional normal random effects model. However, further research and extended simulations are needed to critically examine this in more detail, especially in situations where publication and selection biases are causing the asymmetry. Our proposed model aims to reduce non-normality in the random effects distribution by observing the non-normality in the overall distribution of the $y_i$'s. Therefore, it is likely to perform best when the between-study heterogeneity is large relative to the within-study variability, such that skewness in the overall distribution can be detected and will be due to asymmetry in the true treatment effects. Further research may also consider applying the Box-Cox transformation to just the random effects distribution in model (1) (i.e. to just the $\theta_i$'s). Although Gurka et al. [15] suggest that the Box-Cox transformation in a mixed effects model should be viewed in terms of its success in normalising the total error, the $y_i$ themselves would then not need to be transformed and could thus be left on their original scale, which is familiar to meta-analysts.
Conclusions
We proposed a random effects meta-analysis model with Box-Cox transformation to deal with skewness in meta-analysis data. The proposed meta-analysis model has the potential to provide more robust inferences for summary treatment effects when the random effects distribution is skewed. It could be used to examine the robustness of traditional meta-analysis results, heterogeneity measures, and predictive inferences to skewed random effects distributions. However, further research would be welcome to examine the method in further simulated and empirical examples.
Abbreviations
BC: Proposed model using Box-Cox transformation without sign inversion
BC-SI: Proposed model using Box-Cox transformation with sign inversion
EXP: Shifted exponential distribution
GLMMs: Generalised linear mixed models
IPD: Individual patient data
IQR: Interquartile range
LN: Shifted log-normal distribution
MCMC: Markov chain Monte Carlo
NRE: Normal random effects model
nSN1: Skew-normal distribution with moderate negative skewness
nSN2: Skew-normal distribution with large negative skewness
pSN1: Skew-normal distribution with moderate positive skewness
pSN2: Skew-normal distribution with large positive skewness
RMSE: Root mean square error
References
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods. 2010; 1:97–111. Turner RM, Omar RZ, Thompson SG. Bayesian methods of analysis for cluster randomized trials with binary outcome data. Stat Med. 2001; 20:453–72. Lee KJ, Thompson SG.
Flexible parametric models for random-effects distributions. Stat Med. 2008; 27:418–34. Smith TC, Spiegelhalter DJ, Thomas A. Bayesian approaches to random-effects meta-analysis: a comparative study. Stat Med. 1995; 14:2685–99. Demidenko E. Mixed Models: Theory and Applications. Hoboken, NJ: Wiley; 2005. Böhning D. Computer-assisted Analysis of Mixtures and Applications: Meta-analysis, Disease Mapping and Others. Boca Raton, FL: Chapman and Hall/CRC; 2000. Baker R, Jackson D. A new approach to outliers in meta-analysis. Health Care Manage Sci. 2008; 11:121–31. Gumedze FN, Jackson D. A random effects variance shift model for detecting and accommodating outliers in meta-analysis. BMC Med Res Methodol. 2011; 11:19. Higgins JPT, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc: Ser A. 2009; 172:137–59. Box GEP, Cox DR. An analysis of transformations. J R Stat Soc Ser B. 1964; 26:211–46. Carroll RJ, Ruppert D. Power transformations when fitting theoretical models to data. J Am Stat Assoc. 1984; 79:321–8. Sakia RM. Retransformation bias: a look at the box-cox transformation to linear balanced mixed anova models. Metrika. 1990; 37:345–51. Sakia RM. The box-cox transformation technique: a review. The Stat. 1992; 41:169–78. Lipsitz SR, Ibrahim JG, Molenberghs G. Using a box-cox transformation in the analysis of longitudinal data with incomplete responses. Appl Stat. 2000; 49:287–96. Gurka MJ, Edwards LJ, Muller KE, Kupper LL. Extending the box-cox transformation to the linear mixed model. J R Stat Soc: Ser A. 2006; 169:273–88. Kim S, Chen MH, Ibrahim JG, Shah AK, Lin J. Bayesian inference for multivariate meta-analysis box-cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs. Stat Med. 2013; 32:3972–990. Raudenbush SW. Magnitude of teacher expectancy effects on pupil iq as a function of the credibility of expectancy induction: A synthesis of findings from 18 experiments. J Educ Psychol. 1984; 76:85–97. Raudenbush SW, Bryk AS. Empirical bayes meta-analysis. J Educ Stat. 1985; 10:75–98. Hartung J, Knapp G, Sinha BK. Statistical Meta-Analysis with Applications. Hoboken, NJ: Wiley; 2008. Häuser W, Bernardy K, Üceyler N, Sommer S. Treatment of fibromyalgia syndrome with antidepressants: a meta-analysis. J Am Med Assoc. 2008; 301:198–209. Riley RD, Higgins JPT, Deeks JJ. Interpretation of random effects meta-analyses. Br Med J. 2011; 342:549. Lambert PC, Sutton AJ, Burton PR, Abrams KR, Jones DR. How vague is vague? a simulation study of the impact of the use of vague prior distributions in mcmc using winbugs. Stat Med. 2005; 24:2401–028. Gelman A, Carlin JB, Stern HS, Rubin DB. Bayesian Data Analysis. London: Chapman and Hall/CRC; 2003. Gelman A. Prior distributions for variance parameters in hierarchical models. Bayesian Anal. 2006; 1:515–34. Spiegelhalter DJ. Bayesian methods for cluster randomized trials with continuous responses. Stat Med. 2001; 20:435–52. Spiegelhalter DJ, Abrams KR, Myles J. Bayesian Approaches to Clinical Trials and Health-care Evaluation. London: Wiley; 2004. Turner RM, Davey J, Clarke MJ, Thompson SG, Higgins JPT. Predicting the extent of heterogeneity in meta-analysis, using empirical data from the cochrane database of systematic reviews. Int J Epidemiol. 2012; 41:818–27. Rhodes KM, Turner RM, Higgins JPT. Predictive distributions were developed for the extent of heterogeneity in meta-analyses of continuous outcome data. J Clin Epidemiol. 2014; 68:52–60. 
Stan Modeling Language User's Guide and Reference Manual. http://mc-stan.org/. Accessed 13 July 2017. Higgins JPT, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002; 21:1539–58. Goto M, Inoue T, Tsuchiya Y. On estimation of parameters in power-normal distribution. Bull Inf Cybern. 1984; 21:41–53. Johnson NL, Kotz S, Balakrishnan N. Continuous Univariate Distributions. New York: John Wiley & Sons; 1994. Maruo K, Shirahata S, Goto M. Underlying assumptions of the power-normal distribution. Behaviormetrika. 2011; 38:85–95. Maruo K, Goto M. Percentile estimation based on the power-normal distribution. Comput Stat. 2013; 28:341–56. Bartlett MS. The use of transformations. Biometrics. 1947; 3:580–619. Kulinskaya E, Morgenthaler S, Staudte RG. Meta Analysis: A Guide to Calibrating and Combining Statistical Evidence. Chichester: John Wiley & Sons; 2008. Box GEP, Hill WJ. Correcting inhomogeneity of variance with power transformation weighting. Technometrics. 1974; 16:385–9. Atkinson A, Pericchi LR, Smith RL. Grouped likelihood for the shifted power transformation. J R Stat Soc Ser B. 1991; 52:473–82. Cheng RCH, Traylor L. Non-regular maximum likelihood problems. J R Stat Soc Ser B. 1995; 57:3–44. Azzalini A. A class of distributions which includes the normal ones. Scand J Stat. 1985; 12:171–8. Azzalini A. The Skew-Normal and Related Families. Cambridge: Cambridge University Press; 2013. The methods used to generate the simulated datasets are described in the Results section. The data used in the examples are included in published articles. Japan-Asia Data Science, Development, Astellas Pharma Inc., 2-5-1, Nihonbashi-Honcho, Chuo-ku, Tokyo, 103-8411, Japan Yusuke Yamaguchi Department of Clinical Epidemiology, National Center of Neurology and Psychiatry, 4-1-1, Ogawahigashi-cho, Kodaira, Tokyo, 187-8551, Japan Kazushi Maruo National Perinatal Epidemiology Unit, University of Oxford, Oxford, OX1 2JD, UK Christopher Partlett Research Institute for Primary Care and Health Sciences, Keele University, Staffordshire, ST5 5BG, UK Richard D. Riley YY devised the study and wrote the first draft. All authors contributed to the interpretation of the results and approved the final version. Correspondence to Yusuke Yamaguchi. R code. Our Bayesian analyses in the simulation study and the application were implemented by using the R software. We provide R functions nremeta and bcremeta for the Bayesian estimations of the normal random effects model and the proposed model respectively. (PDF 45 kb) Results of the simulation study. We provide the following details related to designs and results of the simulation study: (i) density functions of the random effects distribution, (ii) full tables of the results, (iii) summary statistics of estimates for the transformation and the shift parameter. (PDF 2467 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Yamaguchi, Y., Maruo, K., Partlett, C. et al. A random effects meta-analysis model with Box-Cox transformation. 
BMC Med Res Methodol 17, 109 (2017). https://doi.org/10.1186/s12874-017-0376-7
Keywords: Random effects model; Skewed data
Ideal shape for a long, skinny reaction mass for LEO to cis-lunar and beyond? (a "space rail gun")
@Ingolifs' answer got me thinking further. What you're asking about is a mass driver space station in low earth orbit that a payload or ship launched from earth can dock with, and then be propelled into deep space. The delta-V needed to go from LEO to an escape trajectory is about 3 km/s, and to go from there to a Hohmann transfer to the outer reaches of the solar system requires a further 5 or so km/s. I'm sure a mass driver could be made to provide such delta-Vs, though it would have to be rather long in order to not smush the payload with its massive acceleration. One big problem I see is Newton's third law. Due to recoil, the orbital mass driver will have the same momentum backwards as the craft will have going forwards. This means its orbit will be altered...
The railgun's orbit will of course be altered, but if the rail gun is long enough to keep the acceleration low enough for payloads like living humans, it will be subjected to some complex torques as the projectile accelerates along it and pushes it backwards. These will set this long structure rotating, and may even bend it under these transverse loads and tidal effects, unless it is made strong and heavy (and expensive) enough to withstand this. Tumbling of something very long, heavy, and solid in LEO also means drag, reentry, and danger to folks on the ground, so you'd like to get it re-oriented fairly nose-on, to minimize drag, as quickly as possible.
Is the best shape a straight line, or should it be somewhat curved in order to track what the spacecraft's trajectory would look like as it accelerates from circular to elliptical to hyperbolic tangents? I would suppose that the rail-gun-as-reaction-mass would have to be much heavier than the projectile, but it isn't realistic to treat it as infinitely heavy. With some suitable assumptions about length, mass and target (Moon, or Mars), what would be the ideal shape for this object in LEO?
note: I've added the math and physics tags to indicate that I'm looking for a quantitative, reasoned answer, not just something like "it happens so fast that it wouldn't matter" type of hand-waving. Thanks!
orbital-mechanics physics mathematics rail-gun
Comments:
"My physics may be a bit rusty so I'm not sure if this is correct: if you assume you're trying to reach 8 km/s and the railgun accelerates at a reasonable 10 g, it will take 81 seconds of acceleration to reach that speed. The distance travelled in that time is just over 32 km. That's how long the railgun would need to be. If you allow a much higher acceleration (maybe your probe is made of solid tungsten or something), you can have a much shorter length; if you're transporting humans, it would have to be much longer." – Ingolifs Nov 8 '18 at 7:41
"@Ingolifs maybe 320 km? $x=\frac{1}{2} a t^2=\frac{1}{2} (10 \times 9.8)\, 81^2$" – uhoh Nov 8 '18 at 7:46
"Oops, yes, missed a zero." – Ingolifs Nov 8 '18 at 7:59
"A 32-km long gun only 3 times as heavy as the payload you're accelerating? That seems unlikely. And if the gun really is that light, 3 km/s imparted to the payload means the rail gun accelerates by 1 km/s in the opposite direction." – Hobbes Nov 8 '18 at 8:34
Answer (by Steve Linton):
I've answered some aspects of this, but considering a 1 km railgun, in my answer to the "parent" question.
A much longer railgun doesn't make a lot of difference to the calculations there, except for the acceleration and power. The issues with reaction de-orbiting the railgun are the same. Concerning the shape of a longer railgun: let us consider a 10 g, 4 km/s delta-V launcher (basically a longer version of the one in my other answer), which would need to be about 80 km long. Its shape will be some sort of blend between the original circular orbit and the hyperbolic orbit of the probe when it is launched. I can't work out the actual curve, but the 80 km long launcher will deviate from straight by a few kilometers. What is sadly true is that that shape will be quite unstable due to tidal forces. It is long and thin, pointing broadly but not exactly along the orbit, so it will experience significant tidal forces pulling it towards a radial orientation. It's also big enough that lunar tides may be a consideration.
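A rough numerical check of the figures above, plus the recoil question raised in the comments (a sketch assuming constant acceleration, a rigid straight rail, and an arbitrary 100:1 launcher-to-payload mass ratio; none of these specific masses come from the question or the answer):

```python
G0 = 9.8  # m/s^2

def rail_length_and_time(delta_v: float, accel_g: float):
    """Rail length (m) and firing time (s) to reach delta_v (m/s) at a constant accel_g (in g)."""
    a = accel_g * G0
    return delta_v**2 / (2 * a), delta_v / a  # from v^2 = 2 a x and v = a t

def recoil_delta_v(payload_kg: float, payload_dv: float, launcher_kg: float) -> float:
    """Launcher recoil delta-v (m/s) from conservation of momentum, M * dV = m * dv."""
    return payload_kg * payload_dv / launcher_kg

for dv in (8000.0, 4000.0):
    length, t = rail_length_and_time(dv, 10.0)
    print(f"{dv/1000:.0f} km/s at 10 g: {t:.0f} s over {length/1000:.0f} km of rail")
# -> 8 km/s needs ~327 km of rail (the "320 km, not 32 km" correction in the comments);
#    4 km/s needs ~82 km, close to the ~80 km quoted above.

print(f"recoil: {recoil_delta_v(10_000.0, 4000.0, 1_000_000.0):.0f} m/s retrograde")  # -> 40 m/s
```

At LEO, a retrograde kick of a few tens of m/s lowers the opposite-side altitude by on the order of a hundred kilometres (very roughly 3 to 4 km per m/s), which is why reaction de-orbiting and re-boost keep coming up in these answers.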
XOR of one-way function
Considering the top answer to the question "If xor-ing a one way function with different input, is it still a one way function?"… (Here $h(x_1,x_2)$ denotes $f(x_1)\oplus f(x_2)$.)
The function is no longer one-way. We build a counterexample in the following way. Assume $g$ is a one-way function that preserves size, and define $f$ on input $w=bx_1x_2$ in the following way,
$$f(bx_1x_2) = \begin{cases} g(x_1)\,x_2 & b=0 \\ x_1\, g(x_2) & b=1 \end{cases}$$
(assuming $b\in\{0,1\}$ and $|x_1|=|x_2|$). It is easy to see that $f$ is also one-way: to invert it, you need to either invert $g$ on the first half or invert $g$ on the second half. Now we show how to invert $h$. Assume you are given $h(u,v)=Z$; we write it as $h(u,v)= z_1z_2$ with $|z_1|=|z_2|=n$. Then a possible preimage of $Z$ is
$$u=0 \,0^n \,\langle g(0^n)\oplus z_2\rangle$$
$$v=1 \, \langle g(0^n)\oplus z_1\rangle \, 0^n$$
because $f(u) = g(0^n)\, \langle g(0^n)\oplus z_2\rangle$ and $f(v) = \langle g(0^n)\oplus z_1\rangle \, g(0^n)$, and thus their XOR gives exactly $z_1\,z_2$ as required.
Wouldn't this counterexample imply that we've inverted $f$? Consider the reduction where we take in $f(x_1)$ and $f(x_2)$: then we could compute $f(x_1) \oplus f(x_2)$, invert this to $x_1x_2$, and then we have inverted $f$ as well. Is the quoted answer correct? If so, why, given my considerations outlined above?
cryptography one-way-functions xor – vrume21
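A toy instantiation of the quoted construction (a sketch, not part of the original post: truncated SHA-256 stands in for the size-preserving one-way function $g$, and the block length N is an arbitrary choice) shows that a preimage of any $Z$ under $h$ can be written down without ever inverting $g$:

```python
import hashlib
import os

N = 16  # length of x1 and x2 in bytes (toy choice)

def g(x: bytes) -> bytes:
    # Stand-in for a size-preserving one-way function (illustration only).
    return hashlib.sha256(x).digest()[:N]

def f(w: bytes) -> bytes:
    # f(b x1 x2): apply g to the half selected by the leading byte b.
    b, x1, x2 = w[0], w[1:1 + N], w[1 + N:1 + 2 * N]
    return (g(x1) + x2) if b == 0 else (x1 + g(x2))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def h(u: bytes, v: bytes) -> bytes:
    return xor(f(u), f(v))

# Adversary: given Z = z1 z2, output (u, v) with h(u, v) = Z, without inverting g.
Z = os.urandom(2 * N)
z1, z2 = Z[:N], Z[N:]
g0 = g(bytes(N))                          # g(0^n)
u = bytes([0]) + bytes(N) + xor(g0, z2)   # u = 0 || 0^n || (g(0^n) xor z2)
v = bytes([1]) + xor(g0, z1) + bytes(N)   # v = 1 || (g(0^n) xor z1) || 0^n
assert h(u, v) == Z
print("found a preimage of Z under h without inverting g")
```

As the answer below spells out, this inverts $h$, not $f$: the pair $(u, v)$ found this way generally has nothing to do with whichever inputs originally produced $Z$.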
CommonCrawl
How does being out at work relate to discrimination and unemployment of gays and lesbians?
Karel Fric (ORCID: orcid.org/0000-0002-6002-7307)
Journal for Labour Market Research volume 53, Article number: 14 (2019)
This article empirically investigates the relationships in the workplace between homonegativity, the disclosure of sexual orientation, perceived discrimination, the reporting of discriminatory incidents and an individual's employment status. I utilize information reported by gays and lesbians in the EU lesbian, gay, bisexual and transgender (LGBT) survey. The data was analysed using generalised structural equation modelling and the logistic regression model. The results indicate that gays and lesbians conceal their sexual orientation more in hostile workplaces. A higher level of concealment is linked with an increased perception of discrimination and with a lower likelihood of reporting discriminatory incidents. Perceived discrimination and (contrary to what was hypothesised) also concealment of sexual orientation positively relate to the probability of being unemployed. This implies a vicious circle in which hostile attitudes force gay employees to conceal their sexuality, which in turn limits their ability to confront discriminatory behaviour.
There is extensive evidence that gays and lesbians face discrimination in the workplace (Eurofound 2016; Valfort 2017). A research review by Fric (2017) indicates that gay people [Footnote 1] face barriers when accessing employment. Recent surveys among gays and lesbians in Europe show that a considerable share of respondents experienced discrimination or harassment in the workplace (Eurofound 2016). However, sexual orientation discrimination is rarely reported and scarcely results in court cases in Europe (van Balen et al. 2011). The lack of official cases may lead to the conclusion that discrimination against sexual minorities is not a common problem in the labour market. Such an interpretation has implications for policies on this issue. It is desirable to understand the relationships between (perceived) discrimination, employment status and the reporting of discrimination among gay people. Is perceived discrimination related to employment status? How does the perception of being discriminated against at work relate to the reporting of discrimination incidents? How do disclosure of sexual orientation and sexual prejudice in the workplace influence these outcomes? In this article I try to answer these questions. I formulate several hypotheses that I empirically test using the European Union Lesbian, Gay, Bisexual and Transgender (EU LGBT) survey [Footnote 2] data. I applied a structural equation model and verified the results with a logistic regression model. I am not aware of any study that would empirically test the relationship between my concepts of interest. Previous research has concentrated on the antecedents of disclosure of sexual orientation in the workplace (such as company policies, extent of disclosure in other contexts) and the effects of disclosure (for example on employees' commitment, job satisfaction or stress levels). I identified only limited research linking the extent of disclosure of sexual orientation in the workplace to perceived discrimination. For example, Ragins and Cornwell (2001) found that gay employees were more likely to conceal their sexual orientation at work (and to have turnover intentions) if they perceived greater workplace discrimination than those who reported less discrimination.
According to Ragins et al. (2007), perceptions of past discrimination positively predicted fears about disclosure of sexual orientation. Surprisingly, perceptions of past discrimination were positively related to the extent of disclosure of sexual orientation in current positions. Schneider (1986) observed that prior job loss due to disclosure of sexual identity impacted subsequent decisions and concerns about revealing one's sexuality to co-workers. To test the relationships between (perceived) discrimination, employment status and reporting discrimination, I formulate a model which also encompasses the concepts of disclosure of sexual orientation and homonegativity in the workplace. The model also takes into account contextual factors and a subject's demographic characteristics which are presumed to affect the observed outcomes. In this section I describe the relevant concepts and how they relate to each other. Based on this I formulate the hypotheses. My model is schematically depicted in Fig. 1. Bold lines mark the hypothesised relationships. Non-bold lines stand for control variables. The single-headed arrows indicate causality (from the antecedent to the consequent) and double-headed arrows mutual relationship. The model of causalities related to sexual orientation discrimination in the workplace and the path model for the structural equation model Employment status in this article refers to being (un)employed. Sexual orientation discrimination is defined as a less favourable treatment in the labour market because of one's sexual orientation. This definition excludes so-called positive discrimination and is more restrictive than the definition by Arrow (1973), according to whom labour market discrimination exists when two equally qualified individuals are treated differently in the labour market on the basis of a personal characteristic unrelated to productivity. The term homonegativity is used as a synonym for sexual prejudiceFootnote 3 against lesbians and gays. Even though homonegativity and discrimination in the workplace are conceptually closely related, I treat them as two distinct concepts. Discrimination refers to discriminatory incidents or negative conduct perceived by the subjects that were targeted at themselves. Homonegativity relates to a subjects' perception of attitudes, climate and conduct towards gay people in their workplace in general (i.e. not directly targeted at the subjects themselves). Concealment at work ↔ discrimination at work Even though people can reportedly estimate one's sexual orientation based on body movements (Johnson et al. 2007), facial cues (Freeman et al. 2010; Brewer and Lyons 2017) or voice (Fasoli et al. 2017), sexual orientation is traditionally viewed as a non-observable type of diversity (Milliken and Martins 1996). Direct discrimination on basis of sexual orientation requires knowledge or suspicion that an employee is gay. Gay people may not experience direct discrimination if no one knows or suspects that they are gay, even though they may experience indirect discrimination through the presence of a hostile environment (Ragins and Cornwell 2001). The model by Chung (2001) postulates that identity management is one of strategies that gay employees can use to cope with potential discrimination. The level of concealment (disclosure) is assumed to affect the extent of discriminatory behaviour. But there is also an opposite causality. 
While deciding on how to manage information related to their sexual orientation, gay people assess the benefits and costs of coming out (Rostosky and Riggle 2002). Because disclosure of one's sexual orientation can increase the risk of social rejection, prejudice and discrimination (Chaudoir and Fisher 2010), gay employees are more likely to conceal when they fear discrimination and stigma (see stigma theory by Ragins and Cornwell 2001). Hypothesis 1 The concealment of sexual orientation in the workplace will be positively related to perceived discrimination. To correctly estimate the relationship between the concealment of sexual orientation and perceived discrimination in the workplace, homonegativity needs to be taken into account. Homonegativity at work ↔ discrimination at work Homosexuality is still associated with stigma in Western societies. Theory and research have consistently indicated that stigmas evoke negative attributions about the target and that they lead to prejudice (Ragins et al. 2007). Prejudice often predicts discrimination toward persons with stigmatized identities (Pichler et al. 2010) even though other factors moderate this relationship (Herek 2000). For example, prejudiced individuals may be guarded about expressing overt, formal forms of discrimination but they may still exhibit—perhaps unintentionally—bias in more subtle ways. An opposite causality may also take place. Presence of discrimination may affect the level of negativity against lesbians and gays. Following the justification-suppression model of Crandall and Eshleman (2003), expression of prejudice is restrained by individual's beliefs, values and social norms. Tolerance of anti-gay discriminatory behaviour in the workplace may be seen as legitimization of prejudice against gay people and exacerbate its level. The homonegativity in the workplace will be positively related to perceived discrimination. Homonegativity at work ↔ Concealment at work The model of managing concealable stigmas at work views anticipated acceptance of the concealable stigma as the primary predictor of revealing or concealing the stigma. The acceptance refers to interpersonal/organisational climate, culture, policies, procedures and representation of LGBT in the organisation. Gay people are expected to conceal (reveal) their sexual orientation more if they perceive the environment as more rejecting (accepting). When a gay person is not certain to what extent they should disclose sexual orientation, they may use information seeking behaviours—so-called signalling (Jones and King 2013). In an opposite direction, disclosure of sexual orientation in the workplace is expected to influence the attitudes towards gay people. (Previous) exposure to homosexuality or knowledge of a gay person is related to individual's attitudes towards homosexuality—the less people are in a (conscious) contact with gays and lesbians, the more hostile attitudes they have toward them (see for example Herek and Capitanio 1996; Estrada and Weiss 1999; Basow and Johnson 2000; Cotten-Huston and Waite 2000; Levina et al. 2000; Horvath and Ryan 2003). The concealment of sexual orientation in the workplace will be positively related to the homonegativity in the workplace. In the model I link the concepts of concealment, homonegativity and perceived discrimination at work to reporting discriminatory incidents and to the probability of being unemployed. 
Reporting covers different actions such as confiding in a trusted person, confronting the perpetrator(s), engaging management, or taking legal action. According to Stangor et al. (2003), discriminatory incidents are reported only if they are suspected and affirmed as such by the victim. This is more likely with certain types of behaviour or perpetrators and it depends on a victim's cognitive, affective and motivational processes. When deciding whether to report/confront discrimination publicly, the victims weigh the costs and benefits of reporting. Most people who experience discrimination do not file a formal claim (Bell et al. 2013). The reluctance to report discrimination (particularly to authorities or legal institutions) partly stems from the perception that the costs of reporting discrimination are too severe (fear of retaliation or being perceived as a troublemaker) (Major and Kaiser 2008). Gay people face an additional cost if they (partly) conceal their sexual orientation. Publicly reporting discrimination could involve spreading awareness about their sexual orientation. This is particularly undesirable in an environment hostile towards gays and lesbians. Concealment of sexual orientation by gay people will be negatively related to reporting discrimination. Being unemployed Research suggests that gays (and depending on a study also lesbians) have different unemployment probabilities than their straight counterparts. This difference is usually explained by labour demand and labour supply factors. I concentrate on factors related to (the experience of) discrimination. For a more thorough theoretical overview see Fric (2017). Bell et al. (2013) postulate that stigmatised individuals can be disadvantaged in access to employment or in treatment (compensation, promotion, harassment, etc.). A specific case of differential treatment is discriminatory job loss which is an involuntary separation due to inequitable treatment based on personal factors that are irrelevant to performance. Discrimination may have feedback effects on the behaviour of the victim. Neoclassical labour supply theory extended with the concept of cognitive dissonance suggests that discriminated workers may cut back labour supply or withdraw from the labour market altogether (Goldsmith et al. 2004). This is supported by the empirical evidence (Habtegiorgis and Paradies 2013). Discrimination may also negatively affect the employee's motivation, self-esteem and self-efficacy which play an important role in access to employment (Kanfer et al. 2001). Discrimination can also negatively impact an employee's labour market prospects. Victims are less likely to receive good references and stating discrimination as a reason for leaving the previous employer can be detrimental for employment chances. The resulting prolonged unemployment makes it even more difficult to become re-employed as lengthy unemployment is a signal to employers that something is "wrong" with the applicant (Goffman 2009). Because discrimination may lead to job separation, longer expected unemployment duration and decreased labour supply, I hypothesise that: Perceived discrimination will be positively related to the probability of being unemployed. Given that homosexuality is a non-observable stigma and that discrimination is more likely to occur when gay people disclose their sexual orientation, I assume that ceteris paribus: The concealment of sexual orientation in the workplace will be negatively related to the probability of being unemployed. 
Hypotheses 1 to 4 partly replicate previous research and they make it possible to control for important contextual factors within which the relationships tested by hypotheses 5 and 6 take place. Testing hypotheses 5 and 6 represents the main contribution of this paper. Their importance goes beyond academic research: because unemployment can be detrimental to an individual's socioeconomic status, a potential significant relationship between unemployment and perceived workplace discrimination or concealment of sexual orientation could have policy implications.
Other predictors
The relationships in the model may be influenced by contextual factors and subjects' demographic characteristics. To account for such effects, I control for the unemployment rate, the presence of anti-discriminatory legislation, the perception of the prevalence of general discrimination against lesbians/gays in a given country (which is a distinct concept from the perception of discrimination in the workplace against oneself), and subjects' education and age. It is important to control for sex because of the different challenges that gays and lesbians face in the labour market. While there is relatively consistent evidence that gays are disadvantaged compared to heterosexual men, the position of lesbians compared to heterosexual women seems to be more questionable (Drydakis 2014; Fric 2017). The reason may be that public attitudes towards gays are less positive than towards lesbians, especially among heterosexual men (see for example the meta-analysis by Kite and Whitley 1996). Gays are also commonly stereotyped as feminine or effeminate, while lesbians are often believed to be overly masculine (Tilcsik 2011). Given these different perceptions, the behaviour of employers, colleagues or customers toward gays and lesbians may not be uniform. To account for these differences I formulate separate structural equation models (SEM) for gays and lesbians, and in the logistic regression models I introduce interaction terms with sex.
I used data from the EU LGBT survey, which was conducted by the European Union Agency for Fundamental Rights in 27 European Union Member States and Croatia between April and July 2012. The total sample of the survey is 93,079 respondents, of whom 59,490 identified themselves as gay and 16,170 as lesbian. The EU LGBT survey was not carried out as a random probability survey because of the lack of a sampling frame, of target population characteristics and of a consensus on the operational definition of LGBT people. The participants were self-selected and had to "opt in" to the survey. This may have excluded respondents who are less motivated to take part in the survey. The survey was mostly promoted through online media and LGBT organisations, which could affect the sample composition: groups with higher access to and use of the internet (young, more educated, higher-income and male respondents) may be overrepresented (FRA 2013). One of the main advantages of the EU LGBT survey is that it includes measures of sexual orientation. This is often not the case in other large-scale surveys or censuses. As a self-administered, online survey guaranteeing full anonymity to its respondents, it decreases the risk of respondents concealing information about their sexual orientation because of social desirability bias (Robertson et al. 2017). The survey also provides information on respondents' experiences in the workplace and the extent to which they hide (disclose) their sexual orientation.
This information is not matched by surveys that are representative for the whole population and that (in some waves) include measures of sexual orientation. For the purpose of my research I kept only respondents who are gays or lesbians and who are not transgender. The reason for exclusion of bisexual and transgender respondents is that they may face specific issues that are not covered by this study. Laumann et al. (1994) define homosexuality according to three dimensions—sexual behaviour, desire and self-identification. Because self-identification is arguably the most important in the workplace context (from all dimensions this one is most probable to be observed by the employer and colleagues), I identified gay people according to this dimension. In my analysis I only included respondents who had a paid job in the 5 years preceding the survey. This threshold was chosen because some variables used for operationalisation of my theoretical concepts relate to respondents' behaviour and experiences in employment during the 5 years preceding the survey. After checking for the consistency and completeness of respondents' answers, I dropped 15,259 (20.2%) observations which were incomplete or inconsistent. The final sample used for the analysis consisted of 48,161 gays and 12,240 lesbians. Table 1 provides descriptive statistics of the sample. Table 1 Descriptive statistics of the survey sample used in the analysis, split by sex Based on the original data I calculated several new variables. The overview of all variables used in the analysis is provided in Table 2. I briefly discuss the most important variables—reporting, unemployed, concealment, homonegativity and perceived discrimination. Table 2 Overview of variables used in the analysis, sorted alphabetically The dummy variable reporting captures whether the most recent discrimination incident at work was reported by the respondent or someone else. It obtains non-missing values only for respondents who felt personally discriminated in the 12 months preceding the survey and for whom the most recent discrimination incident happened at work (in total 6843 observations). For all other observations reporting was coded as missing because no information was available on whether a potential discrimination incident at work was reported or not.Footnote 4 More detailed analysis into who reported the discriminatory incidents, to whom and how, was not possible because the survey does not provide such information. The dummy variable unemployed captures respondents' employment status. Respondents are seen as unemployed if they had a job anytime during the 5 years preceding the survey and reported their current status as 'unemployed'. My definition of unemployment is broader than the official definition by the International Labour Organization (1982). I treat all respondents as unemployed if they reported so, disregarding whether they are available or looking for a job. This is done so as to not exclude those who became discouraged after experiencing workplace discrimination and dropped out of the labour force (Leppel 2009). I replicated the analysis and excluded unemployed respondents who were not looking for a job in the past 12 months and I came to the same conclusions. Observations for those whose current employment status was student, retired person, person in unpaid work or other and observations with inconsistencies were assigned a missing value. 
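As an illustration of this coding rule, the following short Python sketch shows one way the unemployed dummy could be derived; the column names and category labels are placeholders rather than the survey's actual variable names.

```python
import numpy as np
import pandas as pd

# Hedged sketch of the coding rule for the dummy variable `unemployed`:
# respondents count as unemployed (1) or not (0) only if they had a paid job
# in the 5 years before the survey; students, retired people, those in unpaid
# work and inconsistent records are set to missing.
df = pd.DataFrame({
    "current_status": ["unemployed", "employed", "student", "retired", "unemployed"],
    "paid_job_last_5_years": [1, 1, 1, 0, 0],
})

df["unemployed"] = np.where(
    df["paid_job_last_5_years"] == 1,
    (df["current_status"] == "unemployed").astype(float),
    np.nan,                      # no paid job in the last 5 years -> out of scope
)
df.loc[df["current_status"].isin(["student", "retired", "unpaid", "other"]),
       "unemployed"] = np.nan    # non-labour-force statuses coded as missing
print(df)
```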
The variables concealment, homonegativity and perceived discrimination are individual-level indices capturing the concealment of sexual orientation, homonegativity and perceived discrimination in the workplace as reported by the respondents. They are used in the logistic regression models but not in the structural equation models (see "Method" section). Regarding homonegativity, the EU LGBT Survey didn't include any questions that directly captured workplace attitudes toward gay people. For this reason, I used a proxy measure based on the respondent's report of (1) witnessing negative comments or conduct against colleague(s) perceived to be LGBT and (2) experiencing a generally negative attitude at work against LGBT people. I assume that this proxy measure is strongly positively related to the concept of homonegativity. Figures 2 and 3 summarise the relative incidence of unemployment in gays and lesbians as a function of the indices of concealment, homonegativity and perceived discrimination.Footnote 5 There appears to be a U-shaped relationship between concealment and respondents' unemployment rate: respondents who are very overt or very closed about their sexuality at work seem to have higher unemployment rates than those who engage in more elaborate identity management. Both perceived discrimination and (especially) homonegativity seem to have a positive linear relationship with the unemployment rate. Unemployment rate of gays (in %) depending on the value of the concealment, perceived discrimination and homonegativity indices Unemployment rate of lesbians (in %) depending on the value of the concealment, perceived discrimination and homonegativity indices In the SEM, the core concepts of the model (homonegativity, concealment and discrimination at work) are latent variables operationalised using multiple variables. Figure 1 shows in dashed rectangles which variables were used to operationalise each concept. More details on the calculation of the concepts are provided in the "Method" section. The model described in the "Theoretical background" section assumes several co-dependencies between the theorised concepts (see the path model in Fig. 1). Given the complexity of the model, the SEM technique was used for the estimation. The concepts of homonegativity, concealment and discrimination at work are unobservable and are treated as latent constructs. In the path model they are shown in ovals, and the double-headed arrows between them symbolise that they are mutually correlated. They are grounded by manifest variables (shown in dashed rectangles), which are observable. SEM assumes continuous and multivariate normally distributed data in the population (Finney and DiStefano 2006). Using the Shapiro–Wilk test, I found that the data violate the normality assumption. Moreover, the variables discrexp, openfear, reporting and unemployed are dichotomous variables with a Bernoulli distribution, and the variables age, workopen, workhide, negcondct, witcondct, expnegatt, education, colgknow and colgopen are categorical variables. This could result in incorrect standard errors of the model parameter estimates. For this reason I apply the generalised structural equation model, which doesn't assume a multivariate normal distribution and can handle non-continuous data. I specify a measurement model, which relates responses to latent variables (Skrondal and Rabe-Hesketh 2005).
Following Skrondal and Rabe-Hesketh (2005), I formulate the measurement model as $$x_{j}^{*} = \nu + {\rm B}z + \varLambda \xi + \delta_{j},$$ (1a) for the latent response variables unemployed and reporting. For all other latent response variables, the measurement model is formulated as $$x_{j}^{*} = \nu + \varLambda \xi + \delta_{j},$$ (1b) where \(x_{j}^{*}\) is a vector of latent continuous responses, \(\nu\) a vector of intercepts, \(\varLambda\) a factor loading matrix, \(\xi\) a vector of latent variables and \(\delta_{j}\) a vector of unique factors for units indexed by j. \({\rm B}\) is a regression parameter matrix for the regression of \(x_{j}^{*}\) on a vector of observed explanatory variables \(z\) (the demographic and country-level control variables).Footnote 6 The observed categorical response \(x_{ij}\) is related to the latent continuous response \(x_{ij}^{*}\) via a threshold model. For ordinal observed responses I assume that $$x_{ij} = \begin{cases} 0 & \text{if } -\infty < x_{ij}^{*} \le k_{1i} \\ 1 & \text{if } k_{1i} < x_{ij}^{*} \le k_{2i} \\ \vdots & \\ S & \text{if } k_{Si} < x_{ij}^{*} \le \infty \end{cases}$$ Dichotomous observed responses are a special case where S = 1. I use a generalised latent variable model with a measurement model of the form $$g\left(\mu_{j}\right) = \nu + \varLambda \xi + {\rm B}z$$ for the variables unemployed and reporting, while for all other variables it has the form $$g\left(\mu_{j}\right) = \nu + \varLambda \xi$$ where \(g\left(\cdot\right)\) is a vector of link functions and \(\mu_{j}\) a vector of conditional means of the responses given the quantities defined in Eqs. (1a) and (1b). Because I use dichotomous and categorical variables, I select the logit as the link function: $$logit\left(\mu_{j}\right) = ln\left(\frac{\Pr\left(\mu_{j}\right)}{1-\Pr\left(\mu_{j}\right)}\right) = \nu + \varLambda \xi + {\rm B}z$$ for the variables unemployed and reporting, and for all other variables $$logit\left(\mu_{j}\right) = ln\left(\frac{\Pr\left(\mu_{j}\right)}{1-\Pr\left(\mu_{j}\right)}\right) = \nu + \varLambda \xi.$$ To fit the model, I used the gsem procedure in Stata software.Footnote 7 Because the maximum likelihood estimation method formally assumes conditional normality, the option robust was selected during the calculation. The reported results are therefore robust to heteroscedasticity of the errors (StataCorp LP 2013). The gsem procedure deletes missing values equation-wise. This means that a given observation will not be used in equations containing a variable for which this observation has a missing value (and in products of such equations) (StataCorp LP 2013). To fit the specified model I used the alternative-starting-values procedure described in StataCorp LP (2013). This entailed first fitting a simplified model and then using its solution as starting values to fit a more complex model. I repeated this procedure until I was able to fit the original model.Footnote 8 Because of the differences between gays and lesbians (described in the "Theoretical background" section), I fitted two separate models, one for gays and another for lesbians. The current version of Stata doesn't support the calculation of goodness-of-fit statistics for the gsem model. For this reason I do not report goodness-of-fit statistics for my SEM throughout the paper.
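To make the threshold formulation concrete, the following minimal Python sketch simulates the ordinal measurement model with a logit link. The loading, thresholds and latent scores are illustrative values only, not the parameters estimated with gsem.

```python
import numpy as np

# Minimal sketch of the threshold (cumulative logit) measurement model: a
# latent continuous response x* = nu + lambda*xi + delta is cut into ordered
# categories 0..S by thresholds k_1 < ... < k_S.
rng = np.random.default_rng(0)

nu, lam = 0.0, 1.2               # intercept and factor loading (assumed values)
thresholds = [-1.0, 0.5, 1.5]    # k_1, k_2, k_3 -> four observed categories

def observed_category(xi):
    """Map a latent factor score xi to an observed ordinal response."""
    delta = rng.logistic()             # logistic unique factor -> logit link
    x_star = nu + lam * xi + delta     # latent continuous response x*
    return int(np.searchsorted(thresholds, x_star))   # category 0..S

xi_scores = rng.normal(size=5)         # illustrative latent scores
print([observed_category(xi) for xi in xi_scores])
```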
To check the validity of the results from the SEM with regard to hypotheses 4, 5 and 6, I fitted six logistic regression models (LRM) specified as follows: $$logit\left(y\right) = ln\left(\frac{\Pr\left(y\right)}{1-\Pr\left(y\right)}\right) = \alpha + {\rm B}x_{k}$$ where y refers to the dependent variable, α to the intercept, \(x_{k}\) to the vector of k explanatory variables and \({\rm B}\) to the regression parameter matrix. I specified three models for each of the dependent variables unemployed and reporting. The models include a base model, a model with country dummy variables and a model with interactions with sex. Potential differences in the results between the SEM and the LRM could be caused by the following factors: the SEM estimates the whole model as shown in Fig. 1 while the LRM estimates separate models for the probability of unemployment/reporting discrimination; workplace homonegativity, perceived discrimination and concealment of sexual orientation are calculated differently in the two methods (as latent variables in the SEM and as indices in the LRM); the LRM doesn't assume a mutual relationship between workplace homonegativity, perceived discrimination and concealment of sexual orientation while the SEM does; and incorrect specification of the model(s). The results of the SEM and LRM were similar unless stated otherwise. The outcomes of the SEM are illustrated in Fig. 4. The full output of the SEM is reported in table A1 in the annex and the outcomes of the LRM in table A2. Summary of results of SEM analysis. Estimates for gays are shown in black font and estimates for lesbians are shown in grey bold font. The star sign * means that the coefficient is statistically significant at 5%, ** at 1% and *** at 0.1%. r refers to the correlation coefficient, β to odds ratios of logistic regression for observed independent variables (shown in rectangles) and λ to odds ratios of logistic regression for latent exogenous variables (shown in ovals) with mean 0 and standard deviation s. The reference category for the variable education is 'Primary education or lower' and for the variable age it is '18–29 years old'. Consistent with Hypothesis 1, there was a weak positive (and significant) correlation between the concealment of sexual orientation and perceived discrimination in the workplace for both lesbians and gays. In other words, subjects who are less open about their homosexuality more often report that they feel discriminated against. This relationship is also mediated by homonegativity: perceived discrimination is strongly positively correlated with homonegativity (as predicted by hypothesis 2) and homonegativity has a moderately strong positive correlation with concealment (confirming hypothesis 3). The latter is consistent with the model of managing concealable stigmas at work by Jones and King (2013), according to which lesbians and gays conceal their sexuality more in hostile environments. Consistent with Hypothesis 4, a discriminatory incident is less likely to be reported by subjects who are less open about their sexuality. The LRM shows a weakly statistically significant effect of sex, whereby the level of concealment has a more pronounced negative effect on lesbians' readiness to report discrimination than on gays'. Reporting is also positively associated with perceived discrimination and negatively with homonegativity in the workplace (although the latter association is not significant for lesbians). The findings regarding the contextual variables are less consistent across the sexes.
The presence of anti-discriminatory legislation and institutions is negatively related to gays' probability of reporting a discriminatory incident, while a positive effect is found for lesbians (though it lacks statistical significance in the SEM). The LRM confirms that the difference between lesbians and gays is statistically significant. The finding for gays is remarkable: discriminatory incidents are less likely to be reported in countries with more extensive anti-discrimination legislation and institutions. This could indicate that anti-discrimination legislation and institutions on their own do not increase readiness to report discrimination. An alternative explanation could be that the nature of discrimination differs between countries and that it is possibly less serious (and hence less likely to be reported) in countries with more extensive legal protection. The effect of public attitudes on discrimination reporting is consistent between the SEM and the LRM. Lesbians are more likely to report discrimination in countries with more negative public attitudes, but for gays this relationship is negative and weak. The difference between gays and lesbians is statistically significant (see the model with interactions in the LRM). In agreement with Hypothesis 5, lesbians and gays who perceived being discriminated against at work were statistically significantly more likely to be unemployed (in both the SEM and the LRM). The interaction term with sex was not significant, meaning that the perception of discrimination doesn't relate to unemployment probability differently for lesbians than for gays. I will discuss these outcomes in more detail in the following section. In contradiction with hypothesis 6, in the SEM the concealment of sexual orientation at work was, ceteris paribus, positively and significantly related to unemployment for both lesbians and gays. The LRM confirmed this finding only for lesbians. For gays, unemployment probability and concealment were not statistically significantly related. Another contradiction between the SEM and the LRM was found in the relationship between homonegativity and unemployment. In the SEM, the two variables were negatively related for gays and no statistically significant relationship was found for lesbians. In contrast, homonegativity had a positive association with unemployment in the LRM, which became statistically insignificant once I included interactions with sex. I observed a negative (U-shaped)Footnote 9 relationship between an individual's educational attainment (age) and unemployment probability. The country-level unemployment rate and the prevalence of discrimination in a country were both positively and statistically significantly related to a subject's probability of being unemployed. I have formulated a model of causalities between perceived discrimination, homonegativity and sexual orientation disclosure in the workplace and the reporting of discrimination and an individual's employment status. I have empirically tested the relationships between these concepts using survey data. The main contribution of my approach is that it allowed me to simultaneously estimate relationships between several concepts of interest. Because I used cross-sectional data with no time dimension, I could not establish the causal direction of the observed relationships (De Vaus 2001). Despite this shortcoming, my analysis provided a number of insights. My results indicate that perceived discrimination directed against gay people in the workplace relates to their employment status.
As discussed earlier, this could be due to discriminatory job loss or cognitive dissonance. Perceived discrimination can also have an indirect effect on employment status: unfavourable treatment (such as a lower promotion rate or less supportive mentors) can limit career development, especially if accumulated over time. This leads to a comparative disadvantage for individuals who experience discrimination when they apply for a job, even in the absence of direct discrimination in access to employment. The relationship between perceived discrimination and being unemployed is positive and significant for both gays and lesbians. For gays, this is in line with previous research, which showed that homosexuality forms a barrier in their access to employment (Fric 2017). However, the literature is inconclusive for lesbians, providing some evidence that, despite being discriminated against in accessing employment, lesbians are more likely to be employed than heterosexual women (Fric 2017). My findings suggest that workplace discrimination has qualitatively the same impact on lesbians as it has on gays when it comes to the link with unemployment. Hence, the favourable labour market outcomes of lesbians relative to straight women seem to be driven by labour supply factors rather than by (the lack of) discrimination. What role do the concealment of sexual orientation and homonegativity play in this story? The outcomes suggest that (ceteris paribus) the more subjects conceal their sexual orientation at work, the likelier they are to be unemployed. In the LRM, the convex shape of the relationship between concealment and unemployment (shown in Figs. 2 and 3) disappeared once I controlled for individual and contextual variables. These findings are unexpected in light of the theoretical predictions. The review by Fric (2017) indicates that job applicants whose homosexuality is disclosed are disadvantaged (compared to their heterosexual counterparts), especially if the employers are male. Because silence constitutes an implicit claim to be heterosexual (Button 2001), gay people who disclose their sexual orientation should experience a prolonged job search and a higher unemployment rate than those who conceal it. The observed sign of the relationship could be caused by other factors for which I didn't control in my analysis. For example, gays and lesbians who are less open about their sexuality may concentrate in sectors (or occupations) with a higher general unemployment rate. Or a certain personality trait (for example self-esteem) may relate both to higher concealment of sexual orientation and to a higher unemployment probability. The analysis gave an inconsistent answer to the question of how workplace homonegativity relates to unemployment probability. This could indicate that homonegativity affects unemployment mostly indirectly, via the incidence of discriminatory incidents and the concealment of sexual orientation. The analysis shows that reporting discriminatory incidents positively relates to the perception of discrimination. While this is not a ground-breaking finding, it is worthwhile to look at what roles the concealment of sexual orientation and homonegativity play: subjects who conceal their sexual orientation at work are somewhat more likely to perceive being discriminated against and less likely to report discrimination. This is consistent with the theoretical prediction that gay people face an additional cost of reporting discrimination if they (partly) conceal their sexual orientation.
Another finding that is consistent with the predictions is that discriminatory incidents are more likely to go unreported in workplaces with higher homonegativity. In the SEM this relationship was statistically significant only for gays, while in the LRM it was significant for both sexes (the interaction term with sex was not statistically significant). The negative relationship suggests that reporting a discriminatory incident has higher perceived costs in environments where homophobic attitudes and conduct are more prevalent. In these contexts, the victims (or witnesses) probably fear the repercussions of reporting discriminatory behaviour more. The findings indicate the existence of a vicious circle in the workplace, especially for closeted lesbians and gays who work in more hostile workplaces. Even if they fully conceal their sexual orientation, they seem to experience (indirect) discrimination due to a hostile work environment or because their colleagues and/or employer suspect that they are gay. Concealing their sexual orientation makes them more vulnerable to discrimination by limiting their possibilities of confronting discriminatory incidents: by reporting such incidents they risk having their sexual orientation publicly revealed. The data suggest that discriminatory incidents are less likely to be reported in hostile workplaces. Ironically, these are the workplaces where discrimination and harassment are most likely to occur. This can explain the relatively low incidence of official discrimination complaints on the grounds of sexual orientation, especially in countries with relatively more hostile public attitudes toward homosexuality, as found by Eurofound (2016). According to the EU LGBT survey, less than 13% of the most recent discriminatory incidents in the workplace were (officially) reported. The lack of official complaints is often interpreted as evidence that discrimination against gay people in the European labour market is not frequent. In the light of my findings, the lack of complaints is rather a sign that gay people do not dare to report discriminatory incidents because of pervasive homophobia and fears of their sexuality being publicly revealed. It is noteworthy that my data only capture discrimination encountered by the respondents. The level of potential discrimination (i.e. discrimination that would take place if the respondents' sexual orientation was always fully known) is probably considerably higher. Finally, direct and indirect labour market discrimination based on sexual orientation is forbidden in the European Union by the Employment Equality Directive (2000/78/EC). The legislation seems to only partly solve the problem of sexual orientation discrimination. Its effectiveness may be weakened by a low readiness to report discriminatory incidents. Under these circumstances, the policy response could target public attitudes towards homosexuality as a means of influencing workplace homonegativity (which is an important predictor of workplace discrimination). At the same time, policy should aim to create a safe workplace where lesbians and gays would be comfortable disclosing their sexual orientation and reporting potential discriminatory incidents. Directions for future research Several questions still remain to be answered. Firstly, more research is needed into the relationship between the disclosure of sexual orientation and employment status. What are the channels between (perceived) discrimination and unemployment?
Do gay people voluntarily choose to leave discriminatory workplaces (or even the labour market altogether), or does the job separation follow a discriminatory layoff initiated by the employer? Or is a higher unemployment probability a consequence of the comparative disadvantage that gay employees accumulate over time from small discriminatory incidents? Answers to these questions could help to formulate an adequate policy response aimed at decreasing discriminatory job separations of lesbian and gay employees. More research is also needed into the causalities regarding the reporting of discriminatory incidents based on sexual orientation. Would my findings vary if different forms of reporting discrimination were concerned (such as engaging the HR department or a trade union, or taking legal action)? And how do different forms of reporting affect a victim's workplace experiences and outcomes? Answers to these questions could help to design effective procedures for reporting and addressing sexual orientation discrimination. This study has a number of limitations. First of all, the measure of workplace discrimination is based on a subject's perception and as such it is conceptually different from actual discrimination. In real life it is often difficult to objectively determine whether discrimination took place or not, and a subject's perceptions may not necessarily reflect reality (Chung 2001). So far, research has made little use of self-reported data on discrimination due to concerns about its validity and bias relating to inflated discrimination reports. Over-reporting of discrimination on a large scale could bias the research results, and in my analysis it could lead to establishing a false relationship between perceived discrimination and other constructs (unemployment, etc.). However, the evidence does not support such concerns; on the contrary, minorities seem more likely to underreport their experiences of discrimination (Habtegiorgis and Paradies 2013). Despite these conceptual limitations, perceived discrimination is worth looking at: if an action is perceived as discriminatory, it may adversely impact employees' morale, work attitudes and job behaviours (Ragins and Cornwell 2001). Secondly, the measure of reporting discrimination is based on subjects' retrospective reports of how they handled the most recent discriminatory incident. This measure may be biased upwards because subjects tend to recall instances when they reported discrimination rather than instances when they failed to do so. This could result in an overestimation of the extent to which discrimination is reported. Besides that, it is difficult to assess the type and severity of the discriminatory events that subjects considered (Major and Kaiser 2008). The data also don't distinguish whether the incidents were reported by the subjects themselves or by someone else. The third limitation is connected to the use of online survey data. Because of social stigma and privacy concerns, gay people are to a large extent a hidden population. This results in the lack of a sampling frame. Online surveys partly address this issue as they are widely accessible and provide subjects with privacy and anonymity. For this reason, online surveys are frequently used to approach gays and lesbians. Their drawback is limited external validity (Göçmen and Yilmaz 2016). As discussed in the "Data" section, some groups of the gay and lesbian population may be underrepresented in my sample.
I used statistical controls to account for (what I identified as) relevant individual characteristics. However, it remains unclear to what extent I succeeded in controlling for the most relevant characteristics and whether the sample per se included sufficient information on the behaviour and experiences of the least visible strata of the target population. The findings of my study may not be generalizable to the whole population of gay people in the European Union. They are likely to be especially valid for the groups that are best represented in the EU LGBT survey, i.e. respondents who are young, more educated, male and possibly those who are more accepting of their sexual orientation and open about it. Finally, in my analysis I didn't control for variables such as region, occupation, the existence of company-level policies, etc. This was partly due to data unavailability and partly due to the complexity of the proposed model. Including these variables in the model could provide additional insight into the examined associations. For example, the existence of anti-discriminatory company policies could mediate the relationship between workplace homonegativity and the reporting of discriminatory incidents. Future research could address this shortcoming. I empirically tested how workplace homonegativity, concealment of sexual orientation and discrimination relate to an individual's employment status and the reporting of discriminatory incidents. The results supported the majority of my hypotheses. The outcomes support the assumption that hostility against gays and lesbians translates into discriminatory behaviour, which in turn can justify such prejudice. The results also support stigma theory's prediction that hostility and discrimination against lesbians and gays negatively impact their readiness to publicly disclose their sexual orientation. An opposite causality is also possible: the lack of (conscious) contact with gay people can increase prejudice and discriminatory behaviour against them. Concealment of sexual orientation seems to form an important barrier to reporting sexual orientation discrimination. The findings also indirectly support the prediction of the discriminatory job loss model by Bell et al. (2013) that discrimination may result in job separation. Alternatively, experiencing discrimination could negatively affect one's labour supply via cognitive dissonance. Contrary to my expectations, I observed a positive relationship between the concealment of sexual orientation in the workplace and an individual's unemployment probability even after controlling for individual and country-specific characteristics. The datasets analysed during the current study are available in the UK Data Service repository, subject to special licence access: European Union Agency for Fundamental Rights (FRA). (2016). European Union Lesbian, Gay, Bisexual and Transgender Survey, 2012: Special Licence Access. [data collection]. UK Data Service. SN: 7956, http://doi.org/10.5255/UKDA-SN-7956-1 Unless stated differently, I use the adjective gay to represent both lesbians and gays. European Union Agency for Fundamental Rights (FRA). (2016). European Union Lesbian, Gay, Bisexual and Transgender Survey, 2012: Special Licence Access. [data collection]. UK Data Service. SN: 7956, http://doi.org/10.5255/UKDA-SN-7956-1. The term sexual prejudice refers to negative attitudes towards individuals because of their sexual orientation (Herek 2000).
In the EU LGBT survey, the respondents are asked whether they felt discriminated in the past 12 months (question c4) and where the most recent incident of discrimination took place (question c5). The information on whether discriminatory incident at work was reported or not (variable c6) is available only if it was respondent's most recent incident. An interested reader can find detailed statistics from the survey in the survey data explorer at https://fra.europa.eu/en/publications-and-resources/data-and-maps/survey-fundamental-rights-lesbian-gay-bisexual-and. Note that some statistics may differ from those reported here because I dropped observations with inconsistencies. The variable reporting has only a limited amount of observations with known values, which considerably limits the sample size for model which has reporting as dependent variable. In this model I therefore don't include age and education as control variables. StataCorp. (2013). Stata Statistical Software: Release 13.1. College Station, TX: StataCorp LP. The full syntax is available upon request. U-shaped relationship in SEM for gays and negative relationship in LRM and SEM for lesbians. Arrow, K.: The theory of discrimination. Discrimination in labor markets. http://www.econ.iastate.edu/classes/econ321/rosburg/Arrow-TheTheoryofDiscrimination.pdf (1973). Accessed 21 Mar 2015 Basow, S.A., Johnson, K.: Predictors of homophobia in female college students. Sex Roles 42(5/6), 391–404 (2000). https://doi.org/10.1023/A:1007098221316 Bell, M.P., et al.: Introducing discriminatory job loss: antecedents, consequences, and complexities. J. Manag. Psychol. 28(6), 584–605 (2013). https://doi.org/10.1108/jmp-10-2012-0319 Brewer, G., Lyons, M.: Is gaydar affected by attitudes toward homosexuality? Confidence, labeling bias, and accuracy. J. Homosex. 64(9), 1241–1252 (2017). https://doi.org/10.1080/00918369.2016.1244443 Button, S.B.: Organizational efforts to affirm sexual diversity: a cross-level examination. J. Appl. Psychol. 86(1), 17–28 (2001). https://doi.org/10.1037//0021-9010.86.1.17 Chaudoir, S.R., Fisher, J.D.: The disclosure processes model: understanding disclosure decision making and postdisclosure outcomes among people living with a concealable stigmatized identity. Psychol. Bull. 136(2), 236–256 (2010). https://doi.org/10.1037/a0018193 Chung, Y.B.: Work discrimination and coping strategies: conceptual frameworks for counseling lesbian, gay, and bisexual clients. Career Dev. Q. 50, 33–44 (2001) Cotten-Huston, A.L., Waite, B.M.: Anti-homosexual attitudes in college students: predictors and classroom interventions. J Homosex. 38(3), 117–133 (2000). https://doi.org/10.1300/j082v38n03 Crandall, C., Eshleman, A.: A justification-suppression model of the expression and experience of prejudice. Psychol Bull (2003). https://doi.org/10.1037/0033-2909.129.3.414 De Vaus, D.A.: Research Design in Social Research. Sage Publications Ltd, London (2001) Drydakis, N.: Sexual orientation and labour market outcomes. IZA World Labor (2014). https://doi.org/10.15185/izawol.111 Estrada, A.X., Weiss, D.J.: Attitudes of military personnel toward homosexuals. J. Homosex. 37(4), 83–97 (1999). https://doi.org/10.1300/j082v37n04_05 Eurofound: Working life experiences of LGBT people and initiatives to tackle discrimination. Dublin. 
https://www.eurofound.europa.eu/observatories/eurwork/articles/working-conditions-labour-market-law-and-regulation/working-life-experiences-of-lgbt-people-and-initiatives-to-tackle-discrimination (2016) Eurostat: Unemployment by sex and age—annual average (variable une_rt_a). http://appsso.eurostat.ec.europa.eu/nui/show.do?wai=true&dataset=une_rt_a (2017). Accessed 9 Jan 2017 Fasoli, F., et al.: Gay- and lesbian-sounding auditory cues elicit stereotyping and discrimination. Arch. Sex. Behav. 46(5), 1261–1277 (2017). https://doi.org/10.1007/s10508-017-0962-0 Finney, S., DiStefano, C.: Non-normal and categorical data in structural equation modeling. In: Structural equation modeling: a second course, pp. 269–314. Information Age Publishing, Greenwich (2006). http://books.google.com/books?hl=nl&lr=&id=iEv0y1MZKjcC&oi=fnd&pg=PA269&dq=structural+equation+model+normal+data&ots=5H8PPxQPu-&sig=r2YOCtqPwdEXPhLPJyvhxUyMH1E. Accessed 6 Feb 2017 FRA: EU LGBT survey Technical report: Methodology, online survey, questionnaire and sample. http://fra.europa.eu/en/publication/2013/eu-lgbt-survey-technical-report (2013) Freeman, J.B., et al.: Sexual orientation perception involves gendered facial cues. Pers. Soc. Psychol. Bull. 20, 1–14 (2010). https://doi.org/10.1177/0146167210378755 Fric, K.: Access to the labour market for gays and lesbians—research review. J. Gay Lesbian Soc. Serv. 29(4), 319–361 (2017). https://doi.org/10.1080/10538720.2017.1365671 Göçmen, İ., Yılmaz, V.: 'Exploring perceived discrimination among LGBT individuals in Turkey in education, employment, and health care: results of an online survey. J. Homosex. (2016). https://doi.org/10.1080/00918369.2016.1236598 Goffman, E.: Stigma: Notes on the Management of Spoiled Identity. Simon and Schuster (2009). http://books.google.com/books/about/Stigma.html?id=zuMFXuTMAqAC&pgis=1. Accessed 14 Oct 2014 Goldsmith, A.H., et al.: The labor supply consequences of perceptions of employer discrimination during search and on-the-job: Integrating neoclassical theory and cognitive dissonance. J. Econ. Psychol. 25(1), 15–39 (2004) Habtegiorgis, A.E., Paradies, Y.: Utilising self-report data to measure racial discrimination in the labour market. Aust. J. Labour Econ. 16(1), 5 (2013) Herek, G.: The psychology of sexual prejudice. Current directions in psychological science. http://cdp.sagepub.com/content/9/1/19.short (2000). Accessed 21 Mar 2015 Herek, G., Capitanio, J.: "Some of my best friends": intergroup contact, concealable stigma, and heterosexuals' attitudes toward gay men and lesbians. Pers. Soc. Psychol. Bull. http://psychology.ucdavis.edu/Rainbow/html/Best_Friends_96_pre.pdf. Accessed 19 Mar 2015 Horvath, M., Ryan, A.M.: Antecedents and potential moderators of the relationship between attitudes and hiring discrimination on the basis of sexual orientation. Sex Roles 48(3–4), 115–130 (2003). https://doi.org/10.1023/A:1022499121222 ILGA Europe: ILGA-Europe Rainbow Index, May 2012. http://www.ilga-europe.org/rainboweurope/2012 (2012). Accessed 23 Jan 2017 International Labour Organization: Resolution concerning statistics of the economically active population, employment, unemployment and underemployment. http://www.ilo.org/global/statistics-and-databases/standards-and-guidelines/guidelines-adopted-by-international-conferences-of-labour-statisticians/WCMS_087481/lang–en/index.htm (1982) Johnson, K.L., et al.: Swagger, sway, and sexuality: judging sexual orientation from body motion and morphology. J. Pers. Soc. Psychol. 
93(3), 321–334 (2007) Jones, K.P., King, E.B.: Managing concealable stigmas at work: a review and multilevel model. J. Manag. 40(5), 1466–1494 (2013). https://doi.org/10.1177/0149206313515518 Kanfer, R., Wanberg, C.R., Kantrowitz, T.M.: Job search and reemployment: a personality-motivational analysis and meta-analytic review. J. Appl. Psychol. 86(5), 837–855 (2001) Kite, M.E., Whitley, B.E.: Sex differences in attitudes toward homosexual persons, behaviors, and civil rights: a meta-analysis. Pers. Soc. Psychol. Bull. 22(4), 336–353 (1996). https://doi.org/10.1177/0146167296224002 Laumann, E.O., et al.: The Social Organization of Sexuality: Sexual Practices in the United States. University of Chicago Press, Chicago (1994). https://doi.org/10.1136/bmj.310.6978.540 Leppel, K.: Labour force status and sexual orientation. Economica 76, 197–207 (2009). https://doi.org/10.1111/j.1468-0335.2007.00676.x/full Levina, M., Waldo, C.R., Fitzgerald, L.F.: We're here, we're queer, we're on TV: the effects of visual media on heterosexuals' attitudes toward gay men and lesbians. J. Appl. Soc. Psychol. 30(4), 738–758 (2000). https://doi.org/10.1111/j.1559-1816.2000.tb02821.x/full Major, B., Kaiser, C.R.: Perceiving and claiming discrimination. Handbook of Employment Discrimination Research: Rights and Realities, pp. 285–299. Springer, New York (2008). https://doi.org/10.1007/978-0-387-09467-0_14 Milliken, F., Martins, L.: Searching for common threads: understanding the multiple effects of diversity in organizational groups. Acad. Manag. Rev. http://amr.aom.org/content/21/2/402.short (1996). Accessed 21 Aug 2015 Pichler, S., Varma, A., Bruce, T.: Heterosexism in employment decisions: the role of job misfit. J. Appl. Soc. Psychol. 40(10), 2527–2555 (2010) Ragins, B.R., Cornwell, J.M.: Pink triangles: antecedents and consequences of perceived workplace discrimination against gay and lesbian employees. J. Appl. Psychol. 86(6), 1244–1261 (2001) Ragins, B., Singh, R., Cornwell, J.: Making the invisible visible: fear and disclosure of sexual orientation at work. J. Appl. Psychol. 92(4), 1103 (2007) Robertson, R.E., et al.: Estimates of non-heterosexual prevalence: the roles of anonymity and privacy in survey methodology. Arch. Sex. Behav. 47(4), 1069–1084 (2017) Rostosky, S.S., Riggle, E.D.B.: "Out" at work: the relation of actor and partner workplace policy and internalized homophobia to disclosure status. J. Couns. Psychol. 49(4), 411–419 (2002). https://doi.org/10.1037//0022-0167.49.4.411 Schneider, B.: Coming out at work: bridging the private/public gap. Work Occup. (1986). https://doi.org/10.1177/0730888486013004002 Skrondal, A., Rabe-Hesketh, S.: Structural equation modeling: categorical variables. Encycl. Stat. Behav. Sci. (2005). https://doi.org/10.1002/0470013192.bsa596/full Stangor, C., et al.: Ask, answer, and announce: three stages in perceiving and responding to discrimination. Eur. Rev. Soc. Psychol. 14, 277–311 (2003). https://doi.org/10.1080/10463280340000090 StataCorp LP: STATA Structural Equation Modeling Reference Manual, p. 575. A Stata Press Publication, College Station (2013) https://www.google.nl/search?q=stat+sem+manual&sourceid=ie7&rls=com.microsoft:en-IE:IE-Address&ie=&oe=&gfe_rd=cr&ei=FFiTWN-fMo7H8AfHsojoAg. Accessed 2 Feb 2017 Tilcsik, A.: Pride and prejudice: employment discrimination against openly gay men in the United States. Am. J. Sociol. 117(2), 586–626 (2011). https://doi.org/10.1086/661653 Valfort, M.-A.: LGBTI in OECD Countries. OECD Publishing (2017).
https://doi.org/10.1787/d5d49711-en van Balen, B., et al.: The situation of LGBT groups in the labour market in European Member States, report of the network of socio-economic experts in the field of anti-discrimination. http://ec.europa.eu/justice/discrimination/files/sen_synthesisreport2010parti_en.pdf (2011) This article was written under the lead of Prof. Dr. Ferry Koster (Erasmus University Rotterdam) and of Prof. Dr. Romke van der Veen (Erasmus University Rotterdam), and I would like to thank them for their advice and support. Erasmus University Rotterdam, Rotterdam, The Netherlands Karel Fric The author analysed the data and was the sole contributor to the writing of the manuscript. The author read and approved the final manuscript. Correspondence to Karel Fric. The author declares that there are no competing interests. See Tables 3 and 4. Table 3 Full results of the structural equation model for gays and lesbians Table 4 Results of the logistic regression models with dependent variables unemployment and reporting Fric, K. How does being out at work relate to discrimination and unemployment of gays and lesbians? J Labour Market Res 53, 14 (2019). https://doi.org/10.1186/s12651-019-0264-1
Interference cancelation scheme with variable bandwidth allocation for universal filtered multicarrier systems in 5G networks Lei Chen ORCID: orcid.org/0000-0001-8800-58901,2,3 & J. G. Yu1,2 EURASIP Journal on Wireless Communications and Networking volume 2018, Article number: 1 (2018) Cite this article The universal filtered multicarrier (UFMC) is an appealing technique to eliminate out-of-band emission (OOBE) for fifth-generation (5G) networks. However, its signals that are modulated to the carriers, which are on the edges of one subband, are influenced by the filter. In this paper, an interference cancelation scheme is proposed to suppress the interference and to improve the multiuser system performance. Here, interference cancelation subcarriers are inserted on the edges to reduce the filter interference. This scheme ensures that the operating subregion or subband supports the variable bandwidth allocation to meet the requirements of 5G networks. Simulation results show that the bit error rate (BER) performance improves by 4 and 7 dB compared with that of the conventional UFMC when the corresponding Eb/N0 is 15 and 20 dB. Comparisons with both the standard OFDM and the GB OFDM are also reported. The results demonstrate that the proposed UFMC scheme outperforms the other two systems, especially compared with the GB OFDM system under the condition of the same spectral efficiency. One of the main prospective scenarios of 5G networks is machine-type communications (MTC) [1, 2], where the devices are generally one order of magnitude larger than human communication users. These devices and their corresponding traffic will generate pieces of spectrum that will be a primary challenge of 5G networks [3]. Therefore, 5G networks have to support high bit rate traffic with high spectral efficiency. The well-known orthogonal frequency division multiplexing (OFDM) is widely applied in multiuser systems because of its robustness and easy implementation based on fast Fourier transform (FFT) algorithms. Nevertheless, the predicted application scenarios of 5G networks present challenges where the OFDM can be applied in only a limited way, for example, the sporadic communication of MTC devices in the Internet of things (IoT), which makes it difficult to maintain the orthogonality among subcarriers in the strict synchronization process [1]. The OFDM symbol with cyclic prefix (CP) presented low spectral efficiency when used to solve the low latency requirements in tactile Internet applications [4]. Additionally, the high OOBE of OFDM represented a challenge for random and dynamic spectrum access systems [5]. These problems make OFDM vulnerable when solving frequency misalignments in multiuser scenarios, and the system is affected seriously by intercarrier interference (ICI). To overcome these difficulties, several new waveforms have attracted the attention of researchers, including UFMC, filter-bank-based multicarrier (FBMC), and generalized frequency division multiplexing (GFDM), because these waveforms have much lower sidelobe levels than that of OFDM systems. GFDM is suitable for noncontiguous frequency band allocation because it adopts a shortened CP via the tail biting technique [6, 7]. FBMC can make the sidelobes much weaker and the intercarrier interference issue far less crucial compared to those of OFDM by applying a filter to each of the subcarriers [8–10]; however, it is unfit for short bursts, such as those as in MTC, because of the long filter length [1, 11]. 
By contrast, to achieve relatively flexible band allocation, the UFMC waveform, which starts with an OFDM signal and includes the advantages of filtered OFDM and FBMC, was proposed [12, 13]. The UFMC is performed on groups of adjacent subcarriers by filtering to reduce both the sidelobe levels and intercarrier interference that result from poor time/frequency synchronization [3]. Moreover, the filter length of UFMC systems is shorter than that of FBMC systems due to their different bandwidths. Therefore, the UFMC is considered to be an appealing technique for 5G networks. Moreover, there are techniques to improve the performance and application of UFMC systems, for example, the performance evaluation in a scenario with relaxed synchronization [14], a frame structure and design targeting IoT provision [15], a field trial for performance evaluation [16], and filter optimization by considering both the carrier frequency and timing offset [17] or using the signal over in-band distortion and out-of-band leakage ratio [18]. In this paper, we focus on the interference in one subband and present a scheme to improve the system performance. An interference cancelation method is proposed for UFMC systems in this paper to further decrease the interference in one subband. This method, which is based on the ICI cancelation method used in OFDM systems, can be flexibly configured according to the specific bandwidth requirements of multiusers in 5G networks. The ICI cancelation method performs much better than the method used in standard OFDM systems [19]. However, the bandwidth efficiency is reduced several fold owing to the redundant modulation in the entire band. Based on the analysis of filter interference, we find that the subcarriers on the edges of a subband are greatly influenced by the existence of a transition zone, especially in low-cost devices with low-order filters. This phenomenon inspires us to modulate the subcarriers on the edges with the ICI cancelation method. Then, an interference cancelation method is proposed for UFMC systems to restrain the edge interference and prevent significant reductions in spectral efficiency. The rest of this paper is organized as follows. First, a summary of related work is given in Section 2. Then, we present a modified system model for UFMC systems and its corresponding interference cancelation method in Section 3. Furthermore, Section 4 analyzes the simulated BER performance based on the proposed scheme of UFMC systems under the condition of multiuser access. We also compare the performance with those of conventional UFMC, guard band (GB) OFDM, and standard OFDM. Finally, the conclusions are presented in Section 5. A few schemes have been proposed to mitigate the interference caused by time/frequency synchronization error in UFMC systems. One study [20] presented a novel filter optimization technique with both low complexity and high throughput to reduce inter-subband interference (ISBI). Two methods were applied: spectrum shaping with low complexity and carrier insertion between two filters. The filter optimization method provided a better signal-to-interference ratio (SIR) than that of the original method, improving the robustness against ISBI. Additionally, a similar scheme to reduce ISBI was proposed in [21], where the authors incorporated active interference cancelation (AIC) into the UFMC system to further reduce inter-subband interference and enable highly reliable communication. 
AIC is widely applied in multiband cognitive OFDM system by inserting specific subcarriers on both sides of the primary user to actively eliminate interference between primary and secondary users. By contrast, Lei Zhang et al. concentrated on time synchronization by bringing the CP into UFMC systems. The authors analyzed the conditions for interference-free one-tap equalization for an imperfect transceiver; then, the corresponding channel equalization algorithms were proposed and validated by simulations [22]. Additionally, [23] established a multiservice framework based on a subband-filtered multicarrier system to analyze the desired signal, intersymbol interference (ISI), ICI, ISBI, and noise. Inter-serviceband interference cancelation algorithms were also proposed by precoding the information symbols at the transmitter. In this process, a certain GB was inserted between different types of services to mitigate the interference. Although all the above schemes provide better performance than that of conventional systems for ICI, ISBI, and ISI reduction, researchers have not considered the effect of the filter in the subband, which also results in system performance degradation due to partial loss of information. Therefore, we concentrate on this situation and propose an interference cancelation scheme to further improve system performance. System model and proposed interference cancelation scheme To achieve efficient spectrum access for 5G networks, various influential factors must be considered in the design, such as the number of devices and the bandwidth requirement. Therefore, 5G networks have to have a much higher degree of flexibility and scalability than those of former generations. The UFMC, which is an attractive waveform for 5G networks, is vulnerable due to the issues described above. Thus, we propose an interference cancelation scheme for UFMC systems to solve this problem and present the scheme in detail. UFMC system model Figure 1 shows the UFMC system model and our proposed interference cancelation scheme. Compared with standard OFDM systems, the entire band of this model with N subcarriers is divided into M subbands, which correspond to M pieces of equipment. Each subband can be allocated to either one piece of equipment or physical resource block (PRB) in LTE, and each piece of equipment occupies a different amount of consecutive subcarriers determined by its service type [24]. Additionally, the subband sidelobe level can be significantly suppressed by using a bandpass filter (BPF). However, filtering has some negative effects on a certain number of subcarriers, especially on the edges of the subband. Thus, the proposed scheme is shown in Fig. 1. Block diagram of modified UFMC under consideration of CFO. This figure illustrates the UFMC system model together with the proposed interference cancelation scheme The process of modulation-demodulation shown in Fig. 1, including the transmitter and receiver, is as follows. At the transmitter, the modulation control unit uses subcarrier modulation strategy to generate interference cancelation and data subcarriers to reduce the interference; then, by means of an N-point inverse discrete Fourier transform (IDFT) converter, the frequency-domain subband signal X i (k) is converted into a time-domain signal x i (n), with output length N. After the IDFT operation on each subband, the signal passes to the BPF with length L, so the length of a UFMC symbol becomes N+L−1 because of the convolution process. 
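As a rough illustration of this transmit chain, the sketch below builds one UFMC symbol by performing a per-subband N-point IDFT, filtering each subband with a length-L FIR filter, and summing the filtered signals. The Kaiser-window prototype, the subband allocation and the QPSK mapping are illustrative assumptions rather than the filter design and numerology used in this paper.

```python
import numpy as np

# Sketch of the UFMC transmitter: per-subband IDFT (Eq. (1)), length-L
# bandpass filtering and summation over subbands (Eq. (3)), without CFO.
N, L = 64, 16                                              # subcarriers, filter length
subbands = [range(4, 16), range(20, 32), range(36, 48)]    # example allocation
rng = np.random.default_rng(0)

def subband_filter(band):
    """Lowpass prototype shifted to the centre frequency of the subband."""
    centre = (band.start + band.stop - 1) / 2
    proto = np.kaiser(L, 8.0)    # stand-in prototype window, not the paper's BPF
    n = np.arange(L)
    return proto * np.exp(1j * 2 * np.pi * centre * n / N)

y = np.zeros(N + L - 1, dtype=complex)                     # one UFMC symbol
for band in subbands:
    X = np.zeros(N, dtype=complex)
    X[list(band)] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=len(band))
    x = np.fft.ifft(X, N)                                  # time-domain subband signal
    y += np.convolve(x, subband_filter(band))              # filter and sum
print(len(y))                                              # N + L - 1 samples
```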
Both the Doppler effect due to moving equipment and local oscillator misalignment between transceivers have to be considered to model the carrier frequency offset (CFO), and the transmitted signal of the UFMC is generated by summing all filtered subband signals. From the view of the receiver, a 2N-point discrete Fourier transform (DFT) is performed after appending zeros, and a subband allocation unit is used to estimate the symbols in individual subbands. Eventually, the demodulation control unit adopts a similar strategy as that of the modulation block to complete the signal estimations for both the interference cancelation and data subcarriers. A mathematical analysis of the above process is presented in the following. For an arbitrary ith subband B i (i ∈[1 : M]), the frequency domain signal X i (k) of the ith equipment is transformed to the time domain x i (n) by the IDFT, and its expression is $$ {x}_{i}(n)=\frac{1}{N}\sum_{k\in {B}_{i}}^{}{X}_{i}(k){e}^{j\frac{2\pi}{N}nk},\quad n=0,1,\cdots,N-1 $$ Then, the complete original signal in the frequency domain X(k) is the sum of each X i (k) $$ X(k)=\sum_{i=1}^{M}{X}_{i}(k) $$ By filtering through BPF, the output signal t i (n) is the result of discrete linear convolution between the filter impulse response f i (n) and the time-domain signal x i (n). As previously mentioned, f i (n) has length L, and t i (n) has length N+L−1. Therefore, the formula of UFMC symbol y(n), in consideration of CFO, is expressed as $$ y(n)=\sum_{i=1}^{M}c_{i}(n)\cdot t_{i}(n)=\sum_{i=1}^{M}c_{i}(n)\cdot \left(x_{i}(n)\ast f_{i}(n)\right) $$ where c i (n) is the time-domain frequency-offset expression of the ith subband with the same length as t i (n), and * denotes the linear convolution operator. In the frequency domain, \({\hat {C}_{i}}\,({k})\) is the 2N-point DFT of c i (n) and can be presented as $$\begin{array}{@{}rcl@{}} {\hat{C}}_{i}(k)&{}={}&\frac{1}{2N}\sum_{n=0}^{N+L-2}{e}^{j\frac{2\pi }{2N}\left(2\varepsilon-k\right)n} \\ &{}={}&\frac{sin\left[\frac{\pi}{2N} \left(2\varepsilon-k\right) \left(N+L-1\right)\right]}{2N\cdot sin\left[\frac{\pi }{2N}(2\varepsilon-k)\right]}\\ &&\cdot {e}\ ^{j\frac{\pi}{2N} \left(2\varepsilon-k(N+L-2)\right)} \end{array} $$ where ε denotes the relative CFO for subband i. This equation shows the frequency offset acting on subcarrier k, which is caused by CFO, that damages the orthogonality between carriers, that is, the ICI. On the receiving end, a 2N-point DFT is used to perform the conversion from a time-domain signal to a frequency-domain signal. Then, we can derive the received symbols \(\hat {Y}\,(k)\) as $$\begin{array}{@{}rcl@{}} \hat{Y}(k)&{}={}&\sum_{l=1}^{M}\sum_{d=0}^{2N-1}{\hat{C}}_{l}(k-d){\hat{X}}_{l}(d){\hat{F}}_{l}(d)+\hat{E}(k) \\ &{}={}&\sum_{d=0}^{2N-1}{\hat{C}}_{i}(k-d){\hat{X}}_{i}(d){\hat{F}}_{i}(d) \\ &&{+}\:\sum_{\substack{l=1 \\ l\neq i}}^{M}\sum_{d=0}^{2N-1}{\hat{C}}_{l}(k-d){\hat{X}}_{l}(d){\hat{F}}_{l}(d)+\hat{E}(k) \end{array} $$ where the signals of both \(\hat {X} _{i}\,({k})\) and \(\hat {F}_{i}\,({k})\) with period 2N are 2N-point DFTs of x i (n) and f i (n), respectively, and \(\hat {E}\,({k})\) is an additive noise sample of subcarrier k. To gradually illustrate the relationship between N-point sequence X i (k) and Y i (k) and separate the desired signal part from the interference part in Eq. (5), we first derive \(\hat {X}_{i}\,({k})\) from Eq. 
(1) as follows $$ {\hat{X}}_{i}(k)=\left\{ \begin{array}{lcc} {X}_{i}\left(\frac{k}{2}\right) & \text{if}\ k\ \text{is even} \\ \\ \sum\limits_{m\in {B}_{i}}^{}{X}_{i}(m)\frac{sin\left(\frac{\pi }{2}\left(2m-k\right)\right)}{N\,sin\left(\frac{\pi }{2N}\left(2m-k\right)\right)} \\ \qquad\quad \cdot {e}^{j\frac{\pi }{2}\left(2m-k\right)\left(1-\frac{1}{N}\right)} & \text{if}\ k\ \text{is odd} \end{array}\right. $$ Equation (6) indicates that the odd subcarriers contain part of the signal energy and the interference, which comes from other subcarriers because of the 2N-point DFT. Additionally, the 2N-point received sequence has the same conditions. By combining 2N-point signal Ŷ(k) with the relationship between N-point DFT and 2N-point DFT, we obtain the expression for N-point received signal Y(k) $$\begin{array}{@{}rcl@{}} Y(k)&{}={}&\hat{Y}\left(\frac{m}{2}\right) \qquad \text{if}~ m=2k \\ &&m=0,1,\cdots,2N-1 \\ &&k=0,1,\cdots,N-1 \end{array} $$ Then, we consider the interference in only one subband because the ISBI from other subbands is suppressed sufficiently by filters. To simplify the analytical model, all the signals in odd subcarriers are ignored. According to Eqs. (5), (6), and (7), we separate the desired signal from the received symbols and obtain the N-point received signal of the ith subband as $$\begin{array}{@{}rcl@{}} {Y}_{i}(k)&{}={}&\sum_{d\in {B}_{i}}^{}{C}_{i}(k-d){X}_{i}(d){F}_{i}(d)+E(k) \\ &{}={}&{C}_{i}(0){X}_{i}(k){F}_{i}(k) \\ &&{+}\:\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{C}_{i}(k-d){X}_{i}(d){F}_{i}(d)+E(k) \end{array} $$ where C i (k) and F i (k) are N-point DFTs of c i (n) and f i (n), respectively, and E(k) is the N-point representation of \(\hat {E}\)(k). In Eq. (8), the first term represents the desired signal, where C i (0) takes its maximum given no frequency offset. The second term indicates the interference components, where the sequence C i (k−d) is the ICI coefficient between the kth and dth subcarriers in the ith subband under the assumption that the kth subcarrier is the desired signal and the dth subcarrier is the interference. In other words, Eq. (8) shows that the received signal has been distorted by the existence of interference from other subcarriers. We focus on the effects of CFO and the filter using an additive white Gaussian noise (AWGN) channel so that the sequence S i (k−d) is defined as the interference coefficient to explain the interference degree between the kth and dth subcarriers in the ith subband. Its influence on the system is denoted as $$ {S}_{i}(k-d)={C}_{i}(k-d){F}_{i}(d) $$ Then, we derive the complete received symbols as $$ Y(k)=\sum_{i=1}^{M}{Y}_{i}(k) $$ This frequency-domain signal Y(k) that has been demodulated by the receiver is treated as the X(k) of the transmitter in conventional UFMC systems. Proposed interference cancelation scheme Compared to OFDM, UFMC systems have greater robustness against CFO because of the introduced filters. However, our current work shows that the carriers on the two edges of the subband are influenced by the filter, which leads to degradation of system performance. Therefore, we need an interference suppression scheme to decrease the sensitivity of internal carriers to the filter. Coding techniques have recently been used to reduce ICI. The authors in [25] proposed a reduction technique based on a geometric interpretation of the peak interference to carrier ratio (PICR) for OFDM signals and focused on the effects of CFO in OFDM systems to reduce PICR. 
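Before turning to the proposed mapping, the interference coefficient of Eq. (9) can be evaluated numerically. The sketch below computes |S_i(k−d)| for an edge subcarrier of one subband; the Kaiser-window filter and the CFO value are illustrative assumptions, not the settings used to produce Fig. 3.

```python
import numpy as np

# Numerical sketch of S_i(k-d) = C_i(k-d) F_i(d) from Eq. (9), with C_i(k) and
# F_i(k) the N-point DFTs of the subband CFO term c_i(n) and the filter f_i(n).
N, L, eps = 64, 16, 0.1                     # subcarriers, filter length, relative CFO

n = np.arange(N + L - 1)
c = np.exp(1j * 2 * np.pi * eps * n / N)    # time-domain CFO term of subband i
f = np.zeros(N + L - 1)
f[:L] = np.kaiser(L, 8.0)                   # zero-padded stand-in filter f_i(n)

k_idx = np.arange(N).reshape(-1, 1)         # N-point DFT over the full symbol length
W = np.exp(-1j * 2 * np.pi * k_idx * n / N)
C = (W @ c) / N
F = W @ f

k = 0                                       # desired subcarrier on the subband edge
d = np.arange(N)
S = C[(k - d) % N] * F[d]                   # interference coefficients, cf. Fig. 3
print(np.round(np.abs(S[:8]), 3))
```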
Another coding technique, called the ICI self-cancelation scheme, was used to suppress the interference between adjacent subcarriers with simple algorithms, by modulating one data symbol onto a pair of subcarriers with predefined weighting coefficients [19, 26]. Then, the generated interference self-canceled, and the system performed much better than standard OFDM systems. Nevertheless, the redundant modulation caused a reduction in spectral efficiency of at least one half. The mentioned schemes focused on ICI; however, our target is to reduce the interference of both filters and ICI. To avoid significant reductions in spectral efficiency, a new interference cancelation scheme is proposed by introducing an ICI cancelation scheme into UFMC systems. Based on our analysis of carriers in the affected region of the filter, we find that the greater the distance to the subband edge is, the weaker the interference of the filter. Therefore, we concentrate on the internal interference of the filter for each subband. Here, each subband is regarded as a protected object, and the interference cancelation subcarriers are inserted in pairs on the two edges. A diagram of the process in shown in Fig. 2. Bandwidth allocation of the proposed scheme for UFMC. This figure illustrates the bandwidth allocation of the three parts in one subband for the proposed scheme In this figure, we divide each subband into three carrier blocks. The middle position is allocated to the data carriers, and the interference cancelation carriers are placed on the two edges. Each block occupies variable bandwidth to meet the flexible requirements for 5G networks because of the diversity of the access equipment (AE) and filter type. The bandwidth of each subband is reconfigurable to support diverse packet transmission efficiently. The corresponding mathematical analysis is presented in the following. The arbitrary ith subband B i is divided into three parts, that is, B i =[Ai1,Ai2,Ai3], and the interference cancelation carriers are constrained in either Ai1 or Ai3. Simultaneously, the original signal X i (d) is defined to be −X i (d+1), e.g., X i (d+1)=−X i (d), where d∈Ai1, Ai3, and d is even. Then, the received signal, including the interference cancelation carriers in Ai1 and Ai3, becomes $$\begin{array}{@{}rcl@{}} {}{Y'}_{i,{A}_{i1}}\left(k\right)&\,=\,&\sum_{\substack{d\in {A}_{i1} \\ d=\text{even}}}\! {X}_{i}(d)[{C}_{i}\left(k-d\right){F}_{i}\left(d\right) \\ &&{-}\:{C}_{i}\left(k-\!(d+1)\right){F}_{i}(d+1)]+{E}_{i,{A}_{i1}}\left(k\right) \end{array} $$ $$\begin{array}{@{}rcl@{}} {} {Y'}_{i,{A}_{i3}}\left(k\right)&\,=\,&\sum_{\substack{d\in {A}_{i3} \\ d= {even}}}^{}{X}_{i}(d)[{C}_{i}\left(k-d\right){F}_{i}\left(d\right) \\ &&{-}\:{C}_{i}\left(k-(d+1)\right){F}_{i}(d+1)]+{E}_{i,{A}_{i3}}(k) \end{array} $$ These two equations show that the received desired signals in these regions are disturbed by the even carriers, and the coefficient of X i (d) becomes an important factor in determining the strength of the interference. Thus, the previous interference coefficient in Eq. 
(9) becomes $$ {} {S'}_{i}(k-d)={C}_{i}(k-d){F}_{i}(d)-{C}_{i}\left(k-(d+1)\right){F}_{i}(d+1) $$ and the remaining received signal in Ai2, which contains unmixed data carriers, is expressed as $$ {Y'}_{i,{A}_{i2}}(k)=\sum_{d\in {A}_{i2}}^{}{X}_{i}(d){C}_{i}(k-d){F}_{i}(d)+{E}_{i,{A}_{i2}}(k) $$ Then, the whole received signal can be written as $$ {Y'}_{i}(k)={Y'}_{i,{A}_{i1}}(k)+{Y'}_{i,{A}_{i2}}(k)+{Y'}_{i,{A}_{i3}}(k) $$ To compare with the original scheme, the desired signal of the proposed scheme is assumed to transmit on subcarrier "0" (the edge of one subband). The difference between the original |S i (k−d)| and the proposed |S′ i (k−d)| is presented in Fig. 3, which is on a logarithm scale with k=0 and N=64. In Ai1 and Ai3, (a) |S′ i (k−d)|<|S i (k−d)| for most of the d values and (b) the total number of interference signals is reduced to half because we include only even terms in the summation in Eqs. (11) and (12). Consequently, the interference signals in Eq. (15) are much smaller than those in Eq. (8) owing to reductions in both the number of interference signals and the amplitudes of the interference coefficients. A comparison among |S i (k−d)|, |S′ i (k−d)| and |S″ i (k−d)|. This figure shows a comparison among three interference coefficients for the proposed scheme An interference cancelation demodulation scheme, corresponding with the modulation strategy, is used to further reduce the interference. In the modulation process, each signal on the k+1th subcarrier (k denotes an even number) is multiplied by −1 and summed with that on the kth subcarrier. Thus, in the demodulation, the desired signal in Ai1 or Ai3 is determined by the difference between Y′ i (k) and Y′ i (k+1), and it can be derived as $$\begin{array}{*{20}l} {Y^{\prime\prime}}_{i,{A}_{i1}}(k) = &{Y'}_{i,{A}_{i1}}(k)-{Y'}_{i,{A}_{i1}}(k+1) \\ = &\sum_{\substack{d\in {A}_{i1} \\ d=\text{even}}}^{}{X}_{i}(d)[-{C}_{i}\left(k-(d+1)\right){F}_{i}(d+1) \\ &\ {+}\:{C}_{i}(k-d)\left({F}_{i}(d)+{F}_{i}(d+1)\right) \\ &\ {-}\:{C}_{i}\left(k-(d-1)\right){F}_{i}(d)]+{E}_{i,{A}_{i1}}(k)\\ &-{E}_{i,{A}_{i1}}(k+1) \end{array} $$ $$\begin{array}{*{20}l} {Y^{\prime\prime}}_{i,{A}_{i3}}(k) = &{Y'}_{i,{A}_{i3}}(k)-{Y'}_{i,{A}_{i3}}(k+1) \\ = &\sum_{\substack{d\in {A}_{i3} \\ d= {even}}}^{}{X}_{i}(d)[-{C}_{i}(k-(d+1)){F}_{i}(d+1) \\ &\ {+}\:{C}_{i}(k-d)\left({F}_{i}(d)+{F}_{i}(d+1)\right) \\ &\ {-}\:{C}_{i}\left(k-(d-1)\right){F}_{i}(d)]+{E}_{i,{A}_{i3}}(k) \\&-{E}_{i,{A}_{i3}}(k+1) \end{array} $$ In addition, the signal in Ai2, which does not include the interference cancelation carriers, is the same as in Eq. (14), that is, $$ {}{Y^{\prime\prime}}_{i,{A}_{i2}}(k)=\sum_{d\in {A}_{i2}}^{}{X}_{i}(d){C}_{i}(k-d){F}_{i}(d)+{E}_{i,{A}_{i2}}(k) $$ Eventually, the estimated signal in the ith subband is denoted as $$ {}{Y^{\prime\prime}}_{i}(k)={Y^{\prime\prime}}_{i,{A}_{i1}}(k)+{Y^{\prime\prime}}_{i,{A}_{i2}}(k)+{Y^{\prime\prime}}_{i,{A}_{i3}}(k) $$ Therefore, the whole estimated signal can be represented as $$ Y^{\prime\prime}(k)=\sum_{i=1}^{M}{Y^{\prime\prime}}_{i}(k) $$ Following the above analysis, the corresponding interference coefficient of the estimated signal is denoted as $$\begin{array}{@{}rcl@{}} {S^{\prime\prime}}_{i}(k-d)&{}={}&-{C}_{i}\left(k-(d+1)\right){F}_{i}(d+1) \\ &&{+}\:{C}_{i}(k-d)\left({F}_{i}(d)+{F}_{i}(d+1)\right) \\ &&{-}\:{C}_{i}\left(k-(d-1)\right){F}_{i}(d) \end{array} $$ The amplitude of |S′′ i (k−d)| and its comparison with both |S i (k−d)| and |S′ i (k−d)| are shown in Fig. 3. 
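The comparison in Fig. 3 can also be reproduced numerically. The C++ sketch below assembles the three interference coefficients exactly as defined in Eqs. (9), (13), and (21) for a desired tone on the subband edge (k = 0). The system parameters (N = 64, L = 16, ε = 0.1, a 16-subcarrier subband) and the Hamming-windowed band-pass FIR used as a stand-in for the subband filter are illustrative assumptions, so the absolute values will differ from Fig. 3, but the construction of S, S′, and S′′ follows the equations above.

```cpp
// self_cancellation_coefficients.cpp
// Numerical comparison of the interference coefficients |S|, |S'|, |S''|
// of Eqs. (9), (13) and (21) for one subband. All parameters are illustrative.
#include <complex>
#include <vector>
#include <cmath>
#include <cstdio>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);

// N-point DFT sample k (k may be negative; the index is N-periodic) of an
// arbitrary-length sequence, evaluated by plain summation.
cd dftN(const std::vector<cd>& x, int k, int N) {
    int kk = ((k % N) + N) % N;
    cd acc(0.0, 0.0);
    for (std::size_t n = 0; n < x.size(); ++n)
        acc += x[n] * std::polar(1.0, -2.0 * PI * kk * double(n) / N);
    return acc;
}

int main() {
    const int N = 64, L = 16, bw = 16;   // subcarriers, FIR length, subband width (assumed)
    const double eps = 0.1;              // relative CFO (assumed)

    // Time-domain CFO term over one filtered symbol of length N+L-1.
    std::vector<cd> c(N + L - 1);
    for (int n = 0; n < N + L - 1; ++n)
        c[n] = std::polar(1.0, 2.0 * PI * eps * n / N);

    // Stand-in subband filter: Hamming-windowed sinc modulated to the subband centre.
    std::vector<cd> f(L);
    const double fc = (bw / 2.0) / N;
    for (int n = 0; n < L; ++n) {
        double m = n - (L - 1) / 2.0;
        double lp = (m == 0.0) ? double(bw) / N : std::sin(PI * bw * m / N) / (PI * m);
        double w = 0.54 - 0.46 * std::cos(2.0 * PI * n / (L - 1));
        f[n] = lp * w * std::polar(1.0, 2.0 * PI * fc * n);
    }

    auto C = [&](int k) { return dftN(c, k, N); };
    auto F = [&](int d) { return dftN(f, d, N); };

    const int k = 0;   // desired signal on the subband edge, as in Fig. 3
    std::printf("  d        |S|          |S'|         |S''|\n");
    for (int d = 2; d < bw; d += 2) {     // even interfering pairs, d != k
        cd S   = C(k - d) * F(d);                                            // Eq. (9)
        cd Sp  = C(k - d) * F(d) - C(k - d - 1) * F(d + 1);                  // Eq. (13)
        cd Spp = -C(k - d - 1) * F(d + 1)
                 + C(k - d) * (F(d) + F(d + 1))
                 - C(k - d + 1) * F(d);                                      // Eq. (21)
        std::printf("%3d   %.4e   %.4e   %.4e\n",
                    d, std::abs(S), std::abs(Sp), std::abs(Spp));
    }
    return 0;
}
```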
In this figure, we can observe that |S′ i (k−d)| is smaller than |S i (k−d)| and that |S′′ i (k−d)| is even smaller than |S′ i (k−d)| for the majority of d. This result indicates that the proposed demodulation scheme further reduces the interference to estimate signals whose range is in Ai1 or Ai3. The above scheme can be further validated by the carrier-to-interference power ratio (CIR) [27]. Additive noise is omitted in the process of deducing the theoretical expression for the CIR, and the sequence S(k−d) is defined to be the universal interference coefficient as $$\begin{array}{*{20}l} S(k-d)= \left\{ \begin{array}{ccl} {S^{\prime\prime}}_{i}(k-d) & \qquad{d\!\in {A}_{i1}, {A}_{i3}} \\ {S}_{i}(k-d) & {{d\!\in {A}_{i2}}} \end{array}\right. \end{array} $$ We obtain the desired signal power on the kth subcarrier according to Eqs. (16–18) and (22), $$\begin{array}{@{}rcl@{}} E\left[{|R(k)|}^{2}\right]&{}={}&E\left[{|{X}_{i}(k)S(0)|}^{2}\right] \\ &{}={}&E\left[{|{X}_{i}(k)|}^{2}\right]{|S(0)|}^{2} \end{array} $$ Meanwhile, the average power of the interference signal is calculated under the assumption that the transmitted data X i (k) have a mean of zero and are statistically independent. The average power can be represented as $$ \begin{aligned} E\left[\!{|I(k)|}^{2}\right]\ =&\ E\left[{|\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{X}_{i}(d)S(k-d)|}^{2}\right]\\ =&\ E\left[\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{X}_{i}(d)S(k-d)\sum_{\substack{m\in {B}_{i} \\ m\neq k}}^{}{{X}_{i}}^{*}(m){S}^{*}(k-m)\right] \\ =&\ E\left[{|{X}_{i}(d)|}^{2}\right]\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{|S(k-d)|}^{2} \end{aligned} $$ Thus, the expression of CIR for subcarrier k can be derived as $$\begin{array}{@{}rcl@{}} \text{CIR}&{}={}&\frac{E\left[{|R(k)|}^{2}\right]}{E\left[{|I(k)|}^{2}\right]} \\ &{}={}&\frac{E\left[{|{X}_{i}(k)|}^{2}\right]{|S(0)|}^{2}}{E\left[{|{X}_{i}(d)|}^{2}\right]\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{|S(k-d)|}^{2}} \end{array} $$ From Eq. (25), the CIR expression for the proposed scheme, where the desired signal is on subcarrier "0," is derived as $$ {CIR}=\frac{{|{S^{\prime\prime}}_{i}(0)|}^{2}}{\sum_{\substack{d\in {A}_{i1},{A}_{i3} \\ d= {even} \\ d\neq k}}^{}{|{S^{\prime\prime}}_{i}(-d)|}^{2}+\sum_{\substack{d\in {A}_{i2}}}^{}{|{S}_{i}(-d)|}^{2}} $$ and the CIR expression of the conventional UFMC system can be represented as $$ {CIR}=\frac{{|{S}_{i}(0)|}^{2}}{\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{|{S}_{i}(-d)|}^{2}} $$ Equation (27) has the same assumption as that of Eq. (26), that is, the desired signal is on subcarrier "0". However, to analyze the effect of the filter in the subband, we place the desired signal in the middle subband. Then, Eq. (27) becomes $$ {CIR}=\frac{{|{S}_{i}(0)|}^{2}}{\sum_{\substack{d\in {B}_{i} \\ d\neq k}}^{}{|{S}_{i}(k-d)|}^{2}} $$ Based on Eqs. (26–28), the CIR curves of these three situations are shown in Fig. 4, which also includes the CIR of a standard OFDM system. In this figure, the conventional UFMC systems, whose desired signal is on the edge of the subband, have a greater than 4-dB CIR reduction compared with the standard OFDM systems due to the influence of the filter. If the desired signal is in the middle subband of the conventional UFMC system, its CIR is almost the same as that of standard OFDM systems. Therefore, the interference of the filter on these signal is negligible. 
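To make the CIR expressions concrete, the following sketch evaluates Eq. (27) for a conventional UFMC subband (desired tone on the edge) and Eq. (26) for the proposed scheme with α = 1/2, again using assumed parameters (N = 64, L = 16, ε = 0.1) and a stand-in band-pass FIR. It is intended only to show how the coefficient sums are assembled, not to reproduce the exact curves of Fig. 4, which depend on the authors' filter design.

```cpp
// cir_comparison.cpp
// Evaluates Eq. (27) (conventional UFMC, desired tone on the subband edge) and
// Eq. (26) (proposed scheme, alpha = 1/2) for one assumed configuration.
#include <complex>
#include <vector>
#include <cmath>
#include <cstdio>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);

cd dftN(const std::vector<cd>& x, int k, int N) {
    int kk = ((k % N) + N) % N;              // DFT index is N-periodic
    cd acc(0.0, 0.0);
    for (std::size_t n = 0; n < x.size(); ++n)
        acc += x[n] * std::polar(1.0, -2.0 * PI * kk * double(n) / N);
    return acc;
}

int main() {
    const int N = 64, L = 16, bw = 16;       // assumed system and subband size
    const double eps = 0.1;                  // assumed relative CFO

    std::vector<cd> c(N + L - 1);            // CFO term over one filtered symbol
    for (int n = 0; n < N + L - 1; ++n)
        c[n] = std::polar(1.0, 2.0 * PI * eps * n / N);

    std::vector<cd> f(L);                    // stand-in band-pass FIR (Hamming-windowed sinc)
    const double fc = (bw / 2.0) / N;
    for (int n = 0; n < L; ++n) {
        double m = n - (L - 1) / 2.0;
        double lp = (m == 0.0) ? double(bw) / N : std::sin(PI * bw * m / N) / (PI * m);
        double w = 0.54 - 0.46 * std::cos(2.0 * PI * n / (L - 1));
        f[n] = lp * w * std::polar(1.0, 2.0 * PI * fc * n);
    }

    auto C = [&](int k) { return dftN(c, k, N); };
    auto F = [&](int d) { return dftN(f, d, N); };
    auto S = [&](int k, int d) { return C(k - d) * F(d); };                    // Eq. (9)
    auto Spp = [&](int k, int d) {                                             // Eq. (21)
        return -C(k - d - 1) * F(d + 1) + C(k - d) * (F(d) + F(d + 1)) - C(k - d + 1) * F(d);
    };

    const int k = 0;                         // desired tone on subcarrier "0" (subband edge)
    const int edge = bw / 4;                 // alpha = 1/2: A_i1 and A_i3 each hold bw/4 carriers

    // Conventional UFMC, Eq. (27): every other carrier of the subband interferes via S.
    double convI = 0.0;
    for (int d = 1; d < bw; ++d) convI += std::norm(S(k, d));
    double cirConv = std::norm(S(k, 0)) / convI;

    // Proposed scheme, Eq. (26): even cancelation pairs in A_i1/A_i3 contribute via S'',
    // the data block A_i2 contributes via S.
    double propI = 0.0;
    for (int d = 2; d < edge; d += 2) propI += std::norm(Spp(k, d));           // A_i1, d != k
    for (int d = bw - edge; d < bw; d += 2) propI += std::norm(Spp(k, d));     // A_i3
    for (int d = edge; d < bw - edge; ++d) propI += std::norm(S(k, d));        // A_i2
    double cirProp = std::norm(Spp(k, 0)) / propI;

    std::printf("CIR, conventional UFMC (edge tone): %6.2f dB\n", 10.0 * std::log10(cirConv));
    std::printf("CIR, proposed scheme   (edge tone): %6.2f dB\n", 10.0 * std::log10(cirProp));
    return 0;
}
```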
By contrast, the proposed scheme improves more than 12 dB compared with conventional UFMC systems in the range 0<ε≤0.5, and our scheme improves 8 dB compared with the standard OFDM systems. CIR comparison for different systems. This figure shows the CIR comparison for different systems, including the proposed UFMC, the conventional UFMC and the standard OFDM This analysis shows that the proposed scheme restrains the interference of the filters and improves the system performance at the receiver. Moreover, the signal-to-noise ratio of the system is enhanced because the coherent addition doubles signal level while increasing the noise level by a factor of only \(\sqrt {2}\) due to noncoherent addition. On the other hand, the actual spectral efficiency of the proposed scheme is reduced by the utilization of the repetition coding method. Therefore, we define (a) α as the ratio of the subcarrier amount in the middle subband to that in the whole subband and (b) β as the spectral efficiency to compare with that of standard OFDM systems. It is obvious that α≤ 1. Then, β of the proposed scheme is obtained as \(\left [\alpha +(1-\alpha)\frac {1}{2}\right ]\) (b/s/Hz), and it is smaller than 1 (b/s/Hz) of the standard OFDM system. To meet the required spectral efficiency, a larger signal alphabet size can be used to increase the band utilization. For example, combining QPSK modulation with the proposed scheme increases β to [1+α] (b/s/Hz). The spectral efficiency β is also affected by the coefficient α, which is determined by the amount of edge subcarriers. This amount is related to both the type and the detailed parameters of the filter. Moreover, no complex coding methods are required for our proposed scheme, so it is easy to implement but just slightly increases the system complexity. Here, we present four simulation experiments and their corresponding numerical results to verify the performance of the proposed scheme for UFMC systems. The experiments include (a) the uncoded BER performance evaluation under the conditions of different ratios α, (b) the effect of CFO on BER performance, (c) the influence of the number of AEs on system performance, and (d) the mean square error (MSE) simulation for the CFO estimation. To validate the proposed UFMC, the standard and GB OFDM systems, where the GB OFDM and the proposed UFMC have the same parameter β, are introduced and compared. Moreover, we define ε to represent the normalized CFO. We introduce the raised cosine filter in the UFMC. Its roll-off factor is 1/2, which indicates that there are 1/4 carriers on each edge of the subband affected by the filter. Moreover, we set α to be one of [1/2 3/4] to analyze the effect of α on the system performance. According to the above definition of α, α=[1/2 3/4] means that there are [1/4 1/8] interference cancelation carriers inserted on the two edges, respectively. Additionally, an interference cancelation carrier value from 1/4 to 1/8 indicates that the interference is enhanced. Furthermore, we set ε as one of [0.02 0.04 0.06 0.08 0.1]. Experiment (a) is performed on four AEs, with different ε values used to represent different access conditions. Two AEs are the primary users whose ε values are equal to 0.1, and the ε values for the other two AEs are 0.08. The simulation results are shown in Fig. 5, which presents the BER performance of the proposed system with different α. BER performance comparison with different ratio factor α. 
This figure illustrates the BER performance of the proposed UFMC with different ratio factor and compared with the other systems As shown in this figure, the standard OFDM system has the worst performance due to the high ISBI. Furthermore, the proposed scheme for UFMC outperforms that of the GB OFDM under the condition of the same β. For instance, the performance of the proposed scheme is nearly twice as good as that of GB OFDM when Eb/N0 is 20 dB and α is 1/2, and the corresponding β is 3/4. This result is the same as that of α=3/4(β=7/8). Moreover, α also influences the BER performance of the system. For example, when α is changed from 1/2 to 1, the proposed UFMC becomes the conventional UFMC, and its BER increases from 10 −3 to 10 −2 when Eb/N0 is 20 dB. By contrast, the performance is improved as α decreases, especially for higher Eb/N0 (> 10 dB). Note that a small α indicates a reduction in spectral efficiency. Thus, the selection of α is important for the proposed scheme. Generally, we should compromise between BER and spectral efficiency in practical applications. The second experiment is implemented to analyze the effect of CFO on the proposed scheme and to compare the results with those of the other systems, in which we use the same CFO value for all AEs because of the poor ability of the OFDM to suppress ISBI. The simulation results are shown in Fig. 6. BER performances comparison with different CFO. This figure depicts the effect of CFO on the performance of the proposed scheme in comparison with the others systems The proposed UFMC has the best performance, the GB OFDM has the second best performance, and the worst performance is that of the standard OFDM. The comparison of the proposed UFMC and the GB OFDM is performed with the same parameter β. In addition, the BER performance degrades as the CFO in these systems increases because of the enhancement of ICI. Due to the ISBI, both the standard OFDM and the GB OFDM show a substantial reduction in BER in the region of ε<0.6. Additionally, the proposed scheme demonstrates superior performance compared with that of the conventional UFMC for α equal to 1/2 or 3/4. However, the interference of the filter degrades the BER performance as α increases. The effect of the number of AEs on BER performance is analyzed in experiment (c), and the corresponding results are presented in Fig. 7. The parameters are the same as those of experiment (a). The figure shows that the proposed scheme has better performance than that of the GB OFDM for the same β and number of AEs, and it also outperforms the conventional UFMC system under conditions of different α and number of AEs. These results are demonstrated by the BER value presented in the figure when Eb/N0 is 20 dB. The proposed scheme improves 7 and 2.5 dB compared with conventional UFMC and GB OFDM, respectively, under the condition of four AEs. For eight AEs, the improvements are 3 and 1.8 dB. Additionally, the proposed scheme outperforms the others even when the number of AEs is increased, although the BER performance degrades under these conditions. BER performance evaluation of different number of AE for the proposed UFMC. This figure illustrates the effect of the different number of AE on BER performance and compared with the other systems We analyze the MSE performance by increasing Eb/N0 in the final experiment. Here, the pilot signal is inserted in the middle subband for both the proposed UFMC and GB OFDM. The corresponding results are shown in Fig. 8. 
The proposed UFMC outperforms the other systems; however, the result of the conventional UFMC is similar to that of the proposed UFMC when Eb/N0 is less than 5 dB because of the high noise power. In conclusion, the proposed UFMC provides better performance than those of the three other systems. MSE performance comparison for CFO estimation. This figure illustrates the MSE performance comparison for CFO estimation, where standard OFDM, GB OFDM, conventional UFMC, and the proposed UFMC are considered In this paper, we proposed an interference cancelation scheme to mitigate the effects of both the filter and CFO by introducing an ICI self-cancelation scheme into the UFMC system to flexibly allocate the bandwidth in terms of the different requirements for 5G networks. To reduce the interference, our main focus is on the internal interference of the filter. Each subband was regarded as a protected object, and the interference cancelation subcarriers were inserted in pairs on the two edges. This proposed method avoids the significant reduction in spectral efficiency in the current system. In addition, the filter interference was reduced to further improve the system performance. The corresponding simulation results showed that the proposed scheme had better performance than that of the conventional UFMC because the filter interference on the edges was effectively suppressed. We also compared the proposed scheme with the standard and GB OFDM systems. The simulation results showed that the standard OFDM system had the worst performance because of the serious ISBI, while the proposed UFMC outperformed the GB OFDM under the condition of the same spectral efficiency. G Wunder, et al, 5GNOW: non-orthogonal, asynchronous waveforms for future mobile applications. IEEE Commun. Mag.52(2), 97–105 (2014). G Fettweis, S Alamouti, 5G: personal mobile internet beyond what cellular did to telephony. IEEE Commun. Mag.52(2), 140–145 (2014). JG Andrews, et al, What will 5G be?. IEEE J. Sel. Areas Commun.32(6), 1065–1082 (2014). GP Fettweis, The tactile internet: applications and challenges. IEEE Veh. Technol. Mag.9(1), 64–70 (2014). E Hossain, D Niyato, Z Han, Dynamic Spectrum Access and Management in Cognitive Radio Networks (Cambridge university press, Cambridge, 2009). G Fettweis, M Krondorf, S Bittner, in Proc. IEEE Veh. Technol. Conf. GFDM—Generalized Frequency Division Multiplexing (IEEEBarcelona, 2009), pp. 1–4. R Datta, D Panaitopol, G Fettweis, in Proc. IEEE Int. Symp. Commun. Inf. Technol. (ISCIT). Analysis of cyclostationary GFDM signal properties in flexible cognitive radio (IEEEGold Coast, 2012), pp. 663–667. MG Bellanger, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing. Specification and design of a prototype filter for filter bank based multicarrier transmission (IEEESalt Lake City, 2001), pp. 2417–2420. P Siohan, C Siclet, N Lacaille, Analysis and design of OFDM/OQAM systems based on filterbank theory. IEEE Trans. Signal Process.50(5), 1170–1183 (2002). B Farhang-Boroujeny, OFDM versus filter bank multicarrier. IEEE Signal Process. Mag.28(3), 92–112 (2011). F Schaich, T Wild, Y Chen, in Proc. IEEE Veh. Technol. Conf. Waveform contenders for 5G—suitability for short packet and low latency transmissions (IEEESeoul, 2014), pp. 1–5. F Schaich, T Wild, in Proc. 6th Int. Symp. Commun. Control Signal Process. (ISCCSP). Waveform contenders for 5G-OFDM vs. FBMC vs. UFMC (IEEEAthens, 2014), pp. 457–460. V Vakilian, T Wild, F Schaich, S ten Brink, JF Frigon, in Proc. 
IEEE GLOBECOM Broadband Wireless Access Workshop. Universal-filtered multi-carrier technique for wireless systems beyond lte (IEEEAtlanta, 2013), pp. 223–228. F Schaich, T Wild, in Proc. 11th Int. Symp. Wireless Commun. Syst. (ISWCS). Relaxed synchronization support of universal filtered multi-carrier including autonomous timing advance (IEEEBarcelona, 2014), pp. 203–208. A Ijaz, et al, Enabling massive IoT in 5G and beyond systems: PHY radio frame design considerations. IEEE Access. 4:, 3322–3339 (2017). P Guan, et al, 5G field trials—OFDM-based waveforms and mixed numerologies. IEEE J. Sel. Areas Commun.35(6), 1234–1243 (2017). X Wang, T Wild, F Schaich, in Proc. IEEE Veh. Technol. Conf. (VTC Spring). Filter optimization for carrier-frequency- and timing-offset in universal filtered multi-carrier systems (IEEEGlasgow, 2015), pp. 1–6. X Wang, T Wild, et al, in Proc. European Wireless Conf. Universal filtered multi-carrier with leakage-based filter optimization (VDEBarcelona, 2014), pp. 1–5. Y Zhao, SG Haggman, Intercarrier interference self-cancellation scheme for OFDM mobile communication systems. IEEE Commun. Lett.49(7), 1185–1191 (2001). M Mukherjee, L Shu, V Kumar, P Kumar, R Matam, in Proc. Int. Wireless Commun. and Mobile Computing Conf. (IWCMC). Reduced out-of-band radiation-based filter optimization for UFMC systems in 5G (IEEEDubrovnik, 2015), pp. 1150–1155. H Wang, Z Zhang, Y Zhang, C Wang, in Proc. Int. Conf. on Wireless Commun. Signal Processing (WCSP). Universal filtered multi-carrier transmission with active interference cancellation (IEEENanjing, 2015), pp. 1–6. L Zhang, P Xiao, A Quddus, Cyclic prefix-based universal filtered multicarrier system and performance analysis. IEEE Signal Process. Lett.23(9), 1197–1201 (2016). L Zhang, A Ijaz, et al, Subband filtered multi-carrier systems for multi-service wireless communications. IEEE Trans. Wireless Commun. 16(3), 1893–1907 (2017). H Kim, J Bang, S Choi, D Hong, in Proc. IEEE Wireless Commun. and Networking Conf. Resource block management for uplink UFMC systems (IEEEDoha, 2016), pp. 1–4. B Smida, Coding to reduce the interference to carrier ratio of OFDM signals. EURASIP J. Wirel. Commun. Netw. 2017(1), 1–11 (2017). YH Peng, et al, Performance analysis of a new ICI-self-cancellation-scheme in OFDM systems. IEEE Trans. Consum. Electron.53(4), 1333–1338 (2007). PH Moose, A technique for orthogonal frequency division multiplexing frequency offset correction. IEEE Trans. Commun.42(10), 2908–2914 (1994). This work is supported in part by the National High Technology Research and Development Program (863 Program) of China under Grant 2015AA016901 and in part by the National Natural Science Foundation of China under Grant 61531007. The authors would like to thank the editor and anonymous reviewers for their constructive comments, which helped us to improve the manuscript. School of Electronic and Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China Lei Chen & J. G. Yu Beijing Key Laboratory of Work Safety Intelligent Monitoring, Beijing, 100876, China College of Electronic Information Engineering, Hebei University, Baoding, 071002, China Lei Chen J. G. Yu CL conceived and designed the study and then performed the experiments and wrote the paper. YJ reviewed and edited the manuscript. Both authors read and approved the final manuscript. Correspondence to Lei Chen. Chen, L., Yu, J.G. 
Interference cancelation scheme with variable bandwidth allocation for universal filtered multicarrier systems in 5G networks. J Wireless Com Network 2018, 1 (2018). https://doi.org/10.1186/s13638-017-1011-3
Keywords: Universal filtered multicarrier; Interference cancelation; Multiuser access; Bandwidth allocation; Carrier frequency offset
Why is there a deep mysterious relation between string theory and number theory, elliptic curves, $E_8$ and the Monster group?

Why is there a deep mysterious relation between string theory and number theory (Langlands program), elliptic curves, modular functions, the exceptional group $E_8$, and the Monster group as in Monstrous Moonshine? Surely it's not just a coincidence in the Platonic world of mathematics. Granted this may not be fully answerable given the current state of knowledge, but are there any hints/plausibility arguments that might illuminate the connections?

string-theory mathematical-physics research-level — twistor59

At least this question is a bit childish. If anybody had an answer to this, he would publish it with a lot of "celebrations", and we all would know "why", in principle at least. – Georg Feb 7 '11 at 14:48

There are lots of interesting and appropriate questions involving these topics but this broad "why" question is not going to get any kind of reasonable answer. I'd suggest you reword the question to make it a more specific question about some aspect of these relations that you are interested in. – pho Feb 7 '11 at 15:10

I actually voted this question thumbs-up. It's a good question and I would like to know the most accurate answer, too. Clearly, the rough sketch of the answer is that string theory just knows about all important and exceptional structures in mathematics. But why does it know them? What is the logic that dictates that "other solutions" of a theory whose main physical goal is "only" to unify the interactions including gravity with quantum mechanics produces all other maths, including maths we used to think was totally abstract? Why did you close this very good question? – Luboš Motl Feb 8 '11 at 6:37

I agree with Luboš, the question should remain open. "Arduous" could also try asking at Math Overflow. (P.S. some of the specific connections listed come from the "modular invariance" of string theory, the need for one-loop amplitudes to be invariant under "large" reparametrizations of the world-sheet. This means that modular forms and their properties are relevant - thus Langlands - and also establishes a link to lattices - mathoverflow.net/questions/24604/… ) – Mitchell Porter Feb 8 '11 at 7:54

I still think a more specific question would be better, but I can see that there might be some interesting and useful answers so I've voted to reopen. – pho Feb 8 '11 at 14:20

I'll answer the relation between string theory and $E(8)$ -- a common appearance of $E(8)$ in string theory is in the gauge group of Type HE string theory $E(8)\times E(8)$ (see here for an explanation why). But it's interesting physically because it embeds the standard model subgroup. $$SU(3)\times SU(2)\times U(1)\subset SU(5)\subset SO(10)\subset E(6)\subset E(7)\subset E(8)$$ Indeed, the ones in between are GUT subgroups, and $E(8)$ happens to be the "largest" of the exceptional lie groups. Wikipedia has some things to say about the connections to monstrous moonshine, I'm not familiar with it. See [1], [2] re: the connections to number theory.
Another example is how "1+2+3+4=10" demonstrates a 10-dimensional theory's ability to explain the four fundamental forces -- EM is the curvature of the $U(1)$ bundle, the weak force is the curvature of the $SU(2)$ bundle, the strong is the curvature of the $SU(3)$ bundle and gravity is the curvature of spacetime.

[Archiving Ron Maimon's comment here in case it gets deleted --] There is another point, that E(8) has an embedded E6xSU(3), and on a Calabi Yau, the SU(3) is the holonomy, so you can easily and naturally break the E8 to E6. This idea appears in Candelas Horowitz Strominger Witten in 1985, right after Heterotic strings and it is still the easiest way to get the MSSM. The biggest obstacle is to get rid of the MS part--- you need a SUSY breaking at high energy that won't wreck the CC or produce a runaway Higgs mass, since it seems right now there is no low-energy SUSY.

— Abhimanyu Pallavi Sudhir

SO(10) is not a subgroup of U(5). Why would a TOE need E(8) just because it is the largest exceptional group? The 1,2,3,4 numerology is rather weak since you are just looking at groups with these numbers in them that appear in very different ways. – Philip Gibbs - inactive Aug 9 '13 at 10:19

@PhilipGibbs: Fixed the SO(10) U(5) probem . The $E(8)$ logic was supposed to be intuitive . The 1,2,3,4 thing isn't numerology, it isn't so different, by the way . – Abhimanyu Pallavi Sudhir Aug 9 '13 at 10:25

There is another point, that E(8) is E6xSU(3), and on a Calabi Yau, the SU(3) is the holonomy, so you can easily and naturally break the E8 to E6. This idea appears in Candelas Horowitz Strominger Witten in 1985, right after Heterotic strings and it is still the easiest way to get the MSSM. The biggest obstacle is to get rid of the MS part--- you need a SUSY breaking at high energy that won't wreck the CC or produce a runaway Higgs mass, since it seems right now there is no low-energy SUSY. – Ron Maimon Aug 22 '13 at 22:04

@DImension10AbhimanyuPS: ok, but you shouldn't write what I said, which is technically wrong--- E8 is not E6xSU(3), it's a simple group, but it has an embedded E6xSU(3) and fills in the off-diagonal parts with extra crud that's broken when you have SU(3) gauge fluxes which follow the holonomy of the manifold. The precise decomposition is described in detail in Green Schwarz and Witten, which has a nice description of E8. – Ron Maimon Aug 23 '13 at 2:15

@RonMaimon: I know, but I think that is clear (that $E(8)$ is not $E(6)\times SU(3)$). – Abhimanyu Pallavi Sudhir Aug 23 '13 at 4:03
Development and validation of a stochastic molecular model of cellulose hydrolysis by action of multiple cellulase enzymes Deepak Kumar1,2 & Ganti S. Murthy1 Cellulose is hydrolyzed to sugar monomers by the synergistic action of multiple cellulase enzymes: endo-β-1,4-glucanase, exo-β-1,4 cellobiohydrolase, and β-glucosidase. Realistic modeling of this process for various substrates, enzyme combinations, and operating conditions poses severe challenges. A mechanistic hydrolysis model was developed using stochastic molecular modeling approach. Cellulose structure was modeled as a cluster of microfibrils, where each microfibril consisted of several elementary fibrils, and each elementary fibril was represented as three-dimensional matrices of glucose molecules. Using this in-silico model of cellulose substrate, multiple enzyme actions represented by discrete hydrolysis events were modeled using Monte Carlo simulation technique. In this work, the previous model was modified, mainly to incorporate simultaneous action enzymes from multiple classes at any instant of time to account for the enzyme crowding effect, a critical phenomenon during hydrolysis process. Some other modifications were made to capture more realistic expected interactions during hydrolysis. The results were validated with experimental data of pure cellulose (Avicel, filter paper, and cotton) hydrolysis using purified enzymes from Trichoderma reesei for various hydrolysis conditions. Hydrolysis results predicted by model simulations showed a good fit with the experimental data under all hydrolysis conditions. Current model resulted in more accurate predictions of sugar concentrations compared to previous version of the model. Model results also successfully simulated experimentally observed trends, such as product inhibition, low cellobiohydrolase activity on high DP substrates, low endoglucanases activity on a crystalline substrate, and inverse relationship between the degree of synergism and substrate degree of polymerization emerged naturally from the model. Model simulations were in qualitative and quantitative agreement with experimental data from hydrolysis of various pure cellulose substrates by action of individual as well as multiple cellulases. During bioethanol production from lignocellulosic biomass, cellulose hydrolysis can be achieved using chemicals or biological catalysts (enzymes). Although acid hydrolysis is a relatively fast process, it suffers from some major limitations such as high operational cost, by-product formation, corrosion of equipment, neutralization requirement, high disposal cost (Bansal et al. 2009; Wang et al. 2012). Therefore, enzymatic hydrolysis is considered more feasible option during bioethanol production and has been the focus of research in last several decades. However, due to extensive hydrogen bonding, cellulose chains form a recalcitrant crystalline structure, which is difficult to degrade and require a much higher amount of enzymes (40–100 times) for hydrolysis compared to that of starch (Merino and Cherry 2007; Wang et al. 2012). Cellulose is hydrolyzed to glucose by synergetic action of multiple cellulase enzymes, such as endoglucanases (EG) (EC3.2.1.4), exoglucanases [also known as cellobiohydrolases (CBH)] (EC3.2.1.91) and β-glucosidase (BG) (EC3.2.1.21) (Bansal et al. 2009; Kadam et al. 2004; Zhang and Lynd 2004). Although all these enzymes have a different mode of action, they act in highly cooperative ("synergism") action for efficient degradation cellulose. 
Exoglucanases adsorb only from the chain ends (CBH I from the reducing end and CBH II from the non-reducing end) and act in a processive manner to produce mainly cellobiose units. Processive enzymes remain bound to the glucose chain after cleaving a cellobiose molecule and will continue to cleave cellobiose units until a minimum chain length is reached. Endoglucanases, on the other hand, are non-processive enzymes that act randomly on the surface glucose chains, hydrolyze one or a few accessible internal bonds in the glucose chains, and produce new chain ends. β-glucosidases hydrolyze the cellobiose and short soluble oligomers to glucose and complete the hydrolysis process (Fig. 1).
(Adapted from Kumar and Murthy 2013) Hydrolysis of cellulose by action of various cellulase enzymes. Red color represents the crystalline region and black color the amorphous region.
Due to the high cost of cellulase enzymes (up to 30% of ethanol cost) and low sugar yields, the hydrolysis process is one of the major obstacles in the commercialization of cellulosic ethanol production (Bansal et al. 2009; Kadam et al. 2004; Kumar and Murthy 2011). There is potential for cost reduction by improving the understanding of the process and by testing a wide array of enzymes and various substrates under different conditions to determine optimum hydrolysis conditions. Designing highly efficient cellulase mixtures ("optimized enzyme cocktails") that can yield high hydrolysis rates at minimal enzyme dosage is one such approach. Cellulase extracted from various microorganisms contains different amounts of each enzyme, and many commercial preparations consist of mixes from different organisms. For example, cellulase from Trichoderma reesei contains low fractions of β-glucosidase enzyme, and this enzyme is added to the cellulase preparation to increase hydrolysis rates. It has been reported that synthetic enzyme mixtures (designer combinations) of cellulase enzymes can give relatively higher hydrolysis yields (Ballesteros 2010; Banerjee et al. 2010a, b; Besselink et al. 2008). Currently, the only reliable method for designing optimal cellulase mixtures involves extensive experimentation using statistically designed combinations of various enzyme levels (Baker et al. 1998; Banerjee et al. 2010a, b; Berlin et al. 2007; Gao et al. 2010). Since conducting such a large number of hydrolysis experiments is expensive, time-consuming, and labor intensive, a comprehensive hydrolysis model that can capture process dynamics and predict hydrolysis profiles under various scenarios could be a feasible alternative approach. However, multiple factors, such as the use of several enzymes acting synergistically, the complex cellulose structure, and dynamic enzyme–substrate interactions, make it difficult to develop mathematical models that can predict accurate hydrolysis profiles under different operating conditions. Using a novel stochastic molecular modeling approach, in which each hydrolysis event is translated into a discrete event, we developed the first three-dimensional mechanistic cellulose hydrolysis model. The model captured the structural properties of cellulose, enzyme properties, the effect of reaction conditions, and, most importantly, dynamic changes in these properties (Kumar and Murthy 2013).
Other than accurate predictions of hydrolysis profile, this modeling approach incorporates detailed structural features of cellulose and provides unique advantages compared to mathematical models, such as tracking of multiple oligomers as well as chain distribution, tracking of morphological changes in cellulose, elimination of the need for parameter changes with a change in experimental data set. Please refer to our earlier paper (Kumar and Murthy 2013) for more detailed comparison of this modeling approach and comparisons to other modeling approaches. Although the previous model incorporated significant cellulose structural details and complex enzyme–substrate interactions, it did not include the simultaneous action of multiple classes of enzymes at any instant of time. During each iteration, only one class of enzymes (e.g., EG, CBH I or CBH II) was acting, which ignores the enzyme crowding/jamming effect, a critical phenomenon at high enzyme concentrations (Hall et al. 2010; Igarashi et al. 2011). This work presents the updated model with incorporation of enzyme jamming phenomenon by modeling simultaneous action of multiple enzymes, and also including other practical considerations, such as oligomer solubility and glucose production by cellobiohydrolase enzymes. Our earlier model was validated with very limited experimental data from the literature. In this work, the model simulations were validated with comprehensive experimental data sets obtained from hydrolysis of pure cellulose (Avicel, filter paper, and cotton) using purified T. reesei enzymes. Experiments were performed with purified CBH I and CBH II under various hydrolysis conditions, to cover the effect of enzyme loadings, substrate properties, and product inhibition. Celluclast, a commercial cellulase from T. reesei (Lot # CCN03141), was donated by Novozymes (Novo, Bagsvaerd, Denmark). P-Aminophenyl β-d-cellobioside (sc-222106, Lot #K213), used as an affinity ligand for cellobiohydrolase, was purchased from Santa Cruz Biotechnology Inc. (Santa Cruz, CA, USA). All other chemicals required for protein purification and hydrolysis experiments were purchased from Sigma-Aldrich (Milwaukee, WI). Whatman No. 1 filter paper (Whatman, Inc., Florham Park, NJ) and cotton balls (Kroger Co., Cincinnati, Ohio) were used as the pure cellulose samples for the hydrolysis experiments. The commercial β-glucosidase (Novozyme 188) from Aspergillus niger was purchased from Sigma-Aldrich (Milwaukee, WI). Stochastic hydrolysis model Development of this comprehensive model consisting cellulose structural details and complex enzyme–substrate interactions consisted of in silico representative cellulose model, enzyme characterization, and developing algorithms for modeling the enzyme actions. In this model, cellulose was modeled based on the structure of cellulose Iβ, the most abundant cellulose form in higher plants. The structure was modeled as a group of microfibrils (MF) (2–20 nm diameter), and each microfibril contains multiple elementary fibrils (EF), the basic building block of cellulose with about 3.5 nm diameter and containing 36 glucose chains (Chinga-Carrasco 2011; Fan and Lee 1983; Lynd et al. 2002). The number of EF in an MF, glucose molecules in one chain of glucose (i.e., degree of polymerization, DP), was assumed to be constant during each simulation. These parameters were dynamically determined at the beginning of the cellulose structure simulation, based on the type of cellulose simulated. 
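In code, the simulated substrate is essentially a large indexed collection of glucose-unit records plus the bookkeeping attached to each unit. The C++ sketch below shows one possible layout of such a record and of a 36-chain elementary fibril; the field names and the alternating crystalline/amorphous pattern (anticipating the description in the next paragraph) are illustrative assumptions, since the authors' actual data structures are not published.

```cpp
// glucose_unit.cpp
// Sketch of a per-glucose-unit property record and of one elementary fibril
// (36 chains x DP units). Field names and layout are illustrative only.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct GlucoseUnit {
    std::uint64_t id;           // unique serial number of the glucose unit
    std::uint32_t chain;        // which of the 36 chains of the elementary fibril
    std::uint32_t position;     // position along the chain (0 .. DP-1)
    std::uint32_t distFromEnd;  // distance from the nearer chain end
    bool reducingEnd;           // chain-end flags (CBH I / CBH II attack points)
    bool nonReducingEnd;
    bool crystalline;           // crystalline vs amorphous region
    bool efSurface;             // exposed on the elementary-fibril surface
    bool mfSurface;             // exposed on the microfibril surface
    bool soluble;               // part of a solubilised oligomer
    bool bondBroken;            // beta-1,4 bond to the next unit already hydrolysed
    bool blocked;               // currently covered by a bound enzyme (crowding)
};

// Build one elementary fibril of 36 chains, each 'dp' units long. Crystalline
// blocks of 200 units separated by amorphous stretches are marked with a simple
// alternating pattern purely for illustration.
std::vector<GlucoseUnit> buildElementaryFibril(std::uint32_t dp, std::uint64_t firstId) {
    const std::uint32_t chains = 36, crystalBlock = 200;
    std::vector<GlucoseUnit> ef;
    ef.reserve(chains * dp);
    for (std::uint32_t ch = 0; ch < chains; ++ch) {
        for (std::uint32_t p = 0; p < dp; ++p) {
            GlucoseUnit g{};
            g.id = firstId + ef.size();
            g.chain = ch;
            g.position = p;
            g.distFromEnd = std::min(p, dp - 1 - p);
            g.reducingEnd = (p == 0);
            g.nonReducingEnd = (p == dp - 1);
            g.crystalline = ((p / crystalBlock) % 2 == 0);
            ef.push_back(g);
        }
    }
    return ef;
}

int main() {
    auto ef = buildElementaryFibril(300, 0);   // Avicel-like DP of about 300
    std::printf("elementary fibril holds %zu glucose units\n", ef.size());
    return 0;
}
```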
The degree of crystallinity in cellulose (50–90%) is a critical factor affecting the cellulose hydrolysis, as amorphous regions are believed to be relatively more susceptible to enzyme action and determine initial hydrolysis rates. To capture this important property in this model, glucose chain in each EF were assumed to pass through multiple crystalline regions (200 glucose molecules long regions) separated by amorphous regions. The concept of modeled cellulose structure and its resemblance with actual cellulose structure is illustrated in Fig. 2. (Adapted from Kumar 2014) Structure of cellulose: a actual cellulose structure; b structure of cellulose simulated in model. Glucose molecules in red color represent crystalline region and glucose molecules in black color are in amorphous region. Each glucose molecule in the modeled microfibril was given a unique serial number as its identity, and a big data set containing other parameters (e.g., reducing/non-reducing end, EF surface, MF surface, crystalline or amorphous, soluble, non-soluble, distance from chain end, etc.) that describe structural properties of that bond. During developing algorithms for cellulase actions, enzyme accessibility was determined based on these parameters (data set with each glucose molecule) and action pattern of enzymes. For additional details of cellulose model please refer to earlier publications (Kumar 2014; Kumar and Murthy 2013). Cellulase enzymes vary in mode of actions, and for this model, the enzymes were classified into eight classes depending upon their structure and mode of action (e.g., non-processive endocellulase with cellulose binding molecule (CBM), processive CBH I with CBM, processive CBH II with CBM, etc.). Please refer to our earlier paper (Kumar and Murthy 2013) for more details on enzyme classifications, their characteristics, and mode of actions. Cellulose hydrolysis is dependent on biomass-dependent extrinsic factors (crystallinity, accessibility, and DP) and enzyme action is dependent on intrinsic factors (enzyme activity, stability with pH and temperature, etc.). The extrinsic factors were modeled in the simulated cellulose structure described above. Enzyme activity (depends on enzyme origin and level of purification) and enzyme loading (amount of enzyme/g substrate; based on experimental conditions) information was transformed into theoretical maximum turnover number (maximum possible number of bonds hydrolyzed per unit time for each enzyme) (N hi_max) (Eq. 1) for each class of enzyme. $$N_{{{\text{hi\_max}}}} = E_{i} *U_{i} *6.023*10^{17} * \frac{{G_{\text{Sim}} }}{{6.023*10^{23} }}*162*S_{i} ,$$ where 'E i ' is amount of 'ith' enzyme used (mg cellulose); 'U i ' is activity of 'ith' enzyme (IU/mg enzyme); 'G sim' is the number of glucose molecules simulated in the model; "162" is the average molecular weight of anhydrous glucose; 'S i ' is stability of 'ith' enzyme under experimental conditions (temperature and pH). Value of "S i " could be calculated for any enzyme using empirical equations developed, such as Arrhenius rate relationship for temperature. Value of 'S i ' is a real number between 0 and 1. These numbers were further transformed to numbers of hydrolyzed bonds per microfibril based on the total number of microfibrils simulated and mode of action of enzymes. For example, for endoglucanase enzymes, these numbers were proportional to relative glucose molecules on the surface of microfibril. 
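Eq. (1) is a unit conversion from enzyme loading and specific activity to the number of glycosidic bonds that can be cleaved per minute in the simulated system. A minimal sketch is given below; the numerical inputs are illustrative only, with 0.08 IU/mg being the default T. reesei CBH I activity quoted later in this section.

```cpp
// turnover_number.cpp
// Evaluates Eq. (1): the theoretical maximum number of bonds hydrolysable per
// minute by one enzyme class in the simulated system. Inputs are illustrative.
#include <cstdio>

// E    : enzyme loading (mg enzyme per g cellulose)
// U    : specific activity (IU/mg enzyme, i.e. micromol bonds per minute per mg)
// Gsim : number of glucose units simulated
// S    : stability factor under the simulated temperature/pH, between 0 and 1
double maxTurnover(double E, double U, double Gsim, double S) {
    const double NA = 6.023e23;    // Avogadro's number as written in Eq. (1)
    const double MW = 162.0;       // average molecular weight of anhydrous glucose
    // E*U gives micromol bonds/min per g cellulose; 6.023e17 converts micromol to
    // molecules; Gsim*MW/NA is the mass (g) of cellulose actually simulated.
    return E * U * 6.023e17 * (Gsim / NA) * MW * S;
}

int main() {
    // Example: 10 mg enzyme per g cellulose at 0.08 IU/mg, 1e7 simulated glucose
    // units, S = 1 (all values chosen for illustration).
    double n = maxTurnover(10.0, 0.08, 1.0e7, 1.0);
    std::printf("N_hi_max = %.0f bonds per minute\n", n);   // about 1.3e3 for these inputs
    return 0;
}
```

The resulting per-simulation bond budget is then apportioned among the simulated microfibrils according to each enzyme class's mode of action, as described in the surrounding text.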
On the other hand, for CBH I and CBH II, these numbers were proportional to a relative number of chain ends available in one microfibril. Please refer to Kumar (2014) for more details. The hydrolysis process was modeled using Monte Carlo simulation technique, which has been used successfully earlier for modeling the starch hydrolysis (Marchal et al. 2001, 2003; Murthy et al. 2011; Wojciechowski et al. 2001). The overall schematic for simulating the enzymatic hydrolysis for each enzyme is shown in Fig. 3 and detailed description is provided in Kumar and Murthy (2013). All the required substrate–enzyme interactions, such as binding of CBH only on chain ends, the higher binding probability of binding EG on MF surface than at EF surface, were incorporated into the model using algorithms. It was also made sure that sufficient glucose molecules (based on the size of enzyme) are available to allow binding. Basic schematic for hydrolysis simulations in model. (Detailed schematics provided in Additional files 1 and 2.) Only one class of enzymes was modeled working at a time, so the model did not account for the enzyme crowding effect (locations occupied by other class of enzymes at the same time). These effects were incorporated in the modified model discussed in next section. Other than cellulose structural restrictions, some probabilities were defined corresponding to enzyme action. For example, the probability of hydrolysis of a β-1,4 bond hydrolysis located in amorphous regions was more than that of in crystalline region by an endoglucanase enzyme. Choice was made by generating a random number at each decision point and comparing it with the defined probability. The hydrolysis event would happen only in the case when the random number was greater than the probability of hydrolysis. Number of iterations were restricted using a counter (Fig. 3). If all conditions for hydrolysis were met for that bond, it was converted to broken bond and the counter was incremented. Similarly, the counter was given an increment corresponding to unsuccessful events also (in case binding or hydrolysis does not occur). After each broken bond, it was made sure to change properties of other glucose molecules in that chain (e.g., chain length, distance from chain end, solubility, etc.). If a glucose chain becomes soluble, part of the chain just beneath the soluble chain is exposed and becomes accessible to enzymes. The concept is described in detail elsewhere (Kumar 2014). Modifications in the model The model described in above section was the first report of a comprehensive stochastic model for cellulose hydrolysis that successfully captured the cellulose structural features (three dimensional), enzyme characteristics, and dynamic enzyme–substrate interactions. In this work, the model was further modified to capture more realistic expected interactions during hydrolysis by incorporating the (1) simultaneous action of enzymes from multiple classes at any instant of time to account for the enzyme crowding; (2) partial solubility of cello-oligomers with DP 6–13, and (3) production of glucose by exocellulase. In the previous version of the model, the model was simulated based on the iterative concept only; however, in real conditions multiple enzyme molecules act simultaneously and block the hydrolysis sites for each other (Igarashi et al. 2011). 
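The elementary Monte Carlo decision described above — generate a uniform random number and test it against the bond's hydrolysis probability — can be sketched as follows for an endoglucanase acting on amorphous versus crystalline bonds. The probability values are placeholders rather than the model's calibrated ones, std::mt19937 serves as the Mersenne Twister generator, and the sketch follows the common convention that the event fires when the draw falls below the probability.

```cpp
// hydrolysis_event.cpp
// Monte Carlo accept/reject step for one candidate beta-1,4 bond. Probabilities
// are placeholders; the event fires when the uniform draw falls below them.
#include <cstdio>
#include <random>

struct Bond {
    bool accessible;    // on the fibril surface and not blocked by another enzyme
    bool crystalline;   // crystalline vs amorphous region
    bool broken;        // already hydrolysed
};

// One endoglucanase attempt on a bond; every attempt increments the iteration counter.
bool tryHydrolyse(Bond& b, std::mt19937& rng, long& counter) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    ++counter;
    if (b.broken || !b.accessible) return false;         // non-productive event
    double pHydrolysis = b.crystalline ? 0.2 : 0.8;       // amorphous bonds cut more easily
    if (uni(rng) < pHydrolysis) {
        b.broken = true;                                  // productive event: bond cleaved
        return true;
    }
    return false;
}

int main() {
    std::mt19937 rng(42);            // Mersenne Twister, fixed seed for repeatability
    long counter = 0;
    int cutsAmorphous = 0, cutsCrystalline = 0;
    for (int i = 0; i < 10000; ++i) {
        Bond a{true, false, false}, c{true, true, false};
        if (tryHydrolyse(a, rng, counter)) ++cutsAmorphous;
        if (tryHydrolyse(c, rng, counter)) ++cutsCrystalline;
    }
    std::printf("amorphous cuts: %d, crystalline cuts: %d (10000 attempts each)\n",
                cutsAmorphous, cutsCrystalline);
    return 0;
}
```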
Enzyme crowding and simultaneous action of enzymes were incorporated in the current model by calculating the number of enzyme molecules based on the enzyme loading, their molecular weight, and the number of glucose molecules simulated. The iterations are performed for every minute of hydrolysis, and the properties of the substrate are updated at the end of each 1-min time step. For processive enzymes, once an enzyme molecule binds to a chain end, it remains bound at the end of the 1-min time step and continues further down the chain until it reaches the end of the chain or desorbs from the molecule as per its probability. Exocellulase enzymes bind to multiple cellulose chains (three chains in the model) (Asztalos et al. 2012; Levine et al. 2010), so all three chains must be accessible to the enzyme (on the surface and not blocked by another enzyme) for binding to occur. In the previous version of the model, it was assumed that only a stretch of surface glucose molecules equal to the size of the CBM needed to be exposed and unblocked for binding; in the current model, the whole length of the enzyme was considered (except the linker, because it is flexible and is compressed during movement) (Wang et al. 2012). The detailed schematics explaining the algorithms developed to model CBH I and EG actions are provided in Additional files 1 and 2, respectively. Cellodextrins with DP < 6 are considered to be completely soluble, DP 6–13 partially soluble, and above 13 insoluble in water (Lynd et al. 2002; Zhang and Lynd 2004). In the previous version of the model, all oligomers with DP > 6 were considered insoluble. While the CBM of the enzymes cannot bind to these chains due to its large size, the catalytic domains of the enzymes will still act on the oligomers in solution. In the absence of reliable literature data, the soluble fraction of the oligomers was set as a function of DP in the range of DP 7–13. The oligomers with DP 7–9, 10–11, and 12–13 were assumed to be 75, 50, and 25% soluble, respectively. Oligomers with DP < 6 were assigned 100% solubility, while oligomers with DP more than 13 were set to 0% solubility. In the previous model, the CBH action could only produce cellobiose during cellulose hydrolysis. However, glucose formation during cellulose hydrolysis by CBH action has been observed by some researchers (Eriksson et al. 2002; Medve et al. 1998), and was also observed in our experiments (discussed later in the "Results and discussion" section). Therefore, the model was modified to include glucose formation in addition to cellobiose. A probability of glucose formation was included in the model, and glucose/cellobiose formation was decided by generating a random number and comparing it with that probability. The probabilities and increments associated with various events (productive binding, no binding, non-productive binding, etc.) are listed in Additional file 3: Table S1. The enzyme crowding/jamming phenomenon might not be critical at low enzyme dosages and during the action of individual enzymes. Also, some of the other details incorporated into this model could be ignored if the final goal were only to simulate sugar concentrations during hydrolysis. However, to simulate and optimize the composition/cocktail of enzymes, it is necessary to simulate the effects of each enzyme class carefully. Model implementation and simulations The algorithms of the hydrolysis model were written in the C++ language.
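As one small illustration of how such rules translate into code, the DP-dependent solubility assignment described above reduces to a short lookup. The treatment of DP 6 is ambiguous in the description (it is listed with the partially soluble range in one sentence and with the fully soluble oligomers in another), so grouping it with the fully soluble species here is an assumption.

```cpp
// oligomer_solubility.cpp
// Soluble fraction of a cello-oligomer as a function of its degree of
// polymerisation (DP), following the assignments described above.
#include <cstdio>

double solubleFraction(int dp) {
    if (dp <= 6)  return 1.00;   // glucose, cellobiose, short oligomers (DP 6 grouped here by assumption)
    if (dp <= 9)  return 0.75;   // DP 7-9
    if (dp <= 11) return 0.50;   // DP 10-11
    if (dp <= 13) return 0.25;   // DP 12-13
    return 0.0;                  // DP > 13: insoluble
}

int main() {
    const int dps[] = {2, 6, 8, 10, 12, 13, 14, 20};
    for (int dp : dps)
        std::printf("DP %2d -> soluble fraction %.2f\n", dp, solubleFraction(dp));
    return 0;
}
```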
Random number generators were used in simulation of cellulose structure and hydrolysis process (Matsumoto and Nishimura 1998). The cellulose structure was simulated for three model cellulose substrates Avicel, filter paper and cotton, to cover the range of substrates with different structural properties (DP and degree of crystallinity). Avicel is low-DP cellulose, with DP only about 300 and crystallinity index 0.5–0.6; whereas, cotton has relatively very high DP (about 2000–2500) and crystallinity index of 0.85–0.95 (Zhang and Lynd 2004). Hydrolysis simulations were performed based on the experimental conditions: weight of solution (scale of hydrolysis), solid loading, cellulose content, total enzyme loading (mg protein/g cellulose), ratio of enzymes present (EG:CBH I:CBH II:BG), temperature, pH and hydrolysis duration. Enzyme activities can be determined from supplier, literature, or can be determined using standard protocols (Ghose 1987). Unless determined in the lab, specific activities of enzymes from T. reesei were assumed as 0.4, 0.08, and 0.16 IU/mg of EG I, CBH I and CBH II, respectively (Zhang and Lynd 2006) for model simulations. The output from model included several data files containing glucose concentrations, oligosaccharide concentrations, chain distribution profile (number of chains of various lengths), crystallinity index profile (ratio of crystallinity at various time intervals), solubility profile and data sheets for each microfibril (illustrating major properties associated with glucose molecules) at various times during hydrolysis. Model validation The data from model simulations were compared with various sets of experimental results from cellulose hydrolysis in our lab and from literature (Bezerra and Dias 2004; Bezerra et al. 2011) to validate the model under various hydrolysis conditions. Validation with experimental data The model was validated with the results obtained from hydrolysis of pure cellulosic substrates (filter paper and cotton) using purified CBH I and CBH II. The cellulases CBH I and CBH II were purified from Celluclast (Novozymes, Denmark) using a series of chromatography steps in BioLogic LP system (Bio-Rad Laboratories, Hercules, CA, USA). The purification experiments were performed at room temperature and the collected enzymes were transferred and stored in the refrigerator at 4 °C. Enzyme purification The flow diagram of steps followed in the CBH I and CBH II purification is shown in Fig. 4. In the first step of purification, the Celluclast enzyme mixture was desalted using Sephadex G-25 Fine (dimensions: 2.5 cm × 10 cm) gel filtration column. The protein was rebuffered in 50 mM Tris–HCl buffer (pH 7.0) at 5 mL/min. The desalted protein was fractionated by anion-exchange chromatography using DEAE-Sepharose column (dimensions: 2.5 cm × 10 cm). The sample was loaded using 50 mM Tris–HCl buffer (pH 7.0) at 5 mL/min flow rate and was eluted stepwise: 1st elution at 35%, and 2nd elution at 100% of 0.2 M sodium chloride in 0.05 M Tris–HCl buffer (pH 7) (Jäger et al. 2010). The flow-through from DEAE column (rich in CBH II enzymes) was concentrated and rebuffered in 50 mM sodium acetate buffer (pH 5.0) using Pellicon XL 50 Ultrafiltration Cassette, with biomax 10 (Millipore, USA). 
The rebuffered protein was spiked with gluconolactone (final concentration of 1 mM) and loaded on the p-aminophenyl cellobioside (pAPC) affinity column (dimensions: 1.5 cm × 10 cm) with 0.1 M sodium acetate, containing 1 mM gluconolactone and 0.2 M glucose (pH 5.0) at flow rate of 1.5 mL/min (Jeoh et al. 2007; Sangseethong and Penner 1998). The function of gluconolactone in the buffer is to suppress β-glucosidase activity, which otherwise can cleave the ligand (Sangseethong and Penner 1998). The bound CBH II protein was eluted using the running buffer containing 0.01 M cellobiose [100 mM sodium acetate buffer containing 1 mM gluconolactone, 0.2 M glucose, and 0.1 M cellobiose (pH 5.0)]. The purified CBH II from affinity column was concentrated and loaded on the phenyl Sepharose column (dimensions: 1.0 cm × 10 cm) for hydrophobic interaction chromatography to separate core and intact proteins (Sangseethong and Penner 1998). The sample was loaded in high salt (0.35 M ammonium sulfate in 25 mM sodium acetate buffer, pH 5.0) and eluted with linear gradient from running buffer to elution buffer [25 mM acetate buffer containing 20% ethylene glycol (v/v), pH 5.0]. Hydrophobic interaction chromatography was performed on the second elution (CBH I rich) from the anion-exchange column, after concentrating and rebuffering with 25 mM sodium acetate buffer. The enzyme was loaded in very high salt (0.75 M ammonium sulfate in 25 mM sodium acetate buffer, pH 5.0) and eluted with linear gradient from running buffer to elution buffer [25 mM acetate buffer containing 5% ethylene glycol (v/v), pH 5.0]. The purified CBH II and CBH I fractions from hydrophobic interaction column were concentrated and rebuffered in 50 mM sodium acetate buffer, pH 5.0. Protein containing fractions were determined by measuring absorbance at 280 nm. Flow diagram of the chromatography steps used for purification of the CBH I and CBH II enzymes. Blue lines in the sub-plots refer to absorbance at 280 nm and red line refers to conductivity The fractions collected from the chromatographic purifications steps shown in Fig. 4 were analyzed by SDS-polyacrylamide gel electrophoresis to check for their purity. Based on the molecular weight comparison with marker, and literature data, the single bands in the numbered lanes 1 and 2 of Fig. 5 correspond to CBH II (MW 54 kDa) and CBH I (MW 61–64 kDa), respectively (Jäger et al. 2010; Medve et al. 1998; Sangseethong and Penner 1998). The activities of CBH I and CBH II on Avicel were determined as 0.478 and 0.379 IU/mg of protein, respectively. SDS-polyacrylamide gel electrophoresis of the purified cellulase enzymes. M molecular mass marker, (1) CBH II (2) CBH I During protein purification, the protein concentrations in the samples were determined based on Bradford assay using Quick Start™ Bradford Protein Assay Kit (Bio-Rad, USA) and bovine serum albumin (BSA) as standard. The activities of purified CBH I and CBH II were determined on Avicel in 50 mM sodium acetate buffer, pH 5.0. 1 mL of Avicel solution (10 g/L) with final enzyme concentration of 0.1 mg/mL was incubated (mixed end to end) at 45 °C in 2 mL Eppendorf centrifuge tubes for 2 h (Jäger et al. 2010). After 2 h of incubation, the samples were heated at 95 °C for 5 min to stop the hydrolysis. The samples were centrifuged at 15,000 rpm for 5 min to separate the supernatant. The reducing sugar concentration in the supernatant was determined using dinitrosalicylic acid (DNS) assay and using glucose as standard. 
The hydrolysis experiments were conducted at 25 g/L cellulose (filter paper and cotton balls) concentration and various enzyme loadings (5, 10, and 15 mg/g cellulose) in 50 mM sodium acetate, pH 5.0, with a 10 mL total volume in 25 mL Erlenmeyer flasks closed with rubber stoppers. 100 µL of 2% sodium azide was added to each flask to avoid microbial contamination. The experiments were carried out in a controlled-environment incubator shaker set at 45 °C and 125 rpm. 200 µL of sample was withdrawn at 3, 6, 9, 12, 18, 24, 36, 48, and 72 h to determine sugar concentrations and the hydrolysis profile. The samples were heated at 95 °C for 5 min to stop the reaction and were prepared for high-performance liquid chromatography (HPLC) analysis. All experiments were performed in triplicate. Validation with literature data Model simulations were performed for Avicel hydrolysis by CBH I using the experimental conditions reported in Bezerra and Dias (2004). Figures 6 and 7 illustrate the comparison of model simulations and experimental data from hydrolysis of cellulose at 5 and 2.5% solid loadings, respectively. The data from simulations of the previous version of the model (Kumar and Murthy 2013) were also plotted in these figures to demonstrate the differences in hydrolysis profiles. The experimental data and the model simulation data were in qualitative and quantitative agreement at both 25 and 50 g/L Avicel loadings. The coefficient of determination (R²) was found to be 0.97 (at 5% Avicel loading) and 0.94 (at 2.5% Avicel loading), higher than that obtained with the previous model: 0.70 (at 5% Avicel loading) and 0.89 (at 2.5% Avicel loading). The coefficient of determination was low for the previous version of the model because, at such high enzyme loadings, the enzyme crowding effect becomes predominant, and this was not captured in that version. The current model captures the crowding effect and therefore predicts cellobiose concentrations more accurately. The quantitative match of model simulations with experimental data for various substrate–enzyme ratios also indicated that this model successfully captured the cellobiose inhibition effect. A comparison of model simulations with additional experimental data from the literature (lower enzyme loadings) is illustrated in Additional file 3: Figures S1a and S1b. Comparison of model simulations (current and previous version) and experimental data of cellobiose production during hydrolysis of Avicel (50 g/L) at 20 mg/g cellulose loading. The data points are from literature studies (Bezerra and Dias 2004; Bezerra et al. 2011), solid lines are the new model predictions, and dotted lines are predictions from the previous version of the model Comparison of model simulations (current and previous version) and experimental data of cellobiose production during hydrolysis of Avicel (25 g/L) at 40 mg/g cellulose loading. The data points are from literature studies (Bezerra and Dias 2004; Bezerra et al. 2011), solid lines are the new model predictions, and dotted lines are predictions from the previous version of the model Validation with experimental data from current study Hydrolysis of filter paper by CBH I and CBH II The results from model simulations and actual experiments of hydrolysis of filter paper at various loadings of CBH I and CBH II are shown in Figs. 8, 9, 10 and 11, respectively. 
Comparison of model simulations (solid lines) with experimental data during hydrolysis of filter paper (25 g/L) at CBH I loading of 10 mg/g cellulose: a sugar production, b rate of conversion of cellulose Comparison of model simulations and experimental observations of sugar production during hydrolysis of filter paper (25 g/L) at a CBH I loading of 5 mg/g cellulose, b CBH I loading of 15 mg/g cellulose Comparison of model simulations (solid lines) with experimental data during hydrolysis of filter paper (25 g/L) at CBH II loading of 10 mg/g cellulose: a sugar production, b rate of conversion of cellulose Comparison of model simulations and experimental observations of sugar production during hydrolysis of filter paper (25 g/L) at a CBH II loading of 5 mg/g cellulose, b CBH II loading of 15 mg/g cellulose In all cases, the model fitted the filter paper hydrolysis data and predicted the sugar profiles and hydrolysis rates. It is important to note that, except for the enzyme activities (determined experimentally in this case), no other model parameter was changed between the simulations of these hydrolysis conditions and the earlier literature-based experimental conditions. The excellent fit of the model to data from two different lab groups demonstrates the robustness and potential usability of this model (i.e., its scope for use across hydrolysis conditions without parametrization issues). As expected, cellobiose, followed by glucose, was the major product during hydrolysis by CBH I or CBH II. The model predictions were accurate in determining both cellobiose and glucose concentrations during hydrolysis. The previous version of the model did not account for glucose formation during cellulose hydrolysis by cellobiohydrolases and hence did not fit the experimental data (Additional file 3: Figure S2). Small amounts of cellotriose were also observed both in the experimental data and in the model simulations (data not shown). The cellobiose production rate is high at the beginning of hydrolysis and decreases significantly due to cellobiose inhibition of the CBH I and CBH II enzymes. The inhibition effect was captured by the model and was further validated by its disappearance when cellobiose was removed by conversion to glucose through β-glucosidase action (discussed later in the manuscript). The R² values between experimental and model data for cellobiose production during filter paper hydrolysis by CBH I and CBH II were in the range of 0.65–0.90 and 0.77–0.81, respectively. It was observed that an increase in enzyme loading did not result in a significant increase in the final sugar yields. The enzymes used in the hydrolysis experiments had very high activity, and increasing the enzyme loading possibly resulted in the enzyme crowding effect due to the limited availability of chain ends. This phenomenon was well captured by the model, as the simultaneous action of multiple enzymes and their blockage of one another were considered in the model. Effect of beta-glucosidase addition (exo-BG synergism) Cooperative action of different enzymes, known as synergism, is one of the most important phenomena observed in cellulose degradation (Andersen et al. 2008; Bansal et al. 2009; Wang et al. 2012; Zhang and Lynd 2004). Synergism between CBH I and/or CBH II and β-glucosidase is very important for the conversion of cellulose (Zhang and Lynd 2004). This synergism occurs mainly because of the strong inhibitory effect of cellobiose on CBH performance. 
The primary product of CBH action on cellulose is cellobiose, a strong inhibitor of CBH activity (Andersen 2007; Ballesteros 2010; Fan et al. 1987; Mosier et al. 1999; Zhang and Lynd 2004). Cellobiose buildup is prevented by the action of β-glucosidase, which further hydrolyzes cellobiose to glucose and thereby gives rise to CBH–β-glucosidase synergism. The synergistic effect of β-glucosidase addition was observed during filter paper hydrolysis by CBH I and CBH II and is illustrated in Figs. 12 and 13, respectively. Effect of β-glucosidase addition on cellulose hydrolysis by CBH I (25 g/L filter paper; CBH I: 10 mg/g cellulose) Effect of β-glucosidase addition on cellulose hydrolysis by CBH II (25 g/L filter paper; CBH II: 10 mg/g cellulose) Cellulose conversions after 72 h of hydrolysis were about 82.7 and 15.1% higher for CBH I (10 mg/g glucans) and CBH II (10 mg/g glucans), respectively, in the presence of excess β-glucosidase than in its absence. To determine the model's accuracy in predicting this trend, the action of CBH I and CBH II was simulated in the absence and presence of β-glucosidase. It can be observed from Figs. 12 and 13 that the model simulations capture this synergism successfully for both the CBH I and CBH II enzymes. The synergism was lower for CBH II compared to CBH I, possibly due to the relatively weaker inhibitory effect of cellobiose on CBH II. The observation of relatively lower cellobiose inhibition towards CBH II was also reported in a comprehensive study on cellobiose inhibition using 14C-labeled cellulose substrates conducted by Teugjas and Väljamäe (2013). In that study, it was reported that enzymes from glycoside hydrolase (GH) family 7 were most sensitive to cellobiose inhibition, followed by family 6 CBHs and endoglucanases (EGs). The model simulations successfully followed the trend observed experimentally. In addition to the higher cellulose conversion, the hydrolysis rate of filter paper by CBH I and CBH II with excess β-glucosidase was markedly higher than that of the CBH enzymes acting alone (data not reported). Effect of structural properties of cellulose Cellulose hydrolysis is highly affected by the structural properties of cellulose. In the case of cellobiohydrolase (CBH I and CBH II) action, where the enzymes act only on chain ends, the fraction of reducing/non-reducing ends relative to total glucose molecules is a critical factor affecting hydrolysis. For example, the percentage of chain ends for filter paper with an average chain DP of 700 is 0.13%, compared to 0.05% for bacterial cellulose with an average DP of 2000 and 0.033% for cotton with a DP of 3000 (Zhang and Lynd 2004, 2006). Therefore, it would be expected that CBH I and CBH II would hydrolyze filter paper more efficiently than cotton or bacterial cellulose, as filter paper has relatively many more reducing/non-reducing ends. The expected trends were observed in both the experiments and the model simulations of CBH I and CBH II action on filter paper and cotton (Fig. 14a, b). Effect of cellulose structural properties on hydrolysis: cellulose conversion during hydrolysis of filter paper and cotton (25 g/L) by action of a CBH I at loading of 10 mg/g cellulose b CBH II at loading of 10 mg/g cellulose The cellulose conversion after 72 h of cotton hydrolysis was 77.0 and 92.6% lower than that of filter paper hydrolysis by CBH I and CBH II, respectively. The model had a good fit with the experimental data for cotton hydrolysis in the case of CBH I. 
For CBH II, although the absolute values of the predicted cellulose conversion were higher than the actual values, the expected trend was observed. Thus, the model simulations successfully captured the inverse relationship between substrate DP and cellulose hydrolysis and predicted 73 and 81.1% reductions in cellulose conversion for cotton compared to filter paper for CBH I and CBH II, respectively. Similar results have been reported in the literature from both experimental and modeling studies (Wood 1974; Zhang and Lynd 2006). As endoglucanases act on the surface chains, their activity is not severely affected by the fraction of chain ends; however, the degree of crystallinity plays an important role in determining their performance. Bonds in the amorphous regions are more susceptible to hydrolysis than those in crystalline regions because of the higher accessibility of enzymes in amorphous regions (Chang and Holtzapple 2000). This behavior was also successfully captured by the model simulations, as the cellulose conversion by the action of endoglucanases on cotton (highly crystalline cellulose, CrI 0.85–0.90) was found to be 57.2% lower than that of filter paper (semi-crystalline cellulose, CrI 0.4–0.5) after 48 h of hydrolysis (Additional file 3: Figure S3). Other model simulations Enzymatic hydrolysis of cellulose by individual enzymes As discussed in the sections above, the model was simulated for filter paper hydrolysis by the action of individual cellulase enzymes. Hydrolysis rates during the action of individual enzymes (EG I, CBH I and CBH II) on filter paper (25 g/L) are presented in Fig. 15. For all enzyme classes, there was a significant drop in hydrolysis rates after the first few hours of hydrolysis, after which the rate became nearly constant. This decrease in rate after the first few hours of hydrolysis is a widely observed phenomenon and is believed to occur due to morphological changes in the cellulose structure (e.g., a decrease in glucose chains on the surface, an increased percentage of crystalline regions). These changes affect the enzyme–substrate interactions by limiting the accessibility of cellulase enzymes to glucose chains and result in a rapid decline in the hydrolysis rate (Zhang and Lynd 2004; Zhou et al. 2009). As also observed in the experimental results, cellobiose is the major product formed during cellulose hydrolysis by CBH I and CBH II; it also acts as a strong inhibitor of these enzymes and negatively affects the hydrolysis rate. Model predictions of hydrolysis rate by action of individual enzymes on filter paper (filter paper, 25 g/L; all enzymes, 10 mg/g glucans) After 48 h of hydrolysis of filter paper by EG I, it was observed (from model simulations) that the concentrations of oligomers with DP 2–4 and glucose were higher than the cellopentaose and cellohexaose concentrations (Additional file 3: Figure S4). There was an increase in the concentrations of cellopentaose and cellohexaose during the initial few hours (3–4 h), after which their concentrations started to decrease. This trend was expected because of the change in the availability of glucose molecules on the surface. Surface glucose chains are easily accessible during the initial phase of hydrolysis, where endoglucanases act randomly, producing short chains. As the hydrolysis progresses, the availability of these glucose chains decreases, and the enzymes start acting on soluble sugars. The concentration of sugars with DP 2–4 did not decrease, as EG I was assumed to act only on oligomers with DP > 4. 
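To make the difference between the random (endo) attack of EG and the chain-end (exo) attack of the cellobiohydrolases concrete, a deliberately simplified Monte Carlo sketch is given below. It is an illustration only, not the authors' implementation, which additionally tracks microfibril geometry, crystallinity, enzyme adsorption, crowding and product inhibition; the chain number and step count here are arbitrary.

```python
# Toy Monte Carlo sketch of endo (EG) versus exo (CBH) attack on cellulose chains.
# Illustration only; chains are represented simply as their lengths (DP).
import random

random.seed(1)   # Python's generator is the Mersenne Twister (Matsumoto and Nishimura 1998)

chains = [700] * 200        # hypothetical filter-paper-like chains of DP 700
cellobiose_units = 0        # glucose units released as cellobiose by CBH action

def eg_step(chains):
    """Endoglucanase: cleave a random internal bond of a randomly chosen chain."""
    i = random.randrange(len(chains))
    if chains[i] > 4:                              # EG assumed to act only on DP > 4
        cut = random.randint(1, chains[i] - 1)
        chains[i:i + 1] = [cut, chains[i] - cut]   # one chain becomes two (new chain ends)

def cbh_step(chains):
    """Cellobiohydrolase: release one cellobiose unit from the end of a chain."""
    global cellobiose_units
    i = random.randrange(len(chains))
    if chains[i] >= 2:
        chains[i] -= 2
        cellobiose_units += 2
        if chains[i] == 0:
            chains.pop(i)

for _ in range(20000):                             # alternate attacks at random
    (eg_step if random.random() < 0.5 else cbh_step)(chains)

print(len(chains), cellobiose_units)               # EG multiplies chain ends; CBH releases sugar
```

Even this toy version shows why EG and CBH complement each other: the random cuts of EG create the chain ends on which CBH depends, which is the basis of the endo–exo synergism discussed below.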
In the case of hydrolysis by CBH I and CBH II, all soluble sugars except cellobiose, glucose, and cellotriose were produced in negligible amounts (less than 0.01 mg/L, data not reported). Endo–exo synergism The endo–exo synergism is a highly effective synergism that has been reported in many studies and plays a critical role in hydrolysis rates and yields (Andersen 2007; Medve et al. 1998; Väljamäe et al. 1999; Zhang and Lynd 2004). The model was simulated for hydrolysis of filter paper and cotton for individual and combined EG and CBH I. Simulations were performed for 48 h assuming a 25 g/L substrate concentration at enzyme loadings of 10 mg/g glucans (individually, and a total of 20 mg/g glucans in the mixture, with an EG to CBH I ratio of 1:1). Figure 16 illustrates the comparison between the theoretical conversion (the sum of cellulose conversions during hydrolysis by the individual enzymes) and the actual conversion (cellulose conversion during hydrolysis by the enzymes acting simultaneously). Endo–exo synergism during hydrolysis of filter paper and cotton cellulose 25 g/L, EG I 10 mg/g glucans, CBH I 10 mg/g glucans (total of 20 mg/g glucans when acting in a 1:1 mixture). Solid lines are results from the combined action of the enzymes and lines with points (theoretical) are the sum of conversions from the action of the individual enzymes The common measure of synergism is the "degree of synergism (DS)", which is defined as follows (Eq. 2): $$\text{Degree of synergism} = \frac{\Delta C_{\text{mixed}}}{\sum_{i=1}^{n} \Delta C_{i}},$$ where $\Delta C_{\text{mixed}}$ is the cellulose conversion obtained from the mixture of $n$ enzymes and $\Delta C_{i}$ is the cellulose conversion obtained from the individual action of the $i$-th enzyme. It can be seen from Fig. 16 that the expected synergism was observed in the model simulations. The degree of synergism increased initially and then decreased towards the end of hydrolysis. Similar trends have been observed by other researchers: Kleman-Leyer et al. (1996) for hydrolysis of cotton and Medve et al. (1998) for hydrolysis of Avicel. The phenomenon can be explained by the fact that, at the beginning of hydrolysis, the surface molecules are accessible for EG action and chain ends are sufficient for CBH action. As the hydrolysis progresses, endoglucanases create additional chain ends and increase their availability for exoglucanases, which results in a high hydrolysis rate and synergism. However, with further progress in hydrolysis, product inhibition (mainly by cellobiose and glucose) becomes very dominant and the total yields are not significantly higher than in the case where the enzymes work individually. The highest values of the degree of synergism were 1.33 and 4.35 for filter paper and cotton, respectively. The values of DS obtained from the model simulations are consistent with values reported in the literature (Medve et al. 1998; Väljamäe et al. 1999; Zhang and Lynd 2004; Zhou et al. 2010). A large variation in DS values can be observed in literature studies, possibly because several factors, such as the total time of hydrolysis, the purity of the enzymes, the activity of the enzymes, and the enzyme loadings, can affect the synergism. The synergism was higher for cotton hydrolysis than for filter paper hydrolysis. The inverse relationship between DS and substrate DP was expected and has been reported in the literature (Andersen 2007; Srisodsuk et al. 1998; Zhang and Lynd 2004). 
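Eq. 2 is a simple ratio; a minimal sketch of the calculation, using hypothetical conversion values rather than the study's data, is:

```python
# Minimal sketch of the degree-of-synergism calculation in Eq. 2 (hypothetical inputs).
def degree_of_synergism(conversion_mixed, individual_conversions):
    """DS = conversion by the enzyme mixture / sum of conversions by the individual enzymes."""
    return conversion_mixed / sum(individual_conversions)

# e.g. a mixture converting 20% of the cellulose while the single enzymes convert 9% and 6%
print(round(degree_of_synergism(0.20, [0.09, 0.06]), 2))   # 1.33
```

A DS above 1 therefore simply means that the mixture converts more cellulose than the enzymes would in total when acting separately.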
A comprehensive review on hydrolysis by Zhang and Lynd (2004) compiled DS values from various studies and reported low DS values (1.3–2.2) for Avicel and high DS values (4.1–10) for cotton and bacterial cellulose from the synergism of T. reesei enzymes. During cellulose hydrolysis by CBH I alone, its accessibility to chain ends is very limited and the cellulose conversion is very low. The accessibility is further reduced for substrates like cotton, with a very high degree of polymerization. During the combined action of EG and CBH I, the creation of additional chain ends by the random action of EG increases the substrate availability for the action of CBH I, which results in more effective hydrolysis. A novel approach of stochastic molecular modeling based on basic sciences and computer algorithms was used to model the complex cellulose hydrolysis process. In this work, the model was further improved by incorporating some critical phenomena, especially the enzyme crowding effect, and the model was validated with actual hydrolysis experiments using purified enzymes. The model was accurate in predicting the cellulose hydrolysis profiles obtained from experimental studies from both the literature and this work. The model captured the dynamics of cellulose hydrolysis during the action of individual as well as multiple cellulase enzymes. The model results successfully followed all important trends observed experimentally and reported in literature studies, such as product inhibition, low cellobiohydrolase activity on high-DP substrates, low endoglucanase activity on crystalline substrates, and the inverse relationship between the degree of synergism and substrate DP. The model was robust and has high potential usability, as the model simulations fitted well with the experimental data from both the literature and the current work without changes to any model parameters (except enzyme activity). Andersen N (2007) Enzymatic hydrolysis of cellulose—experimental and modeling studies. Technical University of Denmark, Copenhagen Andersen N, Johansen KS, Michelsen M, Stenby EH, Krogh KBRM, Olsson L (2008) Hydrolysis of cellulose using mono-component enzymes shows synergy during hydrolysis of phosphoric acid swollen cellulose (PASC), but competition on Avicel. Enzym Microb Technol 42:362–370 Asztalos A, Daniels M, Sethi A, Shen T, Langan P, Redondo A, Gnanakaran S (2012) A coarse-grained model for synergistic action of multiple enzymes on cellulose. Biotechnol Biofuels 5:55 Baker JO, Ehrman CI, Adney WS, Thomas SR, Himmel ME (1998) Hydrolysis of cellulose using ternary mixtures of purified cellulases. Appl Biochem Biotechnol 70:395–403 Ballesteros M (2010) Enzymatic hydrolysis of lignocellulosic biomass. In: Waldron K (ed) Bioalcohol production: biochemical conversion of lignocellulosic biomass. CRC Press, Boca Raton Banerjee G, Car S, Scott-Craig JS, Borrusch MS, Bongers M, Walton JD (2010a) Synthetic multi-component enzyme mixtures for deconstruction of lignocellulosic biomass. Bioresour Technol 101:9097–9105 Banerjee G, Car S, Scott-Craig JS, Borrusch MS, Walton JD (2010b) Rapid optimization of enzyme mixtures for deconstruction of diverse pretreatment/biomass feedstock combinations. Biotechnol Biofuels 3:22 Bansal P, Hall M, Realff MJ, Lee JH, Bommarius AS (2009) Modeling cellulase kinetics on lignocellulosic substrates. Biotechnol Adv 27:833–848 Berlin A, Maximenko V, Gilkes N, Saddler J (2007) Optimization of enzyme complexes for lignocellulose hydrolysis. 
Biotechnol Bioeng 97:287–296 Besselink T, Baks T, Janssen AEM, Boom RM (2008) A stochastic model for predicting dextrose equivalent and saccharide composition during hydrolysis of starch by α-amylase. Biotechnol Bioeng 100:684–697 Bezerra RMF, Dias AA (2004) Discrimination among eight modified Michaelis-Menten kinetics models of cellulose hydrolysis with a large range of substrate/enzyme ratios. Appl Biochem Biotechnol 112:173–184 Bezerra RMF, Dias AA, Fraga I, Pereira AN (2011) Cellulose hydrolysis by cellobiohydrolase Cel7A Shows mixed hyperbolic product inhibition. Appl Biochem Biotechnol 165:178–189 Chang VS, Holtzapple MT (2000) Fundamental factors affecting biomass enzymatic reactivity. Appl Biochem Biotechnol 84:5–37 Chinga-Carrasco G (2011) Cellulose fibres, nanofibrils and microfibrils: the morphological sequence of MFC components from a plant physiology and fibre technology point of view. Nanoscale Res Lett 6:417 Eriksson T, Karlsson J, Tjerneld F (2002) A model explaining declining rate in hydrolysis of lignocellulose substrates with cellobiohydrolase I (Cel7A) and endoglucanase I (Cel7B) of Trichoderma reesei. Appl Biochem Biotechnol 101:41–60 Fan L, Lee Y (1983) Kinetic studies of enzymatic hydrolysis of insoluble cellulose: derivation of a mechanistic kinetic model. Biotechnol Bioeng 25:2707–2733 Fan L, Gharpuray MM, Lee YH (1987) Cellulose hydrolysis. Biotechnology monographs, vol 3. Springer, Berlin Gao D, Chundawat SPS, Krishnan C, Balan V, Dale BE (2010) Mixture optimization of six core glycosyl hydrolases for maximizing saccharification of ammonia fiber expansion (AFEX) pretreated corn stover. Bioresour Technol 101:2770–2781 Ghose T (1987) Measurement of cellulase activities. Pure Appl Chem 59:257–268 Hall M, Bansal P, Lee JH, Realff MJ, Bommarius AS (2010) Cellulose crystallinity—a key predictor of the enzymatic hydrolysis rate. FEBS J 277:1571–1582 Igarashi K et al (2011) Traffic jams reduce hydrolytic efficiency of cellulase on cellulose surface. Science 333:1279–1282 Jäger G et al (2010) Practical screening of purified cellobiohydrolases and endoglucanases with α-cellulose and specification of hydrodynamics. Biotechnol Biofuels 3:18 Jeoh T, Ishizawa CI, Davis MF, Himmel ME, Adney WS, Johnson DK (2007) Cellulase digestibility of pretreated biomass is limited by cellulose accessibility. Biotechnol Bioeng 98:112–122 Kadam KL, Rydholm EC, McMillan JD (2004) Development and validation of a kinetic model for enzymatic saccharification of lignocellulosic biomass. Biotechnol Prog 20:698–705 Kleman-Leyer KM, Siika-Aho M, Teeri TT, Kirk TK (1996) The cellulases endoglucanase I and cellobiohydrolase II of Trichoderma reesei act synergistically to solubilize native cotton cellulose but not to decrease Its molecular size. Appl Environ Microbiol 62:2883–2887 Kumar D (2014) Biochemical conversion of lignocellulosic biomass to ethanol: experimental, enzymatic hydrolysis modeling, techno-economic and life cycle assessment studies. Oregon State University, Corvallis Kumar D, Murthy GS (2011) Impact of pretreatment and downstream processing technologies on economics and energy in cellulosic ethanol production. Biotechnol Biofuels 4:27 Kumar D, Murthy GS (2013) Stochastic molecular model of enzymatic hydrolysis of cellulose for ethanol production. Biotechnol Biofuels 6:63 Levine SE, Fox JM, Blanch HW, Clark DS (2010) A mechanistic model of the enzymatic hydrolysis of cellulose. 
Biotechnol Bioeng 107:37–51 Lynd LR, Weimer PJ, Van Zyl WH, Pretorius IS (2002) Microbial cellulose utilization: fundamentals and biotechnology. Microbiol Mol Biol Rev 66:506–577 Marchal L, Zondervan J, Bergsma J, Beeftink H, Tramper J (2001) Monte Carlo simulation of the α-amylolysis of amylopectin potato starch. Bioprocess Biosyst Eng 24:163–170 Marchal L, Ulijn R, Gooijer CD, Franke G, Tramper J (2003) Monte Carlo simulation of the α-amylolysis of amylopectin potato starch. 2. α-amylolysis of amylopectin. Bioprocess Biosyst Eng 26:123–132 Matsumoto M, Nishimura T (1998) Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Trans Model Comput Simul 8:3–30 Medve J, Karlsson J, Lee D, Tjerneld F (1998) Hydrolysis of microcrystalline cellulose by cellobiohydrolase I and endoglucanase II from Trichoderma reesei: adsorption, sugar production pattern, and synergism of the enzymes. Biotechnol Bioeng 59:621–634 Merino S, Cherry J (2007) Progress and challenges in enzyme development for biomass utilization. Biofuels 108:95–120 Mosier N, Hall P, Ladisch C, Ladisch M (1999) Reaction kinetics, molecular action, and mechanisms of cellulolytic proteins. Recent Progress Bioconversion Lignocellul 65:23–40 Murthy GS, Johnston DB, Rausch KD, Tumbleson M, Singh V (2011) Starch hydrolysis modeling: application to fuel ethanol production. Bioprocess Biosyst Eng 34:879–890 Sangseethong K, Penner MH (1998) p-Aminophenyl β-cellobioside as an affinity ligand for exo-type cellulases. Carbohydr Res 314:245–250 Srisodsuk M, Kleman-Leyer K, Keränen S, Kirk TK, Teeri TT (1998) Modes of action on cotton and bacterial cellulose of a homologous endoglucanase–exoglucanase pair from Trichoderma reesei. Eur J Biochem 251:885–892 Teugjas H, Väljamäe P (2013) Product inhibition of cellulases studied with 14C-labeled cellulose substrates. Biotechnol Biofuels 6:104 Väljamäe P, Sild V, Nutt A, Pettersson G, Johansson G (1999) Acid hydrolysis of bacterial cellulose reveals different modes of synergistic action between cellobiohydrolase I and endoglucanase I. Eur J Biochem 266:327–334 Wang M, Li Z, Fang X, Wang L, Qu Y (2012) Cellulolytic enzyme production and enzymatic hydrolysis for second-generation bioethanol production. Adv Biochem Eng Biotechnol. 128:1–24. https://doi.org/10.1007/10_2011_131 Wojciechowski PM, Koziol A, Noworyta A (2001) Iteration model of starch hydrolysis by amylolytic enzymes. Biotechnol Bioeng 75:530–539 Wood T (1974) Properties and mode of action of cellulases. In: Biotechnology and bioengineering symposium, vol 5, pp 111–133 Zhang YHP, Lynd LR (2004) Toward an aggregated understanding of enzymatic hydrolysis of cellulose: noncomplexed cellulase systems. Biotechnol Bioeng 88:797–824 Zhang YHP, Lynd LR (2006) A functionally based model for hydrolysis of cellulose by fungal cellulase. Biotechnol Bioeng 94:888–898 Zhou W, Hao Z, Xu Y, Schüttler HB (2009) Cellulose hydrolysis in evolving substrate morphologies II: numerical results and analysis. Biotechnol Bioeng 104:275–289 Zhou W, Xu Y, Schüttler HB (2010) Cellulose hydrolysis in evolving substrate morphologies III: time-scale analysis. Biotechnol Bioeng 107:224–234 DK, and GM developed the model and designed experiments. DK conducted experiments, analyzed data and prepared the manuscript. GM reviewed the results, helped in data analysis and edited the manuscript. All authors read and approved the final manuscript. Authors gratefully acknowledge the support by National Science Foundation through NSF Grant No. 
1236349 from the Energy for Sustainability program, CBET Division. All data generated and analyzed during this study are included within the manuscript in the form of graphs and tables. The authors will provide any missing data on request. Ethical approval and consent to participate Biological and Ecological Engineering, Oregon State University, Corvallis, OR, USA Deepak Kumar & Ganti S. Murthy Agricultural and Biological Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA Ganti S. Murthy Correspondence to Ganti S. Murthy. Schematic of algorithm for CBH action for cellulose hydrolysis. Schematic of algorithm for EG action for cellulose hydrolysis. Values of parameters used for EG I, CBH I and CBH II action; Figure S1. Comparison of model simulations (previous and new version) with experimental data from literature: S1a for Avicel (50 g/L) and S1b for Avicel (25 g/L); Figure S2. Comparison of model simulations (old version and current model) with experimental data during hydrolysis of filter paper by CBH I; Figure S3. Endoglucanase action on substrates with different crystallinity; Figure S4. Glucose production profile during action of endoglucanases on filter paper. Kumar, D., Murthy, G.S. Development and validation of a stochastic molecular model of cellulose hydrolysis by action of multiple cellulase enzymes. Bioresour. Bioprocess. 4, 54 (2017). https://doi.org/10.1186/s40643-017-0184-2 Cellulase purification Synergism
CommonCrawl
November 2016, 36(11): 6331-6377. doi: 10.3934/dcds.2016075 Groups of asymptotic diffeomorphisms Robert McOwen 1 and Peter Topalov 1, Northeastern University, Boston, MA 02115, United States Received October 2015 Revised June 2016 Published August 2016 We consider classes of diffeomorphisms of Euclidean space with partial asymptotic expansions at infinity; the remainder term lies in a weighted Sobolev space whose properties at infinity fit with the desired application. We show that two such classes of asymptotic diffeomorphisms form topological groups under composition. As such, they can be used in the study of fluid dynamics according to the approach of V. Arnold [1]. Keywords: Groups of diffeomorphisms, Camassa-Holm equation, Euler equation, asymptotic expansions, weighted Sobolev spaces. Mathematics Subject Classification: Primary: 58D17, 35Q31; Secondary: 76N1. Citation: Robert McOwen, Peter Topalov. Groups of asymptotic diffeomorphisms. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11) : 6331-6377. doi: 10.3934/dcds.2016075 V. Arnold, Sur la géométrie différentielle des groupes de Lie de dimension infinie et ses applications à l'hydrodynamique des fluides parfaits, Ann. Inst. Fourier, 16 (1966), 319. doi: 10.5802/aif.233. R. Bartnik, The mass of an asymptotically flat manifold, Comm. Pure Appl. Math., 39 (1986), 661. doi: 10.1002/cpa.3160390505. I. Bondareva and M. Shubin, Uniqueness of the solution of the Cauchy problem for the Korteweg-de Vries equation in classes of increasing functions, Moscow Univ. Math. Bulletin, 40 (1985), 53. I. Bondareva and M. Shubin, Equations of Korteweg-de Vries type in classes of increasing functions, J. Soviet Math., 51 (1990), 2323. doi: 10.1007/BF01094991. J. P. Bourguignon and H. Brezis, Remarks on the Euler equation, J. Func. Anal., 15 (1974), 341. R. Camassa and D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett., 71 (1993), 1661. doi: 10.1103/PhysRevLett.71.1661. M. Cantor, Perfect fluid flows over $\mathbb{R}^n$ with asymptotic conditions, J. Func. Anal., 18 (1975), 73. doi: 10.1016/0022-1236(75)90030-0. A. Constantin, Existence of permanent and breaking waves for a shallow water equation: A geometric approach, Ann. Inst. Four. Grenoble, 50 (2000), 321. doi: 10.5802/aif.1757. D. Ebin and J. Marsden, Groups of diffeomorphisms and the motion of an incompressible fluid, Ann. Math., 92 (1970), 102. doi: 10.2307/1970699. H. Inci, T. Kappeler and P. Topalov, On the regularity of the composition of diffeomorphisms, Mem. Amer. Math. Soc., 226 (2013). doi: 10.1090/S0065-9266-2013-00676-4. T. Kappeler, P. Perry, M. Shubin and P. Topalov, Solutions of mKdV in classes of functions unbounded at infinity, J. Geom. Anal., 18 (2008), 443. doi: 10.1007/s12220-008-9013-3. C. Kenig, G. Ponce and L. Vega, Global solutions for the KdV equation with unbounded data, J. Diff. Equations, 139 (1997), 339. doi: 10.1006/jdeq.1997.3297. R. McOwen, The behavior of the Laplacian on weighted Sobolev spaces, Comm. Pure Appl. Math., 32 (1979), 783. doi: 10.1002/cpa.3160320604. R. 
McOwen, Partial Differential Equations: Methods and Applications, 2nd ed., 2003. R. McOwen and P. Topalov, Asymptotics in shallow water waves, Discrete Contin. Dyn. Syst., 35 (2015), 3103. doi: 10.3934/dcds.2015.35.3103. R. McOwen and P. Topalov, Spatial asymptotic expansions in the incompressible Euler equation, arXiv:1606.08059. A. Menikoff, The existence of unbounded solutions of the Korteweg-de Vries equation, Comm. Pure Appl. Math., 25 (1972), 407. doi: 10.1002/cpa.3160250404. P. Michor and D. Mumford, A zoo of diffeomorphism groups on $\mathbb{R}^n$, Ann. Glob. Anal. Geom., 44: 529. doi: 10.1007/s10455-013-9380-2. G. Misiolek, A shallow water equation as a geodesic flow on the Bott-Virasoro group, J. Geom. Phys., 24 (1998), 203. doi: 10.1016/S0393-0440(97)00010-7. D. Montgomery, On continuity in topological groups, Bull. Amer. Math. Soc., 42 (1936), 879. doi: 10.1090/S0002-9904-1936-06456-6. Yongsheng Mi, Boling Guo, Chunlai Mu. On an $N$-Component Camassa-Holm equation with peakons. Discrete & Continuous Dynamical Systems - A, 2017, 37 (3) : 1575-1601. doi: 10.3934/dcds.2017065 Helge Holden, Xavier Raynaud. Dissipative solutions for the Camassa-Holm equation. Discrete & Continuous Dynamical Systems - A, 2009, 24 (4) : 1047-1112. doi: 10.3934/dcds.2009.24.1047 Zhenhua Guo, Mina Jiang, Zhian Wang, Gao-Feng Zheng. Global weak solutions to the Camassa-Holm equation. Discrete & Continuous Dynamical Systems - A, 2008, 21 (3) : 883-906. doi: 10.3934/dcds.2008.21.883 Milena Stanislavova, Atanas Stefanov. Attractors for the viscous Camassa-Holm equation. Discrete & Continuous Dynamical Systems - A, 2007, 18 (1) : 159-186. doi: 10.3934/dcds.2007.18.159 Defu Chen, Yongsheng Li, Wei Yan. On the Cauchy problem for a generalized Camassa-Holm equation. Discrete & Continuous Dynamical Systems - A, 2015, 35 (3) : 871-889. doi: 10.3934/dcds.2015.35.871 Yu Gao, Jian-Guo Liu. The modified Camassa-Holm equation in Lagrangian coordinates. Discrete & Continuous Dynamical Systems - B, 2018, 23 (6) : 2545-2592. doi: 10.3934/dcdsb.2018067 Andrea Natale, François-Xavier Vialard. Embedding Camassa-Holm equations in incompressible Euler. Journal of Geometric Mechanics, 2019, 11 (2) : 205-223. doi: 10.3934/jgm.2019011 Stephen C. Anco, Elena Recio, María L. Gandarias, María S. Bruzón. A nonlinear generalization of the Camassa-Holm equation with peakon solutions. Conference Publications, 2015, 2015 (special) : 29-37. doi: 10.3934/proc.2015.0029 Li Yang, Zeng Rong, Shouming Zhou, Chunlai Mu. Uniqueness of conservative solutions to the generalized Camassa-Holm equation via characteristics. Discrete & Continuous Dynamical Systems - A, 2018, 38 (10) : 5205-5220. doi: 10.3934/dcds.2018230 Yongsheng Mi, Chunlai Mu. On a three-Component Camassa-Holm equation with peakons. Kinetic & Related Models, 2014, 7 (2) : 305-339. doi: 10.3934/krm.2014.7.305 Shouming Zhou, Chunlai Mu. Global conservative and dissipative solutions of the generalized Camassa-Holm equation. Discrete & Continuous Dynamical Systems - A, 2013, 33 (4) : 1713-1739. doi: 10.3934/dcds.2013.33.1713 Shihui Zhu. Existence and uniqueness of global weak solutions of the Camassa-Holm equation with a forcing. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9) : 5201-5221. doi: 10.3934/dcds.2016026 Feng Wang, Fengquan Li, Zhijun Qiao. On the Cauchy problem for a higher-order μ-Camassa-Holm equation. 
Discrete & Continuous Dynamical Systems - A, 2018, 38 (8) : 4163-4187. doi: 10.3934/dcds.2018181 Danping Ding, Lixin Tian, Gang Xu. The study on solutions to Camassa-Holm equation with weak dissipation. Communications on Pure & Applied Analysis, 2006, 5 (3) : 483-492. doi: 10.3934/cpaa.2006.5.483 Priscila Leal da Silva, Igor Leite Freire. An equation unifying both Camassa-Holm and Novikov equations. Conference Publications, 2015, 2015 (special) : 304-311. doi: 10.3934/proc.2015.0304 Stephen Anco, Daniel Kraus. Hamiltonian structure of peakons as weak solutions for the modified Camassa-Holm equation. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4449-4465. doi: 10.3934/dcds.2018194 David F. Parker. Higher-order shallow water equations and the Camassa-Holm equation. Discrete & Continuous Dynamical Systems - B, 2007, 7 (3) : 629-641. doi: 10.3934/dcdsb.2007.7.629 Shaoyong Lai, Qichang Xie, Yunxi Guo, YongHong Wu. The existence of weak solutions for a generalized Camassa-Holm equation. Communications on Pure & Applied Analysis, 2011, 10 (1) : 45-57. doi: 10.3934/cpaa.2011.10.45 Alberto Bressan, Geng Chen, Qingtian Zhang. Uniqueness of conservative solutions to the Camassa-Holm equation via characteristics. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 25-42. doi: 10.3934/dcds.2015.35.25 Jae Min Lee, Stephen C. Preston. Local well-posedness of the Camassa-Holm equation on the real line. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 3285-3299. doi: 10.3934/dcds.2017139 Robert McOwen Peter Topalov
CommonCrawl
Equity premium puzzle The equity premium puzzle refers to the inability of an important class of economic models to explain the average premium of the returns on a well-diversified U.S. equity portfolio over U.S. Treasury Bills observed for more than 100 years. The term was coined by Rajnish Mehra and Edward C. Prescott in a study published in 1985 titled The Equity Premium: A Puzzle,[1][2]. An earlier version of the paper was published in 1982 under the title A test of the intertemporal asset pricing model. The authors found that a standard general equilibrium model, calibrated to display key U.S. business cycle fluctuations, generated an equity premium of less than 1% for reasonable risk aversion levels. This result stood in sharp contrast with the average equity premium of 6% observed during the historical period. In simple terms, the investor returns on equities have been on average so much higher than returns on U.S. Treasury Bonds, that it is hard to explain why investors buy bonds, even after allowing for a reasonable amount of risk aversion. In 1982, Robert J. Shiller published the first calculation that showed that either a large risk aversion coefficient or counterfactually large consumption variability was required to explain the means and variances of asset returns.[3] Azeredo (2014) shows, however, that increasing the risk aversion level may produce a negative equity premium in an Arrow-Debreu economy constructed to mimic the persistence in U.S. consumption growth observed in the data since 1929.[4] The intuitive notion that stocks are much riskier than bonds is not a sufficient explanation of the observation that the magnitude of the disparity between the two returns, the equity risk premium (ERP), is so great that it implies an implausibly high level of investor risk aversion that is fundamentally incompatible with other branches of economics, particularly macroeconomics and financial economics. The process of calculating the equity risk premium, and selection of the data used, is highly subjective to the study in question, but is generally accepted to be in the range of 3–7% in the long-run. Dimson et al. calculated a premium of "around 3–3.5% on a geometric mean basis" for global equity markets during 1900–2005 (2006).[5] However, over any one decade, the premium shows great variability—from over 19% in the 1950s to 0.3% in the 1970s. To quantify the level of risk aversion implied if these figures represented the expected outperformance of equities over bonds, investors would prefer a certain payoff of $51,300 to a 50/50 bet paying either $50,000 or $100,000.[6] The puzzle has led to an extensive research effort in both macroeconomics and finance. So far a range of useful theoretical tools and numerically plausible explanations have been presented, but no one solution is generally accepted by economists. 
Theory The economy has a single representative household whose preferences over stochastic consumption paths are given by $$E_{0}\left[\sum_{t=0}^{\infty}\beta^{t}U(c_{t})\right],$$ where $0<\beta<1$ is the subjective discount factor, $c_{t}$ is the per capita consumption at time $t$, and $U(\cdot)$ is an increasing and concave utility function. In the Mehra and Prescott (1985) economy, the utility function belongs to the constant relative risk aversion class: $$U(c,\alpha)=\frac{c^{1-\alpha}}{1-\alpha},$$ where $0<\alpha<\infty$ is the constant relative risk aversion parameter. When $\alpha=1$, the utility function is the natural logarithmic function. Weil (1989) replaced the constant relative risk aversion utility function with the Kreps-Porteus nonexpected utility preferences: $$U_{t}=\left[c_{t}^{1-\rho}+\beta\left(E_{t}U_{t+1}^{1-\alpha}\right)^{(1-\rho)/(1-\alpha)}\right]^{1/(1-\rho)}.$$ The Kreps-Porteus utility function has a constant intertemporal elasticity of substitution and a constant coefficient of relative risk aversion which are not required to be inversely related - a restriction imposed by the constant relative risk aversion utility function. The Mehra and Prescott (1985) and Weil (1989) economies are variations of the Lucas (1978) pure exchange economy. In their economies the growth rate of the endowment process, $x_{t}$, follows an ergodic Markov process, $$P\left[x_{t+1}=\lambda_{j}\,|\,x_{t}=\lambda_{i}\right]=\phi_{i,j},$$ where $x_{t}\in\{\lambda_{1},...,\lambda_{n}\}$. This assumption is the key difference between Mehra and Prescott's economy and Lucas' economy, where the level of the endowment process follows a Markov process. There is a single firm producing the perishable consumption good. At any given time $t$, the firm's output must be less than or equal to $y_{t}$, which is stochastic and follows $y_{t+1}=x_{t+1}y_{t}$. There is only one equity share held by the representative household. Working out the intertemporal choice problem leads to $$p_{t}U'(c_{t})=\beta E_{t}\left[(p_{t+1}+y_{t+1})U'(c_{t+1})\right]$$ as the fundamental equation. For computing stock returns, $$1=\beta E_{t}\left[\frac{U'(c_{t+1})}{U'(c_{t})}R_{e,t+1}\right],$$ where $R_{e,t+1}=(p_{t+1}+y_{t+1})/p_{t}$, gives the result.[7] One can compute the derivative with respect to the percentage of stocks, and this must be zero. Much data exists that says that stocks have higher returns. For example, Jeremy Siegel says that stocks in the United States have returned 6.8% per year over a 130-year period. 
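To make the scale of the puzzle concrete, the following is a minimal numerical sketch of a two-state version of the model above. The consumption-growth calibration (mean 1.8%, standard deviation 3.6%, persistence 0.43) follows values commonly cited for the Mehra-Prescott exercise; the discount factor and risk-aversion parameter are illustrative choices, and the script is only a sketch, not a reproduction of the original calculations.

```python
# Two-state sketch of the representative-agent economy described above.
# Calibration values are the commonly cited ones; beta and alpha are illustrative.
import numpy as np

mu, delta, phi = 0.018, 0.036, 0.43          # growth mean, spread, persistence
beta, alpha = 0.95, 2.0                      # discount factor, risk aversion

lam = np.array([1 + mu + delta, 1 + mu - delta])   # consumption growth states
P = np.array([[phi, 1 - phi], [1 - phi, phi]])     # Markov transition matrix

# Price-dividend ratios w solve w_i = beta * sum_j P_ij * lam_j**(1-alpha) * (w_j + 1).
A = beta * P * lam**(1 - alpha)
w = np.linalg.solve(np.eye(2) - A, A @ np.ones(2))

Re = lam * (w + 1) / w[:, None] - 1                # equity return from state i to state j
Rf = 1.0 / (beta * P @ lam**(-alpha)) - 1          # risk-free rate in state i

pi = np.array([0.5, 0.5])                          # stationary distribution (symmetric chain)
premium = pi @ (P * Re).sum(axis=1) - pi @ Rf
print(f"model equity premium: {premium:.2%}")      # roughly 0.3%, far below the observed ~6%
```

In this calibration, raising alpha within conventionally accepted bounds does little to close the gap, which is the quantitative content of the puzzle.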
Proponents of the capital asset pricing model say that this is due to the higher beta of stocks, and that higher-beta stocks should return even more. Others have criticized that the period used in Siegel's data is not typical, or the country is not typical. Possible explanations A large number of explanations for the puzzle have been proposed. These include: rejection of the Arrow-Debreu model in favor of different models, modifications to the assumed preferences of investors, imperfections in the model of risk aversion, the excess premium for the risky assets equation results when assuming exceedingly low consumption/income ratios, and a contention that the equity premium does not exist: that the puzzle is a statistical illusion. Kocherlakota (1996), Mehra and Prescott (2003) present a detailed analysis of these explanations in financial markets and conclude that the puzzle is real and remains unexplained.[8][9] Subsequent reviews of the literature have similarly found no agreed resolution. The equity premium: a deeper puzzle Azeredo (2014) showed that traditional pre-1930 consumption measures understate the extent of serial correlation in the U.S. annual real growth rate of per capita consumption of non-durables and services ("consumption growth").[10] Under alternative measures proposed in the study, the serial correlation of consumption growth is found to be positive. This new evidence implies that an important subclass of dynamic general equilibrium models studied by Mehra and Prescott (1985) generates negative equity premium for reasonable risk-aversion levels, thus further exacerbating the equity premium puzzle. Individual characteristics Some explanations rely on assumptions about individual behavior and preferences different from those made by Mehra and Prescott. Examples include the prospect theory model of Benartzi and Thaler (1995) based on loss aversion.[11] A problem for this model is the lack of a general model of portfolio choice and asset valuation for prospect theory. A second class of explanations is based on relaxation of the optimization assumptions of the standard model. The standard model represents consumers as continuously-optimizing dynamically-consistent expected-utility maximizers. These assumptions provide a tight link between attitudes to risk and attitudes to variations in intertemporal consumption which is crucial in deriving the equity premium puzzle. Solutions of this kind work by weakening the assumption of continuous optimization, for example by supposing that consumers adopt satisficing rules rather than optimizing. An example is info-gap decision theory,[12] based on a non-probabilistic treatment of uncertainty, which leads to the adoption of a robust satisficing approach to asset allocation. Equity characteristics A second class of explanations focuses on characteristics of equity not captured by standard capital market models, but nonetheless consistent with rational optimization by investors in smoothly functioning markets. Writers including Bansal and Coleman (1996), Palomino (1996) and Holmstrom and Tirole (1998) focus on the demand for liquidity. Tax distortions McGrattan and Prescott (2001)[citation needed] argue that the observed equity premium in the United States since 1945 may be explained by changes in the tax treatment of interest and dividend income. As Mehra (2003)[citation needed] notes, there are some difficulties in the calibration used in this analysis and the existence of a substantial equity premium before 1945 is left unexplained. 
Graham and Harvey have estimated that, for the United States, the expected average premium during the period June 2000 to November 2006 ranged between 4.65 and 2.50.[13] They found a modest correlation of 0.62 between the 10-year equity premium and a measure of implied volatility (in this case VIX, the Chicago Board Options Exchange Volatility Index). Anwar Shaikh explanation Anwar Shaikh argues that in the classical framework the equity premium is a consequence of fractional-reserve banking and competition.[14] In the most abstract model of a fractional-reserve bank in classical economics, a bank's capital consists only of its reserves R. The bank attracts deposits D such that the reserves cover a fraction ρ = R/D of the deposits, then creates loans L such that the deposits cover a fraction d = D/L of the loans. The bank then obtains a profit rate of $$r=\frac{\text{income}}{\text{capital}}=\frac{i\cdot L}{R}=\frac{i}{\rho\cdot d},$$ where i = r·ρ·d is the interest rate on loans. Since ρ·d = R/L < 1, the profit rate r is higher than the interest rate i. In a competitive market, the interest rates will be equalized across banks. Since bond holders compete with banks in the credit market, their returns are equalized with the bank interest rate. Stock returns, on the other hand, are equalized with the profit rate r, and there is no mechanism that equalizes equity and bond rates of return. In a more realistic classical model, the bank interest rate is the sum of r·ρ·d and a positive term that depends on banks' operating costs and the price level, so that the equity premium is smaller than in the abstract model. The premium r−i must still be greater than zero for there to be an incentive for firms to borrow. The difference between the interest rate and the profit rate is, however, not a risk premium, but a structural factor. Market failure explanations Two broad classes of market failure have been considered as explanations of the equity premium. First, problems of adverse selection and moral hazard may result in the absence of markets in which individuals can insure themselves against systematic risk in labor income and noncorporate profits. Second, transaction costs or liquidity constraints may prevent individuals from smoothing consumption over time. Denial of equity premium A final possible explanation is that there is no puzzle to explain: that there is no equity premium.[citation needed] This can be argued in a number of ways, all of them being different forms of the argument that we don't have enough statistical power to distinguish the equity premium from zero: Selection bias of the US market in studies. The US market was the most successful stock market in the 20th century. Other countries' markets displayed lower long-run returns (but still with positive equity premiums). Picking the best observation (US) from a sample leads to upwardly biased estimates of the premium. 
Survivorship bias of exchanges: exchanges often go bust (just as governments default; for example, the Shanghai stock exchange during the 1949 communist takeover), and this risk needs to be included – using only exchanges which have survived for the long term overstates returns. Exchanges close often enough for this effect to matter.[15] Low number of data points: the period 1900–2005 provides only 105 independent years, which is not a large enough sample size to run statistical analyses with full confidence, especially in view of the black swan effect. Windowing: returns of equities (and relative returns) vary greatly depending on which points are included. Using data starting from the top of the market in 1929 or starting from the bottom of the market in 1932 (leading to estimates of the equity premium that are 1% lower per year), or ending at the top in 2000 (vs. the bottom in 2002) or the top in 2007 (vs. the bottom in 2009 or beyond) completely changes the overall conclusion. However, in all windows considered, the equity premium is always greater than zero. A related criticism is that the apparent equity premium is an artifact of observing stock market bubbles in progress. Note however that most mainstream economists agree that the evidence shows substantial statistical power. Implications The magnitude of the equity premium has implications for resource allocation, social welfare, and economic policy. Grant and Quiggin (2005) derive the following implications of the existence of a large equity premium: Macroeconomic variability associated with recessions is expensive. Risk to corporate profits robs the stock market of most of its value. Corporate executives are under irresistible pressure to make short-sighted decisions. Policies such as disinflation or costly reform that promises long-term gains at the expense of short-term pain are much less attractive if their benefits are risky. Social insurance programs might well benefit from investing their resources in risky portfolios in order to mobilize additional risk-bearing capacity. There is a strong case for public investment in long-term projects and corporations, and for policies to reduce the cost of risky capital. Transaction taxes could be either for good or for ill.[clarification needed] Ellsberg paradox Loss aversion List of cognitive biases Economic puzzle Forward premium anomaly Real exchange-rate puzzles ^ Mehra, Rajnish; Edward C. Prescott (1985). "The Equity Premium: A Puzzle" (PDF). Journal of Monetary Economics. 15 (2): 145–161. doi:10.1016/0304-3932(85)90061-3. ^ Handbook of the Equity Risk Premium, edited by Rajnish Mehra. ^ "Consumption, Asset Markets, and Macroeconomic Fluctuations," Carnegie Rochester Conference Series on Public Policy 17: 203–238. ^ Azeredo, F. (2014). "The equity premium: a deeper puzzle" (PDF). Annals of Finance. 10 (3): 347–373. doi:10.1007/s10436-014-0248-7. ^ Dimson, Elroy; Marsh, Paul; Staunton, Mike (2008). "The Worldwide Equity Premium: A Smaller Puzzle". Handbook of the Equity Risk Premium. Amsterdam: Elsevier. ISBN 978-0-08-055585-0. SSRN 891620. ^ Mankiw, N. Gregory; Zeldes, Stephen P. (1991). "The Consumption of Stockholders and Nonstockholders". Journal of Financial Economics. 29 (1): 97–112. CiteSeerX 10.1.1.364.2730. doi:10.1016/0304-405X(91)90015-C. ^ The Equity Premium Puzzle: A Review. ^ Kocherlakota, Narayana R. (March 1996). "The Equity Premium: It's Still a Puzzle" (PDF). Journal of Economic Literature. 34 (1): 42–71. ^ Mehra, Rajnish; Edward C. Prescott (2003). "The Equity Premium Puzzle in Retrospect" (PDF). In G.M. 
Constantinides, M. Harris and R. Stulz (ed.). Handbook of the Economics of Finance. Amsterdam: North Holland. pp. 889–938. ISBN 978-0-444-51363-2. ^ Benartzi, Shlomo; Richard H. Thaler (February 1995). "Myopic Loss Aversion and the Equity Premium Puzzle" (PDF). Quarterly Journal of Economics. 110 (1): 73–92. doi:10.2307/2118511. JSTOR 2118511. ^ Yakov Ben-Haim, Info-Gap Decision Theory: Decisions Under Severe Uncertainty, Academic Press, 2nd edition, Sep. 2006. ISBN 0-12-373552-1. ^ Graham, John R.; Harvey, Campbell R. (2007). "The Equity Risk Premium in January 2007: Evidence from the Global CFO Outlook Survey". Working Paper. SSRN 959703. ^ Shaikh, Anwar (2016). Capitalism: Competition, Conflict, Crises. Oxford University Press. pp. 447–458. ISBN 9780199390632. ^ Brown, Stephen J.; Goetzmann, William N. (1995). "Performance Persistence". The Journal of Finance. 50 (2): 679–698. https://www.jstor.org/stable/2329424?seq=1#metadata_info_tab_contents Haug, Jørgen; Hens, Thorsten; Woehrmann, Peter (2013). "Risk Aversion in the Large and in the Small". Economics Letters. 118 (2): 310–313. doi:10.1016/j.econlet.2012.11.013. hdl:11250/164171.
CommonCrawl
Modelling reallocation processes in long-term labour market projections Modellierung von Anpassungsprozessen in langfristigen Arbeitsmarktprojektionen Tobias Maier1, Caroline Neuber-Pohl1, Anke Mönnig2, Gerd Zika3 & Michael Kalinowski1 Journal for Labour Market Research volume 50, pages 67–90 (2017). Long-term labour market projections are a popular tool for assessing future skill needs and the possibility of skill shortages. It is often noted that reallocation processes in the German labour market are hindered due to its strong standardization and occupational segmentation. However, it is possible that persons leave the occupation for which they have been trained. Disregarding such reallocations and their dynamics in the projection model is likely to distort the results and lead to inaccurate practical advice. In this article, we describe for the first time how reallocations in the labour market can be modelled using occupational flexibility matrices and wage dynamics. Here, it is shown that employers react to labour scarcity by increasing wages to attract workers, who to some extent can adjust their mobility behaviour accordingly. We analyse the aggregate impact of this implementation of a reallocation process of labour supply on the projection results by means of scenario comparisons. Our results suggest that considering reallocations, and additionally their dynamics, has substantial effects on the projection outcomes. They help draw an insightful picture of the future labour market and prevent over- or understating the potential for labour shortages in several occupations. We conclude that the assumptions about how reallocations differ by occupation, and to what extent they can be realized by wage impulses, are essential for the projection results and their interpretation. Furthermore, we find that in the German labour market, wage adjustments cannot balance the labour demand and supply for occupations completely. Long-term labour market projections are a popular analytical tool for identifying future skilled-labour needs and shortages. It is often noted that the strongly standardized and occupationally segmented German labour market in particular hampers the reallocation of labour supply and demand across occupations. Nevertheless, moves out of the occupation originally learned are not uncommon and must be taken into account in a long-term projection by occupation if inadequate policy recommendations are not to be derived from supposed skilled-labour shortages and surpluses. In this article, we describe for the first time how a reallocation process can be implemented using occupational flexibility matrices and occupational-field-specific wages. We show that employers react to shortages with wage increases, whereupon workers adjust their mobility behaviour. Using scenarios, we analyse the effects of different assumptions about wage developments in the occupations and their impact on the adjustment behaviour of labour supply. Our results show that taking occupational mobility behaviour into account, as well as its dynamic development, has a substantial effect on the long-term projection results. 
This yields a more differentiated picture of possible skill shortages and surpluses as well as of possible policy recommendations. In conclusion, possible wage adjustments and the occupational switches associated with them can lead to a better balance of labour supply and demand by occupation, and assumptions about how these processes unfold strongly influence the results. Moreover, for the German labour market we can conclude that not all theoretical shortages can be resolved by wage increases alone.
The German economy and labour market are subject to structural change over time. Demographic change, technological progress, and globalisation will frame the behaviour of market participants. Political planners have a special interest in having some knowledge about the future – be it for budgetary planning or preliminary policy assessments. In addition, regarding future developments of the labour market, a concern is whether the supply of skills will suffice to meet the demand of the economy, such that growth can flourish, or whether there is a possibility of labour shortages. Here, long-term labour market projections are an increasingly popular tool for policy consulting (Wilson 2001). Today many countries have such projections (cf. for example CEDEFOP 2009 and 2012 for Europe; Dupuy 2012 for the Netherlands; Gajdos and Zmurkow-Poteralska 2014 for Poland; Bonin et al. 2007; Maier et al. 2014; and Vogler-Ludwig and Düll 2013 for Germany; Lapointe et al. 2008 for Canada; Lepic and Koucky 2012 for the Czech Republic; Lockard and Wolf 2012 for the US; Tiainen 2012 for Finland; Papps 2001 for New Zealand; UK Commission for Employment and Skills 2011 for the UK). Especially in Germany, where the labour market is highly segmented into occupation-specific submarkets (cf. Mayer and Carroll 1987; Allmendinger 1989; Shavit and Müller 2000; OECD 2003), the balance of labour demand and supply hinges on today's educational attainment. Here, the occupation represents an institutional link between education and employment (cf. Weber 1972; Mayer and Carroll 1987; Abraham et al. 2011). In such a market, workers cannot be regarded as homogeneous and perfectly substitutable. The production of different goods or services calls for different specialized skills and, therefore, not every employee is suited for every job. This is why, for Germany, it is essential to project occupation-specific labour demand and supply in order to yield insightful results (Lapointe et al. 2008; CEDEFOP 2012; Helmrich and Zika 2010). However, although these submarkets are linked to a specific occupation, they are not totally restrictive. The transferability of task-based human capital enables occupational mobility to related fields (Gathmann and Schönberg 2010). In fact, Nisic and Trübswetter (2012) calculate that every year about 3.4% of Germany's employed population change their occupation. To put this into perspective, Nisic and Trübswetter (2012) calculate a yearly rate of 10.8% in Great Britain. For Denmark, Groes et al. (2015) report a yearly occupational mobility rate of 20%, and Moscarini and Thomsson (2007) estimate a monthly rate of 3.5% among male workers in the US. Thus, in international comparison, a yearly rate of 3.4% may actually be a relatively small number. Nevertheless, this level of mobility can to a certain extent be thought to resolve misallocations of the working population.
Furthermore, disregarding the opportunities and limitations of occupational flexibility and its dynamics in projection models is likely to distort the results (cf. Brücker et al. 2013; Brunow and Garloff 2011). Notwithstanding, projection models have to trade off transparency of results and accuracy to some extent; accurately reflecting all underlying mechanisms may cause separate effects not to be identifiable and results not to be interpretable (Wilson 2001). Therefore, the decision of whether or not and how to implement reallocation dynamics in a projection model of the German labour market is not trivial. Helmrich and Zika (2010) for the first time model occupational flexibilities into a long-term projection of the German labour market, the BIBB-IAB qualification and occupational field projections (QuBe, henceforth). Based on this, Maier et al. (2014) propose a dynamic reallocation mechanism for the QuBe model, which redistributes labour supply to labour demand via occupational mobility given wages. This is a novel approach to modelling long-term projections and, to our knowledge, has not been implemented in any other labour market projection so far. In the model of Maier et al. (2014), employers respond to occupation-specific labour scarcity by raising wages, which in turn causes trained workers and workers from related disciplines to more often offer their work in this occupational submarket. In this paper, we wish to highlight the impact of this modelling approach on the QuBe projection results and the overall importance of considering reallocation mechanisms in labour market projections when evaluating possible hazards of labour supply shortages in the future. Our analysis will show in which occupations we can rely on market mechanisms to solve possible labour shortages via wage dynamics, and in which occupations enterprises and policy makers have to intervene, for example by improving working conditions in general or providing further educational training. In the following, we first discuss whether wage-based dynamics of the reallocation process are adequate by reviewing recent literature on this topic (Sect. 2). In the third section, we briefly give an intuitive introduction to the QuBe model and describe its reallocation mechanism in more detail. In the fourth section, we outline the different data sources used for the QuBe model and how the reallocation dynamics were operationalized. Sect. 5 presents results from scenario comparisons, which illustrate the effect of this modelling on the projection results. Here, we first assess the overall impact of implementing the reallocation mechanism in QuBe (Sect. 5.1). Then we show how the dynamic adjustment of employers and workers to each other accounts for a great part of the overall effect (Sect. 5.2). After this, we discuss how the interpretation of the results is also strongly influenced by the implicitly modelled limitations of wage dynamics in balancing the labour market, by presenting results from wage policy scenarios (Sect. 5.3) and discussing to what extent the calculated optimal flexibility of the workforce is achievable via the wage mechanism (Sect. 5.4). In Sect. 6, we conclude and give an outlook on future research.
Theoretical assumptions and related empirical findings
In order to account for reallocation dynamics in their projection model, Maier et al. (2014) let employer-set wages partially depend on labour supply scarcity.
Labour supply, in turn, responds to differences in the relative wages of occupations through changes in occupational mobility behaviour, in that the workers' propensity to stay in their training occupation correlates positively with a lower outside option. In this model set-up, wage is the only explicit adjustment channel of employer and worker behaviour in response to misallocations of labour. All other factors which influence the mobility decisions of workers are assumed to follow a constant time trend. Other factors which drive the wage setting of the employer are assumed to relate to the production process and outside wage pressures. In the following sections, we will describe this mechanism in more detail and outline its empirical foundation and effect on the projection results. In this section, to start with, we will briefly discuss the choice of a purely wage-driven mechanism, reflecting on related literature on the topics of turnover, employer recruitment strategy, and the drivers of occupational mobility in general.
The employer's adjustment mechanism
Projection results are often said to exaggerate the extent of possible labour shortages in the future. This critique often points out that adjustment mechanisms of employers are neglected in the analysis (cf. for example Brücker et al. 2013). Brunow and Garloff (2011) even reject the idea of future labour market shortages in total. They argue that in the event of a tightening labour market, employers have plenty of ways to adjust adequately and prevent a shortage situation. They suggest that firms will react to the anticipation of a shortage by substituting their labour demands by automating processes or hiring workers from abroad. Also, firms could alter their stock of capital and produce less, thereby demanding less labour. Brunow and Garloff (2011) also highlight the importance of wages, which they consider 'upward flexible' enough to attract the necessary labour supply. Economic theory, likewise, predicts a relationship between wages and relative labour supply. Especially in the search and matching literature, labour market tightness explicitly enters the wage equation such that a shortage of applicants always corresponds to higher wages (cf. for example Pissarides 2000). Montgomery (1991), for example, uses a related model set-up to explain wage differences across industries. Here, firms who value filling their vacancy most pay the highest wage in order to overcome coordination problems and attract the most applicants to their opening. However, Bechmann et al. (2012) show that wage policy may be less important to German recruiters. They analyse data of the IAB Establishment PanelFootnote 1, where firms were asked which strategies they used or would use to alleviate labour shortages. The most important strategy, in fact, seems to be further training of the current workforce, which was chosen as very important by 42% of the surveyed firms. Next to other means of recruiting from within the company, such as later retirements or apprenticeship programs, also the attractiveness of the job offer was stated to be targeted. With 34% of the establishments highlighting its importance, making the offer desirable seems to be the second most important strategy of firms. In contrast, wages seem to be less important. Only 11% of the firms consider paying higher wages as an important strategy.
It is, however, still a strategy for 47% of the surveyed firms, even though 36% indicate that a main problem concerning recruiting is, in fact, the overly high wage demands of applicants (Bechmann et al. 2012). Finally, Dustmann and Glitz (2015) and Dustmann et al. (2009) find empirical evidence for the impact of the structure of skill supply on wages. Using IAB Establishment Panel data from 1985 and 1995, Dustmann and Glitz (2015) investigate whether employers in West Germany react to a change in the skill mix of the workforce by adjusting wages or the production intensity, where they distinguish between switching to the production of goods which can be produced by the skills available, or producing the same goods but adjusting the skill application. They conclude that firms adjust mainly by the latter. Concerning wage adjustments, they find that wages are only significantly elastic with respect to skill supply in the nontradable and manufacturing sector, where a 1% increase of skill supply corresponds to a 0.4% and 0.1% decrease in wages, respectively. Dustmann et al. (2009) come to a similar conclusion. Taking advantage of the change in the skill structure of the German labour market induced by the reunification, they show that the relative abundance of lower skilled workers after the integration of the East German Länder increased skill returns. To sum up, there is evidence of firms reacting to labour market tightness by raising wages in order to attract sufficient applicants to their vacancies. However, the extent of the wage mechanism may be relatively small as firms also use other strategies to overcome recruitment problems. These include training and solutions for better working conditions (cf. Bechmann et al. 2012).
The worker's adjustment mechanism
In labour economics, there has been a long debate about whether job or occupational mobility is associated with a wage gain or a penalty. The classic island model by Lucas and Prescott (1974) would predict that negative demand shocks motivate workers (low skilled first) to leave their job to seek higher wage opportunities. Likewise, the search and matching literature (cf. Pissarides 2000 for an overview) predicts a positive relationship between job mobility and outside wages, as workers are rational and only move if incentivized. For the German labour market, Fitzenberger and Spitz-Oener (2004) find an overall positive relationship between occupational switches and wages, thereby supporting the view that occupational mobility mainly serves as a career seeking device. However, there is also always a non-negligible share of job switchers who have experienced downward mobility (cf. Gibbons and Katz 1991). Whereas voluntary quits are most often associated with separations to higher paying jobs, involuntary lay-offs are associated with a switch to lower wages (McLaughlin 1991), which Gibbons and Katz (1991) explain with the 'lemon effect', which causes laid-off workers to have trouble finding a new job. The importance of the nature of the switch is also highlighted by recent results of Fitzenberger et al. (2015). Providing evidence concerning the occupational mobility of recent apprenticeship completers in the German labour market, they find that mere job switches inside the occupation but between firms most often lead to a wage loss, while occupational mobility is associated with a wage gain in most cases.
However, they point out that occupation-and-firm switches only result in a gain if the switch reflects an occupational upgrading, while occupation switches within the firm, which reflect a switch to a better fitting position, are usually associated with a wage gain. Other research points toward increasing wage inequality. Groes et al. (2015) point out that mainly low and high income earners switch occupations and that downward mobility seems to be a phenomenon of low income earners. An explanation for this, according to Groes et al. (2015), is that occupations with rising productivities lay off their low skilled workers (and typically low wage earners), leaving them to seek work in other occupations, while high skilled workers move out of the declining productivity occupations in order to obtain higher wages. As a result, again only the high skilled workers are hypothesized to experience wage increases when switching their occupation. The literature on task-biased technological change explains the observed trends in wage inequality by job polarization. Emerging new technologies, which automate many routine tasks, and globalisation, which poses new opportunities for offshoring (see also Grossman and Rossi-Hansberg 2008), cause redundancy of domestic labour in some occupations (see for a summary Acemoglu and Autor 2011; Goos et al. 2009). Such a trend can also be found for Germany (cf. Spitz-Oener 2006). Cortes (2016) explains this polarization effect further by the induced sorting on ability among the workforce. According to this, more able workers will sort into occupations with higher non-routine, cognitive task shares, while less able workers switch to high routine, non-cognitive jobs. Therefore, only the more able workers will experience a rise in wages upon a job switch. Yet another interpretation for the duality of wage outcomes upon occupational changes is presented by Gathmann and Schönberg (2010) and also Geel and Backes-Gellner (2011). They attribute the probability of a wage gain after a switch to the proportion of specific skills acquired in the former occupation. Geel and Backes-Gellner (2011) show that the higher the specificity of skills, the lower the occupational mobility. In addition, Gathmann and Schönberg (2010) also show that occupational mobility mostly entails switches to related fields, where skills are best transferable. Apart from the share of specific human capital needed in an occupation, Damelang et al. (2015) indicate that also the degree of standardisation and occupational closure is important. A higher degree of regulation (meaning the existence of occupation-specific VET certificates and study programs) reduces the propensity to leave the occupation. Additionally, there are of course also other factors driving job mobility aside from monetary incentives. Cotton and Tuttle (1986), Shaw et al. (1998), Pollmann-Schult (2006), Böckermann and Ilmakunnas (2009), and Cottini et al. (2011) all emphasize the importance of physical and psychological hygiene, as well as a good work-life balance, for the retention of employees. Furthermore, at a more regional level, regional mobility within an occupation has to be considered as an alternative to occupational mobility (Reichelt and Abraham 2015). Furthermore, note that other mechanisms that do not concern occupational mobility may also be used in projection models. Ehing and Moog (2013) point out that the size of the future workforce hinges on assumptions about future labour force participation. Zika et al.
(2012) suggest that the number of hours a person wishes to work significantly impacts labour supply, especially in occupations with large shares of part-time workers. This suggests that one could also implement a mechanism which assumes workers to react to changes in the labour market by altering their participation or their working volume. Also, migration flows could dynamically adjust to the labour market situation in a projection model. However, such mechanisms have not been implemented in any projection model so far. In the QuBe model, all of these measures are assumed to be stable or to follow a trend in their development. To sum up, in theory wage impulses should create an incentive to switch occupations. However, not all occupational switches are found to be associated with an increase in wages. Therefore, in the aggregate, the effect of wages on occupational mobility may be mediated by downward mobility of a part of the occupation switchers. Indications that the possibility of downward movements is associated with the nature of the task or the prior income level suggest that wage effects should differ by occupation. In addition, other factors concerning the perceived attractiveness of the occupation seem to have an important impact on occupational mobility.
The BIBB-IAB qualification and occupational field projections
In this section, we will describe the underlying model. The QuBe model is a joint project of the Federal Institute for Vocational Education and Training (BIBB) and the Institute for Employment Research (IAB) in collaboration with the Fraunhofer Institute for Applied Information Technology (FIT) and the Institute of Economic Structural Research (GWS). As this paper focuses on possible reallocation mechanisms of labour demand and supply to overcome long-term mismatches at the occupational level, we will only briefly touch on the derivation of labour demand and supply in the QuBe projections and describe the implemented reallocation mechanism more thoroughly. The reader is referred to Maier et al. (2014, 2015) for a detailed description of the model. Note that the working volume is central to the demand side model and results are also available in aggregate hours of work. However, for simplicity, in this paper we only focus on results evaluated in the number of persons involved. The underlying model projects a development path (the baseline scenario) of the German economy into the future, given that the currently observable behavioural patterns and trends in the goods, labour and education markets will continue on their development path until 2030. As such, it does not necessarily represent the most likely development, but can be understood as an outlook on the possible structure of the future labour market when every market participant keeps on her current path of motion. Using this approach enables a straightforward interpretation of the results and makes them easily comparable to outcomes of alternative scenarios. In this spirit, modes of behaviour which cannot be empirically verified are considered infeasible for the resulting baseline scenario. Thus, for example, technological progress is only captured by a constant trend and not assumed to accelerate until 2030. We do, however, implement future changes which have been enacted by legislation and have a relevant effect on the outcome during the projection period. As an example, the baseline scenario takes the new German pension age of 67 into account. Fig. 1 gives a highly simplified overview of the QuBe model.
Two concurrent processes essentially determine labour market outcomes: the evolution of labour supply driven by demographic change (left box) and the evolution of labour demand, which is driven by economic structural change (right box). Both labour supply and demand developments are projected until 2030. Essential to the model is the distinction between the training occupation, which workers are associated with on the supply side, and the exercised occupation, which workers relate to on the demand side of the labour market.
Fig. 1 The QuBe model (Source: QuBe projections; 3rd wave)
On the supply side, we project the numbers of new labour supply, those leaving the labour market, and ultimately the total supply given their sex, age, qualification level, and training occupation. For this purpose, the Fraunhofer FIT developed a cohort component model (cf. Whelpton 1936; Blien et al. 1990; more specifically for QuBe see also Kalinowski and Quinke 2010), which subdivides the population according to sex, age, and qualification characteristics and extrapolates the in- and outflows of these subgroups into the future (BIBB-FIT model). The movements between groups summarize ageing given births and deaths, migration, and qualification attainment behaviour. The latter is simulated with a nested transition model of the German education system. Here, pupils are allocated to and transition between high school tracks, enter the vocational education system, switch between higher education and vocational training programs and, finally, complete these programs according to their overall completion rates, thereby obtaining a credential that assigns them to a qualification level and, according to the prevailing empirical rates of occupation attainment, to a training occupation, which they can use in the labour market to earn wages. Of course, infeasible transitions which cannot be identified in the data are not considered. Note further that people who are still in education or without any vocational education do not have a training occupation by definition and can, therefore, only be associated with an exercised occupation if they are economically active. The number of economically active persons for each subgroup is calculated using group-specific participation rates, which are forecasted with a logistic trend model. On the demand side, we calculate the total number of persons needed to manufacture and provide the total number of goods and services produced in Germany given their qualification and exercised occupation for each economic sector. We refer to this as realised demand; vacancies are not taken into account.Footnote 2 While the short term may be concerned with, for example, dealing with the consequences of the euro crisis, structural change is the essential determinant of labour demand in the long term. In pursuance of accurately reflecting structural change, QuBe relies on the QINFORGE model developed by the GWS – a further development of the IAB-INFORGE model (Meyer et al. 2007; Schnur and Zika 2009; Maier et al. 2015). QINFORGE is an econometric input-output model for Germany, which is deeply disaggregated by economic sectors and commodity groups. To describe this model in a very simplified way, let the state, employers, and private households invest and consume, thereby generating demand. On top, there is a demand for German products from abroad. Also, international trade poses price pressures on exports and imports, which affect price levels for consumption but also production goods in Germany.
This affects the demand for imported goods and also raises unit costs for German products. Given the individual input-output interdependencies of the economic sectors, the production level is raised or lowered accordingly. Production results in value creation and employment, leading again to a reaction of consumption and investments. In an iterative process, these interdependencies between the different economic actors determine the final growth path of Germany and the level of employment per economic sector, which, according to the structure of each sector, translates into a demand for labour for each exercised occupation. Having derived both labour demand and supply, we continue now with a more detailed description of the reallocation mechanism, which connects both sides (see Fig. 1). Sect. 3.1 will be concerned with the wage adjustment mechanism of employers, while Sect. 3.2 will outline the occupational flexibility adjustment mechanism of workers. Together, both mechanisms form the reallocation process embedded in the QuBe model. However, we wish to point out that such a reallocation mechanism could easily be transferred to other projection models.
Modelling wage adjustment due to skill shortages
This section describes the labour demand adjustment mechanism through the wage channel with respect to labour market tightness. Note that the occupation dimension to a very high extent already captures the informational input of qualification. The starting point is the occupation-specific wage, which is a function of the total average wage in the economy (\(W\)) and a scarcity term. The latter is given by the ratio of labour demand (\(ld_{o}\)) and supply (\(ls_{o}\)) in the occupation and operationalizes the overall tightness within the occupational submarket. \(W\) itself is a function of aggregate per capita labour productivity, the overall fluctuation in prices and an aggregate term of the labour market tightness for the entire economy. Additionally, a constant is included, which captures all occupation-specific time invariant factors which also determine occupation wages. This captures, for example, the extent to which employers could overcome labour shortages by raising employee productivity through innovative technologies or further training within a certain occupation (cf. Sect. 2.1).
$$w_{o}=\alpha _{1}+\alpha _{2}W+\alpha _{3}\frac{ld_{o}}{ls_{o}} \quad (1)$$
In a further step, the industry- and occupation-specific wage (\(w_{o,i}\)) is modelled. Here, note that the QuBe model assumes an underlying productivity-based wage policy. Thus, industry-level wage differences within occupations are explained by differences in labour productivity:
$$w_{o,i}=\beta _{1}+\beta _{2}w_{o}+\beta _{3}lpp_{i}, \quad (2)$$
where \(lpp_{i}\) denotes the industry-specific productivity of labour. Again, a constant is included to account for any time invariant determinants of the level of industry- and occupation-specific wages. After modelling the wage dependency on labour scarcity, the industry- and occupation-specific wage is integrated into the projection of labour demand. Demand for labour by occupation and industry is explained by the relative application of the occupation in the economic sector as given by its contribution to the total industry volume of work, i. e. the occupation- and industry-specific volume of work relative to the total industry volume of work. The industry-specific volume of work is driven by the output level and constrained by industry-specific wage costs.
Also, due to technological progress, it is explained by a decreasing time trend indicating the growing efficiency of labour inputs. The connection between volume of work and labour scarcity is modelled by Eq. 3.
$$\frac{vow_{o,i}}{vow_{i}}=\gamma _{1}+\gamma _{2}\frac{w_{o,i}}{w_{i}}+\gamma _{3}t \quad (3)$$
The equation states that the relative differences in work inputs between occupations in the same industry are explained by a time trend (\(t\)) and the relative wage difference (\(\frac{w_{o,i}}{w_{i}}\)). The latter depends on the occupation-specific labour scarcity (cf. Eq. 1). Thus, relatively scarce labour will be relatively pricy, such that its application in the production process, measured by its volume of work, is lowered. Given that the amount of annual hours worked by one labourer in this industry and occupation does not change, there will be a decrease in labour demand in this occupation in this industry. Note that an adverse shock to scarcity causes a perturbation, since the resulting change in labour demand will in turn alter the scarcity measure again, which moderates wages and labour demand. Such a perturbation also affects other industry wages through a change in aggregate income. This modifies consumer demand, which is the main driver of production in a lot of industries. An increased production level induces a rise in labour demand, which again starts off the process of wage adjustments in the affected industries.
Modelling occupational flexibility due to wage adjustments
This section outlines the reallocation process of labour supply on the occupational level through the wage channel. The basic idea is that within the model occupational switches are accounted for, i. e. it is not assumed that a person who has been trained in a certain occupation automatically is part of this occupation-specific labour supply. Therefore, the starting point for modelling this mechanism is the distribution of the skilled labour force by training occupation over all exercised occupations. Persons for whom the training and the exercised occupation are identical are henceforth called stayers. The share of stayers in the training occupation, \(to\), is denoted by \(stayer_{to}\). This stayer share is assumed to be time variant and reacts to impulses from the economic environment. In the model, these impulses are captured by outside wage opportunities given by a training occupation-specific reference wage (\(w_{to}^{ref}\)), which is the weighted average of the wages of all (inside and outside) work opportunities which are feasible (considering the distribution over exercised occupations) for a certain training occupation. The share of stayers is determined by the equation
$$stayer_{to}=\delta _{1}+\delta _{2}\frac{w_{to}}{w_{to}^{ref}} \quad (4)$$
where \(w_{to}\) denotes the wage in the training occupation, \(to\). The equation states that whenever a certain training occupation experiences an increase in wages while the wage level remains constant in all other reference occupations, it will become relatively more profitable to stay in the training occupation, thus causing a rise in the share of stayers. The extent to which the intent to stay in the training occupation reacts to outside wage pressures is determined by \(\delta _{2}\), which is the training occupation-specific wage elasticity of the propensity to stay. Again, a wage rise triggers a perturbation, where the aggregate effects on labour supply cause a re-evaluation of wages and labour demand, which, in turn, causes further adjustments of the supply side and so on.
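To make the interplay between Eqs. 1 and 4 concrete, the following minimal sketch (in Python) iterates the wage-scarcity-stayer loop for a single occupational submarket until it settles. All coefficients, starting values and the fixed reference wage are purely illustrative assumptions and do not correspond to estimated QuBe parameters, which differ by occupational field.

# Minimal sketch of the wage-scarcity-mobility feedback for one occupational
# submarket. All numbers are illustrative; QuBe estimates the coefficients of
# Eq. 1 and Eq. 4 per occupational field.

def project_submarket(ld, ls_trained, ref_wage, economy_wage=100.0,
                      alpha=(10.0, 0.8, 5.0), delta=(0.2, 0.5), n_iter=50):
    """Iterate occupation wage (Eq. 1) and stayer share (Eq. 4) to a fixed point."""
    stayer = 0.6                                 # initial share of stayers
    for _ in range(n_iter):
        ls = stayer * ls_trained                 # effective supply in this occupation
        w = alpha[0] + alpha[1] * economy_wage + alpha[2] * ld / ls   # Eq. 1
        stayer = delta[0] + delta[1] * w / ref_wage                   # Eq. 4
        stayer = min(max(stayer, 0.0), 1.0)      # keep the share within [0, 1]
    return w, stayer

wage, stayer_share = project_submarket(ld=1000.0, ls_trained=1800.0, ref_wage=98.0)
print(f"equilibrium wage: {wage:.1f}, stayer share: {stayer_share:.2f}")

In the full model, the resulting labour supply additionally feeds back into labour demand via Eqs. 2 and 3, and the reference wage itself moves with outside wages, so that scarcity is re-evaluated on both sides of the market.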
Operationalization and estimation of the QuBe model
In the following section, we briefly present the data used to estimate the QuBe model and point out some indication of the explanatory power of scarcity for labour demand and of wages for labour supply, respectively, before we further highlight the magnitude of their impact by sensitivity analyses in the subsequent section.
Data and classifications
For the QuBe model, data from a number of sources were merged to generate a unique data set, which outlines a deeply disaggregated picture of the German economy and the labour market. For structural information, we rely on data of the years 1996 to 2011 retrieved from the German Microcensus (Labour Force Survey), which is a yearly sample survey of roughly 1% of German households. It is the main source of information for the population structure with regard to age, sex, qualification level, employment status and training occupation (Maier and Helmrich 2012). It also provides data on the distribution of gainfully employed persons over industries and exercised occupations for the years 2005 to 2011 and can, therefore, also be used to analyse occupational switches. Furthermore, it contains data on the self-employed and civil servants. No other survey delivers a more complete picture for all these characteristics. On the demand side, information on consumption, prices, and production for the years 1991 to 2011 is retrieved from the National Accounts of the Federal Statistical Office (FSO, henceforth). In particular, the input-output tables enable a modelling of the interindustry dependencies within the production process. For the wage development, we retrieve daily wages for full-time employees of the years 1993 to 2011 from the IAB Employment History Data (EHD), which records all employment relationships subject to social security contributions in Germany and captures information about working days per person and wage totals by economic industry, occupation exercised and qualification level. Note that by relying on this data set, we misrepresent the wages of civil servants, the self-employed and contributing family workers. Also, wages of top income earners are underestimated due to censoring in the upper income range. However, employees subject to social insurance contributions represent the majority of the work force (about 89% in 2015) and there is no larger and more detailed dataset on gross wages available in Germany. We, therefore, use the wage development of the EHD as an indicator of the general occupation- and industry-specific wage development. Note also that with the underlying data the new legislation on minimum wages is not yet accounted for.Footnote 3 Furthermore, we use the 12th Coordinated Population Forecast of the Federal Statistical Office 'Version 1–W2: Upper limit of the "medium" population' until 2060 to quantify the population by age and sex in the future. To be able to account for the current developments in the population in both absolute terms and in terms of their changed age structure, Version 1-W2 was adapted to the new results of the Census 2011. Note that Version 1‑W2 is meant to reflect an upper limit of the population; however, it understates the current net migration inflows of, in particular, political and religious refugees.
Accounting for this is likely to impact the projection outcomes. As an example, the demand for teachers may be increased considering the high share of young migrants. Therefore, the QuBe projection results, as well, are outdated in this sense. This illustrates how the plausibility of long-term projections strongly hinges on current beliefs about future developments. However, to show the effects of different modelling assumptions concerning the adjustment process on the projection results, it can also be helpful to isolate effects from such factors. We, therefore, think that our results can be used to visualize the impact of the modelling of the reallocation process, even though the recent migration behaviour is not taken into account. For the calculation of new labour supply by qualification level and formal vocational qualification, the forecasts of the Conference of Ministers of Education and Cultural Affairs of the Länder in the Federal Republic of Germany of pupils and graduates from German high schools and university entrants until 2025 are used as a benchmark for the future development in schools and higher education. The retrieved entry, graduation and transition rates for 2025 are held constant thereafter until 2030. For both the supply and the demand side, the data are aggregated using the same classification schemes. The International Standard Classification of Education 1997 is used to differentiate between qualification or skill levels. For the occupation dimension, the 369 occupational categories (3-digit code) of the 1992 Classification of Occupations (KldB92) are aggregated according to the 54 occupational fields (OF, henceforth) of Tiemann et al. (2008). Using the OF to distinguish between occupations prevents artefacts in the modelling of occupation switches, which particularly occur in the manufacturing sectors because the KldB92 is very detailed here. For an easier visualisation, we report our results for 20 main occupational fields (MOF, henceforth) – an aggregated version of the OF (see Table 5 in the appendix). Economic sectors are classified using the aggregation to 63 industries of the National Classification of Economic Activities of 2008 (Table 6 in the appendix).
Table 1 Occupational flexibility matrix from formal vocational qualification to occupation exercised in 2011 for 20 MOF
To harmonise the supply and demand side data, the number of persons in active employment as retrieved from the Microcensus is re-extrapolated to match the total number as recorded in the National Accounts, while retaining the structure of the population by age, sex, educational level and formal vocational qualification from the Microcensus. Throughout, 2011 is the base year of the QuBe projection. The reason is that, firstly, the Microcensus 2011 was the latest available Microcensus when the 3rd wave of the QuBe project was computed. Secondly, it was also the last Microcensus which used the KldB92 to classify occupations. Thereafter, a harmonization of past data to the 2010 Classification of Occupations is needed.
Estimation of the QuBe model
In this section, we briefly outline how the aforementioned equations of the reallocation mechanisms were estimated. Using data from 1993 to 2011 on daily wages of full-time employees, working volume and labour productivity, Eqs. 1 to 3 were estimated adding an error term to the right hand side, where the subscripts \(o\) and \(i\) are captured by the 54 OF and the 63 aggregated economic sectors, respectively. The t‑tests for the parameters of Eq.
1 indicate (at a significance level of 5%) that the measure of labour scarcity is a good, necessary and observable predictor for wage level differences between occupations. Especially for the 'occupations concerning the production of chemicals and plastics', wages depend largely and significantly on labour market tightness. However, in 8 of the 54 OF, the effect of scarcity is found to be insignificant. An example is the 'public administration occupations'. An explanation could be the lack of variation in the scarcity variable in these OF. Eq. 2 uses the results of Eq. 1 for estimating occupation-specific wages in each of the 63 industrial sectors. A potential of 3402 wages is estimated accordingly. However, not all occupation and industry combinations exist: taking 2010 as base year, only 75% of all possible combinations report employment. The corresponding regressions are estimated using ordinary least squares. The estimated parameters are evaluated against the R2 (greater than 0.90), the Durbin-Watson test statistic (between −1 and 1), and the p-value (below or at 0.05). In total, it was possible to identify wage responsiveness in 1513 occupation-specific industry wages, which means that roughly 30 thousand employees are wage-sensitive in an econometric sense. Nonetheless, there exist some cases for which no conclusions about the existence of an industry-specific penalty or mark-up can be made, because either the coefficient of the industry-specific labour productivity is insignificant or the regression is subject to autocorrelation. In these 28% of the cases, a default option is used, using the OF wage to update the industry-specific OF wages. A similar approach is used for the estimation of Eq. 3, where in cases of autocorrelation or insignificance of the wage relation, the relative inputs of occupations are by default kept constant. Therefore, not in all cases do changes in the labour supply transmit a change in wages, and likewise not all wage changes induce a change in the occupational structure of the industry. For the estimation of Eq. 4, firstly, the distribution of formally trained workers by 54 training OF over the exercised OF is calculated for each age, sex and qualification group for the years 2005 to 2011 using Microcensus data. Table 1 shows the aggregate distribution, the so-called flexibility matrix, for the year 2011 for the summarized 20 MOF, where the dark cells highlight the percentage of stayers. Overall, we can see that some groups of persons as distinguished by training OF are more concentrated on fewer exercised OF than others. MOF 20 'teaching occupations' is a classic example of high concentration. Next, the elasticity \(\delta _{2}\) of Eq. 4 is retrieved by estimating a model of the log share of stayers on the log wage to reference wage ratio, a constant and an error term. We estimate this model using the aggregated flexibility matrices over all age, sex and qualification groups for the 54 OF cross-sections and the years 2005 to 2011. For more robust results, we pool OF of similar qualification profiles and historic wage responsiveness together to estimate this model as four separate fixed-effects panel models. Therefore, in each panel all persons associated with a certain training OF react in the same manner to wages in the model. However, the differences in occupational mobility according to age, sex and qualification are accounted for by using the different flexibility matrices for each group in the projection.
Panel 1 comprises different OF which have shown high wage responsiveness in the past and consist of high shares of highly educated and very low shares of non-formally qualified workers. Panel 2 includes highly wage responsive OF with a workforce highly centred in the medium but also in the low qualification levels. Panel 3 consists of low wage responsive OF with a similar qualification make-up as panel 2. Finally, panel 4 contains miscellaneous OF with historically very low wage responsiveness.Footnote 4 Table 2 displays the results of the separate panel regressions. Note that we only find an elasticity for 36 of the 54 cases. The remaining cases, as for example the 'health-care occupations not requiring a medical practice license', for which no significant elasticity can be found, do not react to wages in the model. In addition, people without any formal qualification are assumed to distribute over the exercised OF in which they comprised at least 3% of the workforce in 2011, according to labour demand, while the distribution over exercised OF of those in education is held constant in the projection.
Table 2 Wage elasticity of stayers \(\boldsymbol{\delta }\) by OF (2005–2011)
Note that the finding that workers and employers of different training occupations and different economic sectors, respectively, do not adjust to changes in the labour market to the same extent conforms to the discussion of Sect. 2: the reallocation process is also subject to influences other than wages. These are (only) implicitly contained in the QuBe model. However, the comparison of these wage elasticities to results of other studies is limited. The reasons are that (a) these elasticities do not represent causal effects, but also capture other effects which relate to wages and mobility; and (b) they are based on the relation of the occupation-specific reference wage with the stayer rate (see Eq. 4). Because the reference wage also contains the own wage of each occupation proportional to the historic flexibility, this relation is stronger than a relation based only on outside wages. Therefore, these elasticities are relatively high. Further, these wage elasticities of the stayer rate are kept constant over the projection period. Departing from this assumption would potentially also have a relevant effect on the projection outcomes. It is plausible, for example, that technological progress has an impact on the extent to which wages drive mobility decisions. New technologies are suggested to lead either to an increase in the complexity of tasks to be performed by workers or to a 'deskilling' of tasks, where specialized skills become redundant (cf. Ben-Ner and Urtasun 2013). A change in the skill requirements may lead to a change in mobility behaviour following the reasoning of Geel and Backes-Gellner (2011) and Gathmann and Schönberg (2010). Different outside opportunities may then also translate into a different receptiveness to relative wage changes. In the QuBe model, mainly in favour of keeping the model simple such that results are more transparent, this, however, is not accounted for.
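To illustrate how such a wage elasticity of the stayer share can be retrieved from a pooled panel, the following sketch applies a simple within (fixed-effects) estimator to a toy data set. The occupational fields, years and the true elasticity of 1.5 are invented for demonstration purposes only and do not correspond to the estimates reported in Table 2.

# Sketch of a within (fixed-effects) estimator for the wage elasticity of the
# stayer share, log(stayer) on log(w / w_ref). The toy panel is invented; QuBe
# pools the 54 OF into four panels estimated on 2005-2011 data.
import numpy as np
import pandas as pd

def within_elasticity(df):
    """Demean log stayer share and log wage ratio by OF, then regress."""
    d = df.assign(y=np.log(df["stayer_share"]),
                  x=np.log(df["wage"] / df["reference_wage"]))
    d["y_dm"] = d["y"] - d.groupby("of")["y"].transform("mean")
    d["x_dm"] = d["x"] - d.groupby("of")["x"].transform("mean")
    return (d["x_dm"] @ d["y_dm"]) / (d["x_dm"] @ d["x_dm"])

rng = np.random.default_rng(0)
rows = []
for of in ["OF_a", "OF_b", "OF_c"]:
    base = rng.uniform(0.4, 0.7)                     # OF-specific fixed effect
    for year in range(2005, 2012):
        ratio = rng.uniform(0.9, 1.1)                # wage relative to reference wage
        rows.append({"of": of, "year": year, "wage": 100 * ratio,
                     "reference_wage": 100.0,
                     "stayer_share": base * ratio ** 1.5})  # true elasticity: 1.5
print(within_elasticity(pd.DataFrame(rows)))         # recovers approximately 1.5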
Scenario comparisons
In this section, we will display the magnitude of the effect of the previously described reallocation mechanism of the QuBe model on the projection outcomes and on the practical recommendations based on them. For this purpose, we estimate labour demand and supply for various scenarios concerning different occupational flexibility behaviour or wage setting assumptions. Firstly, in Sect. 5.1 we demonstrate the overall effect on the projection results from considering versus not considering a reallocation process. Secondly, in Sect. 5.2 we show which effect can be attributed to the dynamics of worker adjustments with respect to wages. After this, we continue with scenario comparisons to highlight the limitations to wage adjustments in resolving labour shortages in the QuBe model and, by that, the importance of other determinants of occupational mobility, which are only implicitly modelled. We show that these limitations have a meaningful impact on the deduction of recommended actions to alleviate occupation-specific labour shortages. For this purpose, thirdly, in Sect. 5.3 we show how the economic environment of the employer matters for the result of different wage setting policies and the feasibility of such wage scenarios according to the QuBe model. Lastly, in Sect. 5.4 we complement the previous result by deriving the optimal stayer rates for the occupations and discuss to what extent these stayer rates are achievable by means of wage policies. Note that throughout the following section, we implement the scenario assumptions on the level of the 54 OF. However, for a better visualization, the results are always presented on the level of the 20 MOF.
Implementing occupational flexibility
To start with, Fig. 2 illustrates the effect of implementing a reallocation process by comparing the projection results of the QuBe baseline scenario (right hand side) with a scenario in which workers were not allowed to switch and employers could not substitute skilled for unskilled workers or workers from different disciplines (left hand side). In the latter scenario, the projection results suggest that vast labour shortages are possible in 9 out of the 20 MOF. According to this, for 8 of these MOF shortages should have actually already been visible in 2010. In 2030, the deficit would grow to about 4.9 million skilled workers in this scenario. In comparison, taking the reallocation mechanism into account balances the labour market in all but 4 of these occupations; however, shortages appear until 2030 in 5 additional MOF.
Fig. 2 Skill shortages and surpluses with and without reallocation in 2005–2030. Source: QuBe project 3rd wave; own calculations
Interestingly, shortages could now become especially imminent in the MOF 15 'Technical occupations' and the MOF 18 'Health occupations'. The technicians are frequent movers with a stayer share of only 33.9% (cf. Table 2) and are able to find work in a lot of different MOF. Also, the supply of skilled technicians is decreasing strongly until 2030 (see the decreasing surplus in the left hand graph over time) due to demographic change and the retirement of the so-called 'baby boomers', who are more often trained in a manufacturing or technical occupation than younger cohorts. The health occupations, however, face another problem: workers in this field are to a great extent loyal to their occupation, as indicated by their stayer rate of 71.2% (cf. Table 2). Here as well, not enough workers are being trained in this field (again note the left hand graph), while demand is increasing due to the ageing of the population (Maier and Afentakis 2013). Ultimately, the total deficit in the baseline scenario is only 0.3 million workers, thus revealing the substantial impact of a reallocation mechanism on the projection results. Therefore, not taking the empirically verifiable occupational mobility into account at all would exaggerate possible future shortages.
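The mechanics behind such a comparison can be illustrated with a stylised three-occupation example. The demand figures, the trained supply and the flexibility matrix below are invented for illustration only; they merely show how spreading supply over exercised occupations according to a flexibility matrix changes the resulting deficits, reducing some shortages while opening up others.

# Stylised comparison of shortage calculations with and without occupational
# flexibility. All numbers are invented; QuBe performs this for 54 OF with
# age-, sex- and qualification-specific flexibility matrices.
import numpy as np

demand  = np.array([120.0,  90.0, 100.0])   # persons demanded per exercised occupation
trained = np.array([100.0, 130.0,  90.0])   # persons trained per training occupation
flex = np.array([[0.8, 0.1, 0.1],           # rows: training occupation
                 [0.2, 0.7, 0.1],           # columns: exercised occupation
                 [0.1, 0.2, 0.7]])

no_reallocation   = trained - demand        # supply locked to the training occupation
with_reallocation = trained @ flex - demand # supply spread via the flexibility matrix
print("deficits without reallocation:", np.minimum(no_reallocation, 0))
print("deficits with reallocation:   ", np.minimum(with_reallocation, 0))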
Implementing flexibility dynamics
Next, we will further analyse how the wage dynamics of occupational mobility, as implemented in the baseline scenario of the QuBe model, impact the projection results. For this purpose, consider a world in which workers did not respond to wage changes, even if they occurred in occupations in which they could very likely also have found work and profited from a wage gain. Thus, in such a world the probabilities to stay and to switch are time invariant. However, note that aggregate mobility in the occupations does change over time, as the age and qualification composition of the workforce changes due to demographic change. Therefore, comparing projection results for such a world with the QuBe baseline scenario enables us to disentangle the effect of wage responsiveness from structural effects. To visualize the concentration of the workforce, i. e. the possibilities to work with a certain formal vocational qualification in different OF, we calculate the Herfindahl-Hirschman index (HHI henceforth; cf. Hirschmann 1964) for the 20 MOF.
$$\text{HHI}_{to}=\sum _{o=1}^{20}\left (\frac{x_{o}}{\sum _{o'=1}^{20}x_{o'}}\right )^{2}$$
where \(x_{o}\) represents the number of workers in the exercised MOF \(o\) with the training MOF \(to\) for which the HHI is evaluated. As there are 20 MOF over which the labour force participants of a training occupation can disperse, \(\text{HHI}\in \left [1/20,1\right ]\), where the minimum value of 0.05 indicates an even distribution over all exercised MOF and the maximum value of 1 indicates perfect concentration on the training occupation. For the year 2011, the flattest empirical distribution is observed for persons with a formal vocational qualification in the MOF 5 'Other processing, producing and maintaining occupations' (HHI = 0.12). This MOF contains, for example, the textile processors, who have to switch occupations more often as the textile industry in Germany is being downsized. Only persons currently in education (HHI = 0.08) and persons with no vocational training (HHI = 0.09) were more evenly distributed. We found the highest concentration in the MOF 10 'Personal protection, guards and security occupations' (HHI = 0.64). Also the MOF 18 'Health occupations' (HHI = 0.52) and the MOF 20 'Teaching occupations' (HHI = 0.62) are highly concentrated. These 3 MOF also have the highest stayer rates. The mean HHI, weighted by the labour force participants in each training MOF, equals 0.32.
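A minimal numerical sketch of this concentration measure is given below; the two rows of an illustrative flexibility matrix are invented, whereas the actual calculation uses the Microcensus-based distributions of each training MOF over the 20 exercised MOF.

# Sketch of the HHI of a training occupation's distribution over exercised MOF.
# The example rows are invented; in QuBe they come from the flexibility matrix.
import numpy as np

def hhi(row):
    """Herfindahl-Hirschman index of one training occupation's distribution."""
    shares = np.asarray(row, dtype=float)
    shares = shares / shares.sum()
    return float((shares ** 2).sum())

concentrated = [2, 1, 1, 1, 75, 5, 5, 10]        # most workers stay in one MOF
dispersed    = [12, 10, 15, 13, 14, 12, 12, 12]  # workers spread almost evenly
print(round(hhi(concentrated), 2), round(hhi(dispersed), 2))  # approx. 0.58 and 0.13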
In Fig. 3, we now contrast the difference between constant and wage responsive flexibility. On the vertical axis, we plot the pure time trend of the HHI in the 20 MOF, i. e. the HHI in 2011 compared to 2030 in the 'no wage response' scenario. On the horizontal axis, we plot the HHI differences in 2030 between the baseline scenario with wage elastic flexibility and that without. Note how shifts along the vertical axis visualize pure structural effects, while shifts along the horizontal axis show how the concentration of the workforce on exercised occupations increases or decreases as a result of wage incentives.
Fig. 3 Differences in HHI due to structural change (2030–2011) and wage development ('no wage response' vs. 'baseline'). Source: QuBe project 3rd wave; own calculations
Fig. 3 illustrates that concentration hardly changes over time due to changes in the labour force composition. An exception is the MOF 10 'Personal protection, guards and security occupations'. This MOF interestingly has the highest HHI in 2011, which, however, decreases by almost 0.05 units due to structural change alone. Note that the other outlier, MOF 2 'Auxiliary workers, janitors', is actually very small in terms of trained labour supply. The wage mechanism of the baseline scenario leads to a higher degree of dispersion over exercised MOF in most training MOF. Wage responses cause the highest reduction in concentration in the MOF 12 'Cleaning, disposal occupations' and the MOF 18 'Health occupations'. Here, the projected wage growth falls behind that of alternative occupations in other MOF, leading to more occupational switching and, therefore, a greater dispersion. Note that the observed effect on the MOF 18 can be purely attributed to a change in the dispersion of the body care occupations, as doctors and nursing staff do not dynamically respond to wages in the baseline scenario (cf. Table 2). Also note that the MOF 12 and MOF 18 still have some of the highest stayer shares in 2030. In contrast, in the MOF 16 'Legal, management, and business science occupations' or 19 'Social occupations' the wage-related increase in concentration levels out the dispersion due to structural effects, such that these occupations have almost stable HHIs over time. The resulting labour demand and supply for each scenario in 2030 can be retrieved from Table 3. It can be observed that without accounting for wage responsive flexibility behaviour, the total deficit equals about 740,000 persons. This is more than twice the deficit of the baseline scenario with dynamics, which amounts to only 340,000 persons. Thus, 400,000 workers, who would be unemployed in other surplus occupations in the projection, are redistributed to the shortage occupations where wages are rising in the baseline scenario.
Table 3 Labour demand and supply in 1000 persons by 20 MOF in 2030 in the baseline and 'no wage response' scenario
However, in the MOF 4 'Construction, woodworking, plastic manufacture and processing occupations' the labour market actually gets tighter due to wage dynamics. Here, although the wage responsiveness of flexibility is actually not too high, the projected development of the outside wage options induces the workforce to switch more often to other occupations. In this case, the possibility of a future labour shortage may be understated when dynamic behaviour in occupational flexibility is not accounted for. Ultimately, we can conclude that assumptions about the wage responsiveness of labour mobility are crucial for assessing possible future labour market outcomes.
The limitations to wage adjustments
We now examine the impact of wage policies in greater detail and point out the importance of their limitations in the QuBe model for the interpretation of results. Shortages are partly projected due to inferior wage developments in these occupations. Because outside wage opportunities are growing more strongly than those in the own training OF, workers – where empirically verifiable – more often decide to switch occupations. Employers can take advantage of this by raising wages in occupations where labour is scarce. However, they are (depending on the industry) constrained by price competition with firms abroad and by consumer demand. This is reflected in the QuBe model.
To show to what extent employers can strategically use wage adjustments in this model, we implement further wage increases for shortage occupations (as singled out by the baseline projection results). We consider a scenario where wage growth in the shortage occupations is increased by 10% until 2030 compared to the baseline wage development. Note that this represents an increase of a little more than 0.5% every year until 2030 on top of the projected wage growth of the baseline scenario. Since this represents a relatively small change, in a second scenario we increase wage growth in the shortage occupations by 20%, i. e. an additional increase of a little more than 1% every year until 2030. The results are presented in Table 4.
Table 4 Labour demand and supply in 1000 persons by 20 MOF in 2030 in the baseline model and different wage scenarios
The results show (cf. Table 4) that with a wage increase of an additional 10% until 2030 for shortage occupations, labour shortages will be reduced by about 140,000 persons in 2030, so that the total deficit in this scenario equals 195,000 persons. Shortages could be prevented in 4 of the 9 shortage MOF of the baseline scenario, namely in the MOF 4 'Construction, woodworking, plastic manufacture and processing occupations', MOF 7 'Commodity trade in retail', MOF 11 'Hotel, restaurant occupation, housekeeping', and the MOF 12 'Cleaning, disposal occupations'. Looking at the results of the 20% increase in wage growth for baseline shortage occupations, the total deficit of labour supply equals about 115,000 persons, which is a reduction by as much as 225,000 persons compared to the baseline scenario. However, the labour market is balanced in only one additional MOF compared to a 10% increase until 2030, namely the MOF 9 'Transport, warehouse operatives, packers'. We can see that the balance in these MOF is mainly achieved by a reduction in labour demand. Since labour productivity remains unchanged, note that this corresponds to a reduction in production or service provision, respectively. In these occupations, outside price pressures are too high, such that large wage adjustments are infeasible for employers without reducing their output. Here, it is more realistic that alternative strategies would be used to retain workers or that workers would be hired from abroad to keep the wage level low. The other shortage MOF, for which a shortage is projected until 2030 even after an additional wage increase of 20%, are the MOF 2 'Auxiliary workers, janitors', MOF 15 'Technical occupations', MOF 17 'Media, arts and social science occupations', and MOF 18 'Health occupations'. In all of these MOF, demand remains relatively stable, suggesting that here price pressures are less dominant, because production or service provision cannot simply be reduced. We leave the MOF 2 out of the discussion as it comprises a very small group of people and is not associated with dynamic behaviour in the QuBe model, mainly due to data restrictions. The MOF 15 and 17 have comparably lower stayer rates of 39% and 43%, respectively, because their labour can be applied in very diverse fields. Here, the deficit is more severe in the MOF 15, mainly because workers trained in this field react much less to wage impulses. Most of the occupations in MOF 17 are associated with a wage elasticity of 2.2 in the baseline scenario, indicating that career seeking is a major determinant in the occupational flexibility behaviour of journalists, designers etc.
In contrast, most of the occupations in the MOF 15 only react to wages with an elasticity of 0.57, suggesting that here other factors, such as better working conditions, may strongly influence mobility decisions. In the MOF 18, only the body care occupations react to wage impulses. Increasing wages cannot reduce the shortage of doctors and nursing staff, because the baseline QuBe projection reflects that their fairly high occupational loyalty is not significantly driven by wage incentives. Also, a wage increase in these occupations does not considerably raise the inflow of labour supply into these occupations from other fields, which simulates the effect of strong working regulations concerning qualifying credentials and approbations (see also Pollmann-Schult 2006). Overall, we can conclude here that accounting for the limitations to wage-setting policies within the projection model has significant impacts on the feasibility of scenarios aimed at overcoming shortages. This has important consequences for policy consulting and enhances the credibility of practical advice based on calculations of a projection model.
The 'optimal' flexibility
In the following, we examine what kind of adjustments in the occupational flexibility behaviour would be needed to distribute unemployed workers evenly and to overcome labour shortages in every OF in 2030. Thus, this scenario entails a redistribution of the labour supply from surplus to shortage occupations. Also looking at the results from the previous section, we assess how wages can serve to achieve the resulting differences in the stayer rate according to the assumptions of the baseline scenario. Technically, we apply a RAS procedure (a minimal computational sketch of this fitting step is given at the end of this subsection). The RAS algorithm (cf. Bacharach 1970; Leontief 1951) is an iterative method of biproportional fitting of matrices, which is used to estimate elements of an unknown matrix based on known row and column sums and an initial estimate of the matrix. Transferred to this exercise, the RAS algorithm fits the cells of the flexibility matrix of 2030 such that the column totals, i.e. labour supply in the exercised OF, are adjusted so that an equal unemployment rate is achieved in every OF. In doing so, the algorithm loops over occupations – starting with the one with the highest unemployment rate – and redistributes the difference between the baseline surplus supply and that needed to achieve the targeted unemployment rate to other occupations. The reallocation is proportional to the initial flexibility matrix of the baseline scenario, such that workers trained in surplus occupations switch more (have a smaller stayer rate), but in the same proportions into the same exercised occupations. Fig. 4 again visualizes the change in flexibility using differences in the HHI, indicating growing or declining concentration in the MOF. Here, the difference in the HHI between 2011 and 2030 in the baseline projection is plotted on the vertical axis against the HHI difference between the 2030 workforce of the baseline projection and the scenario using the optimal occupational flexibility matrix on the horizontal axis. MOF plotted to the left (right) of the 0 benchmark on the horizontal axis indicate a need for higher (lower) flexibility as compared to the baseline assumptions in order to clear the labour market in 2030. Needed adjustments of occupational flexibility to achieve equal unemployment rates in 2030.
Source: QuBe project 3rd wave; own calculations
Overall, the majority of the MOF actually should be more flexible in order to correspond optimally to labour demand. In particular, persons in the MOF 20 'Teaching occupations', but also in the MOF 13 'Office and commercial services occupations', for which vast surpluses are projected due to demographic change and a rising educational attainment in these occupations, should more often consider switching their occupation in the future. In the MOF 20, the share of stayers would have to be reduced from 79.4% to 66.6% in 2030. In the MOF 13, a reduction of the share of stayers to 61.6% from its level of 67.2% in the baseline projection in 2030 would be needed. Note that this MOF also contains the public administrators, who mainly drive this result here. They alone would need a reduction of the stayer ratio by more than 12 percentage points. However, in both of these MOF, workers do not react to increases in outside wages in the QuBe model and are very loyal to their training occupations (cf. Tables 1 and 2). This poses a challenge for achieving such a reduction in stayer rates. Likely, this could not be accomplished by increasing wages in related fields, as other underlying factors, such as workplace stability or the reconciliation of family and work, are stronger motivators for high stayer rates in these occupations. In contrast, persons trained in the MOF 18 'Health occupations' would need to stay in their occupation more often. The projected stayer rate of 67.8% in 2030 in the baseline scenario would have to increase to 71.9%. This complements the results of the previous section: Because switches into these occupations are quite unlikely due to work regulations, the needed increase in stayers would only be achievable via an even greater occupational loyalty or increased training of new supply. Since outside wages are not significantly important to doctors and nursing staff, the results again stress the impact of other factors, such as working conditions, in making these occupations more attractive; policy recommendations aiming to realize the increase in labour supply should therefore target these factors. Interestingly, the shortage MOF 15 'Technicians' would hardly need any flexibility adjustments at all according to this calculation. Their optimal flexibility would entail a stayer share of 35.7% in 2030. Therefore, the adjustment from its baseline value of 33.2% would amount to merely 2.5 percentage points. Here, the redistribution from surplus fields is high enough that only a small adjustment in the stayer rate suffices to balance the submarket for technicians. We find that almost 70% of the additional workforce in this MOF would be recruited from outside (mainly engineers and electrical occupations). Here, wage policies may serve to attract workers from related fields to some extent; however, the persisting shortages even after large wage increases (cf. Sect. 5.3) suggest that working conditions in this field may again be more promising to target. In summary, for an optimal distribution of unemployed workers over the exercised occupations, stayer rates for many training occupations would have to differ from their baseline values. As already discussed in the previous section, wages are often an infeasible tool for reaching this optimum. In the QuBe model, alternative determinants of occupational mobility are important for the interpretation of the results, although they are only implicitly accounted for.
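To make the redistribution step described above more tangible, the following is a minimal sketch of the standard RAS (biproportional fitting) update on a small flexibility matrix. It is a simplified illustration of the generic algorithm, not the exact QuBe implementation (which loops over occupations ordered by unemployment rate); all matrices and targets shown are hypothetical.

import numpy as np

def ras(m0, row_targets, col_targets, iters=100, tol=1e-10):
    """Standard RAS / iterative proportional fitting.
    m0: initial flexibility matrix (training OF x exercised OF), in persons.
    row_targets: trained labour supply per OF; col_targets: supply needed per exercised OF."""
    m = m0.astype(float).copy()
    for _ in range(iters):
        m *= (row_targets / m.sum(axis=1))[:, None]   # scale rows to row targets
        m *= (col_targets / m.sum(axis=0))[None, :]   # scale columns to column targets
        if np.allclose(m.sum(axis=1), row_targets, rtol=tol):
            break
    return m

# Hypothetical 3x3 example: persons by training OF (rows) and exercised OF (columns)
m0 = np.array([[80., 10., 10.],
               [15., 70., 15.],
               [ 5., 20., 75.]])
row_targets = m0.sum(axis=1)                  # total trained supply per OF is fixed
col_targets = np.array([90., 105., 105.])     # supply per exercised OF needed for equal unemployment

fitted = ras(m0, row_targets, col_targets)
print(fitted.round(1))
print("new stayer rates:", (np.diag(fitted) / row_targets).round(3))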
In the end, this is essential for deriving recommendations for practical actions, which most often is the ultimate aim of long-term labour market projections.
Conclusion and discussion
In this paper, we discuss and illustrate the necessity of implementing a dynamic reallocation process of labour supply into labour market projections, and how the underlying assumptions strongly influence the plausibility of the projection results and their interpretation for policy consulting. Long-term projections have become very popular for guidance in political decision-making. Therefore, it is essential that the model set-up reflects country-specific features and can draw a plausible image of possible future developments. In Germany, it is therefore essential that a projection model (a) represents the occupational dimension of the German labour market and (b) reflects the extent to which workers skilled in different occupations can be substituted for each other (Helmrich and Zika 2010). These two aspects are essential for an assessment of possible reallocations of labour supply in response to imminent shortages. The BIBB-IAB qualification and occupational field projections (Maier et al. 2014) are, to our knowledge, the only long-term projection model that explicitly formulates such a reallocation process. In this model, the central link between demand and supply is wages: Employers raise wages in shortage occupations to make work in these fields more attractive, and workers react to relative changes in their outside wage opportunities and adjust their intent to stay. The great level of detail of the model, with 63 economic sectors and 54 occupational fields, provides a thorough description of the diverse adjustment behaviours of different groups of market participants. In this way, the projection results also implicitly capture reallocation behaviours that are not driven by wages or scarcity. Our results show that not accounting for occupational flexibility at all, i.e. not modelling any reallocations in the labour market, would project vast shortages of almost 5 million skilled workers in 9 of the 20 main occupational fields in 2030. Compared to this, the baseline scenario, which accounts for dynamic adjustments on both sides, would only project a total deficit of about 340,000 workers in 2030. However, the reallocation process can be directly linked to shortages, which now appear in 'health occupations' and 'technical occupations'. In both of these main occupational fields, inflows from other fields would not suffice to balance out the outflows of skilled workers to other related fields. Next, looking at the effect of dynamic adjustments of the flexibility behaviour of workers, we compare the baseline scenario to a scenario in which shares of stayers do not respond to wages. We find that dynamics can account for a difference in the deficit of labour supply of about 400,000 people in 2030. Shortages in the 'Construction, woodworking, plastics manufacture and processing occupations' actually become more severe in the projection results after considering a dynamic adjustment of workers. Here, wage dynamics reflect the tension between price competition and employer competition for labour supply. Furthermore, we illustrate how the limitations to dynamic wage adjustments, as captured by the QuBe model, also influence the interpretation of results and the derived recommendations for practical actions. For this, we look at the effect of different wage policies.
We compare a 10% and a 20% increase of wages until 2030 for shortage occupations. We see that in the QuBe model these wage increases would be able to balance some occupational submarkets, albeit mainly through a reduction of labour demand and, thus, lower production or service provision in the economy. For the remaining shortage occupations in these scenarios, we discuss how wages as a policy tool are simply not effective given the QuBe assumptions about the wage dynamics of occupational mobility. Especially for technicians, doctors, and nursing staff, other factors related to working conditions may be more important for political actions. In the case of health occupations, working regulations also play an important role, limiting the extent to which workers from outside can be recruited for this field. We complement these results further by calculating the 'optimal' flexibility of the workforce, which would evenly distribute unemployed workers over the occupations. We find that most of the workforce would have to be more flexible. In contrast, health personnel would need to stay more often within their training occupation. As they empirically do not respond to wages, working conditions, but also increased training of new supply, may again be more feasible policy instruments. Surprisingly, in the case of technicians no large adjustment of mobility behaviour would be needed, because an increased inflow of workers from related fields would also help to balance out deficits of labour supply. In this field, a sufficient provision of labour supply may be achieved partly by increasing wages and improving working conditions, but also by providing persons with related educational backgrounds with further training to build the specific skills needed. The results illustrate that, for the derivation of plausible policy recommendations, the limitations to reallocations are also central to modelling. Based on the QuBe model, however, we can only discuss the relative importance of other driving factors of occupational mobility in light of the restrictions of the wage dynamics. Therefore, integrating, for example, working conditions into long-term labour market projection models may be an intriguing field for further studies. Furthermore, throughout our analyses we assume that the response of workers to outside wages in their mobility decisions is time invariant. Different set-ups, in which dynamics evolve subject to technological progress for example, are also possible and may be a fruitful field for research. However, when advancing model set-ups in these ways, the transparency of results always has to be kept in mind as well (cf. Wilson 2001). Lastly, in the discussed model, the potential number of hours offered by the labour force has been assumed to be stable during the projection period (Zika et al. 2012). Furthermore, it is assumed that participation rates follow an increasing trend and that migration inflow to Germany is kept constant according to the 12th Coordinated Population Forecast of the Federal Statistical Office. Of course, these measures could in principle also work as dynamic mechanisms in long-term labour market projections. In fact, this may work better for employers in occupations with strong wage-setting constraints and for workers in occupations with low wage responsiveness.
As this has not been done thus far, it would be very interesting in future studies to assess the differences in projection results and policy advice obtained from projections using these different mechanisms.
The Establishment Panel of the Institute for Employment Research (IAB) has representatively surveyed about 16,000 German establishments on their employment policies and related topics since 1993.
Vacancies are not taken into consideration in the QuBe long-term projections for four reasons:
Micro-macro problem: At the micro-economic level, the non-filling of a vacancy leads to a loss if it causes the company concerned to refuse orders and, thus, to restrict or not to expand production capacity. This does not, however, necessarily mean that there is a corresponding loss in production for the economy as a whole, i.e. at the macro-economic level. Indeed, it may instead lead to the acceptance of the order by another domestic company, which expands its production capacity, offsetting the potential loss in demand.
Methodology: Without further background knowledge, no expansion demand can be deduced solely from an increase in vacancies, since the number of vacancies cannot be differentiated according to replacement and expansion demand.
Long-term observation: From an economic point of view, vacancies only become a problem – if at all – if they cannot be filled. Even if we do not impute complete information or rational agents, problems with an unfilled vacancy should vanish with time as a result of the reallocation process. Therefore, we can safely assume that the number of vacancies always returns to its frictional level in the long term.
Data quality: The vacancy statistics reported by the Federal Employment Agency (BA) also contain vacancies that do not necessarily have to be filled. The reasons for this may be multifarious: a company neglecting to report a successful filling, or duplicate reports. Although this problem does not arise with data of the Job Vacancy Survey conducted by the IAB, the data here are not available at a sufficient depth of occupational disaggregation.
A preliminary assessment of the minimum wage policy based on the QuBe model was presented at the 11th International Conference Challenges of Europe in 2015. The results suggest a negative overall impact on the economy. Service-oriented industries and professions with low to medium-skilled qualifications are likely to be exposed the most. See also URL: https://www.efst.hr/eitconf/index.php?p=proceedings.
It is also likely that the structure of the wage data plays a role in this case. The wage data of gainfully employed persons and the legal censoring in the upper income range probably do not represent an ideal measurement, particularly with regard to the OF of 'managing directors, auditors, management consultants' and 'legal occupations'. In the case of 'health-care occupations not requiring a medical practice license', for example, which also show a higher proportion of self-employed persons and a higher income, no positive elasticities can be demonstrated. Nevertheless, because of the absence of a more exact database, it seems appropriate to use the elasticities as given in Table 4 for the baseline projection.
Abraham, M., Damelang, A., Schulz, F.: Wie strukturieren Berufe Arbeitsmarktprozesse? Eine institutionentheoretische Skizze. LASER Discussion Papers, vol. 55.
Friedrich- Alexander -Universität, Erlangen-Nürnberg (2011) Acemoglu, D., Autor, D.: Skills, tasks and technologies: Implications for employment and earnings. In: Ashenfelter, O., Card, D. (eds.) Handbook of labor economics, vol. 4, pp. 1043–1171. Elsevier, Amsterdam (2011) Allmendinger, J.: Educational systems and labor market outcomes. Eur Sociol Rev 5, 231–249 (1989) Bacharach, M.: Biproportional matrices and input-output change. Cambridge University Press, Cambridge (1970) Bechmann, S., Dahms, V., Tschersich, N., Frei, M., Leber, U., Schwengler, B.: Fachkräfte und unbesetzte Stellen in einer alternden Gesellschaft: Problemlagen und betriebliche Reaktionen. IAB Forschungsbericht 13/2012. (2012) Ben-Ner, A., Urtasun, A.: Computerization and skill bifurcation: the role of task complexity in creating skill gains and losses. Ind Labor Relat Rev 66(1), 225–267 (2013) Blien, U., Reinberg, A., Tessaring, M.: Die Ermittlung der Übergänge zwischen Bildung und Beschäftigung. Mitt Arbeitsmarkt Berufsforsch 23(4), 181-204 (1990) Böckermann, P., Ilmakunnas, P.: Job disamenities, job satisfaction, quit intentions, and actual separations: putting the pieces together. Ind Relat (Berkeley) 48(1), 73–96 (2009) Bonin, H., Schneider, M., Qunike, H., Arens, T.: Zukunft von Bildung und Arbeit. Perspektiven von Arbeitskräftebedarf und -angebot bis 2020. IZA Research Report, vol. 9. (2007) Brücker, H., Klinger, S., Möller, J., Walwai, U.: Handbuch Arbeitsmarkt 2013. Analysen, Daten, Fakten. IAB Bibliothek, vol. 334. (2013) Brunow, S., Garloff, A.: Arbeitsmarkt und demografischer Wandel. Anpassungsprozesse machen dauerhaften Fachkräftemangel unwahrscheinlich. IAB Forum 2/2011. (2011) CEDEFOP: Skills supply and demand in Europe. Medium-term forecast up to 2020. Publications Office of the European Union, Luxembourg (2009) CEDEFOP: Future skills and demand in Europe. Forecast 2012. Research Paper, vol. 26. Publications Office of the European Union, Luxembourg (2012) Cortes, G.M.: Where have the middle-wage workers gone? A study of polarization using panel data. J Labor Econ 34(1), 63–105 (2016) Cottini, E., Kato, T., Westergaard-Nielsen, N.: Adverse workplace conditions, high-involvement work practices and labor turnover: evidence from Danish linked employer–employee data. Labour Econ 18(6), 872–880 (2011) Cotton, J.L., Tuttle, J.M.: Employee turnover: a meta-analysis and review with implications for research. Acad Manag Rev 11(1), 55–70 (1986) Damelang, A., Schulz, F., Vicari, B.: Institutionelle Eigenschaften von Berufen und ihr Einfluss auf berufliche Mobilität in Deutschland. Schmollers Jahrb 135, 307–334 (2015) Dupuy, A.: Forecasting expansion demand by occupation and education in the Netherlands. In: Arendt, L., Ulrichs, M. (eds.) Best practices in forecasting labour demand in Europe, pp. 127–147. IPISS, Warsaw (2012) Dustman, C., Glitz, A.: How do industries and firms respond to changes in local labor supply? J Labor Econ 33(3), 711–750 (2015) Dustmann, C., Ludsteck, J., Schönberg, U.: Revisiting the German wage structure. Q J Econ 124(2), 843–881 (2009) Ehing, D., Moog, S.: Erwerbspersonen- und Arbeitsvolumenprojektionen bis ins Jahr 2060. J Labour Mark Res 46(2), 167–182 (2013) Fitzenberger, B., Lickleder, S., Zwiener, H.: Mobility across firms and occupations among graduates from apprenticeship. IZA Discussion Paper, vol. 9006. (2015) Fitzenberger, B., Spitz-Oener, A.: Die Anatomie des Berufswechsels: Eine empirische Bestandsaufnahme auf Basis der BIBB/IAB-Daten 1998/1999. 
In: Franz, W., Ramser, H.J., Stadler, M. (eds.) Bildung Wirtschaftswissenschaftliches Seminar in Ottobeuren, vol. 33, pp. 29–54. Mohr Siebeck, Tübingen (2004) Gajdos, A., Zmurkow-Poteralska, E.: Employment forecasts by occupational groups in Poland. Polit Spoleczna Themat 2014(2), 14–20 (2014) Gathmann, C., Schönberg, U.: How general is human capital? A task-based approach. J Labor Econ 28, 1–49 (2010) Geel, R., Backes-Gellner, U.: Occupational mobility within and between skill clusters: an empirical anaysis based on the skill-weights approach. Empir Res Vocat Educ Train 3, 21–38 (2011) Gibbons, R., Katz, L.F.: Layoffs and lemons. J Labor Econ 9(4), 351–380 (1991) Goos, M., Manning, A., Salomons, A.: Job polarization in Europe. Am Econ Rev 99(2), 58–63 (2009) Groes, F., Kircher, P., Manovskii, I.: The U‑shape of occupational mobility. Rev Econ Stud 82(2), 659–692 (2015) Grossman, G.M., Rossi-Hansberg, E.: Trading tasks: a simple theory of offshoring. Am Econ Rev 98(5), 1978–1997 (2008) Helmrich, R., Zika, G. (eds.): Beruf und Qualifikation in der Zukunft. BIBB-IAB-Modellrechnungen zu den Entwicklungen in Berufsfeldern und Qualifikationen bis 2025. Bundesinstitut für Berufsbildung, Bielefeld (2010) Helmrich, R., Maier, T.: Employment forecasting in Germany – an occupational flexibility matrix approach. In: Arendt, L., Ulrichs, M. (eds.) Best practices in forecasting labour demand in Europe Instytut Pracy I Spraw Socialnych, Report II. pp. 103–126. (2012) Hirschmann, A.O.: The paternity of an index. Am Econ Rev 54(5), 761 (1964) Kalinowski, M., Quinke, H.: Projektion des Arbeitskräfteangebots bis 2025 nach Qualifikationsstufen und Berufsfeldern. In: Helmrich, R., Zika, G. (eds.) Beruf und Qualifikation in der Zukunft. BIBB-IAB-Modellrechnungen zu den Entwicklungen in Berufsfeldern und Qualifikationen bis 2025, pp. 103–124. Bundesinstitut für Berufsbildung, Bielefeld (2010) Lapointe, M., Charron, M., Claveau, G., Gendron, M., Gomez, E.G., Grenier, M., Ignaczak, L., Kim, J.-Y., Lamy, R., Pescarus, C., Tremblay-Côté, N., Vincent, N., Zou, Y.: Looking-ahead: a 10-year outlook for the Canadian labour market (2008–2017). Human Resources and Skills Development Canada, Gatineau (2008) Leontief, W.W.: The structure of American economy, 1919–1939. An empirical application of equilibrium analysis. Oxford University Press, Oxford (1951) Lepic, M., Koucky, J.: Employment forecasting in the Czech Republic. In: Arendt, L., Ulrichs, M. (eds.) Best practices in forecasting labour demand in Europe. IPISS, Warsaw (2012) Lockard, C.B., Wolf, M.: Occupational employment projections to 2020. Employment outlook 2010–2020. Mon Labor Rev 135(1), 84–108 (2012) Lucas, R.J., Prescott, E.: Equilibrium search and unemployment. J Econ Theory 7, 188–209 (1974) Maier, T., Afentakis, A.: Forecasting supply and demand in nursing professions: impacts of occupational flexibility and employment structure in Germany. Hum Resour Health 11(1), 24 (2013). doi:10.1186/1478-4491-11-24 Maier, T., Helmrich, R.: Creating the initial vocational qualification from the German microcensus. Paper presented at the ACSPRI Conferences, RC33 Eighth International Conference on Social Science Methodology, Sydney. (2012) Maier, T., Zika, G., Mönnig, A., Wolter, M.I., Kalinowski, M., Hänisch, C., Helmrich, R., Schandock, M., Neuber-Pohl, C., Bott, P., Hummel, M.: Wages and occupational flexibilities as determinants of the interactive QuBe labour market model. Discussion Papers No. 149. 
Federal Institute for Vocational Education and Training, Bonn (2014) Maier, T., Mönnig, A., Zika, G.: Labour demand in Germany by industrial sector, occupational field and qualification until 2025 – model calculations using the IAB/INFORGE model. Econ Syst Res 27(1), 19–42 (2015) Mayer, K.U., Carroll, G.R.: Jobs and classes: structural constraints on career mobility. Eur Sociol Rev 3(1), 14–38 (1987) McLaughlin, K.J.: A theory of quits and layoffs with efficient turnover. J Polit Econ 99(1), 1–29 (1991) Meyer, B., Lutz, C., Schnur, P., Zika, G.: National economic policy simulations with global interdependencies. A sensitivity analysis for Germany. Econ Syst Res 19(1), 37–55 (2007) Montgomery, J.D.: Equilibrium wage dispersion and interindustry wage differentials. Q J Econ 106(1), 163–179 (1991) Moscarini, G., Thomsson, K.: Occupational and job mobility in the US. Scand J Econ 109(4), 807–836 (2007) Nisic, N., Trübswetter, P.: Berufswechsler in Deutschland und Großbritannien. IAB-Kurzbericht 1/2012. (2012) OECD: Managing decentralisation – a new role for labour market policy. OECD Publications Service, Paris (2003) Papps, K.L.: Occupational and skill forecasting: a survey of overseas approaches with applications for New Zealand. Occasional Paper 2001/1. Labour Market Policy Group, Wellington (2001) Pissarides, C.A.: Equilibrium unemployment theory. MIT Press, Cambridge (2000) Pollmann-Schult, M.: Ausmaß und Struktur von arbeitnehmerinduzierter Abstiegsmobilität. Kolner Z Soz Sozpsychol 4, 573–591 (2006) Reichelt, M., Abraham, M.: Occupational and regional mobility as substitutes. A new approach to understanding job changes and wage inequality. IAB-Discussion Paper 14/2015. (2015) Schnur, P., Zika, G.: Das IAB/INFORGE-Modell. Ein sektorales makroökonomisches Projektions- und Simulationsmodell zur Vorausschätzung des längerfristigen Arbeitskräftebedarfs. IAB-Bibliothek 318. Institut für Arbeitsmarkt und Berufsforschung, Nürnberg (2009) Shavit, Y., Müller, W.: Vocational secondary education, tracking, and social stratification. In: Hallinan, M.T. (ed.) Handbook of sociology of education, pp. 437–452. Springer, New York (2000) Shaw, J.D., Delery, J.E., Jenkins, G.D., Gupta, N.: An organization-level analysis of voluntary and involuntary turnover. Acad Manag J 41(5), 511–525 (1998) Spitz-Oener, A.: Technical change, job tasks, and rising educational demands. Looking outside the wage structure. J Labor Econ 24(2), 235–270 (2006) Tiainen, P.: Employment forecasting in Finland. In: Arendt, L., Ulrichs, M. (eds.) Best practices in forecasting labour demand in Europe, pp. 51–62. IPISS, Warsaw (2012) Tiemann, M., Schade, H.-J., Helmrich, R., Hall, A., Braun, U., Bott, P.: Berufsfeld-Definitionen des BIBB auf Basis der KldB1992. Wissenschaftliche Diskussionspapiere No. 105. Bundesinstitut für Berufsbildung, Bonn (2008) UK Commission for Employment and Skills: Working Futures 2010–2020. Executive Summary, vol. 41. UK Commission for Employment and Skills, South Yorkshire (2011) Vogler-Ludwig, K., Düll, N.: Arbeitsmarkt 2030. Eine strategische Vorausschau auf Demografie, Beschäftigung und Bildung in Deutschland. Bertelsmann, Bielefeld (2013) Weber, M.: Wirtschaft und Gesellschaft. Mohr Siebeck, Tübingen (1972) Whelpton, P.K.: An empirical method of calculating future population. J Am Stat Assoc 31(195), 457–473 (1936) Wilson, R.: Forecasting skill requirements at national and company level. In: Descy, P., Tessaring, M. (eds.) Training in Europe. 
Second report on vocational training research in Europe 2000. Background report, pp. 561–609. Office for Official Publications of the European Communities, Luxembourg (2001)
Zika, G., Helmrich, R., Kalinowski, M., Wolter, M.I., Hummel, M., Maier, T., Hänisch, C., Drosdowski, T.: In der Arbeitszeit steckt noch eine Menge Potenzial. Qualifikations- und Berufsfeldprojektionen bis 2030. IAB-Kurzbericht 18/2012. (2012)
Author affiliations: Tobias Maier, Caroline Neuber-Pohl & Michael Kalinowski – Federal Institute for Vocational Education and Training, Bonn, Germany; Anke Mönnig – Institute of Economic Structures Research, Osnabrueck, Germany; Gerd Zika – Institute for Employment Research, Nuremberg, Germany. Correspondence to Tobias Maier.
Table 5 Major Occupational Fields (MOF) and Occupational Fields (OF)
Table 6 Structure of the NACE Rev. 2 Classification of Economic Activities used in the Projection
Maier, T., Neuber-Pohl, C., Mönnig, A. et al.: Modelling reallocation processes in long-term labour market projections. J Labour Market Res 50, 67–90 (2017). https://doi.org/10.1007/s12651-017-0220-x. Issue date: August 2017. Keywords: Occupational mobility; Wage development
Additional Questions Regarding the Auto-Ignition Temperature
I had some follow-up questions regarding a previous post I made here regarding the auto-ignition temperature and ASTM E659.
1. For fuel temperatures below the AIT, we should still have a finite number of reactants above the activation energy, reacting, and heating the remaining mixture so that additional reactants are brought above the activation energy. Theoretically, as $t \rightarrow \infty$, could the fuel burn itself out like this, regardless of what temperature it is at (obviously the rate will be very different)? If so, is there an implicit rate requirement in defining the AIT that the above chain reaction process has to happen within a short duration?
2. Given that ignition is defined as when a flash and temperature rise is seen, does this mean that the actual quantity of fuel burnt is unimportant and it is assumed that a significant enough amount is used?
3. Is the reason for using an open flask in ASTM E659 likely just for simplicity, as opposed to a piston-cylinder arrangement that offers more control over the air-fuel mixture?
physical-chemistry combustion – Yandle
I don't have time for a proper answer, but in 1, it sounds like you've got the right idea. Think of paper browning; over centuries it will oxidize into brittle, ruined material, essentially burning to completion in room-temperature air. Here the time limit they establish for a flame to appear is explicit, 10 minutes. They do state that some compounds have a longer delay at a given temperature before they ignite. The reaction rate you refer to would be different for each substance, since each is releasing a different amount of energy. – Jason Patterson Oct 15 '14 at 12:06
To understand the answer to (1), you have to think about what combustion actually is. It isn't a single reaction, with a single set of reactants, and a single set of products. It's actually a whole bunch of tiny steps (thousands) that all occur together. Those reactions happen among a whole bunch of unstable radical species (hundreds). When you put a match to a pool of gasoline, all that energy from the match starts tearing apart bonds in the fuel, leading to the formation of radicals. That's what triggers the initial release of energy, and what creates the cascade of ignition. So theoretically, no, you could never reach combustion products through the pathway of combustion, because you'd never have that activation energy, the big kick of energy needed to create radicals and keep them alive long enough for them to create a chain reaction. The answer to (2) is, no, it doesn't matter how much fuel. Remember you're measuring a mass-independent property. The autoignition temperature isn't "5 degrees per gram". It's a single temperature. What matters is the ratio of oxygen to fuel. As to (3), it's a lot easier to operate an open flask than it is to operate a piston-cylinder, and it doesn't have a substantial impact on the air-fuel ratio if you do it correctly. – charlesreid1
Regarding (1), given infinite time, every finite kinetic barrier will eventually be overcome at any finite temperature, no matter how high the barrier is, or how close to $0\ \mathrm{K}$. For example, it would take at most approximately $10^{1500}$ years for nuclei to spontaneously fuse into iron. Any chemical reaction would have a much, much lower kinetic barrier, and would happen in a much shorter timescale.
Whether a puddle of fuel inside an atmosphere of oxygen at $300\ \mathrm{K}$ could have had enough time to oxidize completely during the current lifetime of the Universe, I don't know. – Nicolau Saker Neto Jun 6 '15 at 3:23
That's getting into the realm of metaphysics. The mass of fuel you'd need would be greater than the mass of carbon in the universe. Sounds like navel-gazing to me... – charlesreid1 Jun 6 '15 at 17:14
@charlesreid1 (1) A little confused. The answer implies that a fuel-air mixture requires an ignition source or else the reaction will not occur. But aren't oxidation reactions spontaneous, so we should expect the products to form so long as T (and thus the forward rate constant) is greater than zero? My current thought is that as t -> infinity, enough fuel will react such that equilibrium will be reached (and depending on the heat of reaction there should be very little fuel remaining). – Yandle Jun 8 '15 at 4:39
A fuel-air mixture does require an ignition source to combust. A spark is just a big shot of energy to the system, transferred from one set of (say, metal) molecules to another set of (say, methane) molecules. That energy starts tearing apart molecules to create radicals, which initiates combustion. It isn't a spontaneous process at all. Otherwise, all fuel in existence would spontaneously combust! – charlesreid1 Jun 8 '15 at 5:01
@charlesreid1 What I'm confused about is that, if we have a closed system of fuel-air, T and Ea are finite, and per the Arrhenius equation the forward rate constant and reactant concentration are all greater than zero. From this I expect a net formation of products (given that the initial product concentration is zero) until equilibrium. I get the logic in your explanation but I also don't know the flaw in my logic. Also, isn't the AIT technically a point where fuel spontaneously combusts? – Yandle Jun 12 '15 at 5:40
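To put rough numbers on the rate argument made in these comments, here is a small illustrative Arrhenius calculation in Python. The activation energy and the two temperatures are assumed, generic values chosen only to show the order of magnitude; they are not tied to any specific fuel or to ASTM E659.

import math

# Rough illustration of the Arrhenius rate-constant ratio between room temperature
# and a temperature in the typical autoignition range.
R = 8.314                       # J/(mol K)
Ea = 150e3                      # J/mol, assumed illustrative activation energy
T_room, T_hot = 300.0, 500.0    # K

ratio = math.exp(-Ea / (R * T_hot)) / math.exp(-Ea / (R * T_room))
print(f"k(500 K) / k(300 K) ~ {ratio:.1e}")
# ~1e10: the same elementary step is roughly ten billion times slower at room temperature,
# which is why slow oxidation can proceed indefinitely without ever producing a flame.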
The impact of temperature on the life cycle of Gasterophilus pecorum in northwest China
Ke Zhang1, Heqing Huang2, Ran Zhou1, Boru Zhang3, Chen Wang4, Make Ente5, Boling Li6, Dong Zhang1 & Kai Li1
The departure of the mature larvae of the horse stomach bot fly from the host indicates the beginning of a new infection period. Gasterophilus pecorum is the dominant bot fly species in the desert steppe of the Kalamaili Nature Reserve (KNR) of northwest China as a result of its particular biological characteristics. The population dynamics of G. pecorum were studied to elucidate the population development of this species in the arid desert steppe. Larvae in the freshly excreted feces of tracked Przewalski's horses (Equus przewalskii) were collected and recorded. The larval pupation experiments were carried out under natural conditions. There was a positive correlation between the survival rate and the number of larvae expelled (r = 0.630, p < 0.01); the correlation indicated that the species had characteristic peaks of occurrence. The main periods during which mature larvae were expelled in the feces were from early April to early May (peak I) and from mid-August to early September (peak II); the larval population curve showed a sudden increase and gradual decrease at both peaks. Under the higher temperatures of peak II, the larvae had a higher survival rate, higher pupation rate and higher emergence rate, and a shorter pupal period, than peak I larvae. Although G. pecorum has only one generation per year, its occurrence peaked twice annually, i.e. the studied population has a bimodal distribution, which doubles the parasitic pressure on the local host. This phenomenon is very rarely recorded in studies on insect life history, and especially in those on parasite epidemiology. The period during which G. pecorum larvae are naturally expelled from the host exceeds 7 months in KNR, which indicates that there is potentially a long period during which hosts can become infected with this parasite. The phenomenon of two annual peaks of larvae expelled in feces is important as it provides one explanation for the high rate of equine myiasis in KNR.
Horse stomach bot flies (Gasterophilus spp.) are common obligate parasites in equids [1, 2]. Gasterophilus larvae parasitize the digestive tract of equids and cause inflammation, ulceration, herniation and other symptoms [3,4,5]. The larvae absorb nutrients from the host and secrete toxins, and may lead to host death when the level of infection is severe [6, 7]. The genus Gasterophilus comprises 9 species globally [8, 9], of which 6 have been recorded in China [10]. The infection rate of bot flies in the equid population is 100% in Kalamaili Nature Reserve (KNR), Xinjiang, which is located in desert grassland [11]. Among the 6 species of horse stomach bot flies found in KNR, the rate of infection with G. pecorum is extremely high, accounting for 89–98% of all bot fly infections [11, 12]. In other regions of China, Gasterophilus intestinalis and Gasterophilus nasalis are the dominant horse stomach bot fly species [13, 14]. Horse stomach bot flies undergo complete metamorphosis; they have four developmental stages, i.e. egg, larva, pupa and adult, one generation per year, and the larvae take 9–10 months to develop in the digestive tract of the host [8]. The larvae develop in the digestive tract of equids until they are mature, then leave the host and develop into flies and start a new life cycle.
The mature larva is the first stage of the parasite that can be examined outside the host (in vitro), and the population dynamics of these larvae determine subsequent population growth. The most significant feature of the life cycle of G. pecorum is its "unusual" reproductive strategy of laying eggs on grass [8]. Most studies on G. pecorum follow on from the early biological observations of Chereshnev in Kazakhstan [15], who found that the mature larvae mainly appear from early August to early September. However, a study designed to examine the development period of pupae reported a significant number of G. pecorum larvae collected in KNR in spring; thus, the highest incidence of mature larvae was considered to be spring [16]. Based on the differences between some recent findings and existing life history records of G. pecorum, a systematic study of the population dynamics and growth of local G. pecorum was carried out in vitro to understand the development of this species in the desert steppe. Insects are poikilotherms, whose metabolism, life cycle and lifespan are influenced by the ambient temperature [17, 18]. Horse stomach bot flies are exposed to a relatively stable environment in the digestive tract of the host, but once expelled from the host they are affected by the external temperature [19]. Horse stomach bot flies are thermophilic insects that tend to live at high temperatures [20]. Higher temperatures within a preferential temperature range accelerate the development rate of insects [21, 22]. In addition, the survival rate of insects metamorphosing between different stages is also affected by temperature [23,24,25]. Knowledge of the developmental response of insects to temperature is important for an understanding of their ecology and life history [26]. Thus, we conducted an experiment to investigate the effect of different temperatures on the developmental rate, survival, pupation and emergence of G. pecorum to determine parameters that can be used to predict and manage horse stomach bot fly populations in KNR. KNR is located in the desert subregion of northwest China (44°36′ ~ 46°00′N, 88°30′ ~ 90°03′E). KNR has an area of 18,000 km², an altitude of 600–1464 m, an average annual temperature of 2.4 ℃, average annual precipitation of 159 mm and annual evaporation of 2090 mm. Winter in the desert steppe is cold and long, lasting from late October to early March [27, 28]. Protected animals in KNR include the reintroduced species Equus przewalskii (EN) and the endangered species Equus hemionus (NT), as well as Gazella subgutturosa (VU) and Ovis ammon (NT). Domestic livestock such as horses graze in KNR seasonally [29]. Based on the climate of KNR and the life history of stomach bot flies in the area, the population survey of G. pecorum larvae was carried out from early March to late September 2018. The feces of Przewalski's horses were inspected for this study. As the horses defecated one distinct pile in a single defecation event, we could count the number of piles of feces for the survey (Additional file 1). Larvae were collected from fresh feces 4–6 days/week, and the number of feces (piles of feces) and larvae were statistically analyzed on a weekly basis. We investigated fresh feces from 50–100 piles/week within 5 min after the horses defecated, and used tweezers to separate larvae from the feces. The larvae of G. pecorum were collected, and third-instar stomach bot fly larvae were identified according to Li et al.'s method [30].
$$\text{Proportion of feces containing larvae (PF, \%)} = \frac{\text{Number of feces with larvae}}{\text{Number of feces investigated}} \times 100\%$$
$$\text{Number of larvae per pile of feces (NL)} = \frac{\text{Number of larvae in feces}}{\text{Number of feces investigated}}$$
Transparent plastic cups (8 cm in diameter, 6 cm in height) were used as the pupation containers for G. pecorum larvae. Five larvae were placed in each cup, the cup mouth was sealed with gauze, and the larvae were cultured in outdoor shade (low light intensity, with the photoperiod of the natural environment) in KNR. The pupation and eclosion behavior of G. pecorum were observed daily. The number of insects in each phase was recorded, and the survival rate, pupation rate and eclosion rate were calculated. The temperature at the larval culture site was measured with a thermometer and recorded daily at 2:00, 8:00, 14:00 and 20:00 hours. To evaluate the survival, pupation and eclosion rates of the stomach bot flies, the following formulas were utilized:
$$\text{Survival rate (SR, \%)} = \frac{\text{Number of surviving larvae}}{\text{Total number of larvae in feces}} \times 100\%$$
$$\text{Pupation rate (PR, \%)} = \frac{\text{Number of pupae}}{\text{Number of surviving larvae}} \times 100\%$$
$$\text{Eclosion rate (ER, \%)} = \frac{\text{Number of adults}}{\text{Number of pupae}} \times 100\%$$
Spearman's correlation analysis was used to analyze the relationship between the number of mature larvae and their survival rate; the significance level was set as α = 0.05. Data analysis was performed in SPSS 20.0, and the graphs were drawn using SigmaPlot 12.0 and GraphPad Prism 7.
Population dynamics of mature larvae of G. pecorum
A total of 2,021 piles of equine feces were examined, of which 443 (21.92%) contained G. pecorum larvae (Table 1). The proportion of feces containing larvae (PF) in early April and mid-to-late August was 56.03% and 53.23%, respectively. There were two obvious larval peaks, with a considerable range in PF. In May and September, the PF gradually decreased and remained low. The PF was lower than 20% during three periods: March; mid-to-late May to the end of July; and mid-to-late September.
Table 1 The number of piles of equine (Equus przewalskii) feces investigated and the proportion of feces containing Gasterophilus pecorum larvae (PF) in Kalamaili Nature Reserve (KNR) in 2018
A total of 704 larvae of G. pecorum were collected in this study. The average number of larvae per pile of feces (NL) was 0.35 during the entire investigation period; the highest number was in early April with an average of 1.40, followed by mid-to-late April and mid-to-late August, with averages of 0.79 and 0.84, respectively. The NL showed the same trend as the PF (Fig. 1).
The population dynamics of Gasterophilus pecorum larvae in 2018 (mean/week)
The percentages of larvae found in April and August were 42.47% and 28.55%, respectively. The percentages of larvae were 27.98% and 14.79% in early and mid-to-late April, respectively, and 23.86% in mid-to-late August; these percentages were significantly higher than those recorded for the other periods (p < 0.05).
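As a small computational aid, the following sketch (in Python) shows how the indices defined above can be computed from raw counts. The PF and NL values use the season totals reported in this section (2,021 piles, 443 positive piles, 704 larvae); the stage counts at the end are hypothetical placeholders, not data from the study.

# Indices defined above, computed from raw counts.
def pf(feces_with_larvae, feces_investigated):
    return 100 * feces_with_larvae / feces_investigated   # PF in %

def nl(larvae, feces_investigated):
    return larvae / feces_investigated                     # larvae per pile

def stage_rate(successes, attempts):
    return 100 * successes / attempts                      # SR, PR or ER in %

# Season totals reported in the text
print(round(pf(443, 2021), 2))   # 21.92 % of piles contained larvae
print(round(nl(704, 2021), 2))   # 0.35 larvae per pile on average

# Hypothetical stage counts, only to show how SR, PR and ER chain together
larvae, surviving, pupae, adults = 100, 73, 65, 57
print(round(stage_rate(surviving, larvae), 1),
      round(stage_rate(pupae, surviving), 1),
      round(stage_rate(adults, pupae), 1))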
The number of larvae collected from early April to early May and from mid-August to early September accounted for 48.72% and 32.52% of the total number of larvae, respectively, and the cumulative proportion of the two phases accounted for 81.24%. Thus, the main periods of mature larvae emergence in KNR were from early April to early May (peak I) and from mid-August to early September (peak II) (Fig. 1), although larvae were expelled in feces continuously throughout March to September. Combining the NL with observations made on 10 wild horses in temporary captivity, which showed that an adult Przewalski's horse defecates an average of 10.1 piles of feces per day, it was estimated that 749 G. pecorum larvae were discharged from each horse; this was close to the figure calculated from the annual average number of infective G. pecorum expelled locally [11]. The plot of the cumulative number of larvae collected during the survey period showed a double S curve (Fig. 2). There was one spike in April and one in August, and the slopes of the curves showed that in the first phase the increase in the number of total larvae (Fig. 2a) and of surviving larvae (Fig. 2b) was highest on 5 April, and in the second phase (Fig. 2c, d) both were highest on 23 August.
Cumulative G. pecorum larvae collected in 2018
Population development analysis
The survival rates during the two peaks were 69.57% (peak I) and 73.27% (peak II) (p = 0.183, t = − 1.727), with a higher pupation rate for peak II (89.19%) than for peak I (66.83%) (p = 0.002, t = − 10.547). The eclosion rate during peak II (87.88%) was also higher than that during peak I (63.31%) (p = 0.002, t = − 9.525) (Fig. 3b).
Comparison of the in vitro development of G. pecorum between the two peak periods. a Pupal development (days; d) during the two peak periods. b Survival rate (SR), pupation rate (PR) and eclosion rate (ER) of the two peak periods
The average pupal period of peak I and peak II lasted for 34.05 days and 20.2 days, respectively (p < 0.001, t = 15.513) (Fig. 3a). The longest and shortest development periods of peak I were 39 days and 26 days, respectively, and the longest and shortest development periods of peak II were 24 days and 18 days, respectively. There was a positive correlation between the survival rate and the number of larvae (r = 0.630, p < 0.01), i.e. the survival rate of G. pecorum larvae was higher during the two peak periods. The population decrease over the three stages from larvae (N) to adults (N') was described as follows: N' = αβγN, where α is the survival rate, β is the pupation rate and γ is the eclosion rate. The numbers of adults that developed from larvae during peak I and peak II are given by N1' and N2', respectively; N1' = 15.22% N, N2' = 23.98% N. Larvae collected during peak I that developed into adults accounted for 15.22% of the total larvae, and larvae collected in peak II that developed into adults accounted for 23.98% of the total larvae. The number of larvae collected in peak I was 1.48 times that collected in peak II, but the higher survival rate, pupation rate and emergence rate of peak II resulted in 1.32 times more adults than peak I. There were differences in the sex ratio between the two peak periods. The ratio of males to females in peak I was 0.73, which was lower than that in peak II (0.76) (Table 2).
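The per-horse larval output mentioned above can be roughly reconstructed from the quantities reported in this study, as in the following sketch. The survey length of about 213 days (early March to late September) is an assumption introduced here for illustration; the study does not state the exact number of days used.

# Rough reconstruction of the per-horse larval output estimate.
nl = 704 / 2021          # average larvae per pile of feces over the season
piles_per_day = 10.1     # observed defecation rate of an adult Przewalski's horse
survey_days = 213        # assumed survey length, early March to late September
print(round(nl * piles_per_day * survey_days))   # ~749 larvae expelled per horse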
The proportion of female adults in relation to the total number of larvae was 0.10% in peak II, which was slightly higher than that in peak I (0.09%); this indicated that peak II was of greater significance in terms of potential G. pecorum infections of equids in KNR.
Table 2 The sex ratio of G. pecorum adults in the two peak periods
Temperature characteristics during the periods that larvae were expelled
The ambient temperature rose rapidly from March to April, showed a slower increase from April to July, and began to decline after reaching the highest temperature of 38 ℃ on 20 July (Fig. 4a). The maximum daily temperature difference was 20 ℃, and the minimum and average daily temperature differences were 3 ℃ and 12.78 ℃, respectively. The temperature curve was parabola-shaped, initially increasing and then decreasing (y = 0.8570 + 3.7449x + 0.1031x² − 0.0240x³, R² = 91.18%) (Fig. 4b). The maximum temperature, as shown by the fitted curve, was on 19 July (Fig. 4).
Temperature changes at KNR in 2018. a Average daily temperature, highest temperature, lowest temperature, and temperature difference. b Fitted curve
The study of insect population dynamics is an important part of insect ecological research [31, 32]. The prevalence of G. pecorum has been reported to be low in most countries and regions [2, 13, 33]. However, G. pecorum is common in the digestive tract of equids in the Mongolia-Xinjiang and Qinghai-Tibet regions of China [12, 34], central and northern Kazakhstan [14], the Republic of Mongolia [35] and the Yakut Republic of Russia [36]. There also tends to be a higher incidence of this species in certain regions. A study carried out on the period of pupal development of G. pecorum reported a model that can be used to predict the period of adult occurrence [16]. A study on the egg development period and the survival period of first-instar larvae of G. pecorum has also been completed (in submission). The results from these studies will further improve our understanding of this species and contribute to a better understanding of the epidemiology of disease caused by this parasite. These parasites can adapt to their environment, and have different performance characteristics in different environments [37,38,39].
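The cubic fit reported for the seasonal temperature curve can be reproduced with standard least-squares tools once the temperature records are available, as sketched below in Python. The temperature values and the time index are placeholder assumptions (the time unit of x in the published fit is not stated in this passage), so the coefficients printed here will not match the published ones.

import numpy as np

# Placeholder seasonal temperatures (deg C); the real KNR records would be substituted here.
x = np.arange(0, 14)     # assumed time index (e.g., half-month steps, March to September)
temps = np.array([1, 4, 8, 12, 16, 19, 22, 24, 25, 24, 22, 18, 13, 8], dtype=float)

# Fit y = b0 + b1*x + b2*x^2 + b3*x^3, the same functional form as the reported curve
b3, b2, b1, b0 = np.polyfit(x, temps, deg=3)
fit = np.polyval([b3, b2, b1, b0], x)
r2 = 1 - np.sum((temps - fit) ** 2) / np.sum((temps - temps.mean()) ** 2)
print([round(c, 4) for c in (b0, b1, b2, b3)], round(r2, 4))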
As water is the most crucial factor for life in the desert steppe [45, 46], many of the plants that grow there are ephemeral. Due to the characteristics of local precipitation [47], some Stipa sp. showed a special secondary growth phenomenon due to adaptation to the environment [48]. We found that the locally dominant Stipa began to resume growth at the end of March and early April, with seed heads developing in May, and began to grow new leaves in late August, all of which confirmed the phenomenon of secondary growth. Some studies have shown that once water conditions are suitable, many desert plants develop rapidly and have a faster life cycle [49,50,51]; the effects of this are reflected in the KNR ecosystem by the phenomenon of the simultaneous occurrence of G. pecorum and vector plants. In contrast to the other 5 species of Gasterophilus found in this region, which lay eggs directly on the horse host, G. pecorum lays eggs on Stipa [52]. The parasite's population dynamics were found to match perfectly with the growth of vector plants and phenological changes, which also led to the local bimodal population distribution phenomenon of G. pecorum seen here. This phenomenon indicates that there are two main infection periods per year, which differs from the annual infection of animals that occurs with linear host–parasite phenomena [53, 54], and also differs from the particular characteristics of infection associated with phenology seen with most arthropod parasite infections [55,56,57]. In an appropriate temperature range, the higher the temperature, the more beneficial it is to the development of insects [58]. The average temperature of peak I (11.3 ± 5.3 ℃) was significantly lower than that of peak II (24.4 ± 2.7 ℃) (t = − 11.083, p < 0.001), which affected the survival rate of mature larvae. The environmental temperature during peak II was more beneficial for the subsequent pupal development stage. This resulted in a higher survival rate (survival rate, pupation rate and eclosion rate) of mature larvae from peak II, and a shorter pupal stage, which lead to the higher number of adults produced from peak II than from peak I larvae. The natural period in which G. pecorum larvae are expelled in horse feces in KNR exceeds 7 months. The close relationship between the bimodal population distribution of G. pecorum and the secondary growth of Stipa in the desert steppe of KNR is important as it can explain the high number of infective G. pecorum, high infection rate and dominance of Gasterophilus spp. in equids in the reserve. The larval population from peak II had a higher survival rate than that from peak I because of the suitable conditions for the development of the former. The results of this study demonstrate the highly co-evolutionary nature of the phenomenon described here in the desert steppe ecosystem, and reveal the high adaptability of organisms under adverse conditions. KNR: Kalamaili Nature Reserve PF: Proportion of feces containing larvae NL: Number of larvae per pile of feces Royce LA, Rossignol PA, Kubitz ML, Burton FR. Recovery of a second instar Gasterophilus larva in a human infant: a case report. Am J Trop Med Hyg. 1999;60:403–4. Mukbel R, Torgerson PR, Abo-Shehada M. Seasonal variations in the abundance of Gasterophilus spp. larvae in donkeys in northern Jordan. Trop Anim Health Pro. 2001;33:501–9. Cogley TP, Cogley MC. Inter-relationship between Gasterophilus larvae and the horse's gastric and duodenal wall with special reference to penetration. Vet Parasitol. 
1999;86:127–42. Gökçen A, Sevgili M, Altaş MG, Camkerten I. Presence of Gasterophilus species in Arabian horses in Sanliurfa region. Turk Soc Parasitol. 2008;32:337–9. Moshaverinia A, Baratpour A, Abedi V, Mohammadi-Yekta M. Gasterophilosis in Turkmen horses caused by Gasterophilus pecorum (Diptera, Oestridae). Sci Parasitol. 2016;17:49–52. Smith MA, Mcgarry JW, Kelly DF, Proudman CJ. Gasterophilus pecorum in the soft palate of a British pony. Vet Rec. 2005;156:283–4. Pawlas M, Sotysiak Z, Nicpoń J. Existence and pathomorhological picture of gasterophilosis in horses from north-east Poland. Med Weter. 2007;63:1377–80. Zumpt F. Myasis in man and animals in the Old World. In: Morphology, biology and pathogenesis of myiasis-producing flies in systematic order. London: Butterworths; 1965. P. 110–128. Cogley TP. Key to the eggs of the equid stomach bot flies Gasterophilus Leach 1817 (Diptera: Gasterophilidae) utilizing scanning electron microscopy. Syst Entomol. 1991;16:125–33. Zhang BR, Huang HQ, Zhang D, Chu HJ, Ma XP, Li K, et al. Genetic diversity of common Gasterophilus spp. from distinct habitats in China. Parasites Vectors. 2018;11:474–85. Liu SH, Li K, Hu DF. The incidence and species composition of Gasterophilus (Diptera, Gasterophilidae) causing equine myiasis in northern Xinjiang China. Vet Parasitol. 2016;217:36–8. Huang HQ, Zhang BR, Chu HJ, Zhang D, Li K. Gasterophilus (Diptera, Gasterophilidae) infestation of equids in the Kalamaili Nature Reserve China. Parasite. 2016;23:36. Pandey VS, Ouhelli H, Verhulst A. Epidemiological observations on Gasterophilus intestinalis and Gasterophilus nasalis in donkeys from Morocco. Vet Parasitol. 1992;41:285–92. Ibrayev B, Lider L, Bauer C. Gasterophilus spp. infections in horses from northern and central Kazakhstan. Vet Parasitol. 2015;207:94–8. Chereshnev NA. Biological peculiarities of the botfly Gasterophilus pecorum Fabr (Diptera: Gasterophilidae). Dokl Akad Nauk SSSR. 1951;77:765–8 (in Russian). Wang KH, Zhang D, Hu DF, Chu HJ, Cao J, Li K, et al. Developmental threshold temperature and effective accumulated temperature for pupae of Gasterophilus pecorum. Chin J Vector Biol Control. 2015;26:572–5 (in Chinese). Gordon P, Harder LD, Mutch RA. Development of aquatic insect eggs in relation to temperature and strategies for dealing with different thermal environments. Biol J Linn Soc. 1996;58:221–44. Potter K, Davidowitz G, Woods HA. Insect eggs protected from high temperatures by limited homeothermy of plant leaves. J Exp Biol. 2009;212:3448–54. Knapp FW, Sukhapesna V, Lyons ET, Drudge JH. Development of third-instar Gasterophilus intestinalis artificially removed from the stomachs of horses. Ann Entomol Soc Am. 1979;72:331–3. Vincent HR, Ring TC. Encyclopedia of insects (second edition). In: Temperature, effects on development and growth. Oxford: Academic Press; 2009. p. 990–993. Ikemoto T. Intrinsic optimum temperature for development of insects and mites. Environ Entomol. 2015;34:1377–87. Wu TH, Shiao SF, Okuyama T. Development of insects under fluctuating temperature: a review and case study. J Appl Entomol. 2015;139:592–9. Verloren MC. On the comparative influence of periodicity and temperature upon the development of insects. Ecol Entomol. 2010;11:63–9. Roe A, Higley LG. Development modeling of Lucilia sericata (Diptera: Calliphoridae). Peer J. 2015;3:1–14. Karol G, Monika F. Effect of temperature treatment during development of Osmia rufa L. on mortality, emergence and longevity of adults. J Apic Sci. 2016;60:221–32. 
Régnière J, Powell J, Bentz B, Nealis V. Effects of temperature on development, survival and reproduction of insects: experimental design, data analysis and modeling. J Insect Physiol. 2012;58:634–47. Zang S, Cao J, Alimujiang K, Liu SH, Zhang YJ, Hu DF. Food patch particularity and forging strategy of reintroduced Przewalski's horse in north Xinjiang China. Turk J Zool. 2017;41:924–30. Zhou R, Zhang K, Zhang TG, Zhou T, Chu HJ, Li K, et al. Identification of volatile components from oviposition and non-oviposition plants of Gasterophilus pecorum (Diptera: Gasterophilidae). Sci Rep. 2020;10:15731. Liu G, Aaron BA, Zimmermann W, Hu DF, Wang WT, Chu HJ, et al. Evaluating the reintroduction project of Przewalski's horse in China using genetic and pedigree data. Biol Conserv. 2014;171:288–98. Li XY, Chen YO, Wang QK, Li K, Pape T, Zhang D. Molecular and morphological characterization of third instar Palaearctic horse stomach bot fly larvae (Oestridae: Gasterophilinae, Gasterophilus). Vet Parasitol. 2018;262:56–74. Heino J, Peckarsky BL. Integrating behavioral, population and large-scale approaches for understanding stream insect communities. Curr Opin Insect Sci. 2014;2:7–13. Modlmeier AP, Keiser CN, Wright CM, Lichtenstein JLL, Pruitt JN. Integrating animal personality into insect population and community ecology. Curr Opin Insect Sci. 2015;9:77–85. Otranto D, Milillo P, Capelli G, Colwell DD. Species composition of Gasterophilus spp. (Diptera, Oestridae) causing equine gastric myiasis in southern Italy: parasite biodiversity and risks for extinction. Vet Parasitol. 2005;133:111–8. Wang WT, Xiao S, Huang HQ, Li K, Zhang D, Chu HJ, et al. Diversity and infection of Gasterophilus spp. in Mongol-Xinjiang region and Qinghai Tibet region. Sci Silva Sin. 2016;52:134–9 (in Chinese). Dorzh C, Minár J. Warble flies of the families Oestridae and Gasterophilidae (Diptera) found in the Mongolian People's Republic. Folia Parasit. 1971;18:161–4. Reshetnikov AD, Barashkova AI, Prokopyev ZS. Infestation of horses by the causative agents of gasterophilosis (Diptera: Gasterophilidae): the species composition and the north-eastern border of the area in the Republic (Sakha) of Yakutia of the Russian Federation. Life Sci J. 2014;11:587–90. Gandon S, Ebert D, Olivieri I, Michalakis Y. Differential adaptation in spacially heterogeneous environments and host-parasite coevolution. In: Mopper S, Strauss SY, editors. Genetic structure and local adaptation in natural insect populations. London: Chapman and Hall; 1998. p. 325–42. Thomas F, Renaud F, Guégan JF. Parasitism and ecosystems. In: Parasitism and hostile environments. New York: Oxford University Press; 2015. p. 85–112. Machado TO, Braganca MAL, Carvalho ML, Andrade FJD. Species diversity of sandflies (Diptera: Psychodidae) during different seasons and in different environments in the district of Taquaruçú, state of Tocantins, Brazil. Mem I Oswaldo Cruz. 2012;107:955–9. Aliakbarpour H, Che SMR, Dieng H. Species composition and population dynamics of thrips (Thysanoptera) in mango orchards of Northern Peninsular Malaysia. Environ Entomol. 2010;39:1409–19. Palomo LAT, Martinez NB, Napoles JR, Leon OS, Arroyo HS, Graziano JV, et al. Population fluctuations of thrips (Thysanoptera) and their relationship to the phenology of vegetable crops in the central region of Mexico. Fla Entomol. 2015;98:430–8. Shibata E. 
Seasonal fluctuation and spatial pattern of the adult population of the Japanese pine sawyer, Monochamus alternatus Hope (Coleoptera: Cerambycidae), in young pine forests. Appl Entomol Zool. 2008;16:306–9. Haack RA, Lawrence RK, Heaton GC. Seasonal shoot-feeding by Tomicus piniperda (Coleoptera: Scolytidae) in Michigan. Great Lakes Entomol. 2018;33:1–8. Lamb RJ, Mackay PA. Seasonal dynamics of a population of the aphid Uroleucon rudbeckiae (Hemiptera: Aphididae): implications for population regulation. Can Entomol. 2016;149:300–14. Huxman TE, Smith MD, Fay PA, Knapp AK, Shaw MR, Loik ME, et al. Convergence across biomes to a common rain-use efficiency. Nature. 2004;429:651–4. Cleland EE, Collins ST, Dickson TL, Farrer EC, Gross KL, Gherardi LA, et al. Sensitivity of grassland plant community composition to spatial vs. temporal variation in precipitation. Ecology. 2013;94:1687–96. Yong SP, Zhu ZY. A certain fundamental characteristics of Gobi Desert vegetation in the centre Asia. Acta Sci Natl Univ Neimongol. 1992;23:235–44 (in Chinese). Cui NR. The flora records of main forage grass crops in Xinjiang. In: Book one. Urumqi: Xinjiang People's Publishing House; 1990. p. 140–157 (in Chinese). Ogle K, Reynolds JF. Plant responses to precipitation in desert ecosystems: integrating functional types, pulses, thresholds, and delays. Oecologia. 2004;141:282–94. Mckenna MF, Houle G. Why are annual plants rarely spring ephemerals? New Phytol. 2010;148:295–302. Tielbörger K, Valleriani A. Can seeds predict their future? Germination strategies of density-regulated desert annuals. Oikos. 2010;111:235–44. Liu SH, Hu DF, Li K. Oviposition site selection by Gasterophilus pecorum (Diptera: Gasterophilidae) in its habitat in Kalamaili nature reserve, Xinjiang China. Parasite. 2015;22:34. Epe C, Kings M, Stoye M, Böer M. The prevalence and transmission to exotic equids (Equus quagga antiquorum, Equus przewalskII, Equus africanus) of intestinal nematodes in contaminated pasture in two wild animal parks. J Zoo Wildl Med. 2001;32:209–16. Hu XL, Liu G, Zhang TX, Yang S, Hu DF, Liu SQ, et al. Regional and seasonal effects on the gastrointestinal parasitism of captive forest musk deer. Acta Trop. 2018;177:1–8. Teel PD, Marin SL, Grant WE. Simulation of host-parasite-landscape interactions: influence of season and habitat on cattle fever tick (Boophilus sp.) population dynamics. J Am Soc Nephrol. 1996;14:855–62. James PJ, Moon RD, Brown DR. Seasonal dynamics and variation among sheep in densities of the sheep biting louse Bovicola ovis. Int J Parasitol. 1998;28:283–92. Taylor B, Rahman PM, Murphy ST, Sudheendrakumar VV. Within-season dynamics of red palm mite (Raoiella indica) and phytoseiid predators on two host palm species in south-west India. Exp Appl Acarol. 2012;57:331–45. Hercus MJ, Loeschcke V, Rattan SIS. Lifespan extension of Drosophila melanogaster through hormesis by repeated mild heat stress. Biogerontology. 2003;4:149–56. We are grateful to the staff of KNR for their support and valuable technical assistance. This work was supported by the National Science Foundation of China (grant no. 31670538), and the Species Project (no. 2018123) of the Department for Wildlife and Forest Plants Protection, National Forestry and Grassland Administration, China. 
Ke Zhang and Heqing Huang contributed equally to this work.
Key Laboratory of Non-Invasive Research Technology for Endangered Species, School of Ecology and Nature Conservation, Beijing Forestry University, Beijing, 100083, China: Ke Zhang, Ran Zhou, Dong Zhang & Kai Li
Chongqing Academy of Environmental Science, Chongqing, 401147, China: Heqing Huang
Qinhuangdao Forestry Bureau, Qinhuangdao, 066004, Hebei, China: Boru Zhang
Mt. Kalamaili Ungulate Nature Reserve, Changji, 381100, Xinjiang, China: Chen Wang
Xinjiang Research Centre for Breeding Przewalski's Horse, Urumqi, 831700, Xinjiang, China: Make Ente
China National Environment Monitoring Centre, Beijing, 100012, China: Boling Li
KL, KZ and HQH conceived the study; KZ drafted the manuscript; KZ, HQH and RZ conducted the experiment; KZ, BRZ, CW and ME carried out the statistics; BLL, DZ and KL revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Kai Li.
The study was performed in accordance with the relevant guidelines and regulations regarding animal welfare. All experimental protocols were approved by the Wildlife Conservation Office of Altay Prefecture and Beijing Forestry University.
Additional file: Method of larvae collection and definition of piles of feces.
Zhang, K., Huang, H., Zhou, R. et al. The impact of temperature on the life cycle of Gasterophilus pecorum in northwest China. Parasites Vectors 14, 129 (2021). https://doi.org/10.1186/s13071-021-04623-7
Keywords: Desert steppe; Gasterophilus pecorum; Mature larvae; Bimodal population; Survival rate; Parasites of veterinary importance
Comparison of 6 % hydroxyethyl starch and 5 % albumin for volume replacement therapy in patients undergoing cystectomy (CHART): study protocol for a randomized controlled trial
Tobias Kammerer†1, Florian Klug†1, Michaela Schwarz2, Sebastian Hilferink1, Bernhard Zwissler1, Vera von Dossow1, Alexander Karl3, Hans-Helge Müller4 and Markus Rehm1
© Kammerer et al. 2015
Received: 9 December 2014 Accepted: 15 July 2015
The use of artificial colloids is currently controversial, especially in Central Europe. Several studies demonstrated a worse outcome in intensive care unit patients with the use of hydroxyethyl starch. This recently even led to a drug warning about the use of hydroxyethyl starch products in patients admitted to the intensive care unit. The data on hydroxyethyl starch in non-critically ill patients are insufficient to support perioperative use. We are conducting a single-center, open-label, randomized, comparative trial with two parallel patient groups to compare human albumin 5 % (test drug) with hydroxyethyl starch 6 % 130/0.4 (comparator). The primary endpoint is the cystatin C ratio, calculated as the ratio of the cystatin C value at day 90 after surgery relative to the preoperative value. Secondary objectives are, inter alia, the evaluation of the influence of human albumin and hydroxyethyl starch on further laboratory chemical and clinical parameters, glycocalyx shedding, intensive care unit and hospital stay, and acute kidney injury as defined by the RIFLE criteria (risk of renal dysfunction, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage kidney disease). There is a general lack of evidence on the relative safety and effects of hydroxyethyl starch compared with human albumin for volume replacement in a perioperative setting. Previously conducted studies of surgical patients in which researchers have compared different hydroxyethyl starch products included too few patients to properly evaluate clinically important outcomes such as renal function. In the present study in a high-risk patient population undergoing a major surgical intervention, we will determine if perioperative fluid replacement with human albumin 5 % will have a long-term advantage over a third-generation hydroxyethyl starch 130/0.4 on the progression of renal dysfunction until 90 days after surgery. EudraCT number 2010-018343-34. Registered on 11 January 2010.
Keywords: Albumin; Glycocalyx; Volume replacement therapy
Various crystalloid and colloid solutions are available to the clinician for perioperative volume replacement therapy. Colloids are large-molecular-weight (nominally MW >30,000) substances. In normal plasma, the plasma proteins are the major colloids present. Albumin solutions are available for use as colloids. Various other solutions containing artificial colloids, such as hydroxyethyl starches (HESs), are commonly used in clinical practice. Different preparations are indicated according to the different patient types and the type of fluid loss. For some decades there have been conflicting philosophies, in particular concerning the best choice of volume substitutes to deal with perioperative blood loss. However, fluid preloading and liberal intraoperative fluid substitution are not evidence-based procedures [1]. According to current reviews, recommended rational fluid therapy includes a combination of crystalloid and colloid solutions.
An adequate replacement of fluid needs seems to have the power to improve patient outcomes and should be considered the therapy of choice to minimize perioperative fluid shifting [2]. HES solutions are used for the treatment of hypovolemia (low blood volume) when plasma volume expansion is desired. There is conflicting evidence about its relative safety, most notably regarding adverse effects of HES on kidney function. In an animal model, the application of 10 % HES 200,000 already led to a reduction of diuresis and increases in inflammation and tubular damage [3]. These effects were more distinctive in 10 % HES 200,000 than in 6 % HES 130,000 or lactated Ringer solution. In the Efficacy of Volume Substitution and Insulin Therapy in Severe Sepsis study, conducted by the German Competence Network Sepsis, the authors compared 10 % HES 200,000 with lactated Ringer solution as volume replacement therapy in critically ill patients. In the HES group, the incidence of acute renal failure, renal replacement therapy and need for blood transfusion were significantly increased. The authors postulated that the long storage of HES molecules in the tissue and a potential direct toxicity of the substance per se could be responsible for these negative effects [4]. Indeed, these results were seen only in patients who received more than 22 ml/kg/day. Schabinski and colleagues compared the effect of 6 % HES 130/0.4 with 4 % gelatin on renal failure and mortality in critically ill surgical patients. They showed that a significant increase in mortality and acute renal failure occurred in the HES group if the cumulative dose of 33 ml/kg body weight was exceeded [5]. A survey of 120 Scandinavian intensive care units (ICUs) published in 2008 [6] revealed that, for most physicians, colloids are a second-choice volume substitute. Only one-third were found to use it as a primary volume substitute. HES 130/0.4 was the preferred colloid. A large part of the interviewees attested that there were no internal directives or contraindications for the use of HES at their centers. At the same time, almost all interviewees indicated that they would change their infusion practice if randomized controlled studies clearly showed negative effects on mortality and renal failure [6]. An alternative to HES is the use of human albumin (HA). The safety of HA was examined in the Saline versus Albumin Fluid Evaluation (SAFE) study, published in 2004 [7]. In that large investigation, the impact of saline 0.9 % and albumin 4 % was compared in critically ill patients. Neither in the whole group nor in one of the subgroups was a relevant difference found with regard to mortality, ICU length of stay, organ failure, respiratory support or renal replacement therapy. A post hoc analysis showed that the results were independent of the serum albumin value of the patients [8]. Indeed, in a subgroup analysis, the authors found an increase in mortality in patients with brain injury who received albumin compared with those in the saline group [9]. However, this was not a negative effect of the colloid albumin, but the result of volume loading in patients who are not hypovolemic and therefore without demand of any colloid infusion [10]. In a meta-analysis, Wilkes and Navickis investigated the safety of albumin and its influence on mortality. The authors compared 55 randomized trials in patients after major surgery, trauma, burn, hypoalbuminemia, ascites and other diseases. 
The researchers in the included trials compared albumin therapy with crystalloid therapy, no albumin or lower doses of albumin. Overall, the authors found no effect of albumin on mortality [11]. In an experimental study, Jacob et al. [12] investigated the different negative effects of colloids on vascular permeability. The authors perfused an isolated heart (guinea pig) with 5 % albumin, 6 % HES 130/0.4 and saline 0.9 % and observed the fluid extravasation before and after 20 minutes of ischemia. This study demonstrated a significantly lower extravasation in the albumin and HES group compared with the group that was given saline 0.9 %. After ischemia, a transient increase in vascular permeability resulted from use of HES and saline 0.9 %. In the albumin group, there was no increase in vascular leakage. This effect was independent of the intravascular colloid osmotic pressure. The authors concluded that albumin interacts with the endothelial glycocalyx, which seems to be the cause of the protective effects of albumin on the vascular barrier [12]. In an investigation on isolated heart perfusion, Jacob et al. [13] examined the effects of albumin on endothelial integrity and myocardial function after 4 h of ischemia. The authors compared Bretschneider solution with and without addition of albumin. After reperfusion, intracoronary adhesion of polymorphonuclear granulocytes, edema formation, left and right heart performance of pressure-to-volume work, and glycocalyx formation were assessed. The intracoronary adhesion of leukocytes was doubled in the Bretschneider solution group, whereas it remained at basal values after albumin addition. Addition of albumin also decreased edema and led to significantly better right ventricular function. Glycocalyx shedding was significantly lower in the albumin group than in the group without albumin addition. The authors concluded that addition of albumin improves endothelial integrity as well as heart function after 4-h ischemia, owing to protection of the glycocalyx [13]. The effects of HES 200/0.6 and HES 130/0.4 on renal function after transplant were compared in a retrospective investigation by Blasco et al. in brain-dead donors [14]. Thirty-two donors were included in every group. The appearance of delayed organ function and postsurgical creatinine was documented. Delayed organ function was found in the group treated with HES 130/0.4 compared with patients treated with HES 200/0.6. The creatinine levels after 1 month amounted to 133 μmol/L in the HES 130/0.4 group and 172 μmol/L in the HES 200/0.6 group (p = 0.005). After 1 year, increased creatinine was found in the HES 200/0.6 group compared with the HES 130/0.4 group (147 μmol/L versus 128 μmol/L; p = 0.05). The authors concluded that the use of modern third-generation HES preparations are associated with better postsurgical renal function [14]. Van der Linden et al. compared HES 130/0.4 with modified 3 % gelatin solution in 132 cardiac surgery patients with regard to perioperative blood loss and transfusion need. Both groups also received a cumulative dose of 48.9 ml/kg of colloids. Blood loss, transfusion need, laboratory parameters and hemodynamics were comparable in both groups. The authors concluded that it is safe to use 50 ml/kg of HES 130/0.4 [15]. In a systematic review, Hartog and colleagues [16] identified 56 randomized controlled trials on the use of HES 130/04 in an elective surgical setting. These studies were small-sized and of short duration. 
The main goal was to assess whether published studies on HES 130/0.4 resuscitation are sufficiently well designed to make conclusions about the safety of this compound. The 56 studies were small, heterogeneous, and involved different control fluids and different clinical conditions. The authors concluded that the results of these studies could not be pooled and that the studies did not provide convincing evidence that third-generation HES 130/0.4 is safe in surgical, emergency, or ICU patients, despite publication of numerous clinical studies [16]. Recent data have associated the use of HES products with an increased risk of serious adverse events (SAEs) when used in certain patient populations. After a review of the available evidence [17], on 14 June 2013, the Pharmacovigilance Risk Assessment Committee of the European Medicines Agency concluded that the benefits of HES solutions no longer outweighed their risks and recommended that the marketing authorizations for these medicines be withdrawn [18]. The European Union regulatory agency then decided to restrict the indication for HES solutions. HES is now indicated only as second-line treatment after crystalloids when there is acute bleeding. The use of HES solutions is contraindicated in critically ill patients, including those with sepsis, burns, transplant and cerebral bleeding, and renal impairment and/or hepatic dysfunction. On 24 June 2013, the U.S. Food and Drug Administration recommended that HES products not be used in critically ill patients or in those with preexisting renal dysfunction, but it also did not withdraw them completely. Despite the fact that there are several small clinical trials in which HES was compared with another fluid for volume replacement in various clinical settings, there is a general lack of evidence on the relative safety and efficacy of HES versus HA for volume replacement in the perioperative period. Previously conducted studies of surgical patients in which different HES products were compared included too few patients for proper evaluation of important clinical outcomes, such as renal function. In the present study, we will determine, in a patient population undergoing a major surgical intervention, if perioperative fluid replacement with HA has a long-term advantage compared with a third-generation HES (HES 130/0.4) on the progression to renal dysfunction until 3 months after surgery.
Hydroxyethyl starch 6 % 130/0.4 (Volulyte)
Blood and protein losses that occur during surgical procedures can be compensated by the patients up to a certain point. The current infusion practice is to provide a combination of crystalloid and colloid solutions. The objective is to reach hemodynamic targets and maintain diuresis so as to ensure sufficient organ perfusion. Guided by the hemoglobin values, blood transfusions are also used. Protein and blood losses are currently substituted, at least in Central Europe, with HES. The common modern preparation is 6 % HES with a MW of 130 kDa and a substitution degree of 0.4 (Volulyte; Fresenius Kabi, Bad Homburg, Germany). This solution contains, in comparison with the serum value, higher sodium and chloride amounts (154 mmol/L in each case). According to the manufacturer, the maximum dose to be administered is 50 ml of hydroxyethyl starch/kg (i.e., 3500 ml for a patient of 70-kg body weight). This dose is based on experimental investigations in rats in which the infusion of 9 g/kg HES showed no toxic effects.
Information concerning the maximum dose to be administered in humans is absent. Up to now, HES preparations of the modern generation were examined only in specified patient groups. No publication of a systematic investigation with information about maximum administrable dose exists in the literature. Therefore, it is unclear whether organ-toxic or coagulation-restraining effects, as proved with older HES preparations, also occur with use of the modern products. Human albumin 5 % (Humanalbin) HA is a natural component of blood plasma and is responsible for colloid osmotic pressure. It has a MW of 66 kDa and acts as a transport protein for water-indissoluble substances in the blood. It binds cations as well as anions and contributes on account of these ampholytic qualities to the buffer capacity. In addition, it is an important component of the endothelial glycocalyx, which forms the so-called endothelial surface layer, together with the endothelial cell line. HA as a colloidal volume substitute is a content of various clinical and experimental investigations. Still, there is controversial international discussion regarding this substance as infusion solution. The SAFE study investigators examined this aspect in the most recent study. Indeed, those authors compared HA with a crystalloid solution (saline 0.9 %). The safety of this substance could be proved in the whole group, even though a subgroup showed conflicting results [9]. The HA solution used in this investigation is a 5 % infusion solution (Humanalbin; CSL Behring, Marburg, Germany) with at least 96 % HA. The substance is licensed for volume replacement therapy. Dosage and infusion rate are recommended to be adjusted to the patient's individual requirements. There is no upper dose limit. Trial rationale In the present trial, data concerning postoperative renal function are the main interest, in addition to blood loss, need for transfusions and coagulation substitutes, hospital stay, and complications and mortality. Patients undergoing cystectomy will be included. These patients are considered high-risk patients for the development of progressive renal dysfunction postoperatively based on their underlying disease and the severe surgical intervention. Because of its higher accuracy, cystatin C was chosen for calculation of glomerular filtration rate (GFR) [19, 20]. Both colloid solutions, HES and HA, are used in routine clinical perioperative settings. Their comparative safety and long-term effects on renal function in perioperative patients has not been studied in well-controlled and adequately powered randomized clinical trials. It is expected that the findings of this trial will increase knowledge about fluid management in surgical patients who are at risk for developing renal dysfunction postoperatively. Trial objectives and endpoints Primary objective Our primary objective is to compare the effects on renal function of HA 5 % with those of HES 130/0.4 when administered for perioperative volume replacement in patients undergoing cystectomy, with the aim of demonstrating the superiority of HA over HES. The primary endpoint is the cystatin C ratio, calculated as the ratio of the cystatin value at day 90 after surgery relative to the preoperative value. This ratio corresponds to the calculated GFR ratio (the value at day 90 relative to that before surgery). 
The calculation of the GFR is carried out with the help of cystatin C, based on the following formula:
$$ \mathrm{GFR}\ (\mathrm{ml/min})_{\mathrm{calculated}} = 74.835 \times \mathrm{cystatin\ C}\ (\mathrm{mg/L})^{-1.333} $$
Secondary objectives
Secondary objectives of the clinical investigation are the evaluation of the influence of HA and HES on further laboratory chemical and clinical parameters, ICU and hospital length of stay and acute kidney injury as defined by the RIFLE criteria (risk of renal dysfunction, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage kidney disease). Also, we are interested in the presence of pruritus, evaluated by conducting a standardized interview.
Secondary endpoints
Incidence of acute kidney injury as defined by RIFLE criteria (in hospital and at midterm [3 months]) (see Appendix: Table 4)
Relative change of calculated GFR (by cystatin C) up to the third postoperative day
Glycocalyx shedding (syndecan 1, hyaluronan) at days 0, 1 and 3
Glomerular damage as measured by neutrophil gelatinase-associated lipocalin at days 0, 1, 3 and 90
Length of ICU and hospital stay
Necessity and duration of renal replacement therapy
Presence of pruritus at day 90
Further objectives and variables
We also want to investigate the effect of the fluid therapy on thrombocyte function and coagulation as measured using the Multiple Platelet Function Analyzer (Multiplate; Roche Diagnostics, Mannheim, Germany), Platelet Function Analyzer (PFA 100; Siemens Healthcare, Erlangen, Germany) and rotational thrombelastometry (ROTEM; Tem International GmbH, Munich, Germany). We are also interested in quality of life after the operation, evaluated by standardized interviews. Furthermore, we will collect the following data:
Intraoperative blood loss, urine output and fluid amount
The laboratory chemical course of creatinine and urea, starting before surgery up to the third postoperative day and at 90 days (visit 5)
The laboratory chemical course of serum albumin, starting presurgically up to the third postoperative day
Dementia screening (Mini Mental State Examination) on the day of screening
Delirium screening (Nursing Delirium Screening Scale) postoperatively in the recovery room and at days 1 and 3
Life quality assessment preoperatively and at day 90 (activities of daily living, instrumental activities of daily living)
Effect on thrombocytes and coagulation measured by ROTEM, PFA 100 and Multiplate at day 0
Incidence of adverse events and SAEs
In this single-center, open-label, randomized, comparative trial, we will investigate two parallel patient groups, comparing HA (test drug) versus HES (comparator). Ethical approval of this trial was obtained from the ethics committee of the Ludwig Maximilians University of Munich (reference number 311-11). The confirmatory statistical analysis is based on a leading surrogate parameter of renal function, with the aim of establishing a recommendation for therapy optimization. Because both investigational medicinal products (IMPs) used in this trial are licensed for volume replacement in the perioperative setting, it is a phase IV trial. However, the clinical aim corresponds to that of a confirmatory phase IIb or III trial. Randomization will be performed by stratifying participants by type of surgical procedure (ileal conduit or neobladder) to balance allocation to treatment groups with respect to this risk factor for postoperative renal dysfunction.
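To make the primary-endpoint calculation described above concrete, the short sketch below applies the cystatin C based GFR formula and forms the day-90/baseline cystatin C ratio; the example values and function names are illustrative assumptions, not part of the trial protocol.

```python
def gfr_from_cystatin_c(cystatin_c_mg_per_l):
    """Calculated GFR (ml/min) from serum cystatin C (mg/L), per the formula above."""
    return 74.835 * cystatin_c_mg_per_l ** (-1.333)

def cystatin_c_ratio(baseline_mg_per_l, day90_mg_per_l):
    """Primary endpoint: cystatin C value at day 90 relative to the preoperative value."""
    return day90_mg_per_l / baseline_mg_per_l

# Hypothetical patient: baseline 0.9 mg/L, day 90 1.1 mg/L
baseline, day90 = 0.9, 1.1
print(round(gfr_from_cystatin_c(baseline), 1))   # roughly 86 ml/min, as noted later in the text
print(round(gfr_from_cystatin_c(day90), 1))
print(round(cystatin_c_ratio(baseline, day90), 2))
```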
After randomization, each patient will receive the allocated volume replacement treatment according to the current prescribing information up to the seventh operative day. The patient should be kept on the allocated infusion solution during the first 90 days after the operation if further volume replacement therapy becomes necessary, if not contraindicated, and if manageable. A detailed schedule of study activities and the treatment algorithm to be used for fluid management in both treatment arms are provided in the Appendix: Tables 1, 2 and 3.
Anesthetic management
All patients will receive thoracic epidural anesthesia in combination with general anesthesia before surgery. Patients with contraindications to neuraxial procedures (e.g., dual platelet inhibition) will receive general anesthesia and postoperative patient-controlled analgesia with piritramide. The anesthetic technique used will be standardized. Anesthesia will be induced with propofol (2 mg/kg), sufentanil (0.4 μg/kg) and rocuronium (0.6 mg/kg) and maintained with propofol and remifentanil or sevoflurane in patients with certain conditions (e.g., cardiopulmonary diseases). After surgery, the epidural will be maintained for at least 3 days and will be combined with acetaminophen, a non-steroidal anti-inflammatory drug and an opioid (piritramide) if needed. Intraoperative hemodynamic monitoring will be performed with a Vigileo monitor with FloTrac sensor (Edwards Lifesciences, Irvine, CA, USA). After randomization, patients will receive either HA or HES from one of the investigators according to an algorithm (see Appendix). American Society of Anesthesiologists Physical Status (ASA) classification I and II patients without cardiopulmonary diseases or cerebral insufficiency will be transferred directly from the recovery room to the general ward. ASA classification III and IV patients or those with perioperative complications will be transferred from the operating room to the ICU for at least 1 day. All patients will receive early postoperative enteral feeding without a nasogastric tube as well as early mobilization. All patients will be visited daily by in-hospital postoperative pain management service staff.
Treatment groups
After screening and randomization, every patient is assigned to one of the two therapy arms:
VoluCyst study arm (patients with cystectomy treated with HES): In this arm, patients receive 6 % HES 130/0.4 (Volulyte) perioperatively and up to the third postoperative day as the only colloid according to a treatment algorithm (see Appendix). The maximum dose according to the prescribing information is 50 ml/kg/day (e.g., 3500 ml for a patient with a body weight of 70 kg). Patients who require additional volume replacement therapy from day 3 until discharge will be treated primarily with HES, unless a contraindication to HES has emerged.
AlbuCyst study arm (patients with cystectomy treated with HA): In this arm, patients receive 5 % HA (Humanalbin) perioperatively and up to the third postoperative day as the only colloid according to a treatment algorithm (see Appendix). Patients who require additional volume replacement therapy from day 3 until discharge will be treated with HA.
Trial population and selection criteria
Patients will be screened for eligibility by using the surgery schedule. Patients who seem to be eligible for study participation based on their diagnosis are screened according to the inclusion and exclusion criteria. Patients who fulfill these criteria are informed about the present study.
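As an illustration of how the goal-directed algorithm in the Appendix drives colloid administration in the two arms, the sketch below encodes the hemodynamic targets described there (SVV < 12 %, CI > 2.5 L/min/m2) together with the arm-specific colloid; the bolus volume, function names and simplified escalation logic are our assumptions, not the protocol itself.

```python
def colloid_for_arm(arm: str) -> str:
    """Return the study colloid for the randomised arm."""
    return {"AlbuCyst": "human albumin 5 %", "VoluCyst": "HES 130/0.4 6 %"}[arm]

def fluid_step(svv_percent: float, cardiac_index: float, arm: str) -> str:
    """One simplified decision step of goal-directed volume replacement.

    Targets follow the appendix algorithm (SVV < 12 %, CI > 2.5 L/min/m2);
    the bolus volume and the escalation to vasopressors are illustrative only.
    """
    if svv_percent >= 12 or cardiac_index <= 2.5:
        return f"give 250 ml bolus of {colloid_for_arm(arm)} and reassess"
    return "targets met: continue maintenance crystalloid only"

print(fluid_step(svv_percent=15, cardiac_index=2.1, arm="VoluCyst"))
print(fluid_step(svv_percent=9, cardiac_index=3.0, arm="AlbuCyst"))
```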
Subjects must meet all of the following inclusion criteria to be eligible for enrolment into the trial: Patients (male and female) aged 18–85 years Patients undergoing cystectomy with urinary diversion using an ileal conduit or neobladder procedure Ability to follow study instructions and likely to attend and complete all required visits Written informed consent provided Subjects presenting with any of the following exclusion criteria may not be included in the trial. General exclusion criteria Unfavorable prognosis (e.g., palliative surgical care in cases of obstruction of the efferent urinary tract) Evidence of metastatic disease Bleeding tendency or platelet dysfunction Preoperative creatinine clearance <30 ml/min Preoperative chemotherapy with nephrotoxic drugs (e.g., cisplatin) Application of >1000 ml of colloid solution within the 24 h before surgical intervention Physical or acute medical condition, psychiatric condition or laboratory abnormality that, based on the investigator's decision, may put the patient at risk, may confound the trial results or may interfere with the patient's participation in this clinical trial History of hypersensitivity to the investigational drug or to drugs with a similar chemical structure History of uncontrolled chronic disease or a concurrent clinically significant illness or medical condition that, in the investigator's opinion, would contraindicate study participation or compliance with protocol-mandated procedures Known or persistent abuse of medications, drugs or alcohol Simultaneous participation in another clinical trial or participation in any clinical trial involving an IMP within 30 days before provision of written informed consent for this trial Special restrictions for women Current or planned pregnancy or nursing women Women of childbearing potential who are not using and not willing to use medically reliable methods of contraception for the entire study duration (such as oral, injectable or implantable contraceptives or intrauterine contraceptive devices), unless they are surgically sterilized and/or hysterectomized or there are any other criteria considered sufficiently reliable by the investigator in individual cases Subject information and recruitment If a patient appears to be eligible for the study, either the investigator responsible for that site or delegated medical doctors will provide the patient a full verbal explanation of the trial and the Patient Information Sheet so that the patient can consider participating. This will include detailed information about the rationale, design and personal implications of the study. After information is provided to patients, they will have sufficient time to consider participation before they are asked whether they would be willing to take part in the trial. It is imperative that written consent be obtained before any trial-specific procedures commence. The investigator will then record the details of these trial patients in trial-specific lists. Randomization and stratification This trial is designed as an open-label trial. Randomization will be performed at the Institute for Medical Informatics, Biometry and Epidemiology (IBE) of the University of Munich, and the treating physicians will be informed about the treatment arm to which a patient is assigned. Randomization to both treatment arms will be performed in a ratio of 1:1. The randomization technique is based on randomized, balanced blocks with random block length. 
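A minimal sketch of the 1:1 block randomisation with random block length described above, stratified by type of surgical procedure, is shown below; this is not the Randoulette implementation, and the block sizes, seed and labels are illustrative choices.

```python
import random

def stratified_block_randomisation(n_per_stratum, strata=("ileal conduit", "neobladder"),
                                   arms=("AlbuCyst", "VoluCyst"), block_sizes=(2, 4, 6), seed=1):
    """Allocate patients 1:1 within each stratum using balanced blocks of random length."""
    rng = random.Random(seed)
    allocation = {}
    for stratum in strata:
        assignments = []
        while len(assignments) < n_per_stratum:
            size = rng.choice(block_sizes)      # random block length (even, so the block is balanced)
            block = list(arms) * (size // 2)    # equal numbers of both arms within the block
            rng.shuffle(block)
            assignments.extend(block)
        allocation[stratum] = assignments[:n_per_stratum]
    return allocation

print(stratified_block_randomisation(n_per_stratum=8))
```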
The procedure considers stratification by type of surgical procedure (ileal conduit or neobladder). The IBE will provide an internet-based randomization tool (Randoulette), which chooses the colloid treatment group for a new patient fulfilling the eligibility criteria and having signed the informed consent. Randoulette will register the patient by the patient's pseudonym or screening number, sex, year of birth and stratum (ileal conduit or neobladder) before the allocated colloid treatment group is provided.
Statistical planning and analysis
The primary efficacy analysis statistically tests superiority of HA versus HES at an α significance level of 5 % (two-sided, simultaneously testing both sides at a level of 2.5 %) with respect to the primary endpoint. The primary endpoint is expected to be non-normally distributed; even after logarithm transformation, it may not be normally distributed. Thus, the corresponding statistical null hypothesis is that the distribution functions of the endpoint in the HA group and the HES group are equal, and it will be tested non-parametrically with the Mann-Whitney-Wilcoxon test to detect directed differences of the distributions. The one-sided p value will be calculated, if possible, using Fisher's exact test. The primary endpoint adjusts in a specific way for the preoperative cystatin C values. At the planning stage, the Mann-Whitney-Wilcoxon test will be applied in the version stratified by type of surgical procedure (ileal conduit or neobladder). If adjustments of stratification are deemed necessary during the course of the trial, the trial protocol will be amended. A fixed-sample design is planned. Changes of trial design, if necessary or advisable, resulting from an interim analysis without significance testing (no α spent) must maintain the prespecified significance level. In that case, Brownian motion will be used for modeling the primary test statistic based on the accumulating data as a stochastic process (see next section). The distributions will be described at all time points by median, minimum, maximum and quartiles separately for both the HES and HA groups. The primary statistical analysis will be based on the intention-to-treat (ITT) principle. Analysis of all secondary endpoints and of patient characteristics will be descriptive. Descriptive comparisons will be conducted with the t test or, in the case of deviations from the normal distribution, with the Mann-Whitney-Wilcoxon test, or with Fisher's exact test in the case of a binary outcome. Safety data will be analyzed descriptively in the two groups at least every 12 months.
Interim analyses regarding confirmatory analysis
An interim analysis will be performed approximately at midcourse with the objective of checking the assumptions of the initial sample size calculation. There is no α spending prespecified at the planned interim analysis. In cases where the interim analysis results in substantial differences from the assumptions at the planning stage, the assumptions will be revised and a sample size recalculation will be performed. The sample size recalculation will be based on the conditional rejection probability approach (see next section). If a sample size modification is deemed advisable, the study protocol will be amended. Calculations for sequential analyses regarding the primary confirmatory analysis will be done in the Brownian motion model.
Values of a Brownian motion are approximated on the basis of the accumulating data by applying the inverse normal transformation to the (exact) one-sided p values at the interim and final analyses, followed by multiplication with the square root of the information time. The information time is approximated by
$$ \frac{n_{\mathrm{IC},\mathrm{HA},I}\cdot n_{\mathrm{IC},\mathrm{HES},I}}{n_{\mathrm{IC},\mathrm{HA},I}+n_{\mathrm{IC},\mathrm{HES},I}+1}+\frac{n_{\mathrm{NB},\mathrm{HA},I}\cdot n_{\mathrm{NB},\mathrm{HES},I}}{n_{\mathrm{NB},\mathrm{HA},I}+n_{\mathrm{NB},\mathrm{HES},I}+1} $$
where $n_{S,T,I}$ denotes the number of patients with an assessed primary endpoint in stratum S (S = IC for ileal conduit or S = NB for neobladder) and treatment group T (T = HA or T = HES) at an arbitrary time during the course of the trial. Alternatively, a proportional information time scale may be used (e.g., by multiplication by a factor of 4).
Modifications of the statistical design for confirmatory analysis
To keep the type I error level in the case of a design modification at any time during the course of the trial, the conditional rejection probability approach of Müller and Schäfer will be applied [21, 22]. Thereby, as described in the preceding section, the calculations will be based on the Brownian motion approximation. In the case of a modification of the statistical design (e.g., a sample size recalculation), the trial protocol will be amended.
Power considerations and sample size calculation
Three parameters will influence the sample size of the study, in which we will use the Mann-Whitney-Wilcoxon test for the statistical decision based on a fixed-sample design: the level of significance, the power of the two-sided test and the probability that an observation XAlbuCyst in the HA group is less than an observation XVoluCyst in the HES group. For sample size planning, no stratification and approximately normally distributed data for log-transformed cystatin C values and calculated log-transformed GFR values are assumed. The location parameter and standard deviation of cystatin C at baseline are assumed to be 0.9 mg/L and 0.2 mg/L, respectively, based on the publication by Evangelopoulos and colleagues [23]. Thereby we consider the older patient groups and greater variability, because subgroups are pooled and the study patients do not represent a healthy population. For the sample size calculation, the value of 0.2222 for the standard deviation of the difference at day 90 minus baseline of the log-transformed cystatin C values is specified in both treatment groups. There are two arguments for this specification. First, the value is suggested to be conservative in the four groups (two treatments times two measurement time points) considered in the statistical analyses. Second, adjusting for baseline values by using cystatin C values at day 90 relative to the same patients' value at baseline is suggested to reduce rather than to increase the standard deviation. An increase of the GFR by the factor 1.2 due to HA compared with HES would be clinically meaningful and worthwhile to be detected as a statistically significant difference at a two-sided type I error level of α = 5 % with a statistical power of 1 − β = 80 %.
For the interpretation of the increase of the GFR by the factor 1.2 consider, for example, a patient with a Cystatin C value of 0.9 mg/L. This value results in a calculated GFR at baseline between 86 and 87 ml/min. Then, the factor 1.2 would mean an increase of GFR at day 90 from 60 to 72 ml/min, from 65 to 78 ml/min, or from 70 to 84 ml/min. The factor of 1.2 with respect to GFR transfers to the detectable difference of 0.1368 for the log-transformed (natural logarithm) Cystatin C values when using the conservative formula where the exponent is −1.333. The detectable difference and the specified standard deviation correspond to the probability of $$ \mathrm{P}\left({\mathrm{X}}_{\mathrm{AlbuCyst}}<{\mathrm{X}}_{\mathrm{VoluCyst}}\right)=0.6683. $$ With a sample size of 47 in each group, the non-stratified Mann-Whitney-Wilcoxon test at a 0.05 two-sided significance level will have 80 % power to detect a probability of 0.6683 that an observation XAlbuCyst is less than an observation XVoluCyst. This means that n = 94 assessed patients are required in total to achieve the desired power. On the basis of our experience with patient compliance in previous studies and routine treatment, we expect a dropout rate of about 10 %. Thus, adjusting for slight random imbalances in allocation to the treatment groups, a total of 105 patients (50–55 in each treatment group) have to be enrolled. It has to be taken into consideration that about 50 % of patients fulfilling the inclusion criteria for this trial might refuse to give their informed consent to participate; we therefore expect to screen a total of about 210 patients for eligibility. After the primary ITT analysis, sensitivity analyses will follow per protocol or according to as-treated principles. In addition to the primary Mann-Whitney-Wilcoxon analysis, the Mann-Whitney-Wilcoxon test will be applied to the cystatin C values at day 90, and, assuming approximately normal distributions after log transformation (natural logarithm), explorative regression analyses will be performed modeling the log-transformed (natural logarithm) cystatin C values depending on treatment group, log-transformed (natural logarithm) baseline cystatin C values and other potentially prognostic factors. Ethics and Good Clinical Practice The trial will be conducted in accordance with the International Conference on Harmonization guidance regarding Good Clinical Practice, the relevant national regulations and the Declaration of Helsinki. The study protocol and consent forms were approved by the institutional review board of the Ludwig Maximilians University of Munich (reference number 311-11). Despite the fact that there are several smaller clinical trials comparing HES to another fluid for volume replacement in various clinical settings, there is a general lack of evidence on the relative safety and effects of HES versus HA for volume replacement in a perioperative setting. Previously conducted studies of surgical patients in which researchers compared different HES products included too few patients for proper evaluation of clinically important outcomes such as renal function. The present study will determine, in a high-risk patient population undergoing major surgical intervention, whether perioperative fluid replacement with HA has a long-term advantage over a third-generation HES (HES 130/0.4) on the progression to renal dysfunction until 3 months after surgery. The study was initiated as planned in May 2012. The study is expected to be completed in February 2016. 
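The planning assumptions above can be checked by simulation; the sketch below (it requires NumPy and SciPy, which are not mentioned in the protocol) draws log-transformed cystatin C ratios for two groups of 47 with standard deviation 0.2222 and a location shift of 0.1368, and estimates the power of a two-sided Mann-Whitney-Wilcoxon test at α = 0.05. It only illustrates the sample size reasoning and is not the confirmatory analysis.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_per_group, sd_log, shift_log = 47, 0.2222, 0.1368  # planning assumptions from the text
n_sim, alpha = 2000, 0.05

rejections = 0
for _ in range(n_sim):
    # log cystatin C ratios; the HES group is shifted upward by 0.1368 relative to HA
    log_ha = rng.normal(0.0, sd_log, n_per_group)
    log_hes = rng.normal(shift_log, sd_log, n_per_group)
    p_value = mannwhitneyu(log_ha, log_hes, alternative="two-sided").pvalue
    rejections += p_value < alpha

print(f"Estimated power: {rejections / n_sim:.2f}")  # should be close to the planned 80 %
```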
Tobias Kammerer and Florian Klug contributed equally to this work.
Abbreviations
ADL: Activities of daily living
AE: Adverse event
ARF: Acute renal failure
ASA: American Society of Anesthesiologists Physical Status classification system
CI: Cardiac index
ECG: Electrocardiographic
ESKD: End-stage kidney disease
GFR: Glomerular filtration rate
HA: Human albumin
Hb: Hemoglobin
HES: Hydroxyethyl starch
IADL: Instrumental activities of daily living
IBE: Institute for Medical Informatics, Biometry and Epidemiology
IMP: Investigational medicinal product
ITT: Intention to treat
IV: Intravenous
NGAL: Neutrophil gelatinase-associated lipocalin
PFA 100: Platelet Function Analyzer 100
RIFLE: Risk of renal dysfunction, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage kidney disease
ROTEM: Rotational thrombelastometry
SAE: Serious adverse event
SAFE: Saline versus Albumin Fluid Evaluation study
SVV: Stroke volume variation
UO: Urine output
The authors thank the Clinical Study Center, Hospital of the University of Munich, for providing permission for and support of the study. Special thanks are given to Dr. Marion Seybold for support of the planning and conduct of the clinical trial and to Gabriele Gröger and her team for analysis of laboratory parameters. Thanks are also extended to the clinic staff for assisting in subject recruitment and data collection. We also thank all patients who have given their approval to be a part of this trial. The present trial is an investigator-initiated trial. The trial is financially supported by CSL Behring. The sponsors had no role in the study design, data collection, data analysis, data interpretation or writing of the report.
Schedule of activities (assessment items; visits include the screening day (day −1) and day 0 (day of surgery)):
Inclusion and exclusion criteria
Medical history (primary diagnosis)
Pregnancy test for women of childbearing potential
Mini Mental State Examination
Life Quality Assessment (ADL, IADL)
Concomitant medications (diuretics)
Cystatin C, GFR, serum creatinine
ROTEM, Multiplate, PFA 100
Randomization
NGAL, syndecan 1, hyaluronan
Study drug
Blood products
IV fluid amount
Blood losses and urine output
Nursing Delirium Screening Scale
AEs and SAEs
Pruritus assessment
ADL activities of daily living, AE adverse event, IADL instrumental activities of daily living, IV intravenous, NGAL neutrophil gelatinase-associated lipocalin, PFA 100 Platelet Function Analyzer 100, ROTEM rotational thrombelastometry, SAE serious adverse event
Algorithm of infusions and transfusion in the operating room and intensive care unit
ASA I and II patients without cardiac diseases or cerebral insufficiency:
Stroke volume variation (SVV) <12 %
Cardiac index (CI) >2.5 L/min/m2
Mean arterial pressure >60 mmHg
ASA III and IV patients or patients with cardiac diseases or cerebral insufficiency:
SVV <12 %
CI >2.5 L/min/m2
Central venous oxygen saturation >70 % or mixed venous oxygen saturation >65 %
To reach the desired parameters:
Start with the infusion protocol (see below)
If not successful within 15 minutes: start with norepinephrine as first-choice vasopressor
Infusion protocol
Replacement of urine output with Ringer's acetate solution in a 1:1 ratio
Additionally, 500 ml of crystalloids for insensible sweating
Replacement of blood and protein losses, dependent on treatment assignment, with either HA 5 % or HES 6 % in a 1:1 ratio up to a transfusion trigger point or a maximum of 50 ml/kg/day
Additionally, up to 1500 ml of colloids for the protein loss into the third compartment
Transfusion protocol (in support of [24]), by Hb level and clinical condition:
Hb <6 g/dl
Hb >6–8 g/dl
Adequate compensation, no risk factors
Limited compensation or risk factors (cardiac diseases, cerebrovascular insufficiency)
Signs of anemic hypoxia (tachycardia, hypotension, lactic acidosis, ECG change)
Hb 8–10 g/dl
ADL activities of daily living, AE adverse event, ECG electrocardiographic, Hb hemoglobin, IADL instrumental activities of daily living, SAE serious adverse event
RIFLE criteria as defined by Bellomo et al. [25]
Risk: increased creatinine × 1.5 or GFR decrease >25 %; UO <0.5 ml/kg/h for 6 h
Injury: increased creatinine × 2 or GFR decrease >50 %; UO <0.5 ml/kg/h for 12 h
Failure: increased creatinine × 3 or GFR decrease >75 %; UO <0.3 ml/kg/h for 24 h or anuria for 12 h
Loss: persistent ARF = complete loss of kidney function >4 weeks
ESKD: end-stage kidney disease (>3 months)
ARF acute renal failure, ESKD end-stage kidney disease, GFR glomerular filtration rate, RIFLE risk of renal dysfunction, injury to the kidney, failure of kidney function, loss of kidney function, and end-stage kidney disease, UO urine output
TK and MR have received speaker fees from CSL Behring. The other authors declare that they have no competing interests.
TK, MR and FK took the main responsibility for drafting the study protocol. BZ is the main leader of the trial. MR is the sponsor's representative of the trial. TK, MR, MS and SH are responsible for the implementation of the study. AK is the coinvestigator. HHM has the main responsibility for data analysis. VD is trial coordinator. All authors contributed to the design of the trial and approved the final manuscript.
Department of Anesthesiology, Hospital of the University of Munich, LMU Munich, Marchioninistrasse 15, 81377 Munich, Germany
Department of Anesthesiology, Surgical Clinic Munich-Bogenhausen, Denninger Strasse 44, 81679 Munich, Germany
Department of Urology, Hospital of the University of Munich, LMU Munich, Marchioninistrasse 15, 81377 Munich, Germany
Institute of Medical Biometry and Epidemiology, Philipps University, Bunsenstrasse 3, 35037 Marburg, Germany
1. Jacob M, Chappell D, Rehm M. Clinical update: perioperative fluid management. Lancet. 2007;369:1984–6.
2. Chappell D, Jacob M, Hofmann-Kiefer K, Conzen P, Rehm M. A rational approach to perioperative fluid management. Anesthesiology. 2008;109:723–40.
3. Hüter L, Simon TP, Weinmann L, Schuerholz T, Reinhart K, Wolf G, et al. Hydroxyethyl starch impairs renal function and induces interstitial proliferation, macrophage infiltration and tubular damage in an isolated renal perfusion model. Crit Care. 2009;13:R23. doi:10.1186/cc7726.
4. Brunkhorst FM, Engel C, Bloos F, Meier-Hellmann A, Ragaller M, Weiler N, et al. Intensive insulin therapy and pentastarch resuscitation in severe sepsis. N Engl J Med. 2008;358:125–39.
5. Schabinski F, Oishi J, Tuche F, Luy A, Sakr Y, Bredle D, et al. Effects of a predominantly hydroxyethyl starch (HES)-based and a predominantly non HES-based fluid therapy on renal function in surgical ICU patients. Intensive Care Med. 2009;35(9):1539–47. doi:10.1007/s00134-009-1509-1.
6. The FLUIDS Study Investigators for the Scandinavian Critical Care Trials Group. Preferences for colloid use in Scandinavian intensive care units. Acta Anaesthesiol Scand. 2008;52:750–8.
7. The SAFE Study Investigators. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med.
2004;350:2247–56.
8. SAFE Study Investigators. Effect of baseline serum albumin concentration on outcome of resuscitation with albumin or saline in patients in intensive care units: analysis of data from the Saline versus Albumin Fluid Evaluation (SAFE) study. BMJ. 2006;333:1044.
9. SAFE Study Investigators. Saline or albumin for fluid resuscitation in patients with traumatic brain injury. N Engl J Med. 2007;357:874–84.
10. Jacob M, Chappell D. Saline or albumin for fluid resuscitation in traumatic brain injury. N Engl J Med. 2007;357(25):2634–5.
11. Wilkes MM, Navickis RJ. Patient survival after human albumin administration: a meta-analysis of randomized, controlled trials. Ann Intern Med. 2001;135:149–64.
12. Jacob M, Bruegger D, Rehm M, Welsch U, Conzen P, Becker BF. Contrasting effects of colloid and crystalloid resuscitation fluids on cardiac vascular permeability. Anesthesiology. 2006;104:1223–31.
13. Jacob M, Paul O, Mehringer L, Chappell D, Rehm M, Welsch U, et al. Albumin augmentation improves condition of guinea pig hearts after 4 hr of cold ischemia. Transplantation. 2009;87:956–65.
14. Blasco V, Leone M, Antonini F, Geissler A, Albanèse J, Martin C. Comparison of the novel hydroxyethyl starch 130/0.4 and hydroxyethyl starch 200/0.6 in brain-dead donor resuscitation on renal function after transplantation. Br J Anaesth. 2008;100:504–8.
15. Van der Linden PJ, De Hert SG, Deraedt D, Cromheecke S, De Decker K, De Paep R, et al. Hydroxyethyl starch 130/0.4 versus modified fluid gelatin for volume expansion in cardiac surgery patients: the effects on perioperative bleeding and transfusion needs. Anesth Analg. 2005;101:629–34.
16. Hartog CS, Kohl M, Reinhart K. A systematic review of third-generation hydroxyethyl starch (HES 130/0.4) in resuscitation: safety not adequately addressed. Anesth Analg. 2011;112:635–45.
17. European Medicines Agency (EMA). Review of hydroxyethyl-starch-containing solutions for infusion started. EMA/757392/2012. London: EMA; 2012. http://www.ema.europa.eu/docs/en_GB/document_library/Referrals_document/Solutions_for_infusion_containing_hydroxyethyl_starch/Procedure_started/WC500135589.pdf. Accessed 22 July 2015.
18. European Medicines Agency (EMA). PRAC recommends suspending marketing authorisations for infusion solutions containing hydroxyethyl-starch. EMA/349341/2013. London: EMA; 2013. http://www.ema.europa.eu/docs/en_GB/document_library/Referrals_document/Solutions_for_infusion_containing_hydroxyethyl_starch/Recommendation_provided_by_Pharmacovigilance_Risk_Assessment_Committee/WC500144448.pdf. Accessed 22 July 2015.
19. Mussap M, Dalla Vestra M, Fioretto P, Saller A, Varagnolo M, Nosadini R, et al. Cystatin C is a more sensitive marker than creatinine for the estimation of GFR in type 2 diabetic patients. Kidney Int. 2002;61:1453–61.
20. Grubb A, Nyman U, Björk J, Lindström V, Rippe B, Sterner G, et al. Simple cystatin C–based prediction equations for glomerular filtration rate compared with the modification of diet in renal disease prediction equation for adults and the Schwartz and the Counahan–Barratt prediction equations for children. Clin Chem. 2005;51:1420–31.
21. Müller HH, Schäfer H.
Adaptive group sequential designs for clinical trials: combining the advantages of adaptive and of classical group sequential approaches. Biometrics. 2001;57(3):886–91.View ArticlePubMedGoogle Scholar Müller HH, Schäfer H. A general statistical principle for changing a design any time during the course of a trial. Stat Med. 2004;23(16):2497–508.View ArticlePubMedGoogle Scholar Evangelopoulos AA, Vallianou NG, Bountziouka VP, Giotopoulou AN, Bonou MS, Barbetseas J, et al. The impact of demographic characteristics and lifestyle in the distribution of cystatin C values in a healthy Greek adult population. Cardiol Res Pract. 2011;2011:163281.Google Scholar Executive Committee of the German Medical Association (eds). Cross-sectional guidelines for therapy with blood components and plasma derivatives. 4th ed. Berlin: German Medical Association; 2009. http://www.bundesaerztekammer.de/fileadmin/user_upload/downloads/Querschnittsleitlinie_Gesamtdokument-englisch_07032011.pdf. Accessed 22 July 2015.Google Scholar Bellomo R, Ronco C, Kellum JA, Mehta RL, Palevsky P. Acute Dialysis Quality Initiative workgroup. Acute renal failure – definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care. 2004;8:R204–12. doi:10.1186/cc2872.View ArticlePubMedPubMed CentralGoogle Scholar
CommonCrawl
Differential network analysis and protein-protein interaction study reveals active protein modules in glucocorticoid resistance for infant acute lymphoblastic leukemia
Zaynab Mousavian1,2, Abbas Nowzari-Dalini1, Yasir Rahmatallah3 & Ali Masoudi-Nejad2
Molecular Medicine volume 25, Article number: 36 (2019)
Acute lymphoblastic leukemia (ALL) is the most common type of cancer diagnosed in children, and glucocorticoids (GCs) form an essential component of the standard chemotherapy in most treatment regimens. The category of infant ALL patients carrying a translocation involving the mixed lineage leukemia (MLL) gene (gene KMT2A) is characterized by resistance to GCs and poor clinical outcome. Although some studies have examined GC-resistance in infant ALL patients, the understanding of this phenomenon remains limited, which impedes efforts to improve prognosis. This study integrates differential co-expression (DC) and protein-protein interaction (PPI) networks to find active protein modules associated with GC-resistance in MLL-rearranged infant ALL patients. A network was constructed by linking differentially co-expressed gene pairs between GC-resistant and GC-sensitive samples and later integrated with PPI networks by keeping the links that are also present in the PPI network. The resulting network was decomposed into two sub-networks, specific to each phenotype. Finally, both sub-networks were clustered into modules using weighted gene co-expression network analysis (WGCNA) and further analyzed with functional enrichment analysis. Through the integration of DC analysis and the PPI network, four protein modules were found active under the GC-resistant phenotype but not under the GC-sensitive one. Functional enrichment analysis revealed that these modules are related to proteasome, electron transport chain, tRNA-aminoacyl biosynthesis, and peroxisome signaling pathways. These findings are in accordance with previous findings related to GC-resistance in other hematological malignancies such as pediatric ALL. Differential co-expression analysis is a promising approach to incorporate the dynamic context of gene expression profiles into the well-documented protein interaction networks. The approach allows the detection of relevant protein modules that are highly enriched with DC gene pairs. Functional enrichment analysis of detected protein modules generates new biological hypotheses and may help in explaining the GC-resistance in MLL-rearranged infant ALL patients.
Acute lymphoblastic leukemia (ALL) is a malignant disease of the bone marrow characterized by the overproduction of immature white blood cells that accumulate and inhibit the production of normal cells. ALL is the most common type of leukemia in children (Gaynon and Carrel 1999), and major improvements in the treatment of childhood ALL have been achieved in recent years (Pui et al. 2004). However, the treatment outcome remains poor in infant (< 1 year of age) ALL patients due to frequent resistance to cytotoxic chemotherapy drugs, including glucocorticoids (GCs). This condition is associated with a genetic translocation involving the mixed lineage leukemia (MLL) gene (gene KMT2A) that is present in about 80% of infant ALL patients (Greaves 1996; Pieters et al. 2007). Glucocorticoids are used in ALL treatment for their cytotoxicity induction properties that lead to cellular apoptosis (Gaynon and Carrel 1999), and resistance to their effects is the main cause of treatment failure in MLL-rearranged infant ALL (Pieters et al. 1998).
Although some researchers have found biomarkers that mediate GC-resistance in MLL-rearranged infant ALL (Spijkers-Hagelstein et al. 2014a; Spijkers-Hagelstein et al. 2013; Spijkers-Hagelstein et al. 2012; Spijkers-Hagelstein et al. 2014b), knowledge regarding the mechanism underlying this phenomenon remains limited. The majority of gene expression studies adopted conventional gene-wise approaches that detect differential expression in each gene separately between two phenotypes. Motivated by the fact that gene differential co-expression (DC) analysis has emerged as an alternative approach to differential expression analysis (de la Fuente 2010), we recently used weighted gene co-expression network analysis to reveal a gene module associated with GC-resistance (Mousavian et al. 2016) in infant ALL patients. The detected module included genes with documented association to GC-resistance, confirming the hypothesis that network-based analysis complements the conventional gene-wise methods and provides further biological insights into GC-resistance in MLL-rearranged infant ALL. Instead of gene modules, some studies used DC analysis to find phenotype-specific protein modules (Zhang et al. 2012; Lin et al. 2010; Yoon et al. 2011; Chung et al. 2013). Prior to such approaches, protein-protein interaction (PPI) networks have been used to find disease-specific protein modules enriched with differentially expressed genes between two groups of samples (Ideker et al. 2002; Chuang et al. 2007; Dittrich et al. 2008; Nacu et al. 2007; Sohler et al. 2004). In this study, we propose the use of DC analysis to identify protein modules that are active in GC-resistant infant ALL patients but not in GC-sensitive patients. First, gene expression profiles are considered to identify DC gene pairs and construct a DC network between GC-resistant and GC-sensitive conditions. Next, the DC network is modified such that any links that are absent in the experimentally validated PPI network are removed from the DC network. To construct a dynamic protein network for each condition (GC-resistant and GC-sensitive), the resulting network is decomposed into two sub-networks depending on the sign of the difference in co-expression between the two conditions. Finally, each sub-network is clustered into modules. Examining these modules using functional enrichment analysis reveals which of them are highly enriched with gene ontology (GO) terms. Active modules in each condition are specified by extracting genes of modules which are highly enriched in the same category of GO terms and form a connected sub-graph in the corresponding module. Our results demonstrate that protein modules related to signaling pathways such as proteasome, electron transport chain, tRNA-aminoacyl biosynthesis and peroxisome are active under the GC-resistant condition in MLL-rearranged infant ALL patients.
Datasets and preprocessing steps
The infant acute lymphoblastic leukemia gene expression dataset was obtained from the gene expression omnibus (GEO) database under the series accession number GSE32962 (Spijkers-Hagelstein et al. 2012). This dataset consists of expression profiles of 43 untreated infant samples (bone marrow and/or peripheral blood samples) diagnosed with MLL-rearranged ALL and categorized into prednisolone sensitive (19 samples) and prednisolone resistant (24 samples) groups. All leukemic samples contained > 90% of leukemic blasts and contaminating non-leukemic cells were removed using immunomagnetic beads as described in (Kaspers et al.
1994). In vitro prednisolone sensitivity was assessed by 4-day cytotoxicity assays as described in (Pieters et al. 1990). Patient samples were characterized as in vitro sensitive or resistant to prednisolone based on the concentration of prednisolone lethal to 50% of the leukemic cells (LC50 value), such that LC50 < 0.1 μg/ml of prednisolone indicates prednisolone-sensitive and LC50 > 150 μg/ml of prednisolone indicates prednisolone-resistant (Spijkers-Hagelstein et al. 2012). Raw CEL files were downloaded using the GEOquery Bioconductor package (Davis and Meltzer 2007). Probe level data was mapped to gene level data using the Affy Bioconductor package (Gautier et al. 2004) and the hgu133plus2.db Bioconductor human genome annotation package. Intensity levels were normalized using the variance stabilizing normalization method as implemented in the VSN Bioconductor package (Huber et al. 2002). The normalized expression matrix consisted of ~ 19,000 rows and 43 columns, representing genes and samples respectively. The protein-protein interaction data was downloaded from the Human Integrated Protein-Protein Interaction rEference (HIPPIE) database (Schaefer et al. 2012). HIPPIE integrates the experimentally validated PPIs from different sources including BioGrid (Chatr-aryamontri et al. 2013), DIP (Salwinski et al. 2004), HPRD (Prasad et al. 2009), IntAct (Kerrien et al. 2011), MINT (Licata et al. 2012), BIND (Bader et al. 2003) and MIPS (Pagel et al. 2005). The current version includes 203,968 interactions between 14,874 proteins where interactions are given a score between 0 and 1 based on the confidence in used experimental techniques in determining them. Construction of DC network and pruning by PPIs To identify DC gene pairs between GC-sensitive and GC-resistant groups, a DC network was constructed using the DiffCorr R package (Fukushima 2013). Pearson's correlation coefficient was used for calculating the co-expression between gene pairs under conditions A (resistant) and B (sensitive) separately. Pearson's correlation coefficient between genes x and y under condition A is defined as $$ {r}_A\left(x,y\right)=\frac{\sum_{k=1}^{n_A}\left({x}_k-\overline{x}\right)\left({y}_k-\overline{y}\right)}{\sqrt{\sum_{k=1}^{n_A}{\left({x}_k-\overline{x}\right)}^2}\sqrt{\sum_{k=1}^{n_A}{\left({y}_k-\overline{y}\right)}^2}} $$ where \( \overline{x} \) and \( \overline{y} \) are respectively the mean expressions of gene x and y under condition A, nA is the number of samples under condition A, and k is the sample index. The correlation values were transformed using Fisher's Z transformation such that (Fukushima 2013). $$ {Z}_A=\frac{1}{2}\log \frac{1+{r}_A}{1-{r}_A} $$ The test statistic for each individual gene pair is the difference between the Z-transformed correlations under conditions A and B (ZA and ZB) such that (Fukushima 2013). $$ Z=\frac{Z_A-{Z}_B}{\sqrt{\frac{1}{n_A-3}+\frac{1}{n_B-3}}} $$ where nA and nB are respectively the numbers of samples under conditions A and B. DiffCorr provides a significance of the correlation difference between two conditions (p-value) for each individual gene pair and only links with assigned p-values< 0.01 are deemed significant and remain in the DC network while the remaining links are removed from the network. The resulting DC network represents links between gene pairs that are differentially co-expressed between GC-resistant and GC-sensitive samples with high confidence. 
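The differential co-expression test above can be reproduced in a few lines of base R. The following is a minimal sketch of the Fisher z-based test, assuming a VSN-normalized expression matrix expr (genes in rows, samples in columns) and index vectors resistant and sensitive for the two groups; it illustrates the formulas rather than the DiffCorr implementation, and the gene identifiers in the usage comment are placeholders.

# Minimal sketch (base R) of the differential correlation test described above.
diff_cor_test <- function(x, y, expr, resistant, sensitive) {
  rA <- cor(expr[x, resistant], expr[y, resistant])   # co-expression under condition A (resistant)
  rB <- cor(expr[x, sensitive], expr[y, sensitive])   # co-expression under condition B (sensitive)
  zA <- atanh(rA)                                     # Fisher's Z: 0.5 * log((1 + r) / (1 - r))
  zB <- atanh(rB)
  nA <- length(resistant)
  nB <- length(sensitive)
  z  <- (zA - zB) / sqrt(1 / (nA - 3) + 1 / (nB - 3)) # test statistic for the correlation difference
  p  <- 2 * pnorm(-abs(z))                            # two-sided p-value
  c(rA = rA, rB = rB, z = z, p = p)
}
# Example usage (gene identifiers are placeholders); only links with p < 0.01 remain in the DC network.
# res <- diff_cor_test("GENE1", "GENE2", expr, resistant, sensitive)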
To identify active protein modules, the DC network was modified such that all the links that are absent in the experimentally validated PPI network are removed from the DC network. In other words, the links of the DC network are pruned or trimmed by the experimentally validated protein-protein interactions. Combining the gene expression data and PPI networks in this step is reasonable given the moderate concordance between messenger RNA (mRNA) and protein abundances (Kosti et al. 2016), and provides the bridge between mRNA-based gene expression data and protein modules. Decomposing DC network into resistant and sensitive sub-networks The DC network is further decomposed into two sub-networks, DCresistant and DCsensitive, based on the weights of the links given by (rA- rB) for individual gene pairs. In the DCresistant sub-network, only the links that satisfy both of the two conditions rA- rB > 0.5 and rA > 0.5 are included. Similarly, in the DCsensitive sub-network, only links that satisfy both of the two conditions rA- rB < − 0.5 and rB > 0.5 are included. These conditions ensure that selected links represent high or moderate co-expression between the respective genes under the condition of interest but not under the second condition. Decomposing the DC network in this way allows easier biological interpretation for detected modules under each condition, as only genes with homogeneous changes between conditions are highlighted in each sub-network. Module identification Two sub-networks DCresistant and DCsensitive were constructed to represent active links under each condition. Each sub-network was clustered into modules to identify active protein modules under each condition (resistant and sensitive). We used the generalized version of Topological Overlap Measure (TOM), as implemented in the WGCNA R package (Langfelder and Horvath 2008), to define similarity between gene pairs based on the correlation difference. The possible correlation differences can take values between − 2 and 2, and therefore are normalized by a factor of 2 while generating the adjacency matrix of each network. The diagonal elements of the adjacency matrix were set to 1. Then the TOM computes the similarity among gene pairs based on the shared neighbors in the DC networks. The average hierarchical clustering algorithm, as implemented in the WGCNA R package (Langfelder and Horvath 2008), was applied to the dissimilarity matrix (1-TOM) to find clusters in each network. The resulting protein modules in each network represent active protein modules under one condition only. After module identification, we also refined modules in order to maximize the intra-modular connectivity and minimize the inter-modular relationships. A module membership measure was defined for each pair of gene and module based on the connectivity of gene to the corresponding module, and genes were assigned to modules with the highest level of module membership. If a gene is connected to multiple modules with the same number of links, the sum of link weights is used for measuring the module membership value. Genes' module assignments are iteratively adjusted to reach the optimal assignments. Functional enrichment analysis To determine the potential functions of active protein modules, we imported both DCresistant and DCsensitive sub-networks into the Cytoscape software platform (Smoot et al. 2011) separately and then used the BiNGO application (Maere et al. 2005) to find the overrepresented gene ontology (GO) categories in modules. 
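The pruning and decomposition steps described above can be sketched with ordinary data-frame operations, and module detection follows the WGCNA workflow. In the sketch below, dc is assumed to be a data frame of significant DC links with character columns gene1, gene2, rA and rB, and ppi is assumed to be the HIPPIE interaction table with columns protein1 and protein2; these column names, the minimum module size, and the use of cutreeDynamic for tree cutting are assumptions made for illustration and not the authors' exact code.

# Keep only DC links that are backed by an experimentally validated PPI.
edge_key <- function(a, b) paste(pmin(a, b), pmax(a, b), sep = "|")  # order-independent edge key
dc <- dc[edge_key(dc$gene1, dc$gene2) %in% edge_key(ppi$protein1, ppi$protein2), ]

# Decompose the pruned DC network using the thresholds given in the text.
dc_resistant <- dc[(dc$rA - dc$rB) >  0.5 & dc$rA > 0.5, ]
dc_sensitive <- dc[(dc$rA - dc$rB) < -0.5 & dc$rB > 0.5, ]

# Module detection on one sub-network: weighted adjacency from the normalized correlation
# differences, topological overlap (WGCNA), then average-linkage hierarchical clustering.
library(WGCNA)
genes <- unique(c(dc_resistant$gene1, dc_resistant$gene2))
adj   <- matrix(0, length(genes), length(genes), dimnames = list(genes, genes))
w     <- abs(dc_resistant$rA - dc_resistant$rB) / 2   # differences range over [-2, 2]; normalize by 2
adj[cbind(dc_resistant$gene1, dc_resistant$gene2)] <- w
adj[cbind(dc_resistant$gene2, dc_resistant$gene1)] <- w
diag(adj) <- 1
tom  <- TOMsimilarity(adj)                            # similarity based on shared network neighbors
tree <- hclust(as.dist(1 - tom), method = "average")
modules <- cutreeDynamic(dendro = tree, distM = 1 - tom, minClusterSize = 20)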
BiNGO uses the hyper-geometric test to determine which gene ontology terms are significantly overrepresented in a module. We also used the Database for Annotation Visualization and Integrated Discovery (DAVID) tool (Dennis Jr et al. 2003) to test if some gene modules are highly enriched with genes from known signaling pathways, including KEGG (Kanehisa et al. 2006) and Reactome (Croft et al. 2010) pathways. GO terms and signaling pathways with FDR corrected p-values < 0.01 were deemed significant and selected for describing the functions of different modules. Differential network analysis reveals active protein modules To detect which gene pairs are differentially co-expressed with significance between resistant and sensitive samples, a weighted differential co-expression network was constructed and only significant links (p-value < 0.01) remained in the network. To identify active protein modules, the resulting network was integrated into the PPI network obtained from the HIPPIE database, such that only links available in the PPI network remain. As a result of integrating with the PPI network, a DC network with 4053 links across 3551 nodes (genes or proteins) was obtained. To identify active protein modules in each condition separately, the resulting DC network was decomposed into two DCresistant and DCsensitive sub-networks, as described in the Methods section. The DCresistant sub-network had 1511 links and 1449 nodes, and the DCsensitive sub-network had 739 links and 1075 nodes. DCresistant represents protein links with their associated genes having moderate or high co-expression (correlation) in resistant samples that is also higher than what is observed in sensitive samples. Clustering the DCresistant sub-network into modules revealed 8 gene modules, with 385 genes, which are active under the resistant condition but not the sensitive condition (see Table 1). Each module was assigned a unique color, and the size of module varies from 20 genes (pink module) to 85 genes (turquoise module). All genes that remained unassigned to any of the 8 modules were placed under the grey module and ignored in this study. Table 1 presents the description of the 8 modules found in the DCresistant sub-network. The hub gene of each module refers to the gene with highest degree in each module. Table 1 Description of found modules in resistant sub-network Applying the same steps to the DCsensitive sub-network identified 5 modules (see Table 2) comprising 141 genes and ranging in size between 20 (green module) and 39 genes (turquoise module). The remaining unassigned genes were also grouped into the grey module and ignored in further analysis. It is worth stating here that colors were assigned to detected modules in DCsensitive and DCresistant independently, i.e. using similar colors for modules in both conditions was totally random. Table 2 provides the description of the 5 detected modules in the DCsensitive sub-network. Table 2 Description of found modules in sensitive sub-network After detecting gene modules in both DCresistant and DCsensitive sub-networks separately, we performed functional enrichment analysis for all modules using both BiNGO and DAVID tools. Table 3 lists the significantly enriched biological process (BP) GO terms in the modules of the DCresistant sub-network. We consider GO terms that are not coarse terms and occupy lower layers of the GO tree. There were no GO terms overrepresented in the blue and violet modules, hence they are not included in Table 3. 
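The overrepresentation p-values reported for the modules come from hypergeometric tests of the kind BiNGO performs. The fragment below is a minimal base-R illustration of that test; the counts passed in the example call are placeholders rather than values taken from the enrichment results.

# Hypergeometric overrepresentation test for one GO term (base R sketch).
# k: module genes annotated with the term, n: module size,
# K: background genes annotated with the term, N: size of the annotated background.
go_enrichment_p <- function(k, n, K, N) {
  phyper(k - 1, K, N - K, n, lower.tail = FALSE)  # P(X >= k)
}
pvals <- c(example_term = go_enrichment_p(k = 30, n = 80, K = 60, N = 19000))  # placeholder counts
p.adjust(pvals, method = "fdr")  # FDR correction across all tested terms, as in Tables 3 and 4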
Table 3 shows that in some modules, such as turquoise, brown, pink and red, a rather large number of module members are involved in the same biological process, hence yielding a high significance (small p-value). The turquoise module has ~ 38 out of 85 genes playing key roles in the regulation of protein ubiquitination, ubiquitin-protein ligase activity in mitotic cell cycle, ubiquitin-dependent protein catabolic process and proteolysis, and most of them are highly enriched (FDR = 1.47 × 10− 48) in the proteasome KEGG pathway. These genes belong to the proteasome subunit (PSM) family and PSMC4 with 34 differentially co-expressed links with the rest of the module members is a hub gene in this module. Two other important genes of the same module are PSMD2 and PSMD1 with 27 and 22 differentially co-expressed links, respectively. The brown module is highly enriched with some close BP GO terms including mitochondrial adenosine triphosphate (ATP) synthesis coupled electron transport, respiratory electron transport chain and oxidative phosphorylation. Twelve genes of this module are different subunits of NADH:ubiquinone oxidoreductase (complex I), which is the first enzyme complex located in the inner membrane of the mitochondrion and plays a key role in the electron transport chain. In Table 1, NDUFA9 was introduced as a hub gene of the brown module. NDUFA9 is differentially co-expressed with other members of the module encoding different subunits of mitochondrial complex I. The brown module was also found significantly enriched with members of the REACTOME Complex I biogenesis and respiratory electron transport pathways. These findings are concordant with earlier studies which associated the up-regulation of oxidative phosphorylation with GC-resistant (Beesley et al. 2009; Samuels et al. 2014). Table 3 Significantly enriched BP GO terms in active protein modules of DCresistant sub-network Another module seen in the DCresistant sub-network is the yellow module. Genes of this module are highly enriched in some distinct BP GO terms including programmed cell death, MAPKKK cascade and response to stimulus. Approximately, 24 genes of the yellow module are highly enriched in response to stimulus and some of these genes are also involved in response to stress. Among 46 genes located in the yellow module, about 10 genes are significantly enriched in programmed cell death and apoptosis. Leucine rich repeat kinase 2 (LRRK2) is a gene encoding protein kinase, which is differentially co-expressed with 11 members of the yellow module associated with MAPKKK cascade and also response to stress. Protein kinase C delta (PRKCD) is a member of the yellow module that is differentially co-expressed with 9 other members of the module, and is the second hub in the module. Human studies demonstrate that this gene encodes a kinase which is involved in B cell signaling and the regulation of growth and apoptosis. Some distinct categories of BP GO terms were found in the green module. Respectively, 8 and 7 out of 43 genes were associated with RNA splicing and response to DNA damage. Only 3 genes of this module (PDHB, PDHA1 and DLAT) are highly enriched in the acetyl-CoA biosynthetic process, but none of them is a hub gene within the module. Gene ABCE1 has the highest connectivity within the green module, but ABCE1 shares no common biological function with its neighbors in the green module. The red module with 41 genes and 43 DC links is another important module found in the DCresistant sub-network. 
Nine out of 39 module members are involved in tRNA aminoacylation for protein translation and are as well members of class I aminoacyl-tRNA synthetase family. The encoded protein by RARS, which is a hub gene in this module, belongs to the mentioned protein family and most of its immediate neighbors in the DCresistant sub-network, including LARS, IARS, EPRS, DARS and MARS, also encode proteins found in the aminoacyl-tRNA synthetase family. In accordance with this finding, DAVID also indicates that the red module is highly enriched in Aminoacyl-tRNA biosynthesis pathway with FDR corrected p-value < 10− 4. Although the pink module is the smallest module in the DCresistant sub-network, it is significantly enriched with more GO terms than some other modules. Enriched terms include fatty acid oxidation, fatty acid catabolic process and peroxisome organization (Table 3). Genes of this module are also highly enriched in Peroxisome KEGG pathway (FDR = 6.07 × 10− 12). PEX5 which plays an essential role in peroxisome, has the highest connectivity within the pink module, and most of its immediate neighbors, including PEX6, ACOX3, ACOT8, EHHADH, HMGCL, HACL1, ECI2 and MPV17, are also involved in Peroxisome KEGG pathway. To determine whether the genes associated with biological functions are connected in their corresponding modules, we extract sub-graphs comprising these genes. We observed that the genes associated with proteasome, respiratory electron transport, peroxisome and aminoacyl-tRNA biosynthesis pathways, respectively in the turquoise, brown, pink and red modules are connected within these modules. This indicates that the genes involved in these biological functions are significantly DC between resistant and sensitive condition. This result suggests that the pathways connecting these genes are active pathways where regulatory relationships under one condition are disrupted under another. As indicated in Fig. 1, the links among genes of these pathways are highly co-expressed in the resistant condition in contrast to the sensitive condition. A schematic representation of correlation changes across four protein sub-modules, found in the DCresistant sub-network, between GC-resistance (right side) and GC-sensitive (left side) conditions. Connected genes in the turquoise, brown, pink and red modules are associated with proteasome, respiratory electron transport, peroxisome and aminoacyl-tRNA biosynthesis pathways, respectively, suggesting that the regulatory relationships in these pathways under one condition are disrupted under another. A positive correlation is indicated by yellow color and a negative correlation by blue color We performed similar functional enrichment analysis and sub-graph extraction steps to find out whether the detected active modules under the sensitive condition are enriched with BP GO terms. Table 4 shows that only 2 out of 5 modules (blue and yellow modules) of the DCsensitive sub-network are highly enriched in BP GO terms (FDR < 0.01). In the blue module, 10 genes are involved with protein ubiquitination and proteolysis and ~ 10 genes of the yellow module are involved in RNA splicing and spliceosome KEGG pathway. Although the blue module of DCsensitive sub-network is involved in proteolysis similar to the turquoise module of DCresistant sub-network, there are only two genes (PSMC2 and PSMD6) in the intersection of the two modules. 
Moreover, after extracting the genes associated with proteolysis from the blue module under the sensitive sub-network, no connectivity was observed among them. We also extracted the genes associated with the spliceosome pathway from the yellow module of the DCsensitive sub-network, and observed that these genes are connected within the module. Hence, the spliceosome can be suggested as an active pathway in the sensitive condition as compared to the resistant condition.
Table 4 Significantly enriched BP GO terms in active protein modules of DCsensitive sub-network
To check the hypothesis that the detected modules in the present study are possibly confounded by differences in prednisolone responsiveness in addition to differences related to GC-resistance, we checked the intersection between the list of 51 genes transcriptionally regulated by prednisolone reported in (Tissing et al. 2007) and each detected module under the resistant condition. These 51 genes showed differential expression after 8 h of prednisolone exposure in leukemic cells of 13 children as compared with non-exposed cells (Tissing et al. 2007). None of the reported 51 genes appeared in our detected modules under the resistant condition. Through DC network analysis and protein interaction networks, we identified gene modules which show much higher co-expression under the GC-resistant condition as compared to the GC-sensitive condition. After detecting gene modules from the integration of the DC network between GC-sensitive and GC-resistant samples and PPI links, functional enrichment analysis detected which members of modules share similar biological functions or are members of the same biological pathway. Together, these results suggest that four gene sub-modules, obtained from the turquoise, brown, pink and red modules of the DCresistant sub-network, are respectively associated with the proteasome, mitochondrial respiratory electron transport, peroxisome and aminoacyl-tRNA biosynthesis signaling pathways (Fig. 1). The lists of genes (ranked by the inter-modular connectivity) present in these four modules are given in Additional file 1: Tables S1-S4. The yellow module was identified in the DCresistant sub-network as a module significantly enriched in the immune system process, and this module shares 11 genes with a module we found in our previous study (Mousavian et al. 2016), which was introduced as a module relevant to GC-resistance in infant ALL. In 1993, elevated serum concentrations of proteasomes and their localization in tumor cells were reported in patients with hematological malignancy (Ichihara 1993). It was found later that NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells) can mediate glucocorticoid resistance in multiple myeloma, which is a cancer formed by terminally differentiated B cells (Feinman et al. 1999; Tricot 2002). NF-κB is a heterodimeric transcription factor that activates survival genes coding for cytokines, cytokine receptors, chemotactic proteins and cell adhesion molecules (De Bosscher et al. 2000), and repressing its transcriptional activity facilitates cellular apoptosis. In many cell types, the function of NF-κB depends on the enzymatic activity of the proteasome (Baud and Derudder 2010). Through the degradation of the inhibitory protein I-κBα, a protein subunit of NF-κB such as RELA or c-Rel is allowed to activate the expression of target genes after entering the nucleus.
In 2003, Bortezomib (a proteasome inhibitor) was found efficient in treating patients whose multiple myeloma showed poor response to at least two treatment protocols (Dick and Fleming 2010; Lambrou et al. 2012). In recent years, the effectiveness of Bortezomib was also tested for the treatment of acute lymphoblastic leukemia. It was shown that bortezomib can sensitize in-vitro GC-resistant childhood B-cell precursor leukemia cell lines, MHH-cALL-2 and MHH-cALL-3, to prednisolone-induced cell death via inhibiting the proteasome (Junk et al. 2015). The use of proteasome inhibitors to sensitize GC-resistant ALL cells was supported by detecting that high expression of valosin-containing protein (VCP), a member of the ubiquitin proteasome degradation system (UPS), is associated with poor response to prednisolone treatment in childhood ALL patients (Lauten et al. 2006). Valosin-containing protein mediates apoptosis after tumor necrosis factor (TNF) stimulation by influencing the proteasome degradation pathway and NF-κB activation via I-κBα degradation (Asai et al. 2002). The immunosuppressive effects of glucocorticoids are linked to an inhibition of NF-κB activity (Greenstein et al. 2002; Scheinman et al. 1995; Auphan et al. 1995), suggesting that suppressing the NF-κB activity is required for glucocorticoid-induced apoptosis (Chandra et al. 1998). Our results show that the activity of proteasome and ubiquitination family genes (enriched in the turquoise module) is significantly higher in GC-resistant MLL-rearranged infant ALL patients as compared to GC-sensitive patients. Our results agree with related literature in suggesting that inhibiting the proteasome protein family members, which are crucial in regulating protein ubiquitination and proteasome pathway, may lead to sensitizing the infant ALL cells to prednisolone. In the DCresistant sub-network, we found a set of genes related to the NADH:ubiquinone oxidoreductase activity, which forms complex I in mitochondria for electron transport chain, as a differential co-expressed gene set between GC-resistant and GC-sensitive samples. Some studies demonstrated that GC-resistance in T-cell ALL is associated with a proliferative metabolism such as the up-regulation of glycolysis, oxidative phosphorylation and cholesterol biosynthesis (Beesley et al. 2009; Samuels et al. 2014). It was shown that the activation of bioenergetic pathways required for proliferation may suppress the apoptotic potential and offset the metabolic crisis initiated by glucocorticoids in the lymphocytes (Beesley et al. 2009). It was also shown later that targeting bioenergetic pathways in combination with glucocorticoid treatment may offer a promising therapeutic strategy to overcome GC-resistance in ALL (Samuels et al. 2014). The detected brown module has 13 genes that are involved in oxidative phosphorylation which is a metabolic pathway for oxidizing nutrients and releasing energy in the mitochondria. These genes are highly co-expressed in GC-resistant infant ALL in comparison with the sensitive cases, indicating that GC-resistance in infant ALL may also be associated with some proliferative metabolism like oxidative phosphorylation. High expression of the valosin-containing protein, that mediates NF-κB activation via I-κBα degradation, is associated with poor response (resistance) to prednisolone treatment in childhood ALL patients as discussed above (Lauten et al. 2006). 
NF-κB acts through the transcription of anti-apoptotic proteins, leading to increased proliferation and growth activities (Escarcega et al. 2007). Therefore, detecting an increased co-expression between genes associated with proliferation and oxidative phosphorylation in GC-resistant ALL infants might (at least in part) be explained through this mechanism. Another important active protein module observed in DCresistant sub-network is the pink module. The pink module contains genes involved in fatty acid oxidation and peroxisome organization. The peroxisome is a small cell organelle which contributes to the breakdown of very-long-chain fatty acids via beta oxidation. Recently, it was indicated that the peroxisome proliferator-activated receptor alpha (PPARα) and fatty acid oxidation mediate glucocorticoid resistance in chronic lymphoblastic leukemia (CLL) (Tung et al. 2013). Gene PEX5 (Peroxisomal Biogenesis Factor 5), the hub gene of the pink module detected in the resistant sub-network, is associated with fatty acid beta oxidation, peroxisome pathway, and glucose metabolism. Also gene ABCE1 (ATP Binding Cassette Subfamily E Member 1), the hub gene of the green module detected in the resistant sub-network, is associated with glucose transport. The recent work by Chan et al. (Chan et al. 2017) characterized pre-B-cell ALL with transcriptional repression of glucose and energy supply. Chan et al. found that the PAX5 and IKZF1 B-lymphoid transcription factors enforce a state of chronic energy deprivation in pre-B-cell ALL cells, and identified, among others, products of gene NR3C1 (a transcription factor gene encoding the glucocorticoid receptor that bind to glucocorticoid response elements and activate their transcription) as central effectors of B-lymphoid restriction of glucose and energy supply. More specific to MLL-rearranged infant ALL, the data reported in (Stumpel et al. 2009) independently showed that NR3C1 is among the top 100 genes with significant hypermethylated promoter region in t(4;11)-positive MLL-rearranged infant ALL samples. Hence, the literature already presents possible mechanisms (transcriptional targets and promoter methylation) by which glucose metabolism alterations and energy deprivation could be associated with MLL-rearranged infant ALL cells. Although Chan et al. did not conduct their study using MLL-rearranged infant samples, the similarities between their findings and the current study are within the general characterization of B-cell ALL with transcriptional repression of glucose metabolism and energy supply. In addition to the pink module, the brown module found in resistant sub-network is rich with genes related to ATP synthesis by chemiosmotic coupling, adding additional indication to energy deprivation in B-cell ALL. Our results indicate that the red module is highly associated with the Aminoacyl-tRNA biosynthesis pathway where 9 of its 39 module members belong to the Aminoacyl-tRNA synthetases family. Aminoacyl-tRNA synthetases (ARSs) are essential house-keeping enzymes that provide the substrates for protein synthesis (Yao and Fox 2013). They have been implicated with human cancers, given their varied effects on cell differentiation and growth. It was discovered since the 1960s that leukemic blasts require external asparagine (an ARS) for growth since they lack sufficient activity of asparagine synthetase. 
A component of guinea pig serum, L-asparaginase, was isolated and successfully used to convert free asparagine to aspartic acid, effectively starving the leukemia cells (Broome 1963). L-asparaginase has been used as a component of the chemotherapy in the treatment of childhood ALL in combination with glucocorticoids (prednisolone and dexamethasone), and vincristine (Dübbers et al. 1998). Similar to glucocorticoids, some patients show resistance to L-asparaginase and in vitro resistance was found highly correlated with an increase in the cellular asparagine synthetase activity, messenger RNA and protein content (Hutson et al. 1997). On another vein, the expression of glutamyl-prolyl-tRNA synthetase and mitochondrial isoleucyl-tRNA synthetase is controlled by the c-myc proto-oncogene (Coller et al. 2000), hence abnormal expression of these tRNA synthetases under oncogenic conditions is not surprising. In addition to NF-κB and activator protein 1 (AP-1), c-myc was one of three transcription factors identified as the most likely targets of GC-induced gene repression (Greenstein et al. 2002). Previous studies have revealed correlations between c-myc suppression and GC-induced apoptosis in human leukemic cells (Thulasi et al. 1993). Interestingly, the transforming growth factor-β (TGF-β) induces nuclear localization of the aminoacyl-tRNA synthetase-interacting factor 2 (AIMP2), where AIMP2 enhances ubiquitin-dependent degradation of the FUSE-binding protein (FBP), which is the transcriptional activator of c-myc (Kim et al. 2003), resulting in down-regulation of c-myc. Detecting the red module that is highly enriched with ARSs in GC-resistant patients may indicate failure to repress c-myc and initiate GC-induced apoptosis due to increased cellular activity of ARS interacting factors. Additionally, a few mutations of the aminoacyl-tRNA synthetase-interacting factor 3 (AIMP3) that affect its interaction with ataxia-telangiectasia mutated (ATM) kinases and ability to activate p53 (a tumor suppressor protein) have been reported in human chronic myeloid leukemia patients (Kim et al. 2008). These observations further support the relationship between ARSs and/or their interacting factors with the initiation or progression of human leukemia. Differential co-expression analysis is a promising approach to incorporate the dynamic context of gene expression profiles into experimentally-validated protein-protein interaction networks. The approach allows the detection of relevant gene modules that are highly enriched with DC gene pairs and reduces the problem of detecting modules of co-expressed genes that are not truly related by discarding all gene-pairs not documented in the PPI databases. Functional enrichment analysis of detected modules revealed that these modules are related to proteasome, electron transport chain, tRNA-aminoacyl biosynthesis, and peroxisome signaling pathways. These findings are in accordance with reported literature related to GC-resistance in hematological malignancies such as pediatric ALL. Our results support the use of proteasome inhibitors and asparagine depletion drugs as components of the chemotherapy in the treatment of childhood ALL for patients showing resistance to glucocorticoids. Our results also support the characterization of B-cell ALL with chronic glucose metabolism and energy supply deprivation. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. 
Aminoacyl-tRNA synthetase Ataxia-Telangiectasia Mutated ATP: CLL: Chronic lymphoblastic leukemia DC: Differential co-expression False discovery rate GCs: MLL: Mixed lineage leukemia PPI: Protein-protein interaction TOM: Topological Overlap Measure Asai T, Tomita Y, Si N, Hoshida Y, Myoui A, Yoshikawa H, et al. VCP (p97) regulates NFkB signaling pathway, which is important for metastasis of osteosarcoma cell line. Jpn J Cancer Res. 2002;93(3):296–304. Auphan N, DiDonato JA, Rosette C, Helmberg A, Karin M. Immunosuppression by glucocorticoids: inhibition of NF-κB activity through induction of IκB synthesis. Science. 1995;270(5234):286–90. Bader GD, Betel D, Hogue CW. BIND: the biomolecular interaction network database. Nucleic Acids Res. 2003;31(1):248–50. Baud V, Derudder E. Control of NF-κB activity by proteolysis. In: NF-kB in Health and Disease. Berlin, Heidelberg: Springer; 2010. p. 97–114. Beesley A, Firth M, Ford J, Weller R, Freitas J, Perera K, et al. Glucocorticoid resistance in T-lineage acute lymphoblastic leukaemia is associated with a proliferative metabolism. Br J Cancer. 2009;100(12):1926–36. Broome J. Evidence that the L-asparaginase of Guinea pig serum is responsible for its antilymphoma effects: II. Lymphoma 6C3HED cells cultured in a medium devoid of L-asparagine lose their susceptibility to the effects of Guinea pig serum in vivo. J Exp Med. 1963;118(1):121–48. Chan LN, Chen Z, Braas D, Lee J-W, Xiao G, Geng H, et al. Metabolic gatekeeper function of B-lymphoid transcription factors. Nature. 2017;542(7642):479–83. Chandra J, Niemer I, Gilbreath J, Kliche K-O, Andreeff M, Freireich EJ, et al. Proteasome inhibitors induce apoptosis in glucocorticoid-resistant chronic lymphocytic leukemic lymphocytes. Blood. 1998;92(11):4220–9. Chatr-aryamontri A, Breitkreutz B-J, Heinicke S, Boucher L, Winter A, Stark C, et al. The BioGRID interaction database: 2013 update. Nucleic Acids Res. 2013;41(D1):D816–D23. Chuang HY, Lee E, Liu YT, Lee D, Ideker T. Network-based classification of breast cancer metastasis. Mol Syst Biol. 2007;3(1):140. Chung F-H, Lee HH-C, Lee H-C. ToP: a trend-of-disease-progression procedure works well for identifying cancer genes from multi-state cohort gene expression data for human colorectal cancer. PLoS One. 2013;8(6):e65683. Coller HA, Grandori C, Tamayo P, Colbert T, Lander ES, Eisenman RN, et al. Expression analysis with oligonucleotide microarrays reveals that MYC regulates genes involved in growth, cell cycle, signaling, and adhesion. Proc Natl Acad Sci. 2000;97(7):3260–5. Croft D, O'Kelly G, Wu G, Haw R, Gillespie M, Matthews L, et al. Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res. 2010;39(suppl_1):gkq1018. Davis S, Meltzer PS. GEOquery: a bridge between the gene expression omnibus (GEO) and BioConductor. Bioinformatics. 2007;23(14):1846–7. De Bosscher K, Berghe WV, Vermeulen L, Plaisance S, Boone E, Haegeman G. Glucocorticoids repress NF-κB-driven genes by disturbing the interaction of p65 with the basal transcription machinery, irrespective of coactivator levels in the cell. Proc Natl Acad Sci. 2000;97(8):3919–24. de la Fuente A. From 'differential expression'to 'differential networking'–identification of dysfunctional regulatory networks in diseases. Trends Genet. 2010;26(7):326–33. Dennis G Jr, Sherman BT, Hosack DA, Yang J, Gao W, Lane HC, et al. DAVID: database for annotation, visualization, and integrated discovery. Genome Biol. 2003;4(5):P3. Dick LR, Fleming PE. 
Building on bortezomib: second-generation proteasome inhibitors as anti-cancer therapy. Drug Discov Today. 2010;15(5):243–9. Dittrich MT, Klau GW, Rosenwald A, Dandekar T, Müller T. Identifying functional modules in protein–protein interaction networks: an integrated exact approach. Bioinformatics. 2008;24(13):i223–i31. Dübbers A, Schulze-Westhoff P, Kurzknabe E, Creutzig U, Ritter J, Boos J. Asparagine synthetase in pediatric acute leukemias: AML-M5 subtype shows lowest activity. In: Acute Leukemias VII. Berlin, Heidelberg: Springer; 1998. p. 530–5. Escarcega R, Fuentes-Alexandro S, Garcia-Carrasco M, Gatica A, Zamora A. The transcription factor nuclear factor-kappa B and cancer. Clin Oncol. 2007;19(2):154–61. Feinman R, Koury J, Thames M, Barlogie B, Epstein J, Siegel DS. Role of NF-κB in the rescue of multiple myeloma cells from glucocorticoid-induced apoptosis by bcl-2. Blood. 1999;93(9):3044–52. Fukushima A. DiffCorr: an R package to analyze and visualize differential correlations in biological networks. Gene. 2013;518(1):209–14. Gautier L, Cope L, Bolstad BM, Irizarry RA. Affy—analysis of Affymetrix GeneChip data at the probe level. Bioinformatics. 2004;20(3):307–15. Gaynon PS, Carrel AL. Glucocorticosteroid therapy in childhood acute lymphoblastic leukemia. In: Drug Resistance in Leukemia and Lymphoma III. Berlin, Heidelberg: Springer; 1999. p. 593–605. Greaves M. Infant leukaemia biology, aetiology and treatment. Leukemia. 1996;10(2):372–7. Greenstein S, Ghias K, Krett NL, Rosen ST. Mechanisms of glucocorticoid-mediated apoptosis in hematological malignancies. Clin Cancer Res. 2002;8(6):1681–94. Huber W, Von Heydebreck A, Sültmann H, Poustka A, Vingron M. Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformatics. 2002;18(suppl 1):S96–S104. Hutson RG, Kitoh T, Moraga Amador DA, Cosic S, Schuster SM, Kilberg MS. Amino acid control of asparagine synthetase: relation to asparaginase resistance in human leukemia cells. Am J Phys Cell Phys. 1997;272(5):C1691–C9. Ichihara A. Serum concentration and localization in tumor cells of proteasomes in patients with hematologic malignancy and their pathophysiologic significance. J Lab Clin Med. 1993;121:215. Ideker T, Ozier O, Schwikowski B, Siegel AF. Discovering regulatory and signalling circuits in molecular interaction networks. Bioinformatics. 2002;18(suppl 1):S233–S40. Junk S, Cario G, Wittner N, Stanulla M, Scherer R, Schlegelberger B, et al. Bortezomib treatment can overcome glucocorticoid resistance in childhood B-cell precursor acute lymphoblastic leukemia cell lines. Klin Padiatr. 2015;227(3):123–30. Kanehisa M, Goto S, Hattori M, Aoki-Kinoshita KF, Itoh M, Kawashima S, et al. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res. 2006;34(suppl 1):D354–D7. Kaspers G, Veerman A, Pieters R, Broekema G, Huismans D, Kazemier K, et al. Mononuclear cells contaminating acute lymphoblastic leukaemic samples tested for cellular drug resistance using the methyl-thiazol-tetrazolium assay. Br J Cancer. 1994;70(6):1047–52. Kerrien S, Aranda B, Breuza L, Bridge A, Broackes-Carter F, Chen C, et al. The IntAct molecular interaction database in 2012. Nucleic Acids Res. 2011;40(D1):gkr1088. Kim K-J, Park MC, Choi SJ, Oh YS, Choi E-C, Cho HJ, et al. Determination of three-dimensional structure and residues of the novel tumor suppressor AIMP3/p18 required for the interaction with ATM. J Biol Chem. 2008;283(20):14032–40. 
Kim MJ, Park B-J, Kang Y-S, Kim HJ, Park J-H, Kang JW, et al. Downregulation of FUSE-binding protein and c-myc by tRNA synthetase cofactor p38 is required for lung cell differentiation. Nat Genet. 2003;34(3):330. Kosti I, Jain N, Aran D, Butte AJ, Sirota M. Cross-tissue analysis of gene and protein expression in normal and cancer tissues. Sci Rep. 2016;6:24799. Lambrou GI, Papadimitriou L, Chrousos GP, Vlahopoulos SA. Glucocorticoid and proteasome inhibitor impact on the leukemic lymphoblast: multiple, diverse signals converging on a few key downstream regulators. Mol Cell Endocrinol. 2012;351(2):142–51. Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9(1):559. Lauten M, Schrauder A, Kardinal C, Harbott J, Welte K, Schlegelberger B, et al. Unsupervised proteome analysis of human leukaemia cells identifies the Valosin-containing protein as a putative marker for glucocorticoid resistance. Leukemia. 2006;20(5):820. Licata L, Briganti L, Peluso D, Perfetto L, Iannuccelli M, Galeota E, et al. MINT, the molecular interaction database: 2012 update. Nucleic Acids Res. 2012;40(D1):D857–D61. Lin C-C, Hsiang J-T, Wu C-Y, Oyang Y-J, Juan H-F, Huang H-C. Dynamic functional modules in co-expressed protein interaction networks of dilated cardiomyopathy. BMC Syst Biol. 2010;4(1):1. Maere S, Heymans K, Kuiper M. BiNGO: a Cytoscape plugin to assess overrepresentation of gene ontology categories in biological networks. Bioinformatics. 2005;21(16):3448–9. Mousavian Z, Nowzari-Dalini A, Stam RW, Rahmatallah Y, Masoudi-Nejad A. Network-based expression analysis reveals key genes related to glucocorticoid resistance in infant acute lymphoblastic leukemia. Cell Oncol. 2016;40(1):1–13. Nacu Ş, Critchley-Thorne R, Lee P, Holmes S. Gene expression network analysis and applications to immunology. Bioinformatics. 2007;23(7):850–8. Pagel P, Kovac S, Oesterheld M, Brauner B, Dunger-Kaltenbach I, Frishman G, et al. The MIPS mammalian protein–protein interaction database. Bioinformatics. 2005;21(6):832–4. Pieters R, Den Boer M, Durian M, Janka G, Schmiegelow K, Kaspers G, et al. Relation between age, immunophenotype and in vitro drug resistance in 395 children with acute lymphoblastic leukemia-implications for treatment of infants. Leukemia. 1998;12(9):1344–8. Pieters R, Loonen A, Huismans D, Broekema G, Dirven M, Heyenbrok M, et al. In vitro drug sensitivity of cells from children with leukemia using the MTT assay with improved culture conditions. Blood. 1990;76(11):2327–36. Pieters R, Schrappe M, De Lorenzo P, Hann I, De Rossi G, Felice M, et al. A treatment protocol for infants younger than 1 year with acute lymphoblastic leukaemia (Interfant-99): an observational study and a multicentre randomised trial. Lancet. 2007;370(9583):240–50. Prasad TK, Goel R, Kandasamy K, Keerthikumar S, Kumar S, Mathivanan S, et al. Human protein reference database—2009 update. Nucleic Acids Res. 2009;37(suppl 1):D767–D72. Pui C-H, Relling MV, Downing JR. Acute lymphoblastic leukemia. N Engl J Med. 2004;350(15):1535–48. Salwinski L, Miller CS, Smith AJ, Pettit FK, Bowie JU, Eisenberg D. The database of interacting proteins: 2004 update. Nucleic Acids Res. 2004;32(suppl 1):D449–D51. Samuels AL, Heng JY, Beesley AH, Kees UR. Bioenergetic modulation overcomes glucocorticoid resistance in T-lineage acute lymphoblastic leukaemia. Br J Haematol. 2014;165(1):57–66. Schaefer MH, Fontaine J-F, Vinayagam A, Porras P, Wanker EE, Andrade-Navarro MA. 
HIPPIE: integrating protein interaction networks with experiment based quality scores. PLoS One. 2012;7(2):e31826. Scheinman RI, Cogswell PC, Lofquist AK, Baldwin AS. Role of transcriptional activation of IκBα in mediation of immunosuppression by glucocorticoids. Science. 1995;270(5234):283–6. Smoot ME, Ono K, Ruscheinski J, Wang P-L, Ideker T. Cytoscape 2.8: new features for data integration and network visualization. Bioinformatics. 2011;27(3):431–2. Sohler F, Hanisch D, Zimmer R. New methods for joint analysis of biological networks and expression data. Bioinformatics. 2004;20(10):1517–21. Spijkers-Hagelstein J, Pinhanços S, Schneider P, Pieters R, Stam R. Chemical genomic screening identifies LY294002 as a modulator of glucocorticoid resistance in MLL-rearranged infant ALL. Leukemia. 2014a;28(4):761–9. Spijkers-Hagelstein JA, Pinhancos SM, Schneider P, Pieters R, Stam RW. Src kinase-induced phosphorylation of annexin A2 mediates glucocorticoid resistance in MLL-rearranged infant acute lymphoblastic leukemia. Leukemia. 2013;27(5):1063–71. Spijkers-Hagelstein JA, Schneider P, Hulleman E, de Boer J, Williams O, Pieters R, et al. Elevated S100A8/S100A9 expression causes glucocorticoid resistance in MLL-rearranged infant acute lymphoblastic leukemia. Leukemia. 2012;26(6):1255–65. Spijkers-Hagelstein JA, Schneider P, Pinhanços SM, Castro PG, Pieters R, Stam RW. Glucocorticoid sensitisation in mixed lineage Leukaemia-rearranged acute lymphoblastic leukaemia by the pan-BCL-2 family inhibitors gossypol and AT-101. Eur J Cancer. 2014b;50(9):1665–74. Stumpel DJ, Schneider P, van Roon EH, Boer JM, de Lorenzo P, Valsecchi MG, et al. Specific promoter methylation identifies different subgroups of MLL-rearranged infant acute lymphoblastic leukemia, influences clinical outcome, and provides therapeutic options. Blood. 2009;114(27):5490–8. Thulasi R, Harbour D, Thompson E. Suppression of c-myc is a critical step in glucocorticoid-induced human leukemic cell lysis. J Biol Chem. 1993;268(24):18306–12. Tissing WJ, Den Boer ML, Meijerink JP, Menezes RX, Swagemakers S, van der Spek PJ, et al. Genomewide identification of prednisolone-responsive genes in acute lymphoblastic leukemia cells. Blood. 2007;109(9):3929–35. Tricot GJ. New insights into role of microenvironment in multiple myeloma. Int J Hematol. 2002;76(1):334–6. Tung S, Shi Y, Wong K, Zhu F, Gorczynski R, Laister RC, et al. PPARα and fatty acid oxidation mediate glucocorticoid resistance in chronic lymphocytic leukemia. Blood. 2013;122(6):969–80. Yao P, Fox PL. Aminoacyl-tRNA synthetases in medicine and disease. EMBO Mol Med. 2013;5(3):332–43. Yoon D, Kim H, Suh-Kim H, Park RW, Lee K. Differentially co-expressed interacting protein pairs discriminate samples under distinct stages of HIV type 1 infection. BMC Syst Biol. 2011;5(2):1. Zhang X, Yang H, Gong B, Jiang C, Yang L. Combined gene expression and protein interaction analysis of dynamic modularity in glioma prognosis. J Neuro-Oncol. 2012;107(2):281–8. The authors would like to thank the Center of High Performance Computing (CHPC), School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran for providing their cluster to meet our computational needs. Funding from the Institute of Biochemistry and Biophysics, University of Tehran is gratefully acknowledged. 
School of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
Zaynab Mousavian & Abbas Nowzari-Dalini
Laboratory of Systems Biology and Bioinformatics (LBB), Institute of Biochemistry and Biophysics, University of Tehran, Tehran, Iran
& Ali Masoudi-Nejad
Department of Biomedical Informatics, University of Arkansas for Medical Sciences, Little Rock, AR, 72205, USA
Yasir Rahmatallah
ZM, AND & AMN conceived and designed the experiments. ZM analyzed the data and wrote the manuscript. AND, YR & AMN edited and revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Zaynab Mousavian or Ali Masoudi-Nejad.
Additional file: Table S1. The list of genes present in the turquoise module. Table S2. The list of genes present in the pink module. Table S3. The list of genes present in the brown module. Table S4. The list of genes present in the red module. (DOCX 11 kb)
Mousavian, Z., Nowzari-Dalini, A., Rahmatallah, Y. et al. Differential network analysis and protein-protein interaction study reveals active protein modules in glucocorticoid resistance for infant acute lymphoblastic leukemia. Mol Med 25, 36 (2019). https://doi.org/10.1186/s10020-019-0106-1
Glucocorticoid resistance; Differential co-expression network analysis; Active protein modules
CommonCrawl
https://doi.org/10.1364/BOE.449456
Femtosecond-laser stimulation induces senescence of tumor cells in vitro and in vivo
Xiaohui Zhao, Wanyi Tang, Haipeng Wang, and Hao He*
School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
*Corresponding author: [email protected]
Xiaohui Zhao, Wanyi Tang, Haipeng Wang, and Hao He, "Femtosecond-laser stimulation induces senescence of tumor cells in vitro and in vivo," Biomed. Opt. Express 13, 791-804 (2022)
Tumor cells present anti-apoptosis and abnormal proliferation during development. Senescence and stemness of tumor cells play key roles in tumor development and malignancy. In this study, we show that transient stimulation of tumor cells by a single scan of a tightly focused femtosecond laser can modulate their stemness and senescence in vitro and in vivo. The laser-induced cellular senescence and stemness present distinct transitions in vitro and in vivo. Cells 1.2 mm deep in tumor tissue show significant senescence induced by the transient photostimulation of a 100-200 µm shallow layer in vivo, which suppresses the growth of the whole tumor in living mice.
© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
Cellular senescence is generally a stress-responsive cell-cycle arrest program that terminates the further expansion of malignant cells in tissue and thus restrains tumorigenesis, which is implicated in cancer and in aging [1]. But senescence has recently been recognized as a double-edged sword for tumor development. It induces immune surveillance against pre-malignant cells by a potent tumor-suppressive mechanism, whereas accumulation of senescent cells in aged tissues blocks tissue renewal and drives the chronic inflammation associated with age-related diseases that can finally lead to cancer [2]. In tumors, cellular senescence does not conflict with stemness but can contribute to the generation of cancer cells with high stemness. The cell signaling molecules in senescence, for example, Ras-type oncogene proteins including activated Ras, Raf or MEK, induce senescent G1-phase cell-cycle arrest, which is also stably maintained by global epigenetic reprogramming, suggesting an interplay between senescence and stemness [3,4]. Hence, manipulation of cell senescence is of great significance for cancer research and therapy. Laser technology is able to deliver precise energy to biological systems at predefined spatiotemporal coordinates noninvasively, and is thus naturally suitable for photobiomodulation. Recently, the technology of optogenetics has enabled molecule-specific photobiomodulation by transfecting optogenetic genes into cells and animals to introduce photosensitivity of specific targeted molecules to light [5]. The optogenetic photomodulation of cancerization and cancer immunotherapy brings the potential of optogenetic cancer therapy [6,7].
However, the invasive gene engineering needed to introduce optogenetics into human beings poses a barrier to clinical application due to biosafety issues and ethical concerns [8,9]. To address this challenge, direct photobiomodulation technology has been developed. In theory, most cells cannot respond to light stimulation with any specific biological process other than photodamage. Laser or light irradiation generates thermal effects, oxidative stress, and direct breakdown of molecular bonds in cells at infrared, visible, and ultraviolet bands respectively [10]. The idea of using continuous-wave (CW) low-level laser irradiation (LLLI) to keep photodamage within a moderate range while maintaining cell viability has been found to initiate some cell processes such as cell repair, and even to have therapeutic effects [11]. Hence the approach is also known as low-level laser therapy (LLLT) and has been applied clinically to the therapy of a range of diseases, and even to tumor therapy [12–14]. Signaling pathways including TGF-β and ERK can be activated nonspecifically by long-term irradiation with CW lasers to initiate cell repair and proliferation and thus promote wound healing and tissue regeneration [15,16]. LLLT also presents some anti-inflammatory effects at the cellular and tissue levels [17]. Therefore, some studies reported that LLLT could help with side effects and complications caused by cancer therapy [18,19]. Nevertheless, the application of LLLT has been limited by its modulation efficiency and depth in vivo. It has been found that a tightly focused femtosecond laser can excite intracellular Ca2+ release in diverse types of cells [20], which was first reported in 2002 [21]. This technology has been used in neurons to study intercellular Ca2+ propagation [22]. We found that femtosecond-laser stimulation could precisely release and deplete the intracellular Ca2+ store in the endoplasmic reticulum [23]. The Ca2+ rise pattern could further activate the ERK pathway in a controllable manner [24]. Since Ca2+ signaling is the universal second messenger in cells, regulating all cell processes [25], we recently further developed a method to control the store-operated Ca2+ channel specifically by femtosecond laser [26]. These results indicate that the femtosecond laser is able to activate cell processes. Even more remarkably, calcium (Ca2+) plays a major role in many key cellular processes as the universal second messenger in cells, which controls a series of "hallmarks of cancer" and regulates senescence through nuclear factor of activated T cells (NFAT) and NF-κB pathways to balance proliferation and cell-cycle arrest [27–29]. Therefore, it is possible to influence cellular senescence by laser-regulated Ca2+ signaling. In this study, we report that transient stimulation of tumor cells by a tightly focused femtosecond laser induces cell senescence in vitro and in vivo. The stemness and senescence of tumor cells can be directly modulated by a single-time short flash of femtosecond-laser activation noninvasively. We propose this method can work as a photobiomodulation technology for tumors and benefit cancer research and therapy. 2. Methods and materials 2.1 Preparation of PC3 cells Human prostate cancer cell line PC3 cells were cultivated in Roswell Park Memorial Institute-1640 medium containing 10% fetal bovine serum, 2 mM L-glutamine, and 1% (v/v) penicillin/streptomycin at 37 °C with 5% CO2. Cells were seeded on the bottom of 35 mm glass-bottom (0.17 mm thick) dishes, and another 2 mL of medium was added 1 hour later when the cells were adherent.
Cells were cultured for another 12 hours before experiments. Cells were stained with Fluo-4/AM (final concentration 2 µM, Thermo Fisher Scientific, F14202) for 30 minutes at 37 °C to indicate intracellular Ca2+. For in vivo experiments, PC3 cells with stable expression of green fluorescent protein (GFP) were cultured under the same condition before tumor implantation. 2.2 Immunofluorescence microscopy Immunofluorescence microscopy of cells was performed following the protocol below. In brief, cells were fixed with 4% paraformaldehyde (Beyotime) and permeabilized with 0.1% Triton X-100 (Beyotime) to allow the primary antibody to diffuse into cells. After being blocked with 1% bovine serum albumin (Sigma-Aldrich), cells were incubated with primary antibodies at 4 °C overnight followed by incubation with secondary antibodies for 2 hours at room temperature. Immunofluorescence microscopy of frozen tissue sections (10 µm thick) was performed according to the following protocol. Tumor sections were fixed with 4% paraformaldehyde (Beyotime) and permeabilized with 0.3% Triton X-100 (Beyotime). After being blocked with 5% bovine serum albumin (Sigma-Aldrich), sections were incubated with primary antibodies at 4 °C overnight followed by incubation with secondary antibodies for 2 hours at room temperature. Immunofluorescence microscopy relies on the immune reaction of a specific antibody (primary antibody) against the antigen in the sample. After washing out the residual primary antibody, an anti-antibody carrying a fluorophore (secondary antibody) binds the primary antibody such that the targeted protein (antigen) in the sample is fluorescently labeled with ultrahigh specificity. Antibodies for immunofluorescence were as follows (with the dilution ratio): primary antibodies: anti-SOX2 mouse antibody (abcam, ab79351, 1:250), anti-Oct4 rabbit antibody (abcam, ab181557, 1:250), anti-p21 rabbit antibody (abcam, ab109520, 1:1000), anti-CDKN2A/p16INK4a mouse antibody (abcam, ab16123, 1:1000). Secondary antibodies: Goat Anti-Rabbit IgG H&L-Green (Alexa Fluor 488) (Abcam, ab150077, 1:1000) and Goat Anti-Mouse IgG H&L-Red (Alexa Fluor 555) (Abcam, ab150114, 1:1000). 2.3 Animal model of subcutaneous tumor mice Male BALB/c nude mice (8-9 weeks old, weighing approximately 20 g) were purchased from Charles River and maintained in a specific pathogen-free (SPF) environment. To establish the subcutaneous tumor model, 100 µL of cell suspension containing PC3-GFP cells at a concentration of 10^7/mL in phosphate-buffered saline was implanted subcutaneously into the axilla of the right hindlimb of each mouse. From the day of tumor implantation, we measured tumor volume every three days for 3 weeks. The blood vessels within the tumor were stained with PE anti-mouse CD105 antibody (0.1 mg/mL, 100 µL, BioLegend, 120408) by intravenous injection. 2.4 Photostimulation in vitro and in vivo A microscopic system was established for simultaneous photostimulation and microscopy by coupling a femtosecond laser (BlueCut, Menlo) into a confocal microscope (FV1200, Olympus) as shown in Fig. 1(a). The beam of the femtosecond laser at 1030 nm was expanded to match the back aperture of the objective (30×, oil immersion, N.A. = 1.05) for tight focusing and was controlled by the galvo mirrors and a mechanical shutter for scanning in a predesigned region. Fig. 1. Photostimulation of targeted PC3 cells in arbitrary predefined regions by the femtosecond laser. (a) Optical design for femtosecond-laser stimulation of cells.
The photostimulation was accomplished by the predefined scanning of the femtosecond laser (1030 nm, 220 fs, 1 MHz) controlled by the shutter and galvo mirrors. The stimulation region could be defined as any arbitrary polygon. (b) The Ca2+ dynamics in the stimulated cells. Left: the baseline, peak, and final Ca2+ level in photostimulated cells (left of the left dashed line) and control (right of the right dashed line). Green fluorescence: Fluo-4/AM. Right: the kinetic change of the Ca2+ levels after photostimulation at 5.25 mW (L, laser n = 64 cells from 3 fields, control n = 38 cells from 3 fields), 10.5 mW (M, laser n = 41 cells from 3 fields, control n = 60 cells from 3 fields), and 21 mW (H, laser n = 44 cells from 3 fields, control n = 63 cells from 3 fields). Zoom in: the damaged cells with little fluorescence of Fluo-4 and bright spots of bubbles on the membrane. Inset: the definition of amplitude, peak time, and decay time of the Ca2+ pattern. The violin plots of amplitude (ΔF/F0) (c), peak time (d) and decay time (e) of the Ca2+ kinetic dynamics at different photostimulation modes. (f) The ratio of cellular Ca2+ responses to different photostimulations. Data represent mean ± SEM. Scale bar: 100 µm. For in vitro photostimulation, the cells adherent on petri dishes (the bottom was a 0.17 mm glass slide) were observed by confocal microscopy, and an arbitrary polygon region in the field of view (FOV) was randomly selected and subjected to femtosecond-laser scanning point by point a single time. In this scheme, the photostimulation was defined as a single frame of microscopy, which could be inserted in any time slot of a predefined confocal microscopy sequence for real-time continuous observation. In the Ca2+ study, cells were stimulated by laser scanning in a predefined area (180 × 420 µm2) in the FOV (420 × 420 µm2) for a single frame (220 × 512 pixels, 2 µs/pixel, 0.8 s) at 5.25 mW (L), 10.5 mW (M), and 21 mW (H) respectively. The average Ca2+ level was acquired by manually measuring the fluorescence intensity of each individual cell according to the cell morphology indicated by intracellular Ca2+ and Fluo-4/AM. In the experiments of senescence and stemness, cells were stimulated by laser scanning over the whole FOV (420 × 420 µm2) for a single frame (512 × 512 pixels, 2 µs/pixel, 1.1 s) at 10.5 mW (M). The total photostimulation area per dish was 5 mm2. For in vivo photostimulation, the mice were anaesthetized with isoflurane on the microscope stage. The implanted subcutaneous tumor of PC3 cells with genetically labeled GFP was observed by confocal microscopy. The regions of the tumor in horizontal X-Y planes were randomly selected and then photostimulated at 3 different depths of 100 µm, 150 µm and 200 µm under the epidermis, respectively. In the senescence and stemness experiments, the tumors were stimulated in succession frame by frame (420 µm × 420 µm, 512 × 512 pixels, 4 µs/pixel, 2.2 s/frame) at 18 mW in a region of 16 mm2 per layer. In the study of tumor development, the tumors were stimulated in succession frame by frame (420 µm × 420 µm, 512 × 512 pixels, 4 µs/pixel, 2.2 s/frame) at 18 mW in a region of 30 mm2 per layer. The laser propagation efficiency in the tumor tissue was estimated by the absorption and scattering model of multilayer cells and direct measurement of the laser transmission efficiency of the skin.
After 200 µm tissue attenuation, the total transmission efficiency was calculated as around 0.55 ∼ 0.62. To compensate for the attenuation of laser propagation in tissue, the laser power was increased to 18 mW, around two times that used in the in vitro case. 3.1 Ca2+ kinetic patterns in PC3 cells by femtosecond-laser photostimulation We established a system to simultaneously observe and stimulate PC3 cells by coupling a femtosecond laser (1030 nm, 220 fs, 1 MHz) to a confocal microscope and focusing it to a submicron spot (diameter < 1 µm). The photons at such a long wavelength are generally absorbed by water and, due to their low single-photon energy, produce little photochemical effect [30]. The ultrashort pulse width and relatively ultra-long pulse interval of the laser prevent thermal deposition and accumulation in cells [30]. The photostimulation was accomplished by a single-frame scan of the femtosecond laser in a predefined region controlled by a mechanical shutter synchronized with the galvo mirrors (Fig. 1(a)). In this setup, the total FOV was 420 × 420 µm2, mapped to 512 × 512 pixels, and scanned by the laser at 2 µs/pixel for microscopy. The photostimulation region could be defined as any arbitrary polygon area in the FOV, with the galvo mirrors coordinating accurate scanning inside it. The total time of photostimulation was controlled by a mechanical shutter and usually defined as an integer multiple of the frame time. The incident laser energy on each cell was thus dependent on the projection area of the cells adherent on the bottom slides in petri dishes. The cells were observed continuously by time-lapse confocal microscopy and the photostimulation was performed at a predefined time slot. To show that the femtosecond laser could stimulate cells effectively in a controllable manner, the Ca2+ response of cells, as a readout of the photostimulation, was investigated by laser scanning at different powers, as shown in Fig. 1(b). The cells were stained with Fluo-4/AM to indicate the Ca2+ kinetics and then stimulated by laser scanning in a predefined area (180 × 420 µm2) in the FOV for a single frame (220 × 512 pixels, 0.8 s) at 5.25 mW (L), 10.5 mW (M), and 21 mW (H) respectively. The intracellular Ca2+ response was observed immediately in the stimulated cells, with distinct kinetic patterns. The cellular Ca2+ patterns were quantified as shown in Fig. 1(c-e). We compared the amplitude, peak time, and decay time, defined as in Fig. 1(b), under those three conditions. The fluorescence of most cells stimulated at mode H presented a rapid and significant increase but decayed very fast, suggesting extreme damage to cells such that the integrity and permeability of the plasma membrane were lost, Fluo-4 molecules leaked out, and small bubbles could be found in those cells (the inset in Fig. 1(b)). The violin plots of peak time and decay time of the Ca2+ response indicate that their variation at mode L was quite large compared with that at the other two modes (Fig. 1(d) and (e)). Therefore, at 600 s, some cells in the L group still maintained bright Ca2+ signals. Consistently, the efficiency of photostimulation at mode L, defined as the ratio of cells with effective Ca2+ responses to the total number of photostimulated cells, was the lowest, as shown in Fig. 1(f). By contrast, mode M presented relatively uniform Ca2+ responses and the optimal balance between photostimulation efficiency and cell viability.
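The amplitude (ΔF/F0), peak time, and decay time used above are defined graphically in Fig. 1(b). The sketch below shows one plausible way to extract such metrics from a per-cell Fluo-4 trace; the function name, the toy trace, and the half-decay definition of decay time are my own assumptions rather than the authors' exact procedure.

```python
import numpy as np

def ca_trace_metrics(t, f, stim_time, n_baseline=10):
    """Quantify a per-cell Fluo-4 trace after photostimulation.

    Returns (amplitude, peak_time, decay_time): amplitude as dF/F0, peak time
    measured from the stimulation time, and decay time measured from the peak
    until the response falls to half of its peak rise (this half-decay
    definition is an assumption; the paper defines the quantities graphically
    in Fig. 1(b)).
    """
    t, f = np.asarray(t, float), np.asarray(f, float)
    f0 = f[:n_baseline].mean()                 # pre-stimulation baseline F0
    dff = (f - f0) / f0                        # normalized response dF/F0
    i_pk = int(np.argmax(dff))
    amplitude = dff[i_pk]
    peak_time = t[i_pk] - stim_time
    below = np.where(dff[i_pk:] <= amplitude / 2)[0]
    decay_time = t[i_pk + below[0]] - t[i_pk] if below.size else np.nan
    return amplitude, peak_time, decay_time

# toy trace: flat baseline, rapid rise at t = 20 s, slow exponential decay
t = np.arange(0, 600, 1.1)                     # ~1.1 s confocal frame interval
f = 100 + 80 * (t >= 20) * np.exp(-np.clip(t - 20, 0, None) / 120)
print(ca_trace_metrics(t, f, stim_time=20))
```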
In this regard, we used photostimulation at mode M for all following experiments. 3.2 Senescence of cells induced by femtosecond-laser stimulation We investigated the effect of femtosecond-laser stimulation (at mode M) on cell senescence. It should be noted that cell senescence is associated with stem-cell functions, collectively referred to as 'stemness', which generate the potential to develop highly aggressive tumors [3]. Indeed, cell stemness and senescence seem to be co-regulated by overlapping signaling networks [31]. Therefore, PC3 cells were photostimulated at Day 0, and both senescence and stemness were measured at Day 0, 1, and 3, respectively, as designed in Fig. 2(a). Transcription factors Oct4 and Sox2, which together regulate genes for the self-renewal and pluripotency of embryonic stem cells (ESC) [32], are regarded as cancer stem cell markers [33]. Hence, we used the expression levels of Oct4 and Sox2 to indicate the stemness of PC3 cells. The Day 0 group (3 fields × 2 dishes of cells) without photostimulation provided the immunofluorescence baseline of Oct4 and Sox2. The photostimulated cells (Laser group) and control cells (Control) in the same dish (but > 1 mm away from each other) were analyzed at Day 1 (3 fields × 3 dishes of cells) and Day 3 (3 fields × 3 dishes of cells) by immunofluorescence microscopy respectively. Oct4 and Sox2 in the photostimulated cells did not change at Day 1 compared with Day 0 or with the control at Day 1 (Fig. 2(b-e)). At Day 3, both Oct4 and Sox2 in the control and stimulated cells exhibited significant downregulation compared with Day 0. But no significant difference in Oct4 or Sox2 could be found between the control and stimulated cells at Day 3 (Fig. 2(c) and (e)). Therefore, the stemness of PC3 cells was not influenced by photostimulation in vitro. Fig. 2. The level of stemness markers after femtosecond-laser stimulation. (a) The experimental design. (b) Representative immunofluorescence images of Oct4 (green) in the cells at Day 0 and the control and femtosecond-laser stimulated cells at Day 1 and 3 respectively. (c) The quantified Oct4 level from (b, Day 0: n = 3 fields × 2 dishes of cells, control and laser: n = 3 fields × 3 dishes of cells per group, P values were calculated by two-tailed t-test). (d) Representative immunofluorescence images of Sox2 (red) in the cells at Day 0 and the control and femtosecond-laser stimulated cells at Day 1 and 3 respectively. (e) The quantified Sox2 level from (d, Day 0: n = 3 fields × 2 dishes of cells, control and laser: n = 3 fields × 3 dishes of cells per group, P values were calculated by two-tailed t-test). Blue: DAPI fluorescence from nucleus. Data represent mean ± SEM. * P < 0.05. ** P < 0.01. *** P < 0.001. Scale bar: 100 µm. We then measured the levels of two typical senescence markers, the cyclin-dependent kinase inhibitors p21 and p16, by immunofluorescence microscopy; these are cell-cycle inhibitors and anti-proliferative effectors [34], and therefore work as main tumor suppressor proteins (together with p53) to avert tumor formation [35,36]. As shown in Fig. 3(a) and (b), the p21 level in the photostimulated cells was significantly higher than that of the control and that at Day 0. But the p21 level at Day 3 did not exhibit a further increase compared with Day 1, since cells with different cell fates exhibited distinct p21 dynamics.
p21 was upregulated and maintained at high levels in the senescent cell subpopulation [37]. We found only a slight upregulation of p16 at Day 3 with no significant difference compared with Day 0, probably because the upregulation and accumulation of p16 occur later than those of p21 [38]. Taken together, the photostimulation could induce senescence of tumor cells in vitro. Fig. 3. The level of senescence markers after femtosecond-laser stimulation. (a) Representative immunofluorescence images of p21 (green) in the cells at Day 0 and the control and femtosecond-laser stimulated cells at Day 1 and 3 respectively. (b) The quantified p21 level from (a, Day 0: n = 3 fields × 2 dishes of cells, control and laser: n = 3 fields × 3 dishes of cells per group, P values were calculated by two-tailed t-test). (c) Representative immunofluorescence images of p16 (red) in the cells at Day 0 and the control and femtosecond-laser stimulated cells at Day 1 and 3 respectively. (d) The quantified p16 level from (c, Day 0: n = 3 fields × 2 dishes of cells, control and laser: n = 3 fields × 3 dishes of cells per group, P values were calculated by two-tailed t-test). Blue: DAPI fluorescence from nucleus. Data represent mean ± SEM. * P < 0.05. ** P < 0.01. *** P < 0.001. Scale bar: 100 µm. 3.3 Senescence of tumor cells induced by a femtosecond laser in vivo We further assessed the senescence of tumor cells by femtosecond-laser stimulation in vivo. The subcutaneous tumor mouse model was established by implanting PC3-GFP cells (PC3 cells genetically labeled with GFP) in the armpit of the right hind leg of mice. After one week, the tumors reached a measurable size (about 110 mm3) and were stimulated by the femtosecond laser at 18 mW at Day 0. Similarly, tumor tissues in the Day 0 group (15 sections from 5 mice) without photostimulation provided the immunofluorescence baseline. The tumors that underwent photostimulation were analyzed at Day 3 (11 sections (Oct4) or 13 sections (Sox2) from 4 mice) as shown in Fig. 4(a). Fig. 4. The level of stemness markers in tumor tissue after femtosecond-laser stimulation in vivo. (a) The experimental scheme. The tumor was implanted 1 week before Day 0. The tumor biopsy was performed at Day 3 and, before laser stimulation, at Day 0 for immunofluorescence microscopy. (b) Left panel: the image of the whole tumor section. Scale bar: 1 mm. Dashed line: the photostimulation region by femtosecond-laser scanning. White boxes: the original areas for the zoom-in images in the right panel. Right panel: representative immunofluorescence images of Oct4 (red) in the tumor frozen sections of the tumors at Day 0 and the control and femtosecond-laser stimulated tumors at Day 3. Blue: DAPI fluorescence from nucleus. Green: GFP fluorescence from PC3 cells. Scale bar: 20 µm. (c) The quantified Oct4 level from (b, Day 3 n = 11 sections from 4 mice, Day 0 n = 15 sections from 5 mice, P values were calculated by one-tailed t-test). (d) Left panel: the image of the whole tumor section. Scale bar: 1 mm. Right panel: representative zoom-in immunofluorescence images of Sox2 (red) in the tumor frozen sections of the tumors at Day 0 and the control and femtosecond-laser stimulated tumors at Day 3. Blue: DAPI fluorescence from nucleus. Green: GFP fluorescence from PC3 cells. Scale bar: 20 µm. (e) The quantified Sox2 level from (d, Day 3 n = 13 sections from 4 mice, Day 0 n = 15 sections from 5 mice, P values were calculated by one-tailed t-test). Data represent mean ± SEM.
* P < 0.05. ** P < 0.01. We identified the location of tumors by the green fluorescence of PC3-GFP cells and stimulated cells at 100 µm, 150 µm and 200 µm under the epidermis in succession frame by frame (420 µm × 420 µm, 2.2 s/frame) in a region of 16 mm2 per layer. We measured the Oct4 and Sox2 levels in the photostimulated and control (without photostimulation) regions at the same depth. The Oct4 level of the tumor tissue after photostimulation at Day 3 showed significant upregulation compared with that of the control (Fig. 4(b) and (c)). Similarly, the Sox2 level was significantly higher than that of the control (but showed no significant difference compared with that at Day 0), as in Fig. 4(d) and (e). The cells more than 1 mm deep under the photostimulation plane still presented significantly different Oct4 and Sox2 levels from those in the control. This in vivo result was different from that in vitro, probably due to the in vivo microenvironment of the tumor, in which the PC3 cells could generate stemness for epithelial-mesenchymal transition after the activation by photostimulation. We then studied the senescence of tumor cells in vivo in this mouse model. The sections of the tumor tissue at Day 3 were immunostained with p21 and p16 respectively. As shown in Fig. 5(a-d), the p21 and p16 levels of photostimulated tumor cells were both significantly higher than those of the control. Notably, the tumor tissue more than 1 mm deep under the photostimulation plane showed overall significantly higher p21 and p16 signals than the control (Fig. 5(a) and (c)). Therefore, the senescence of tumor cells could be induced by the transient photostimulation in vivo. Femtosecond-laser scanning at superficial layers of the tumor tissue could induce significant cell senescence at a depth of more than 1 mm in the tumor. We quantified the p21 and p16 levels in the tumor tissue along the depth. As shown in Fig. 5(e) and Supplement 1, at 1 mm, the p21 and p16 levels of cells under the photostimulated region were still significantly higher than those of the control, and the difference remained significant at 2 mm. When the laser power of photostimulation was tuned to 15 mW and the stimulation duration to 1.1 s or 2.2 s, the p21 and p16 levels changed accordingly (Supplement 1), further supporting a photostimulation effect. We found the effective depth of photostimulation could reach 1.2 mm, defining it as the full width at half maximum of the p16 and p21 levels. Even more noteworthy is that senescent cells can acquire features of stemness partially through activation of Wnt to produce tumor-initiating cells (e.g., cancer stem cells) [39], consistent with the result of stemness upregulation by photostimulation in Fig. 4. Fig. 5. The level of senescence markers in tumor tissue after femtosecond-laser stimulation in vivo. (a) Left panel: the image of the whole tumor section. Scale bar: 1 mm. Dashed line: the photostimulation region by femtosecond-laser scanning. White boxes: the original areas for the zoom-in images in the right panel. Right panel: representative immunofluorescence images of p21 (red) in the tumor frozen sections of the tumors at Day 0 and the control and femtosecond-laser stimulated tumors at Day 3. Blue: DAPI fluorescence from nucleus. Green: GFP fluorescence from PC3 cells. Scale bar: 20 µm. (b) The quantified p21 level from (a, Day 3 n = 11 sections from 4 mice, Day 0 n = 15 sections from 5 mice). (c) Left panel: the image of the whole tumor section. Scale bar: 1 mm.
Right panel: representative zoom-in immunofluorescence images of p16 (red) in the tumor frozen sections of the tumors at Day 0 and the control and femtosecond-laser stimulated tumors at Day 3. Scale bar: 20 µm. (d) The quantified p16 level from (c). (e) The quantified p21 and p16 levels from (a and c) respectively versus depth. Data represent mean ± SEM. * P < 0.05. ** P < 0.01. *** P < 0.001. P values were calculated by one-tailed t-test (c and d) and two-tailed paired t-test (e). We finally verified the general influence of laser-induced senescence on tumor development (6 mice per group). The subcutaneous PC3-GFP tumors were photostimulated by the femtosecond laser at 18 mW in a region of 30 mm2 at 100 µm, 150 µm and 200 µm below the epidermis, a single time each, at Day 0 (1 week after tumor implantation). After that, the tumor volume was recorded and compared with that of the control during development. As shown in Fig. 6(a), the tumors that underwent photostimulation showed significantly slower development and smaller volume at Day 21 than the control, consistent with the senescence effect. The tumors were resected and observed at Day 21 and presented a loose distribution of blood vessels inside (indicated by CD105), as shown in Fig. 6(b). We quantified the general distribution and morphology of blood vessels in the tumor tissue (Fig. 6(c) and Supplement 1). The total vessel area in the photostimulated tumor was a little smaller than that of the control (P = 0.0538), and the mean area of continuous vessels in the photostimulated tumors was significantly smaller (P = 0.0348), indicating a loose and sparse distribution of vessels. We tested the generation of reactive oxygen species (ROS) in tumor cells after photostimulation, the general laser photodamage that could influence tumor growth. As in Fig. 6(d), the PC3 cells did not show any ROS generation after photostimulation. This result further suggested that the tumor growth was probably suppressed by laser-induced senescence. Fig. 6. The tumor development after photostimulation. (a) The tumor volume after photostimulation (n = 6 tumors per group, P = 0.00043, two-way analysis of variance (ANOVA)). (b) The sections of the tumors with (right) and without (left) photostimulation. Red fluorescence: CD105 to indicate blood vessels. Green fluorescence: GFP from PC3 cells. Arrows: blood vessels in tumor tissue. Scale bar: 100 µm. (c) The quantified total vessel area and mean area of continuous vessels in each frame. (d) The ROS level indicated by H2DCFDA after photostimulation at L and M modes respectively. Scale bar: 100 µm. Data represent mean ± SEM. P values were calculated by one-tailed t-test. Cellular senescence is implemented in response to severe cellular insults. It is a failsafe program that protects organismic integrity by excluding potentially harmful cells from further expansion and also has a physiological function in tissue homeostasis during organ development [40]. The relationship between stemness and senescence of tumor cells is quite complex. Basically, cell senescence restrains tumor development, whereas stemness maintains the tumor's potential for development, heterogeneity (defense against immunity), and metastasis [41]. However, in some recent studies, senescence of tumor cells can also induce or promote stemness [42]. In this study, we demonstrated that the stemness and senescence of tumor cells could both be influenced by a single-time, short, transient photostimulation by a tightly focused femtosecond laser.
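The growth comparison in Fig. 6(a) above (tumor volume measured every three days for three weeks, n = 6 tumors per group, two-way ANOVA) could be organized as in the sketch below. The data here are made up for illustration, and the paper does not specify the exact ANOVA layout (for example, whether day was treated as a repeated measure), so this is only one plausible arrangement rather than the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
days = np.arange(0, 22, 3)                       # measured every three days for 3 weeks

rows = []
for group, rate in [("control", 1.00), ("laser", 0.80)]:    # made-up growth rates
    for mouse in range(6):                                   # n = 6 tumors per group
        v0 = 110 + rng.normal(0, 10)                         # ~110 mm3 at Day 0
        for d in days:
            v = v0 * np.exp(rate * d / 10) + rng.normal(0, 15)
            rows.append({"group": group, "mouse": f"{group}{mouse}",
                         "day": d, "volume": v})
df = pd.DataFrame(rows)

# two-way ANOVA with group and day as factors, analogous to Fig. 6(a);
# a repeated-measures design would model 'mouse' explicitly instead.
model = ols("volume ~ C(group) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```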
Here, the tumor tissue presented significant upregulation of both stemness and senescence after photostimulation in vivo. This is plausible considering the overlap of the molecular signaling that regulates senescence and stem-cell functions. Key signaling components of the cellular senescence machinery, such as p16, p21 and p53, have been reported to operate as critical regulators of stem-cell functions [43]. The stemness enhancement in vivo might be involved in the initiation of epithelial-mesenchymal transition [44]. However, only the in vivo microenvironment of the tumor can provide cells with senescence-associated stemness that enables them to escape from cell-cycle blockade. Here we did not further measure the p16 and p21 levels after Day 3, mainly because the fast proliferation of tumor cells mixed the photostimulated cells with newly generated cells and thus influenced the measurement of laser-induced upregulation of senescence markers. The photostimulated region could hardly be localized again. In the in vitro cells, no stemness upregulation could be found after photostimulation. Tumor senescence can suppress tumor development. In this study, photostimulation for a short duration in a very localized small region at the surface of the tumor (100-200 µm deep) induced cell senescence at a depth of 1.2 mm and showed significant suppression of tumor development. Even deeper cells still presented senescence upregulation (Fig. 5(e)). Even though tumor heterogeneity became more and more significant at the late stage, as a result of the stemness of tumor cells, the overall suppression of tumor development by photostimulation-induced cellular senescence could still be found in this subcutaneous tumor study. The laser-induced simultaneous upregulation of senescence and stemness did not conflict. The stemness of the tumor cells might initiate mutations, allowing the tumor to grow again and migrate to other organs. But still, the senescence of tumor cells is a quite effective and safe method for tumor therapy if the stemness can be controlled. Senescence has been shown to cancel the protumorigenic potential of cancerous lesions and contribute to the outcome of anticancer chemotherapy in vivo [45]. These findings have profound implications for cancer therapy and provide new insights into the mechanism of cancer cell plasticity [46]. Photostimulation by laser is always limited by the penetration depth in in vivo applications. This physical limitation can hardly be broken. Here we used a femtosecond laser at 1030 nm, which theoretically offers only a limited improvement in the physical penetration depth in biological tissue. However, as shown in Fig. 5, although the femtosecond laser stimulated cells only around 100-200 µm below the skin surface, the cells more than 1 mm deep partially showed significant upregulation of senescence markers. This might be because intercellular Ca2+ propagation initiates senescence in the surrounding cells. Fortunately, tumor tissue, with abundant mesenchyme and gap junctions, is quite suitable for Ca2+ signaling propagation. In this study, we used cellular Ca2+ kinetics as the readout of the cellular response to photostimulation, which indicated the representative molecular changes induced in cells by photostimulation and the stress level of the photostimulation. Cell senescence and stemness have been found to be regulated by intracellular Ca2+ signaling, the universal second messenger. We performed immunofluorescence microscopy to measure those markers.
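A quick arithmetic check of the dose compensation described in the Methods above: with the measured transmission of roughly 0.55-0.62 through the first 200 µm of tissue, raising the incident power to 18 mW delivers about 9.9-11.2 mW at the stimulation plane, which brackets the 10.5 mW (mode M) dose used in vitro. The sketch below reproduces this arithmetic; the single-exponential attenuation length at the end is my own rough extrapolation, not the multilayer absorption/scattering model plus skin measurement that the authors actually used.

```python
import math

incident_mw = 18.0
transmission_200um = (0.55, 0.62)            # transmission range reported in the Methods

# power actually delivered at the ~200 um stimulation plane
delivered = [incident_mw * t for t in transmission_200um]
print(f"delivered power at ~200 um: {delivered[0]:.1f}-{delivered[1]:.1f} mW")
# -> about 9.9-11.2 mW, bracketing the 10.5 mW mode-M dose used in vitro

# a crude single-exponential (Beer-Lambert-like) effective attenuation length
# consistent with the reported transmission; only an illustrative assumption
eff_length_um = [-200.0 / math.log(t) for t in transmission_200um]
print(f"effective 1/e attenuation length: {eff_length_um[0]:.0f}-{eff_length_um[1]:.0f} um")
```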
The photostimulation at L mode could hardly initiate significant cellular senescence in vivo. The efficiency would be quite low. In contrast, the H mode might still induce senescence in cells surrounding the laser scanning lines with a moderate efficiency. But the cells at the laser focus would be greatly damaged and even dead by the high laser power, without upregulation of senescence. The M mode in this study is a relatively optimal choice. However, there might exist some powers that could lead a better photostimulation result. In this study, we report a biophotomodulation method to induce cellular senescence by a fast single-time femtosecond laser photostimulation in vitro and in vivo. The photostimulation depth could achieve 1.2 mm in vivo by photostimulation in the shallow layer at 200 µm. Cell stemness was also involved and influenced. The growth of subcutaneous tumor showed suppression after a single-time photostimulation. Our results thus provide a powerful noninvasive tool for the research of senescence of tumor cells and hold good potential for tumor therapy. National Natural Science Foundation of China (61975118, 62022056). Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Supplemental document See Supplement 1 for supporting content. 1. T. Kuilman, C. Michaloglou, W. J. Mooi, and D. S. Peeper, "The essence of senescence," Genes Dev. 24(22), 2463–2479 (2010). [CrossRef] 2. D. Muñoz-Espín and M. Serrano, "Cellular senescence: from physiology to pathology," Nat. Rev. Mol. Cell Biol. 15(7), 482–496 (2014). [CrossRef] 3. M. Milanovic, D. N. Fan, D. Belenki, J. H. M. Däbritz, Z. Zhao, Y. Yu, J. R. Dörr, L. Dimitrova, D. Lenze, and I. A. M. Barbosa, "Senescence-associated reprogramming promotes cancer stemness," Nature 553(7686), 96–100 (2018). [CrossRef] 4. M. Milanovic, Y. Yu, and C. A. Schmitt, "The senescence–stemness alliance–a cancer-hijacked regeneration principle," Trends Cell Biol. 28(12), 1049–1061 (2018). [CrossRef] 5. K. Deisseroth, "Optogenetics," Nat. Methods 8(1), 26–29 (2011). [CrossRef] 6. X. Zhou, J. Wang, J. Chen, Y. Qi, D. Nan, L. Jin, X. Qian, X. Wang, Q. Chen, and X. Liu, "Optogenetic control of epithelial-mesenchymal transition in cancer cells," Sci. Rep. 8(1), 14098 (2018). [CrossRef] 7. Y. Hagihara, A. Sakamoto, T. Tokuda, T. Yamashita, S. Ikemoto, A. Kimura, M. Haruta, K. Sasagawa, J. Ohta, and K. Takayama, "Photoactivatable oncolytic adenovirus for optogenetic cancer therapy," Cell Death Dis. 11(7), 570–579 (2020). [CrossRef] 8. J. C. Williams and T. Denison, "From optogenetic technologies to neuromodulation therapies," Sci. Transl. Med. 5(177), 177ps176 (2013). [CrossRef] 9. C. Towne and K. R. Thompson, "Overview on research and clinical applications of optogenetics," Current protocols in pharmacology 75(1), 11.19.11 (2016). [CrossRef] 10. M. R. Hamblin, "Mechanisms and applications of the anti-inflammatory effects of photobiomodulation," AIMS Biophys. 4(3), 337–361 (2017). [CrossRef] 11. P. Avci, A. Gupta, M. Sadasivam, D. Vecchio, Z. Pam, N. Pam, and M. R. Hamblin, "Low-level laser (light) therapy (LLLT) in skin: stimulating, healing, restoring," in Seminars in Cutaneous Medicine and Surgery (NIH Public Access, 2013), 41. 12. L. Brosseau, G. Wells, S. Marchand, I. Gaboury, B. Stokes, M. Morin, L. Casimiro, K. Yonge, and P. 
Tugwell, "Randomized controlled trial on low level laser therapy (LLLT) in the treatment of osteoarthritis (OA) of the hand," Lasers Surg. Med. 36(3), 210–219 (2005). [CrossRef] 13. C. Ramos Silva, F. V. Cabral, C. F. M. de Camargo, S. C. Núñez, T. Mateus Yoshimura, A. C. de Lima Luna, D. A. Maria, and M. S. Ribeiro, "Exploring the effects of low-level laser therapy on fibroblasts and tumor cells following gamma radiation exposure," J. Biophotonics 9(11-12), 1157–1166 (2016). [CrossRef] 14. F. M. de Lima, J. M. Bjordal, R. Albertini, F. V. Santos, and F. Aimbire, "Low-level laser therapy (LLLT) attenuates RhoA mRNA expression in the rat bronchi smooth muscle exposed to tumor necrosis factor-α," Lasers Med. Sci. 25(5), 661–668 (2010). [CrossRef] 15. L. Lin, L. Liu, B. Zhao, R. Xie, W. Lin, H. Li, Y. Li, M. Shi, Y.-G. Chen, and T. A. Springer, "Carbon nanotube-assisted optical activation of TGF-β signalling by near-infrared light," Nat. Nanotechnol. 10(5), 465–471 (2015). [CrossRef] 16. K. Zhang, L. Duan, Q. Ong, Z. Lin, P. M. Varman, K. Sung, and B. Cui, "Light-mediated kinetic control reveals the temporal effect of the Raf/MEK/ERK pathway in PC12 cell neurite outgrowth," PLoS One 9(3), e92917 (2014). [CrossRef] 17. R. Albertini, A. Villaverde, F. Aimbire, M. Salgado, J. Bjordal, L. Alves, E. Munin, and M. Costa, "Anti-inflammatory effects of low-level laser therapy (LLLT) with two different red wavelengths (660 nm and 684 nm) in carrageenan-induced rat paw edema," J. Photochem. Photobiol., B 89(1), 50–55 (2007). [CrossRef] 18. J. M. Bjordal, R.-J. Bensadoun, J. Tunèr, L. Frigo, K. Gjerde, and R. A. Lopes-Martins, "A systematic review with meta-analysis of the effect of low-level laser therapy (LLLT) in cancer therapy-induced oral mucositis," Supportive Care in Cancer 19(8), 1069–1077 (2011). [CrossRef] 19. M. Myakishev-Rempel, I. Stadler, P. Brondon, D. R. Axe, M. Friedman, F. B. Nardia, and R. Lanzafame, "A preliminary study of the safety of red light phototherapy of tissues harboring cancer," Photomed. Laser Surg. 30(9), 551–558 (2012). [CrossRef] 20. V. Gomez Godinez, V. Morar, C. Carmona, Y. Gu, K. Sung, L. Z. Shi, C. Wu, D. Preece, and M. W. Berns, "Laser-induced shockwave (LIS) to study neuronal Ca2+responses," Front. Bioeng. Biotechnol. 9, 97 (2021). [CrossRef] 21. H. Hirase, V. Nikolenko, J. H. Goldberg, and R. Yuste, "Multiphoton stimulation of neurons," J. Neurobiol. 51(3), 237–247 (2002). [CrossRef] 22. Y. Zhao, Y. Zhang, W. Zhou, X. Liu, S. Zeng, and Q. Luo, "Characteristics of calcium signaling in astrocytes induced by photostimulation with femtosecond laser," J. Biomed. Opt. 15(3), 035001 (2010). [CrossRef] 23. H. He, S. Li, S. Wang, M. Hu, Y. Cao, and C. Wang, "Manipulation of cellular light from green fluorescent protein by a femtosecond laser," Nat. Photonics 6(10), 651–656 (2012). [CrossRef] 24. S. Wang, Y. Liu, D. Zhang, S.-C. Chen, S.-K. Kong, M. Hu, Y. Cao, and H. He, "Photoactivation of extracellular-signal-regulated kinase signaling in target cells by femtosecond laser," Laser Photonics Rev. 12(7), 1700137 (2018). [CrossRef] 25. J. McCormack and R. Denton, "Ca2+ as a second messenger within mitochondria," Trends Biochem. Sci. 11(6), 258–262 (1986). [CrossRef] 26. P. Cheng, X. Tian, W. Tang, J. Cheng, J. Bao, H. Wang, S. Zheng, Y. Wang, X. Wei, and T. Chen, "Direct control of store-operated calcium channels by ultrafast laser," Cell Res. 31(7), 758–772 (2021). [CrossRef] 27. V. Farfariello, O. Iamshanova, E. Germain, I. Fliniaux, and N. 
Prevarskaya, "Calcium homeostasis in cancer: a focus on senescence," Biochim. Biophys. Acta, Mol. Cell Res. 1853(9), 1974–1979 (2015). [CrossRef] 28. N. Martin and D. Bernard, "Calcium signaling and cellular senescence," Cell Calcium 70, 16–23 (2018). [CrossRef] 29. C. Wiel, H. Lallet-Daher, D. Gitenay, B. Gras, B. Le Calvé, A. Augert, M. Ferrand, N. Prevarskaya, H. Simonnet, and D. Vindrieux, "Endoplasmic reticulum calcium release through ITPR2 channels leads to mitochondrial calcium accumulation and senescence," Nat. Commun. 5(1), 3792 (2014). [CrossRef] 30. A. Vogel, J. Noack, G. Hüttman, and G. Paltauf, "Mechanisms of femtosecond laser nanosurgery of cells and tissues," Appl. Phys. B 81(8), 1015–1047 (2005). [CrossRef] 31. Z. Dou and S. L. Berger, "Senescence elicits stemness: a surprising mechanism for cancer relapse," Cell Metab. 27(4), 710–711 (2018). [CrossRef] 32. A. Rizzino, "Concise review: The SOX2-OCT4 connection: Critical players in a much larger interdependent network integrated at multiple levels," Stem Cells 31(6), 1033–1039 (2013). [CrossRef] 33. M. Robinson, S. F. Gilbert, J. A. Waters, O. Lujano-Olazaba, J. Lara, L. J. Alexander, S. E. Green, G. A. Burkeen, O. Patrus, and Z. Sarwar, "Characterization of SOX2, OCT4 and NANOG in ovarian cancer tumor-initiating cells," Cancers 13(2), 262 (2021). [CrossRef] 34. M. A. Al-Mohanna, H. H. Al-Khalaf, N. Al-Yousef, and A. Aboussekhra, "The p16 INK4a tumor suppressor controls p21 WAF1 induction in response to ultraviolet light," Nucleic Acids Res. 35(1), 223–233 (2006). [CrossRef] 35. B. Shamloo and S. Usluer, "p21 in cancer research," Cancers 11(8), 1178 (2019). [CrossRef] 36. A. Okuma, A. Hanyu, S. Watanabe, and E. Hara, "p16 Ink4a and p21 Cip1/Waf1 promote tumour growth by enhancing myeloid-derived suppressor cells chemotaxis," Nat. Commun. 8(1), 2050 (2017). [CrossRef] 37. C.-H. Hsu, S. J. Altschuler, and L. F. Wu, "Patterns of early p21 dynamics determine proliferation-senescence cell fate after chemotherapy," Cell 178(2), 361–373.e12 (2019). [CrossRef] 38. G. Stein, L. F. Drullinger, A. Soulard, and V. Dulic, "Differential roles for cyclin-dependent kinase inhibitors p21 and p16 in the mechanisms of senescence and differentiation in human fibroblasts," Mol Cell Biol 19(3), 2109–2117 (1999). [CrossRef] 39. N. Azazmeh, B. Assouline, E. Winter, S. Ruppo, Y. Nevo, A. Maly, K. Meir, A. K. Witkiewicz, J. Cohen, and S. V. Rizou, "Chronic expression of p16 INK4a in the epidermis induces Wnt-mediated hyperplasia and promotes tumor initiation," Nat. Commun. 11(1), 2711–2713 (2020). [CrossRef] 40. P. V. Vasileiou, K. Evangelou, K. Vlasis, G. Fildisis, M. I. Panayiotidis, E. Chronopoulos, P.-G. Passias, M. Kouloukoussa, V. G. Gorgoulis, and S. Havaki, "Mitochondrial homeostasis and cellular senescence," Cells 8(7), 686 (2019). [CrossRef] 41. H. Clevers, "The cancer stem cell: premises, promises and challenges," Nat. Med. 17(3), 313–319 (2011). [CrossRef] 42. S. Lee and C. A. Schmitt, "The dynamic nature of senescence in cancer," Nat. Cell Biol. 21(1), 94–101 (2019). [CrossRef] 43. R. Zhang, H. Li, S. Zhang, Y. Zhang, N. Wang, H. Zhou, H. He, G. Hu, T.-C. Zhang, and W. Ma, "RXRα provokes tumor suppression through p53/p21/p16 and PI3K-AKT signaling pathways during stem cell differentiation and in cancer cells," Cell Death Dis. 9(1), 1–13 (2018). [CrossRef] 44. T. Brabletz, R. Kalluri, M. A. Nieto, and R. A. Weinberg, "EMT in cancer," Nat. Rev. Cancer 18(2), 128–134 (2018). [CrossRef] 45. A. Toso, A. Revandkar, D. Di Mitri, I. Guccini, M. 
Proietti, M. Sarti, S. Pinton, J. Zhang, M. Kalathur, and G. Civenni, "Enhancing chemotherapy efficacy in Pten-deficient prostate tumors by activating the senescence-associated antitumor immunity," Cell Rep. 9(1), 75–89 (2014). [CrossRef] 46. J. C. Acosta and J. Gil, "Senescence: a new weapon for cancer therapy," Trends Cell Biol. 22(4), 211–219 (2012). [CrossRef]
CommonCrawl
How to reconcile these two tensor notations?

How can I connect the symbols, i.e. the notation, preferably in English or with a 2 x 2 (or 3 x 3) matrix example, between an order 2 tensor expressed as: A $(p,q)$ tensor, $T$, is a MULTILINEAR MAP that takes $p$ copies of $V^*$ and $q$ copies of $V$ and maps multilinearly (linear in each entry) to $K$:

$$T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}\rightarrow K\tag 1$$

and the same tensor expressed as

$$\large \mathbf{T}= T_{ij}\;\mathbf{\hat e_i}\otimes\mathbf{\hat e_j}\tag 2$$

Would $T_{ij}$ in Eq. 2 (which I guess can be interpreted as coefficients or entries in a matrix) be the $V^*$ elements of the dual space (functionals), while the $\mathbf{\hat e_i}$ and $\mathbf{\hat e_j}$ are the vectors $V$? Are the $\times$ in Eq. 1 Cartesian products (presumably they can't be cross-products...)? Are the $V$'s in Eq. 1 just vectors, or are they elements of the double dual? Are the indices $(p,q)$ in Eq. 1 the equivalent of $(i,j)$ in Eq. 2? I realize Eq. 1 is probably more general, but it should be possible to reduce it to the simpler case of Eq. 2, again just to be able to enunciate what the symbols are. Do both equations produce a field element?

Tags: linear-algebra, tensors. Asked by Antoni Parellada.

Comments:
- I realize that I am asking about the same issue from different perspectives, and that I haven't accepted any answers so far. It is extremely rare for me not to "accept", but I am hoping to get an answer that at least anchors the problem, and charts the road ahead in the understanding of this issue. After that, I will go back and accept the previous answers. – Antoni Parellada Feb 12 '17 at 17:35
- In case it's helpful, the first time I understood tensors (including the issues that seem to be puzzling you) was while working through Chapter 4 of Spivak's Calculus on Manifolds. – Andrew D. Hwang Feb 12 '17 at 17:55
- @AndrewD.Hwang I have posted 3 questions on this topic over the weekend. Your answer and follow-up have been great, so I would like to bring to your attention a bounty of hard-earned 100 points I just posted on my original question on this topic. – Antoni Parellada Feb 12 '17 at 20:21
- See a version at math.stackexchange.com/questions/1545870/… – janmarqz Feb 14 '17 at 21:22
- Or this other one: math.stackexchange.com/questions/1750015/… – janmarqz Feb 14 '17 at 22:37

Answer (A. Salguero-Alarcón):

The $\times$ in Eq. 1 are Cartesian products. Note that, in finite dimensions, $V^{**}=V$, so vectors can be seen as elements of the double dual. Now, if $\mathcal T^{(p,q)}(V)$ denotes the space of $(p,q)$ tensors over a vector space $V$, then $\mathcal T^{(p,q)}(V)$ is itself a vector space, and if we pick a basis $\{e_1, \cdots, e_n\}$ for $V$ and the dual basis $\{\omega^1, \cdots, \omega^n\}$, we can construct a basis for $\mathcal T^{(p,q)}(V)$ from the elements
$$e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q}$$
with $i_1, \ldots, i_p, j_1, \ldots, j_q \in\{1,\ldots,n\}$.

So if $T\in \mathcal T^{(p,q)}(V)$, then it can be written as
\begin{equation} T=\sum \lambda_{i_1, \cdots, i_p}^{j_1, \cdots, j_q} e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q} \end{equation}
Equation 2 is the particular case of the previous equation for $\mathcal T^{(0,2)}(V)$ (we will write $i,j$ instead of $j_1,j_2$):
$$T=\lambda_{ij}\ \omega^i \otimes \omega^j$$
As you only have two indices, you can form a matrix with the numbers $(\lambda_{ij})$.

Comments:
- What is the $0$ in $\mathcal T^{(2,0)}(V)$? And the $\omega$'s in the final expression have supraindices, as opposed to subindices in Eq. 2... – Antoni Parellada Feb 12 '17 at 17:50
- That means that your tensor is $T:V\times V \longrightarrow K$, that is, you won't take any copies of $V^*$. With your notation, it would be $\mathcal T^{(0,2)}$. Let me fix it. – A. Salguero-Alarcón Feb 12 '17 at 17:51
- Why is your last expression different from Eq. 2 (sub- vs. supra-indices)? – Antoni Parellada Feb 12 '17 at 17:53
- It doesn't matter where you put the indices. If you are working with a $(p,q)$ tensor, it's common to write them as I've done: sub for $e$'s, and supra for $\omega$'s. But it's just for the sake of clarity. $$T=\lambda_{ij} \omega_i \otimes \omega_j$$ is perfect. – A. Salguero-Alarcón Feb 12 '17 at 17:56
- Yes, both notations mean the same. No matter where you write the indices. – A. Salguero-Alarcón Feb 12 '17 at 18:05
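The following worked 2 x 2 example is an editorial illustration added for concreteness; the specific matrix entries are an arbitrary choice, not taken from the question or the answer. Take $V=\mathbb{R}^2$ with basis $e_1,e_2$ and dual basis $\omega^1,\omega^2$, and choose the components
$$\Lambda=(\lambda_{ij})=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad T=\sum_{i,j=1}^{2}\lambda_{ij}\,\omega^i\otimes\omega^j .$$
For $u=u^1 e_1+u^2 e_2$ and $v=v^1 e_1+v^2 e_2$, the multilinear map of Eq. 1 evaluates as
$$T(u,v)=\sum_{i,j}\lambda_{ij}\,\omega^i(u)\,\omega^j(v)=\sum_{i,j}\lambda_{ij}\,u^i v^j=\begin{pmatrix}u^1&u^2\end{pmatrix}\begin{pmatrix}1&2\\3&4\end{pmatrix}\begin{pmatrix}v^1\\v^2\end{pmatrix},$$
so, for instance, $T(e_1,e_2)=\lambda_{12}=2$, a field element. This is exactly how Eq. 1 (a bilinear map $V\times V\to K$) and Eq. 2 (components $\lambda_{ij}$ arranged as a matrix) describe the same object.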
Probability density function questions

In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. A valid density must satisfy f(x) ≥ 0 for all x and must integrate to 1 over its support, which is why all probabilities computed from it lie between 0 and 1. Typical tasks in the practice questions below include checking whether a given function is a valid probability density function, finding normalizing constants, and computing probabilities, medians, expected values and variances (a worked sketch of this workflow follows the list).

- Suppose two random variables X and Y have joint density f_{XY}(x, y) = x e^{-x(1+y)} for x > 0, y > 0, and 0 otherwise. (a) Determine the marginal density function of X, f_X(x).
- The random variable X is distributed according to the p.d.f. f(x) = x e^{-x}, x ≥ 0.
- The pdf of X is f(x) = 1/24 for 0 ≤ x ≤ 24 and 0 otherwise. If we want to know the probability that the clock will stop between …
- For the pdf f(x) = 2 - 2x for 0 ≤ x ≤ 1, and 0 otherwise: …
- Consider the function f(x) = (3/16)(4 - x^2) for 0 ≤ x ≤ 2.
- f(x) = (1/18)√(9 - x) on [0, 9].
- Find a number b such that the function p(x) = e^{-x}, defined on [0, b], is a probability density function.
- Assume X has density f_X(x) = 3/x^4 on the interval (1, ∞).
- Let X be a random variable with pdf f(x) = 1/3 if -1 < x < 2, and 0 otherwise.
- f(y) = 3(1 - 2y + y^2) for 0 < y < 1.
- Let X and Y be discrete random variables with joint probability function f(x, y) = 4/(5xy) for x = 1, 2 and y = 2, 3, and zero otherwise.
- What does "support" mean in probability and statistics?
- Explain how to check whether a joint PDF is valid.
- f(x) = k(x + 9)^{-2} for -6 ≤ x ≤ 8, and 0 otherwise.
- The weights of bookbags for middle school students are normally distributed with a mean of 29 pounds and a standard deviation of 3.7 pounds.
- At a certain college, students' ages are uniformly distributed from 16 to 30.
- Let X be a random variable with density f_X(t) = a·t for 0 < t ≤ 1, a·(2 - t) for 1 < t ≤ 2, and 0 otherwise, where a is a real number.
- Given that f(x) = (1/2) x e^{-x}, 0 ≤ x < ∞, is a p.d.f. …
- The density function of X is given by p(x) = 1 for 0 < x < 1, and 0 otherwise. Find the median.
- The amount of wait time for a bus is a random variable X with probability density function f(x) = 1/25 if 0 < x < 5, and 2/5 - x/25 if …
- Determine the value of c so that f(x) = c(x^2 + 4), for x = 0, 1, 2, 3, can serve as a probability distribution of the discrete random variable X.
- Determine the value of c so that f(x) = c(x^4 + 1), x = 0, 1, 2, can serve as a probability distribution of the discrete random variable X.
- Suppose that X has a continuous distribution with CDF F(x) = 1 - e^{-kx} for x ≥ 0 and 0 for x < 0, where k > 0 is a constant. (a) Calculate the pdf of X. (b) …
- Let X denote the amount of space occupied by an article placed in a 1-ft^3 packing container.
- (a) What is the probability that a randomly selected Douglas fir tree has a diameter greater than 73 cm? (b) What proportion of Douglas fir trees have a diameter between 40 and 55 cm?
- A continuous random variable X that can assume values between x = 1 and x = 3 has a density function given by f(x) = 1/2.
- How exactly is the domain of a marginal probability density function determined from the joint density function?
- In a certain city, the water consumption (in millions of liters) is a random variable X with p.d.f. f(x) = (1/9) x e^{-x/3} for 0 < x < ∞, and 0 otherwise.
- f(x) = x/49 if 0 < x < 7 and f(x) = (14 - x)/49 if 7 < x < 14.
- Suppose X and Y are continuous random variables with joint pdf f(x, y) = 24xy if x > 0, y > 0, x + y < 1, and 0 otherwise. Find P(Y > 2X).
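As an editorial illustration of the verification-and-integration workflow mentioned above, the sketch below uses the density f(x) = 2 - 2x on [0, 1] from the list; the choice of the probability bound 0.25 and all variable names are assumed for the example, not prescribed by any particular question.

```python
import sympy as sp

x = sp.symbols('x')
f = 2 - 2*x                       # candidate density on [0, 1]

# Property 1: f(x) >= 0 on the support (f is linear and decreasing,
# so checking the two endpoints is enough here).
assert f.subs(x, 0) >= 0 and f.subs(x, 1) >= 0

# Property 2: the total area under f over its support equals 1.
total = sp.integrate(f, (x, 0, 1))
assert total == 1

# Once f is a valid pdf, probabilities are integrals over sub-intervals,
# e.g. P(X < 0.25):
p = sp.integrate(f, (x, 0, sp.Rational(1, 4)))
print(total, p)                   # 1  7/16
```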
New general decay result for a system of two singular nonlocal viscoelastic equations with general source terms and a wide class of relaxation functions

Mohammad M. Al-Gharabli ORCID: orcid.org/0000-0003-2098-01171, Adel M. Al-Mahdi1 & Salim A. Messaoudi2

This work is concerned with a system of two singular viscoelastic equations with general source terms and nonlocal boundary conditions. We discuss the stabilization of this system under a very general assumption on the behavior of the relaxation functions \(k_{i}\), namely,
$$\begin{aligned} k_{i}^{\prime }(t)\le -\xi _{i}(t) \Psi _{i} \bigl(k_{i}(t)\bigr),\quad i=1,2. \end{aligned}$$
We establish a new general decay result that improves most of the existing results in the literature related to this system. Our result allows for a wider class of relaxation functions, from which we can recover the exponential and polynomial rates when \(\Psi _{i}(s) = s^{p}\) and p covers the full admissible range \([1, 2)\).

In this paper, we consider the following system:
$$\begin{aligned} \textstyle\begin{cases} u_{tt}(x,t)- \frac{1}{x}(xu_{x}(x,t))_{x}+\int _{0}^{t}k_{1}(t-s) \frac{1}{x}(xu_{x}(x,s))_{x}\,ds=f_{1}(u,v),\\ \quad x\in \Omega, t>0, \\ v_{tt}(x,t)- \frac{1}{x}(xv_{x}(x,t))_{x}+\int _{0}^{t}k_{2}(t-s) \frac{1}{x}(xv_{x}(x,s))_{x}\,ds=f_{2}(u,v),\\ \quad x\in \Omega, t>0, \\ u(x,t)=v(x,t)=0,\quad x \in \partial \Omega, t\geq 0, \\ u(x,0)=u_{0}(x),\qquad u_{t}(x, 0)=u_{1}(x),\qquad v(x,0)=v_{0}(x),\\ v_{t}(x, 0)=v_{1}(x),\quad x\in \Omega, \\ u(L,t)=v(L,t)=0, \qquad\int _{0}^{L}xu(x,t)\,dx=\int _{0}^{L}xv(x,t)\,dx=0, \end{cases}\displaystyle \end{aligned}$$
where \(\Omega =(0,L)\), \(k_{i}:[0,+\infty )\longrightarrow (0,+\infty )\), (\(i=1,2\)), are non-increasing differentiable functions satisfying more general conditions to be mentioned later, and
$$\begin{aligned} \textstyle\begin{cases} f_{1}(u,v)=a{ \vert {u+v} \vert }^{2 (r +1 )} (u+v )+b{ \vert {u} \vert }^{r} u { \vert {v} \vert }^{r+2}, \\ f_{2}(u,v)=a{ \vert {u+v} \vert }^{2 (r +1 )} (u+v )+b{ \vert {v} \vert }^{r} v { \vert {u} \vert }^{r+2}, \end{cases}\displaystyle \end{aligned}$$
where \(r>-1\) and \(a,b>0\).

Mixed nonlocal problems for parabolic and hyperbolic partial differential equations have received great attention during the last few decades. These problems are especially inspired by modern physics and technology, and they aim to describe many physical and biological phenomena. For instance, physical phenomena are modeled by initial boundary value problems with nonlocal constraints, such as integral boundary conditions, when the data cannot be measured directly on the boundary but the average value of the solution on the domain is known. Initial boundary value problems for second-order evolution partial differential equations and systems having nonlocal boundary conditions are encountered in many scientific domains and engineering models and are widely applied in heat transmission theory, underground water flow, medical science, biological processes, thermoelasticity, chemical reaction diffusion, plasma physics, chemical engineering, heat conduction processes, population dynamics, and control theory. See in this regard the work by Cannon [1], Shi [2], Capasso and Kunisch [3], Cahlon and Shi [4], Ionkin and Moiseev [5], Shi and Shillor [6], Choi and Chan [7], and Ewing and Lin [8]. In early work, most of the research on nonlocal mixed problems was devoted to classical solutions.
Later, mixed problems with integral conditions for both parabolic and hyperbolic equations were studied by Pulkina [9, 10], Yurchuk [11], Kartynnik [12], Mesloub and Bouziani [13], Mesloub and Messaoudi [14, 15], Mesloub [16], and Kamynin [17]. For instance, Said Mesloub and Fatiha Mesloub [18] obtained existence and uniqueness of the solution to the following problem:
$$\begin{aligned} u_{tt}- \frac{1}{x}(xu_{x})_{x}+ \int _{0}^{t}k(t-s)\frac{1}{x} (xu_{x} )_{x}\,ds+a u_{t}=f(t,x,u,u_{x}),\quad x\in (0,1), t>0, \end{aligned}$$
and proved that the solution blows up for large initial data and decays for sufficiently small initial data. Mesloub and Messaoudi [14] considered the following nonlocal singular problem:
$$\begin{aligned} u_{tt}- \frac{1}{x}(xu_{x})_{x}+ \int _{0}^{t}g(t-s)\frac{1}{x} (xu_{x} )_{x}\,ds= \vert u \vert ^{p} u,\quad x\in (0,a), t>0, \end{aligned}$$
and proved a blow-up result for large initial data and a decay result for sufficiently small initial data when \(p>2\). In [19], Draifia et al. proved a general decay result for the following singular one-dimensional viscoelastic system:
$$\begin{aligned} \textstyle\begin{cases} u_{tt}-\frac{1}{x}(xu_{x})_{x}+\int _{0}^{t}g_{1}(t-s) \frac{1}{x} (xu_{x}(x,s))_{x}\,ds= \vert v \vert ^{q+1} \vert u \vert ^{p-1}u,\quad\text{in }Q, \\ v_{tt}-\frac{1}{x}(xv_{x})_{x}+\int _{0}^{t}g_{2}(t-s) \frac{1}{x} (xv_{x}(x,s))_{x}\,ds= \vert u \vert ^{p+1} \vert v \vert ^{q-1}v,\quad\text{in }Q, \\ u(x,0)=u_{0}(x),\qquad u_{t}(x,0)=u_{1}(x),\quad x\in (0,\alpha ), \\ v(x,0)=v_{0}(x),\qquad v_{t}(x,0)=v_{1}(x),\quad x\in (0,\alpha ), \\ u(\alpha,t)=v(\alpha,t)=0,\qquad \int _{0}^{\alpha }xu(x,t)\,dx= \int _{0}^{\alpha }xv(x,t)\,dx=0, \end{cases}\displaystyle \end{aligned}$$
where \(Q=(0,\alpha )\times (0,t)\) and \(p,q >1\). Piskin and Ekinci [20] studied problem (1) when the Bessel operator is replaced by a Kirchhoff operator with degenerate damping terms. They proved global existence, established a decay rate for the solution, and also proved finite-time blow-up. Recently, Boulaaras et al. [21] treated problem (1) and proved the existence of a global solution to the problem using the potential-well theory. Moreover, they established a general decay result in which the relaxation functions \(k_{1}\) and \(k_{2}\) satisfy
$$\begin{aligned} k_{i}'(t)\leq - \xi (t) k_{i}^{p}(t),\quad 1\leq p < \frac{3}{2}. \end{aligned}$$
Motivated by the above work, we prove a general stability result for system (1), replacing the condition (6) used in [21] by a more general assumption of the form
$$\begin{aligned} k_{i}'(t)\leq -\xi _{i}(t)\Psi _{i}\bigl(k_{i}(t)\bigr),\quad i=1,2. \end{aligned}$$
Our decay result improves all the existing results in the literature related to this system.

This paper is divided into four sections. In Sect. 2, we state some assumptions needed in our work. Some technical lemmas will be given in Sect. 3. The statement and proof of the main result and some examples will be given in Sect. 4.

In this section, we present some materials needed in the proof of our results. We also state, without proof, the global existence result for problem (1). Let \(L^{p}_{x}=L^{p}_{x}(0,L)\) be the weighted Banach space equipped with the norm
$$\begin{aligned} \Vert u \Vert _{L^{p}_{x}}= \biggl( \int _{0}^{L} x u^{p} \,dx \biggr)^{\frac{1}{p}}.
\end{aligned}$$
\(L^{2}_{x}(0,L)\) is the Hilbert space of square-integrable functions having the finite norm
$$\begin{aligned} \Vert u \Vert _{L^{2}_{x}}= \biggl( \int _{0}^{L} x u^{2} \,dx \biggr)^{\frac{1}{2}}, \end{aligned}$$
\(V=V^{1}_{x}(0,L)\) is the Hilbert space equipped with the norm
$$\begin{aligned} \Vert u \Vert _{V}= \bigl( \Vert u \Vert ^{2}_{L^{2}_{x}}+ \Vert u_{x} \Vert ^{2}_{L^{2}_{x}} \bigr)^{\frac{1}{2}} \end{aligned}$$
and
$$\begin{aligned} V_{0}=\bigl\{ u \in V \text{ such that } u(L)=0\bigr\} . \end{aligned}$$
For all \(w\in V_{0}\), the following Poincaré-type inequality holds:
$$\begin{aligned} \Vert w \Vert ^{2}_{L^{2}_{x}} \leq C_{p} \Vert w_{x} \Vert ^{2}_{L^{2}_{x}}. \end{aligned}$$

Remark 2.1 Notice that \(\Vert u \Vert _{V_{0}}=\Vert u_{x} \Vert _{L^{2}_{x}}\) defines an equivalent norm on \(V_{0}\).

\((A1)\) : \(k_{i}:\mathbb{R}_{+}\to \mathbb{R}_{+}\) (for \(i=1,2\)) are \(C^{1}\) non-increasing functions satisfying
$$\begin{aligned} k_{i}(0)>0,\quad 1- \int _{0}^{+\infty }k_{i}(s)\,ds=:\ell _{i}>0. \end{aligned}$$

\((A2)\) : There exist non-increasing differentiable functions \(\xi _{i}:[0,+\infty )\longrightarrow (0,+\infty )\) and \(\boldsymbol{C}^{1}\) functions \(\Psi _{i}:[0,+\infty )\longrightarrow [0,+\infty )\) which are linear or strictly increasing and strictly convex \(\boldsymbol{C}^{2}\) functions on \((0,\varepsilon ]\), \(\varepsilon \leq k_{i}(0)\), with \(\Psi _{i}(0)=\Psi _{i}'(0)=0\), such that
$$\begin{aligned} k_{i}'(t)\leq -\xi _{i}(t)\Psi _{i}\bigl(k_{i}(t)\bigr),\quad \forall t\geq 0 \text{ and for }i=1,2. \end{aligned}$$

The given functions \(f_{1}\) and \(f_{2}\) satisfy
$$\begin{aligned} uf_{1}(u,v)+vf_{2}(u,v)=2(r+2)F(u,v), \quad\forall (u,v)\in \mathbb{R}^{2}, \end{aligned}$$
where
$$\begin{aligned} 2(r+2) F(u,v)= \bigl[a { \vert u+v \vert }^{2(r+2)}+2b { \vert uv \vert }^{r+2} \bigr]. \end{aligned}$$

(Jensen's inequality) Let \(G:[a,b]\longrightarrow \mathbb{R}\) be a convex function. Assume that the functions \(f:(0,L)\longrightarrow [a,b]\) and \(h:(0,L)\longrightarrow \mathbb{R}\) are integrable, such that \(h(x)\geq 0\) for any \(x\in (0,L)\) and \(\int _{0}^{L}h(x)\,dx=k>0\). Then
$$\begin{aligned} G \biggl(\frac{1}{k} \int _{0}^{L} f(x)h(x)\,dx \biggr)\leq \frac{1}{k} \int _{0}^{L} G\bigl(f(x)\bigr)h(x)\,dx. \end{aligned}$$

If Ψ is a strictly increasing, strictly convex \(C^{2}\) function over \((0, \varepsilon ]\) satisfying \(\Psi (0) = \Psi '(0) = 0\), then it has an extension, \(\overline{\Psi }\), that is also strictly increasing and strictly convex \(C^{2}\) over \((0,\infty )\). For example, if \(\Psi (\varepsilon ) = a, \Psi '(\varepsilon ) = b, \Psi ''( \varepsilon ) = c\), then for \(t > \varepsilon \), \(\overline{\Psi }\) can be defined by
$$\begin{aligned} \overline{\Psi }(t)=\frac{c}{2} t^{2}+ (b-c \varepsilon )t+ \biggl(a+ \frac{c}{2} {\varepsilon }^{2} - b \varepsilon \biggr). \end{aligned}$$

Since \(\Psi _{i}\) is strictly convex on \((0,\varepsilon ]\) and \(\Psi _{i}(0)=0\),
$$\begin{aligned} \Psi _{i}(\theta z)\le \theta \Psi _{i}(z),\quad 0\le \theta \le 1, \forall z\in (0,\varepsilon ]\text{ and }i=1,2.
\end{aligned}$$
The modified energy functional E associated to problem (1) is
$$\begin{aligned} E(t)={}&\frac{1}{2} \Vert u_{t} \Vert _{L_{x}^{2}}^{2}+\frac{1}{2} \Vert v_{t} \Vert _{L_{x}^{2}}^{2}+ \frac{1}{2} \biggl(1- \int _{0}^{t}k_{1}(s)\,ds \biggr) \Vert u_{x} \Vert _{L_{x}^{2}}^{2} \\ &{} +\frac{1}{2} \biggl(1- \int _{0}^{t} k_{2}(s)\,ds \biggr) \Vert v_{x} \Vert _{L_{x}^{2}}^{2} \\ &{} +\frac{1}{2}\bigl[(k_{1} \circ u_{x}) (t)+(k_{2} \circ v_{x}) (t)\bigr]- \int _{0}^{L}xF(u,v)\,dx, \end{aligned}$$
where, for any \(w\in L^{2}_{loc} ([0,+\infty );L_{x}^{2}(0,L) )\) and \(i=1,2\),
$$\begin{aligned} (k_{i}\circ w) (t):= \int _{0}^{t}k_{i}(t-s) \bigl\Vert w(t)-w(s) \bigr\Vert _{L_{x}^{2}}^{2}\,ds. \end{aligned}$$
Using (1), direct differentiation gives
$$\begin{aligned} \frac{dE(t)}{dt}& = \frac{1}{2} \bigl(k_{1}^{\prime } \circ u_{x}\bigr) (t)-\frac{1}{2}k_{1}(t) \Vert u_{x} \Vert ^{2}_{L^{2}_{x}}+ \frac{1}{2}\bigl(k_{2}^{\prime }\circ v_{x}\bigr) (t)-\frac{1}{2}k_{2}(t) \Vert v_{x} \Vert ^{2}_{L^{2}_{x}} \\ & \le \frac{1}{2}\bigl(k_{1}^{\prime } \circ u_{x}\bigr) (t)+\frac{1}{2}\bigl(k_{2}^{\prime }\circ v_{x}\bigr) (t) \le 0. \end{aligned}$$

Local and global existence

In this subsection, we state, without proof, the local and global existence results for system (1), which can be proved similarly to the ones in [14, 18] and [21].

Assume that \((A1)\) and \((A2)\) hold. If \((u_{0},v_{0})\in V_{0}^{2}\) and \((u_{1},v_{1}) \in (L_{x}^{2})^{2}\), then problem (1) has a unique local solution.

For the global existence, we introduce the following functionals:
$$\begin{aligned} J(t)={}&\frac{1}{2} \biggl(1- \int _{0}^{t}k_{1}(s)\,ds \biggr) \Vert u_{x} \Vert ^{2}_{L^{2}_{x}} +\frac{1}{2} \biggl(1- \int _{0}^{t} k_{2}(s)\,ds \biggr) \Vert v_{x} \Vert ^{2}_{L^{2}_{x}} \\ &{}+\frac{1}{2}\bigl[(k_{1}\circ u_{x}) (t)+(k_{2}\circ v_{x}) (t)\bigr]- \int _{0}^{L}x \bigl[a \vert u+v \vert ^{2(r+2)}+2b \vert uv \vert ^{(r+2)} \bigr]\,dx \end{aligned}$$
and
$$\begin{aligned} I(t)={}& \biggl(1- \int _{0}^{t} k_{1}(s)\,ds \biggr) \Vert u_{x} \Vert ^{2}_{L^{2}_{x}}+ \biggl(1- \int _{0}^{t}k_{2}(s)\,ds \biggr) \Vert v_{x} \Vert ^{2}_{L^{2}_{x}}+(k_{1}\circ u_{x}) (t)+(k_{2}\circ v_{x}) (t) \\ &{} -2(r+2) \int _{0}^{L}x \bigl[a \vert u+v \vert ^{2(r+2)}+2b \vert uv \vert ^{(r+2)} \bigr]\,dx. \end{aligned}$$
We notice that \(E(t)=J(t)+\frac{1}{2}\Vert u_{t} \Vert ^{2}_{L^{2}_{x}}+ \frac{1}{2}\Vert v_{t} \Vert ^{2}_{L^{2}_{x}}\).

Suppose that \((A1)\) and \((A2)\) hold. Then, for any \((u_{0},v_{0})\in V_{0}^{2}\) and \((u_{1},v_{1}) \in (L^{2}_{x})^{2} \) satisfying
$$\begin{aligned} \textstyle\begin{cases} \beta =\eta { [\frac{2(r+2)}{r+1}E(0) ]}^{r+1} < 1, \\ I(0)=I(u_{0},v_{0})>0, \end{cases}\displaystyle \end{aligned}$$
there exists \(t_{*} > 0\) such that
$$\begin{aligned} I(t)=I\bigl(u(t),v(t)\bigr) > 0, \quad\forall t \in [0,t_{*}). \end{aligned}$$
We can easily deduce from Lemma 2.3 that
$$\begin{aligned} \ell _{1}{ \Vert { u_{x}} \Vert }_{L_{x}^{2}}^{2}+\ell _{2}{ \Vert {v_{x}} \Vert }_{L_{x}^{2}}^{2} \le { \frac{2(r +2)}{r +1}E(0)},\quad \forall t \ge 0. \end{aligned}$$

Assume that \((A1)\) and \((A2)\) hold. If \((u_{0},v_{0})\in V_{0}^{2}\) and \((u_{1},v_{1}) \in (L_{x}^{2})^{2}\) satisfy (16), then the solution of (1) is global and bounded.

Technical lemmas

In this section, we establish several lemmas needed for the proof of our main result.
There exist two positive constants \(c_{1}\) and \(c_{2}\) such that $$\begin{aligned} \int _{0}^{L} x \bigl\vert f_{i}(u,v) \bigr\vert ^{2} \,dx \leq c_{i} \bigl(\ell _{1} \Vert u_{x} \Vert ^{2}_{L^{2}_{x}}+\ell _{2} \Vert v_{x} \Vert ^{2}_{L^{2}_{x}} \bigr)^{2r+3},\quad i=1,2. \end{aligned}$$ We prove inequality (19) for \(f_{1}\) and the same result holds for \(f_{2}\). It is clear that $$\begin{aligned} \bigl\vert {f_{1}(u,v)} \bigr\vert &\le {C \bigl({ \vert {u+v} \vert }^{2r+3}+{ \vert {u} \vert }^{r+1} { \vert {v} \vert }^{r+2} \bigr)} \\ & \le C \bigl({ \vert u \vert }^{2r+3}+{ \vert v \vert }^{2r+3}+{ \vert {u} \vert }^{r+1} { \vert {v} \vert }^{r+2} \bigr). \end{aligned}$$ From (20) and Young's inequality, with $$\begin{aligned} q=\frac{2r+3}{r+1},\qquad q^{\prime }=\frac{2r+3}{r+2}, \end{aligned}$$ we get $$\begin{aligned} { \vert {u} \vert }^{r+1}{ \vert {v} \vert }^{r+2}\le {c_{1} { \vert {u} \vert }^{2r+3} + c_{2}{ \vert {v} \vert }^{2r+3}}, \end{aligned}$$ $$\begin{aligned} \bigl\vert f_{1}(u,v) \bigr\vert \le {C \bigl[{ \vert {u} \vert }^{2r+3} + { \vert {v} \vert }^{2r+3} \bigr]}. \end{aligned}$$ Consequently, by using (7), (12), (13) and the embedding \(V_{0} \hookrightarrow L^{2(2r+3)}\), we obtain $$\begin{aligned} \int _{0}^{L}x{ \bigl\vert {f_{1}(u,v)} \bigr\vert }^{2}\,dx &\le C \bigl( \Vert u \Vert ^{2(2r+3)}_{L_{x}^{2(2r+3)}}+ \Vert v \Vert ^{2(2r+3)}_{L_{x}^{2(2r+3)}} \bigr) \\ &\le {c_{1}{\bigl(\ell _{1} { \Vert {u_{x}} \Vert }_{L^{2}_{x}}^{2}+\ell _{2}{ \Vert { v_{x}} \Vert }_{L^{2}_{x}}^{2} \bigr)}^{2r+3}}. \end{aligned}$$ This completes the proof of Lemma 3.1. □ There exist positive constants d and \(t_{0}\) such that, for any \(t\in [0,t_{0}]\), we have $$\begin{aligned} k_{i}^{\prime }(t)\le -d k_{i}(t),\quad i=1,2. \end{aligned}$$ If \((A1)\) holds. Then, for any \(w\in V_{0}\), \(0<\alpha <1\) and \(i=1,2\), we have $$\begin{aligned} \int _{0}^{L}x \biggl( \int _{0}^{t} k_{i}(t-s) \bigl(w(t)-w(s) \bigr)\,ds \biggr)^{2}\,dx \leq C_{\alpha,i}(h_{i}\circ w) (t), \end{aligned}$$ where \(C_{\alpha,i}:=\int _{0}^{\infty }\frac{k_{i}^{2}(s)}{\alpha k_{i}(s)-k_{i}'(s)}\,ds\) and \(h_{i}(t):=\alpha k_{i}(t)-k_{i}'(t)\). The proof of this lemma goes similar to the one in [22]. □ Under the assumptions \((A1)\) and \((A2)\), the functional $$\begin{aligned} \Phi (t):= \int _{0}^{L}xuu_{t} \,dx+ \int _{0}^{L}xvv_{t} \,dx, \end{aligned}$$ satisfies, along with the solution of system (1), the estimate $$\begin{aligned} \Phi ^{\prime }(t) \le{}& \Vert u_{t} \Vert ^{2}_{L^{2}_{x}}+ \Vert v_{t} \Vert ^{2}_{L^{2}_{x}} - \frac{\ell _{1}}{2} \Vert u_{x} \Vert ^{2}_{L^{2}_{x}}- \frac{\ell _{2}}{2} \Vert v_{x} \Vert ^{2}_{L^{2}_{x}} \\ &{} +C_{\alpha,1}(h_{1}\circ u_{x}) (t)+C_{\alpha,2}(h_{2}\circ v_{x}) (t)+ \int _{0}^{L}xF(u,v)\,dx. \end{aligned}$$ Direct differentiation, using (1), yields $$\begin{aligned} \Phi ^{\prime }(t)={}& \int _{0}^{L}xu^{2}_{t}\,dx+ \biggl(1- \int _{0}^{t}k_{1}(s)\,ds \biggr) \int _{0}^{L}xu_{x}^{2}\,dx \\ &{} + \int _{0}^{L}xu_{x} \int _{0}^{t}k_{1}(t-s) \bigl(u_{x}(s)-u_{x}(t)\bigr)\,ds\,dx \\ &{} + \int _{0}^{L}xv^{2}_{t}\,dx+ \biggl(1- \int _{0}^{t}k_{1}(s)\,ds \biggr) \int _{0}^{L}xv_{t}^{2}\,dx \\ &{} + \int _{0}^{L}xv_{x} \int _{0}^{t}k_{2}(t-s) \bigl(v_{x}(s)-v_{x}(t)\bigr)\,ds\,dx \\ &{} + \int _{0}^{L}x \bigl(uf_{1}(u,v)+vf_{2}(u,v) \bigr)\,dx. 
\end{aligned}$$ Using Young's inequality, we obtain, for any \(\delta _{1}, \delta _{2}\in (0,1)\), $$\begin{aligned} \Phi ^{\prime }(t)\le{}& \int _{0}^{L}xu^{2}_{t}\,dx- \ell _{1} \int _{0}^{L}xu_{x}^{2}\,dx+ \frac{\delta _{1}}{2} \int _{0}^{L}xu_{x}^{2}\,dx \\ &{} +\frac{1}{2\delta _{1}} \int _{0}^{L}x \biggl( \int _{0}^{t}k_{1}(t-s) \bigl(u_{x}(s)-u_{x}(t)\bigr)\,ds \biggr)^{2}\,dx \\ &{} + \int _{0}^{L}xv^{2}_{t}\,dx- \ell _{2} \int _{0}^{L}xv_{x}^{2}\,dx+ \frac{\delta _{2}}{2} \int _{0}^{L}xv_{x}^{2}\,dx \\ &{} +\frac{1}{2\delta _{2}} \int _{0}^{L}x \biggl( \int _{0}^{t}k_{2}(t-s) \bigl(v_{x}(s)-v_{x}(t)\bigr)\,ds \biggr)^{2}\,dx \\ &{} + \int _{0}^{L}x F(u,v)\,dx. \end{aligned}$$ Taking \(\delta _{1}=\ell _{1}\) and \(\delta _{2}=\ell _{2}\) and using Lemma 3.3, we have $$\begin{aligned} \Phi ^{\prime }(t)\le{}& \int _{0}^{L}xu^{2}_{t}\,dx- \frac{\ell _{1}}{2} \int _{0}^{L}xu_{x}^{2} \,dx+cC_{\alpha,1}(h_{1}\circ u_{x}) (t) \\ &{} + \int _{0}^{L}xv^{2}_{t}\,dx- \frac{\ell _{1}}{2} \int _{0}^{L}xv_{x}^{2} \,dx+cC_{ \alpha,2}(h_{2}\circ v_{x}) (t)+ \int _{0}^{L}x F(u,v)\,dx. \end{aligned}$$ Let us introduce the functionals $$\begin{aligned} \chi _{1}(t):=- \int _{0}^{L}x u_{t} \int _{0}^{t}k_{1}(t-s) \bigl(u(t)-u(s) \bigr)\,ds\,dx \end{aligned}$$ $$\begin{aligned} \chi _{2}(t):=- \int _{0}^{L} xv_{t} \int _{0}^{t}k_{2}(t-s) \bigl(v(t)-v(s) \bigr)\,ds\,dx. \end{aligned}$$ Assume that \((A1)\) and \((A2)\) hold. Then the functional $$\begin{aligned} \chi (t):=\chi _{1}(t)+\chi _{2}(t) \end{aligned}$$ satisfies, along with the solution of (1), the following estimate: $$\begin{aligned} \chi '(t)\le{}& {-} \biggl( \int _{0}^{t}k_{1}(s)\,ds-\delta \biggr) \Vert u_{t} \Vert ^{2}_{L^{2}_{x}}+c\delta \Vert u_{x} \Vert ^{2}_{L^{2}_{x}}+\frac{c}{\delta }(C_{\alpha,1}+1) (h_{1} \circ u_{x}) (t) \\ &{} - \biggl( \int _{0}^{t}k_{2}(s)\,ds-\delta \biggr) \Vert v_{t} \Vert ^{2}_{L^{2}_{x}}+c\delta \Vert v_{x} \Vert ^{2}_{L^{2}_{x}}+ \frac{c}{\delta }(C_{\alpha,2}+1) (h_{2}\circ v_{x}) (t), \end{aligned}$$ where \(0<\delta <1\). Direct differentiation, using (1), gives $$\begin{aligned} \chi _{1}'(t)={}&{-} \biggl( \int _{0}^{t}k_{1}(s)\,ds \biggr) \int _{0}^{L}xu_{t}^{2} \\ &{} + \biggl(1- \int _{0}^{t}k_{1}(s)\,ds \biggr) \int _{0}^{L}xu_{x}(t) \int _{0}^{t}k_{1}(t-s) \bigl(u_{x}(t)-u_{x}(s)\bigr)\,ds\,dx \\ &{} + \int _{0}^{L}x \biggl( \int _{0}^{t}k_{1}(t-s) \bigl(u_{x}(t)-u_{x}(s)\bigr)\,ds \biggr)^{2}\,dx \\ &{} - \int _{0}^{L}x f_{1}(u,v) \int _{0}^{t}k_{1}(t-s) \bigl(u(t)-u(s) \bigr)\,ds\,dx \\ &{} - \int _{0}^{L}x u_{t} \int _{0}^{t}k_{1}'(t-s) \bigl(u(t)-u(s)\bigr)\,ds\,dx. \end{aligned}$$ Using Young's inequality and Lemma 3.3, we get, for any \(0<\delta <1\), the following: $$\begin{aligned} &\biggl(1- \int _{0}^{t}k_{1}(s)\,ds \biggr) \int _{0}^{L}xu_{x}(t) \int _{0}^{t}k_{1}(t-s) \bigl(u_{x}(t)-u_{x}(s)\bigr)\,ds\,dx \\ &\qquad{} + \int _{0}^{L}x \biggl( \int _{0}^{t}k_{1}(t-s) \bigl\vert u_{x}(t)-u_{x}(s) \bigr\vert \,ds \biggr)^{2} \,dx \\ &\quad \leq \delta \int _{0}^{L}xu_{x}^{2}+ \frac{c}{\delta } \int _{0}^{L}x \biggl( \int _{0}^{t}k_{1}(t-s) \bigl\vert u_{x}(t)-u_{x}(s) \bigr\vert \,ds \biggr)^{2} \,dx \\ &\quad \leq \delta \int _{0}^{L}xu_{x}^{2}+ \frac{c}{\delta }C_{\alpha,1}(h_{1} \circ u_{x}) (t). 
\end{aligned}$$ Using Young's inequality, (18), (19) and (22), we have $$\begin{aligned} & \int _{0}^{L} xf_{1}(u,v) \int _{0}^{t}k_{1}(t-s) \bigl(u(t)-u(s) \bigr)\,ds\,dx \\ &\quad \le \delta \biggl( \int _{0}^{L} x{ \bigl\vert f_{1}(u,v) \bigr\vert }^{2} \,dx \biggr)+ \frac{1}{4\delta } \int _{0}^{L}x \biggl( \int _{0}^{t}k_{1}(t-s) \bigl(u(t)-u(s) \bigr)\,ds \biggr)^{2} \,dx \\ &\quad \le {c_{1} \delta {\bigl(\ell _{1} \Vert u_{x} \Vert _{L_{x}^{2}}^{2}+\ell _{2} \Vert v_{x} \Vert _{L_{x}^{2}}^{2} \bigr)}^{2r+3}}+\frac{c}{\delta }C_{\alpha,1}(h_{1} \circ u_{x}) (t) \\ &\quad \le c_{1}\delta { \biggl(\frac{2(r+2)}{r+1}E(0) \biggr)}^{2r+1}\bigl(\ell _{1} \Vert u_{x} \Vert _{L_{x}^{2}}^{2}+\ell _{2} \Vert v_{x} \Vert _{L_{x}^{2}}^{2}\bigr)+ \frac{c}{\delta }C_{\alpha,1}(h_{1} \circ u_{x}) (t) \\ &\quad \leq c\delta \Vert u_{x} \Vert _{L_{x}^{2}}^{2}+c \delta \Vert v_{x} \Vert _{L_{x}^{2}}^{2}+ \frac{c}{\delta }C_{\alpha,1}(h_{1}\circ u_{x}) (t). \end{aligned}$$ Also, by applying Young's inequality and Lemma 3.3, we obtain, for any \(0<\delta <1\), $$\begin{aligned} &- \int _{0}^{L} xu_{t} \int _{0}^{t}k_{1}'(t-s) \bigl(u(t)-u(s)\bigr)\,ds\,dx \\ &\quad = \int _{0}^{L} x u_{t} \int _{0}^{t} h_{1}(t-s) \bigl(u(t)-u(s) \bigr)\,ds\,dx- \int _{0}^{L} x u_{t} \int _{0}^{t}\alpha k_{1}(t-s) \bigl(u(t)-u(s)\bigr)\,ds\,dx \\ &\quad \leq \delta \Vert u_{t} \Vert _{L_{x}^{2}}^{2}+ \frac{1}{2\delta } \biggl( \int _{0}^{t}h_{1}(s)\,ds \biggr) (h_{1}\circ u) (t)+\frac{c}{\delta }C_{\alpha,1}(h_{1} \circ u) (t) \\ &\quad \leq \delta \Vert u_{t} \Vert _{L_{x}^{2}}^{2}+ \frac{c}{\delta }(C_{\alpha,1}+1) (h_{1} \circ u_{x}) (t). \end{aligned}$$ Similarly, we have $$\begin{aligned} - \int _{0}^{L} xv_{t} \int _{0}^{t}k_{2}'(t-s) \bigl(v(t)-v(s)\bigr)\,ds\,dx\leq \delta \Vert v_{t} \Vert _{L_{x}^{2}}^{2}+\frac{c}{\delta }(C_{\alpha,2}+1) (h_{2} \circ v_{x}) (t). \end{aligned}$$ A combination of all the above estimates gives $$\begin{aligned} \chi _{1}'(t)\le - \biggl( \int _{0}^{t}k_{1}(s)\,ds-\delta \biggr) \Vert u_{t} \Vert ^{2}_{L^{2}_{x}}+c\delta \Vert u_{x} \Vert ^{2}_{L^{2}_{x}}+\frac{c}{\delta }(C_{\alpha,1}+1) (h_{1} \circ u_{x}) (t). \end{aligned}$$ Repeating the same calculations with \(\chi _{2}\), we obtain $$\begin{aligned} \chi _{2}'(t)\le - \biggl( \int _{0}^{t}k_{2}(s)\,ds-\delta \biggr) \Vert v_{t} \Vert ^{2}_{L^{2}_{x}}+c\delta \Vert v_{x} \Vert ^{2}_{L^{2}_{x}}+\frac{c}{\delta }(C_{\alpha,2}+1) (h_{2} \circ v_{x}) (t). \end{aligned}$$ Therefore, (33) and (34) imply (27), which completes the proof of Lemma 3.5. □ Assume that \((A1)\) and \((A2)\) hold. Then the functionals \(J_{1}\) and \(J_{2}\) defined by $$\begin{aligned} J_{1}(t):= \int _{0}^{L}x \int _{0}^{t}K_{1}(t-s) \bigl\vert u_{x}(s) \bigr\vert ^{2}\,ds\,dx \end{aligned}$$ $$\begin{aligned} J_{2}(t):= \int _{0}^{L}x \int _{0}^{t}K_{2}(t-s) \bigl\vert v_{x}(s) \bigr\vert ^{2}\,ds\,dx \end{aligned}$$ satisfy, along with the solution of (1), the estimates $$\begin{aligned} & J_{1}'(t)\leq 3(1-\ell ) \Vert u_{x} \Vert _{L_{x}^{2}}^{2}-\frac{1}{2}(k_{1} \circ u_{x}) (t), \end{aligned}$$ $$\begin{aligned} & J_{2}'(t)\leq 3(1-\ell ) \Vert v_{x} \Vert _{L_{x}^{2}}^{2}-\frac{1}{2}(k_{2} \circ v_{x}) (t), \end{aligned}$$ where \(K_{i}(t):=\int _{t}^{\infty }k_{i}(s)\,ds\) (for \(i=1,2\)) and \(\ell =\min \{\ell _{1},\ell _{2}\}\). We will prove inequality (35) and the same proof also holds for (36). 
By Young's inequality and the fact that \(K_{1}^{\prime }(t)=-k_{1}(t)\), we see that $$\begin{aligned} J_{1}^{\prime }(t)={}&K_{1}(0) \int _{0}^{L}x \bigl\vert u_{x}(t) \bigr\vert ^{2}\,dx- \int _{0}^{L}x \int _{0}^{t}k_{1}(t-s) \bigl\vert u_{x}(s) \bigr\vert ^{2} \,dx \\ ={}&{-} \int _{0}^{L}x \int _{0}^{t}k_{1}(t-s) \bigl\vert u_{x}(s)- u_{x}(t) \bigr\vert ^{2} \,ds \,dx \\ &{} -2 \int _{0}^{L} xu_{x}(t). \int _{0}^{t}k_{1}(t-s) \bigl( u_{x}(s)- u_{x}(t)\bigr)\,ds\,dx+K_{1}(t) \int _{0}^{L}x \bigl\vert u_{x}(t) \bigr\vert ^{2} \,dx. \end{aligned}$$ $$\begin{aligned} &-2 \int _{0}^{L}x u_{x}(t). \int _{0}^{t}k_{1}(t-s) \bigl( u_{x}(s)-u_{x}(t)\bigr)\,ds\,dx \\ &\quad \le 2(1-\ell _{1}) \int _{0}^{L}x \bigl\vert u_{x}(t) \bigr\vert ^{2} \,dx+ \frac{\int _{0}^{t}k_{1}(s)\,ds}{2(1-\ell _{1})} \int _{0}^{L}x \int _{0}^{t}k_{1}(t-s) \bigl\vert u_{x}(s)-u_{x}(t) \bigr\vert ^{2} \,ds\,dx. \end{aligned}$$ Using the facts that \(K_{1}(0)=1-\ell _{1}\) and \(\int _{0}^{t}k_{1}(s)\,ds \le 1-\ell _{1}\), (35) is established. □ The functional L defined by $$\begin{aligned} L(t):=NE(t)+N_{1} \phi (t)+N_{2} \chi (t) \end{aligned}$$ satisfies, for a suitable choice of \(N,N_{1},N_{2}\ge 1\), $$\begin{aligned} L(t)\sim E(t) \end{aligned}$$ and the estimate $$\begin{aligned} L'(t)\leq {}&{-}4(1-\ell ) \bigl( \Vert u_{x} \Vert _{L_{x}^{2}}^{2}+ \Vert v_{x} \Vert ^{2}_{L_{x}^{2}} \bigr)- \bigl( \Vert u_{t} \Vert ^{2}_{L_{x}^{2}}+ \Vert v_{t} \Vert ^{2}_{L_{x}^{2}} \bigr) \\ &{} +c \int _{0}^{L} xF(u,v)\,dx+\frac{1}{4} \bigl[(k_{1}\circ u_{x}) (t)+(k_{2} \circ v_{x}) (t) \bigr], \quad\forall t\geq t_{0}, \end{aligned}$$ where \(t_{0}\) is introduced in Lemma 3.2and \(\ell =\min \{\ell _{1},\ell _{2}\}\). It is not difficult to prove that \(L(t)\sim E(t)\). To establish (38), we choose \(\delta =\frac{\ell }{4cN_{2}}\) where \(\ell =\min \{\ell _{1},\ell _{2}\}\). We set \(C_{\alpha }=\max \{C_{\alpha,1},C_{\alpha,2}\}\) and \(k_{0}=\min \lbrace \int _{0}^{t_{0}}k_{1}(s)\,ds,\int _{0}^{t_{0}}k_{2}(s)\,ds \rbrace >0\). Now using (23) and (28) and recalling the fact that \(k_{i}'=\alpha k_{i}-h_{i}\), we obtain, for any \(t\geq t_{0}\), $$\begin{aligned} L'(t)\leq{}&{ -}\frac{\ell }{4}(2N_{1}-1) \bigl( \Vert u_{x} \Vert ^{2}_{L_{x}^{2}}+ \Vert v_{x} \Vert ^{2}_{L_{x}^{2}} \bigr)- \biggl(k_{0}N_{2}-\frac{\ell }{4c}-N_{1} \biggr) \bigl( \Vert u_{t} \Vert ^{2}_{L_{x}^{2}}+ \Vert v_{t} \Vert ^{2}_{L_{x}^{2}} \bigr) \\ &{} -N_{1} \int _{0}^{L}x F(u,v)\,dx+\frac{\alpha }{2}N \bigl[(k_{1}\circ u_{x}) (t)+(k_{2} \circ v_{x}) (t) \bigr] \\ &{} - \biggl[\frac{1}{2}N-\frac{4c^{2}}{\ell }N^{2}_{2}-C_{\alpha } \biggl( \frac{4c^{2}}{\ell }N_{2}^{2}+cN_{1} \biggr) \biggr] \bigl[(h_{1}\circ u_{x}) (t)+(h_{2} \circ v_{x}) (t) \bigr]. \end{aligned}$$ First, we choose \(N_{1}\) so large such that \(\frac{\ell }{4}(2N_{1}-1)>4(1-\ell )\). Then we select \(N_{2}\) large enough so that \(k_{0} N_{2}-\frac{\ell }{4c}-N_{1}>1\). Now, one can use the Lebesgue dominated convergence theorem with the fact that \(\frac{\alpha k_{i}^{2}(s)}{\alpha k_{i}(s)-k_{i}'(s)}< k_{i}(s)\), for \(i=1,2\), to prove that $$\begin{aligned} \lim_{\alpha \rightarrow 0^{+}}\alpha C_{\alpha }=0. \end{aligned}$$ Therefore, there exists \(\alpha _{0}\in (0,1)\) such that if \(\alpha <\alpha _{0}\), then, we get \(\alpha C_{\alpha }< \frac{1}{8 [\frac{4c^{2}}{\ell }N^{2}_{2}+cN_{1} ]}\). Then, by letting \(\alpha =\frac{1}{2N}<\alpha _{0}\), we get \(\frac{1}{4}N-\frac{4c^{2}}{\ell }N^{2}_{2}>0\). 
This leads to $$\begin{aligned} \frac{1}{2}N-\frac{4c^{2}}{\ell }N^{2}_{2}-C_{\alpha } \biggl[ \frac{4c^{2}}{\ell }N_{2}^{2}+cN_{1} \biggr]>\frac{1}{4}N- \frac{4c^{2}}{\ell }N^{2}_{2}>0. \end{aligned}$$ Then, (38) is established. □ General decay result In this section, we state and prove our main result. Let \((u_{0},v_{0}) \in V_{0}^{2}\) and \((u_{1},v_{1})\in (L_{x}^{2})^{2}\) be given and satisfying (16). Assume that \((A1)\) and \((A2)\) hold. If \(\Psi _{1}\) and \(\Psi _{2}\) are linear, then there exist two positive constants \(\lambda _{1}\) and \(\lambda _{2}\) such that the solution to problem (1) satisfies the estimate $$\begin{aligned} E(t)\leq \lambda _{2}\exp \biggl(-\lambda _{1} \int _{t_{0}}^{t}\xi (s)\,ds \biggr),\quad \forall t\ge t_{0}, \end{aligned}$$ where \(t_{0}\) is introduced in Lemma 3.2and \(\xi (t)=\min \{\xi _{1}(t),\xi _{2}(t)\}\). Using (21) and (13) we have, for any \(t\geq t_{0}\), $$\begin{aligned} \int _{0}^{t_{0}} k_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2} \,ds+ \int _{0}^{t_{0}} k_{2}(s) \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2} \,ds \leq -cE'(t). \end{aligned}$$ Using this inequality, the estimate (38) becomes, for some \(m>0\) and for any \(t\geq t_{0}\), $$\begin{aligned} L'(t)\leq{} &{-}mE(t)-cE'(t)+c \int _{t_{0}}^{t} k_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2} \,ds \\ &{}+c \int _{t_{0}}^{t} k_{2}(s) \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2} \,ds. \end{aligned}$$ Let \(\mathcal{L}:= L+cE\sim E\), we obtain $$\begin{aligned} \mathcal{L}'(t)\leq {}& {-}mE(t)+c \int _{t_{0}}^{t} k_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2} \,ds \\ &{}+c \int _{t_{0}}^{t} k_{2}(s) \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2} \,ds, \quad\forall t\geq t_{0}. \end{aligned}$$ Multiply both sides of (40) by \(\xi (t)=\min \{\xi _{1}(t),\xi _{2}(t)\}\) where ξ is non-increasing function and using \((A2)\) and (13) we get, for any \(t\geq t_{0}\) and \(m>0\), the following: $$\begin{aligned} \xi (t)\mathcal{L}'(t)\leq{}& {-}m\xi (t)E(t)+c \int ^{t}_{0}\xi _{1}(s)k_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2}\,ds \\ &{}+c \int _{0}^{t}\xi _{2}(s)k_{2}(s) \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2}\,ds \\ \leq{} & {-}m\xi (t)E(t)-c \int _{0}^{t} k'_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2}\,ds \\ &{}\times c \int _{0}^{t} k'_{2}(s) \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert _{L_{x}^{2}}^{2}\,ds \\ \leq {}& {-}m\xi (t)E(t)-cE'(t). \end{aligned}$$ Since ξ is non-increasing, we have $$\begin{aligned} (\xi \mathcal{L}+cE)'(t)\leq -m\xi (t)E(t), \quad\forall t\geq t_{0}. \end{aligned}$$ Integrating over \((t_{0},t)\) and using the fact that \(\xi \mathcal{L}+cE\sim E\), then, for any \(\lambda _{1},\lambda _{2} > 0\), we obtain $$\begin{aligned} E(t)\leq \lambda _{2}\exp \biggl(-\lambda _{1} \int _{t_{0}}^{t}\xi (s)\,ds \biggr), \quad\forall t\geq t_{0}. \end{aligned}$$ Let \((u_{0},v_{0}) \in V_{0}^{2}\) and \((u_{1},v_{1})\in (L_{x}^{2})^{2}\) be given and satisfying (16). Assume that \((A1)\) and \((A2)\) hold. 
If \(\Psi _{1}\) or \(\Psi _{2}\) is nonlinear, then there exist two positive constants \(\lambda _{1}\) and \(\lambda _{2}\) such that the solution to problem (1) satisfies the estimate $$\begin{aligned} E(t)\leq \lambda _{2} \Psi _{*}^{-1} \biggl( \lambda _{1} \int _{t_{0}}^{t}\xi (s)\,ds \biggr), \quad\forall t>t_{0}, \end{aligned}$$ $$\begin{aligned} \Psi _{*}(t)= \int _{t}^{r}\frac{1}{s H(s)}\,ds \quad\textit{with } H(t)= \min \bigl\{ \Psi _{1}'(t),\Psi _{2}'(t) \bigr\} . \end{aligned}$$ Using Lemmas 3.6 and 3.7, we easily see that $$\begin{aligned} \mathcal{L}_{1}(t):=L(t)+J_{1}(t)+J_{2}(t) \end{aligned}$$ is nonnegative and, for any \(t\geq t_{0}\), and, for some \(C>0\), $$\begin{aligned} \mathcal{L}_{1}'(t)\leq -c E(t). \end{aligned}$$ Therefore, we arrive at $$\begin{aligned} \int _{0}^{\infty }E(s)\,ds< +\infty. \end{aligned}$$ Now, we define the following functionals: $$\begin{aligned} I_{1}(t):=\gamma \int _{t_{0}}^{t} \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}} \,ds, \qquad I_{2}(t):=\gamma \int _{t_{0}}^{t} \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds. \end{aligned}$$ Thanks to (42), one can choose \(0<\gamma <1\) so that $$\begin{aligned} I_{i}(t)< 1, \quad \forall t \geq t_{0} \text{ and } i=1,2. \end{aligned}$$ Without loss of the generality, we assume that \(I_{i}(t)>0\), for any \(t> t_{0}\); otherwise, we get an exponential decay from (38). We also define the following functionals: $$\begin{aligned} \eta _{1}(t):=- \int _{t_{0}}^{t} k_{1}'(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds, \text{ }\eta _{2}(t):=- \int _{t_{0}}^{t} k_{2}'(s) \bigl\Vert v_{x}(t)- v_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds \end{aligned}$$ and observe that $$\begin{aligned} \eta _{1}(t)+\eta _{2}(t)\leq -cE'(t), \quad\forall t \geq t_{0}. \end{aligned}$$ Using (2.4), Assumption \((A2)\), inequality (43) and Jensen's inequality, we obtain $$\begin{aligned} \eta _{1}(t)&\leq \frac{1}{\gamma I_{1}(t)} \int _{t_{0}}^{t}\gamma I_{1}(t) \xi _{1}(s)\Psi _{1}\bigl(k_{1}(s)\bigr) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds \\ &\leq \frac{\xi _{1}(t)}{\gamma I_{1}(t)} \int _{t_{0}}^{t}\gamma \Psi _{1} \bigl(I_{1}(t)k_{1}(s)\bigr) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}} \,ds \\ &\leq \frac{\xi _{1}(t)}{\gamma }\Psi _{1} \biggl(\frac{1}{I_{1}(t)} \int _{t_{0}}^{t}\gamma I_{1}(t) k_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds \biggr) \\ &=\frac{\xi _{1}(t)}{\gamma }\bar{\Psi }_{1} \biggl(\gamma \int _{t_{0}}^{t} k_{1}(s) \bigl\Vert u_{x}(t)- u_{x}(t-s) \bigr\Vert ^{2}_{L_{x}^{2}} \,ds \biggr),\quad \forall t \geq t_{0}, \end{aligned}$$ where \(\bar{\Psi }_{1}\) is defined in Remark (2.3). Then, we have $$\begin{aligned} \int _{t_{0}}^{t} k_{1}(s) \bigl\Vert u(t)-u(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds \leq \frac{1}{\gamma } \bar{\Psi }^{-1}_{1} \biggl( \frac{\gamma \eta _{1}(t)}{\xi _{1}(t)} \biggr), \quad t\geq t_{0}. \end{aligned}$$ Similarly, we can have $$\begin{aligned} \int _{t_{0}}^{t} k_{2}(s) \bigl\Vert v(t)- v(t-s) \bigr\Vert ^{2}_{L_{x}^{2}}\,ds \leq \frac{1}{\gamma } \bar{\Psi }^{-1}_{2} \biggl( \frac{\gamma \eta _{2}(t)}{\xi _{2}(t)} \biggr),\quad t \geq t_{0}. \end{aligned}$$ Thus, the estimate (40) becomes $$\begin{aligned} F'(t)\leq -mE(t)+c \bar{\Psi }^{-1}_{1} \biggl( \frac{\gamma \eta _{1}(t)}{\xi _{1}(t)} \biggr)+c \bar{\Psi }^{-1}_{2} \biggl(\frac{\gamma \eta _{2}(t)}{\xi _{2}(t)} \biggr),\quad t\geq t_{0}. 
\end{aligned}$$ Set \(H=\min \{\bar{\Psi }_{1}',\bar{\Psi }_{2}'\}\) and define the functional $$\begin{aligned} F_{1}(t):= H \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)F(t)+E(t), \quad \text{for } \varepsilon _{0} \in (0,\varepsilon ) \text{ and }t\geq t_{0}. \end{aligned}$$ Using the fact that \(\bar{\Psi }_{i}'>0\), \(\bar{\Psi }_{i}''>0\) and \(E'\leq 0\), we also deduce that \(F_{1}\sim E\). Further, we get $$\begin{aligned} F_{1}'(t)=\varepsilon _{0}\frac{E'(t)}{E(0)} H' \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)F(t)+H \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr)F'(t)+E'(t),\quad \text{for a.e }t \geq t_{0}. \end{aligned}$$ Recalling that \(E'\leq 0\), then we drop the first and last terms of the above identity. Therefore, by using the estimate (45), we have $$\begin{aligned} F_{1}'(t)\leq{} &{-}mE(t)H \biggl( \varepsilon _{0}\frac{E(t)}{E(0)} \biggr)+c H \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr) \bar{\Psi }^{-1}_{1} \biggl(\frac{\gamma \eta _{1}(t)}{\xi _{1}(t)} \biggr) \\ &{}+c H \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)\bar{\Psi }^{-1}_{2} \biggl(\frac{\gamma \eta _{2}(t)}{\xi _{2}(t)} \biggr),\quad \text{for a.e }t\geq t_{0}. \end{aligned}$$ In the sense of Young [23], we let \(\bar{\Psi }_{i}^{*}\) be the convex conjugate of \(\bar{\Psi }_{i}\) such that $$\begin{aligned} \bar{\Psi }_{i}^{*}(s)=s \bigl(\bar{\Psi }_{i}' \bigr)^{-1}(s)-\bar{\Psi }_{i} \bigl[ \bigl(\bar{\Psi }_{i}' \bigr)^{-1}(s) \bigr], \quad\text{for }i=1,2, \end{aligned}$$ and it satisfies the following generalized Young inequality: $$\begin{aligned} AB_{i}\leq \bar{\Psi }_{i}^{*}(A)+ \bar{\Psi }_{i}(B_{i}),\quad \text{for }i=1,2. \end{aligned}$$ By letting \(A=H (\varepsilon _{0}\frac{E(t)}{E(0)} )\), \(B_{i}=\bar{\Psi }_{i}^{-1} ( \frac{\gamma \eta _{i}(t)}{\xi _{i}(t)} )\), for \(i=1,2\), and combining (46)–(48), we have, for almost every \(t\geq t_{0}\), $$\begin{aligned} F_{1}'(t)\leq{} &{-}mE(t)H \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)+c\bar{\Psi }_{1}^{*} \biggl[H \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr) \biggr]+c \frac{\gamma \eta _{1}(t)}{\xi _{1}(t)} \\ &{}+c\bar{\Psi }_{2}^{*} \biggl[H \biggl(\varepsilon \frac{E(t)}{E(0)} \biggr) \biggr]+c\frac{\gamma \eta _{2}(t)}{\xi _{2}(t)} \\ \leq {}&{-}mE(t)H \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)+cH \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr) \bigl(\bar{\Psi }_{1}' \bigr)^{-1} \biggl[\bar{\Psi }_{1}' \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr) \biggr]+c \frac{\gamma \eta _{1}(t)}{\xi _{1}(t)} \\ &{}+cH \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr) \bigl(\bar{ \Psi }_{2}' \bigr)^{-1} \biggl[\bar{\Psi }_{2}' \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr) \biggr]+c \frac{\gamma \eta _{2}(t)}{\xi _{2}(t)} \\ \leq{} &{-} \bigl(mE(0)-c\varepsilon _{0} \bigr)\frac{E(t)}{E(0)}H \biggl( \varepsilon _{0}\frac{E(t)}{E(0)} \biggr)+c \biggl( \frac{\gamma \eta _{1}(t)}{\xi _{1}(t)}+ \frac{\gamma \eta _{2}(t)}{\xi _{2}(t)} \biggr). \end{aligned}$$ Multiplying the above estimate by \(\xi (t)=\min \{\xi _{1}(t),\xi _{2}(t)\}>0\) and using the fact in (44), we get $$\begin{aligned} \xi (t)F_{1}'(t)\leq -\bigl(mE(0)-c\varepsilon _{0}\bigr)\xi (t) \frac{E(t)}{E(0)} H \biggl(\varepsilon _{0}\frac{E(t)}{E(0)} \biggr)-cE'(t), \quad\text{for a.e }t \geq t_{0}. 
\end{aligned}$$ Select \(\varepsilon _{0}\) small enough so that \(k_{0}:= mE(0)-c\varepsilon _{0} >0\), and we obtain $$\begin{aligned} \xi (t)F_{1}'(t)\leq -k_{0}\xi (t) \frac{E(t)}{E(0)} H \biggl( \varepsilon _{0}\frac{E(t)}{E(0)} \biggr)-cE'(t),\quad \text{for a.e }t\geq t_{0}. \end{aligned}$$ Let \(F_{2}=\xi F_{1}+cE\sim E\), we have, for some \(\alpha _{1},\alpha _{2}>0\), the following equivalent inequality: $$\begin{aligned} \alpha _{1} F_{2}(t)\leq E(t)\leq \alpha _{2} F_{2}(t), \quad \forall t\geq t_{0}. \end{aligned}$$ Hence, we have $$\begin{aligned} F_{2}'(t)\leq -k_{0}\xi (t) \frac{E(t)}{E(0)}H \biggl(\varepsilon _{0} \frac{E(t)}{E(0)} \biggr), \quad\text{for a.e }t\geq t_{0}. \end{aligned}$$ Now, we set $$\begin{aligned} H_{0}(t)=t H(\varepsilon _{0} t), \quad\forall t\in [0,1]. \end{aligned}$$ Using the fact that \(\Psi _{i}'>0\) and \(\Psi _{i}''>0\) on \((0,r]\) (for \(i=1,2\)), we deduce that \(H_{0},H_{0}'>0\) \(a.e\). on \((0,1]\). Now, we define the following functional: $$\begin{aligned} R(t):=\frac{\alpha _{1} F_{2}(t)}{E(0)} \end{aligned}$$ and use (49) and (50) to show that \(R\sim E\) and, for some \(\beta _{1}>0\), $$\begin{aligned} R'(t)\leq -\beta _{1}\xi (t)H_{0}\bigl(R(t) \bigr), \quad\text{for a.e }t\geq t_{0}. \end{aligned}$$ Integrating over the interval \((t_{0},t)\) and using a change of variables, we get $$\begin{aligned} \int _{\varepsilon _{0} R(t)}^{\varepsilon _{0} R(0)}\frac{1}{s H(s)}\,ds \geq \beta _{1} \int _{t_{0}}^{t}\xi (s)\,ds; \end{aligned}$$ which gives $$\begin{aligned} R(t)\leq \frac{1}{\varepsilon _{0}} \Psi _{*}^{-1} \biggl(\beta _{1} \int _{t_{0}}^{t}\xi (s)\,ds \biggr)\quad \forall t \geq t_{0}, \end{aligned}$$ where \(\Psi _{*}(t):=\int _{t}^{r}\frac{1}{s H(s)}\,ds\). Since \(R\sim E\), we have, for \(\beta _{2}>0\), $$\begin{aligned} E(t)\leq \beta _{2} \Psi _{*}^{-1} \biggl(\beta _{1} \int _{0}^{t}\xi (s)\,ds \biggr) \quad\forall t \geq t_{0}. \end{aligned}$$ This completes the proof. □ Example 4.3 Let \(k_{1}(t)=ae^{-\alpha t}\) and \(k_{2}(t)=\frac{b}{(1+t)^{q}}\), \(q>1\). The constants a and b are chosen so that \((A1)\) is satisfied. Then there exists \(C>0\) such that $$\begin{aligned} E(t)\leq \frac{C}{(1+t)^{q}},\quad \forall t>0. \end{aligned}$$ Let \(k_{1}(t)=\frac{a}{(1+t)^{m}}\) and \(k_{2}(t)=\frac{b}{(1+t)^{n}}\) with \(m,n>1\). The constants a and b are chosen so that \((A1)\) is satisfied. Then there exists \(C>0\) such that, for any \(t>0\), $$\begin{aligned} E(t)\leq \frac{C}{(1+t)^{\nu }}, \quad\text{with }\nu =\min \{m,n \}. \end{aligned}$$ Let \(k_{1}(t)=ae^{-\beta t}\) and \(k_{2}(t)=be^{-(1+t)^{q}}\) with \(0< q<1\). The constants a and b are chosen so that \((A1)\) is satisfied. Then there exist positive constants C and \(\alpha _{1}\) such that $$\begin{aligned} E(t)\leq Ce^{-\alpha _{1}(1+t)^{\nu }}, \quad\text{for } t \text{ large}. \end{aligned}$$ Cannon, J.: The solution of the heat equation subject to the specification of energy. Q. Appl. Math. 21(2), 155–160 (1963) Shi, P.: Weak solution to an evolution problem with a nonlocal constraint. SIAM J. Math. Anal. 24(1), 46–58 (1993) Capasso, V., Kunisch, K.: A reaction–diffusion system arising in modelling man-environment diseases. Q. Appl. Math. 46(3), 431–450 (1988) Cahlon, B., Kulkarni, D.M., Shi, P.: Stepwise stability for the heat equation with a nonlocal constraint. SIAM J. Numer. Anal. 32(2), 571–593 (1995) Ionkin, N.I., Moiseev, E.I.: A problem for the heat conduction equation with two-point boundary condition. Differ. Uravn. 
15(7), 1284–1295 (1979) Shi, P., Shillor, M.: On design of contact patterns in one-dimensional thermoelasticity. In: Theoretical Aspects of Industrial Design (1992) Choi, Y., Chan, K.-Y.: A parabolic equation with nonlocal boundary conditions arising from electrochemistry. Nonlinear Anal. 18(4), 317–331 (1992) Ewing, R.E., Lin, T.: A class of parameter estimation techniques for fluid flow in porous media. Adv. Water Resour. 14(2), 89–97 (1991) Pulkina, L.S.: A non-local problem with integral conditions for hyperbolic equations (1999) Pul'kina, L.S.: A nonlocal problem with integral conditions for a hyperbolic equation. Differ. Equ. 40(7), 947–953 (2004) Yurchuk, N.: Mixed problem with an integral condition for certain parabolic equations. Differ. Equ. 22(12), 1457–1463 (1986) Kartynnik, A.: Three point boundary value problem with an integral space variables conditions for second order parabolic equations. Differ. Uravn. 26, 1568–1575 (1990) Mesloub, S., Bouziani, A.: Mixed problem with a weighted integral condition for a parabolic equation with the Bessel operator. Int. J. Stoch. Anal. 15(3), 277–286 (2002) Mesloub, S., Messaoudi, S.A.: Global existence, decay, and blow up of solutions of a singular nonlocal viscoelastic problem. Acta Appl. Math. 110(2), 705–724 (2010) Mesloub, S., Messaoudi, S.A.: A nonlocal mixed semilinear problem for second-order hyperbolic equations. Electron. J. Differ. Equ. 2003, 30 (2003) Mesloub, S.: On a singular two dimensional nonlinear evolution equation with nonlocal conditions. Nonlinear Anal., Theory Methods Appl. 68(9), 2594–2607 (2008) Kamynin, L.I.: A boundary value problem in the theory of heat conduction with a nonclassical boundary condition. USSR Comput. Math. Math. Phys. 4(6), 33–59 (1964) Mesloub, S., Mesloub, F.: Solvability of a mixed nonlocal problem for a nonlinear singular viscoelastic equation. Acta Appl. Math. 110(1), 109–129 (2010) Draifia, A., Zarai, A., Boulaaras, S.: Global existence and decay of solutions of a singular nonlocal viscoelastic system. Rend. Circ. Mat. Palermo 2, 1–25 (2018) Pişkin, E., Ekinci, F.: General decay and blowup of solutions for coupled viscoelastic equation of Kirchhoff type with degenerate damping terms. Math. Methods Appl. Sci. 42(16), 5468–5488 (2019) Boulaaras, S., Guefaifia, R., Mezouar, N.: Global existence and decay for a system of two singular one-dimensional nonlinear viscoelastic equations with general source terms. Appl. Anal. (2020). https://doi.org/10.1080/00036811.2020.1760250 Mustafa, M.I.: General decay result for nonlinear viscoelastic equations. J. Math. Anal. Appl. 457(1), 134–152 (2018) Arnol'd, V.I.: Mathematical Methods of Classical Mechanics, vol. 60. Springer, Berlin (2013) The authors would like to express their profound gratitude to King Fahd University of Petroleum and Minerals (KFUPM) and University of Sharjah for their continuous support and he also thanks an anonymous referee for his/her very careful reading and valuable suggestions. This work is funded by KFUPM under Project #SB191048. This work is funded by KFUPM under Project (SB191048). The Preparatory Year Math Program, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia Mohammad M. Al-Gharabli & Adel M. Al-Mahdi Department of Mathematics, University of Sharjah, Sharjah, UAE Salim A. Messaoudi Mohammad M. Al-Gharabli Adel M. Al-Mahdi The authors read and approved the final manuscript. Correspondence to Mohammad M. Al-Gharabli. Al-Gharabli, M.M., Al-Mahdi, A.M. & Messaoudi, S.A. 
New general decay result for a system of two singular nonlocal viscoelastic equations with general source terms and a wide class of relaxation functions. Bound Value Probl 2020, 170 (2020). https://doi.org/10.1186/s13661-020-01467-5 Viscoelasticity Nonlocal boundary conditions Relaxation function Convex functions
What is the justification for Kaiming He initialization? I've been trying to understand where the formulas for Xavier and Kaiming He initialization come from. My understanding is that these initialization schemes come from a desire to keep the gradients stable during back-propagation (avoiding vanishing/exploding gradients). I think I can understand the justification for Xavier initialization, and I'll sketch it below. For He initialization, what the original paper actually shows is that that initialization scheme keeps the pre-activation values (the weighted sums) stable throughout the network. Most sources I've found explaining Kaiming He initialization seem to just take it as "obvious" that stable pre-activation values will somehow lead to stable gradients, and don't even mention the apparent mismatch between what the math shows and what we're actually trying to accomplish. The justification for Xavier initialization (introduced here) is as follows, as I understand it: As an approximation, pretend the activation functions don't exist and we have a linear network. The actual paper says we're assuming the network starts out in the "linear regime", which for the sigmoid activations they're interested in would mean we're assuming the pre-activations at every layer will be close to zero. I don't see how this could be justified, so I prefer to just say we're disregarding the activation functions entirely, but in any case that's not what I'm confused about here. Zoom in on one edge in the network. It looks like $x\to_{w} y$, connecting the input or activation value $x$ to the activation value $y$, with the weight $w$. When we do gradient descent we consider $\frac{\partial C}{\partial w}$, and we have: $$\frac{\partial C}{\partial w}=x\frac{\partial C}{\partial y}$$ So if we want to avoid unstable $\frac{\partial C}{\partial w}$-s, a sufficient (not necessary, but that's fine) condition is to keep both those factors stable - the activations and the gradients with respect to activations. So we try to do that. To measure the "size" of an activation, let's look at its mean and variance (where the randomness comes from the random weights). If we use zero-mean random weights all i.i.d. on each layer, then we can show that all of the activation values in our network are zero-mean, too. So controlling the size comes down to controlling the variance (big variance means it tends to have large absolute value and vice versa). Since the gradients with respect to activations are calculated by basically running the neural network backwards, we can show that they're all zero-mean too, so controlling their size comes down to controlling their variance as well. We can show that all the activations on a given layer are identically distributed, and ditto for the gradients with respect to activations on a given layer. If $v_n$ is the variance of the activations on layer $n$, and if $v'_n$ is the variance of the gradients, we have $$v_{n+1}=v_n k_n \sigma^2$$ $$v'_n=v_{n+1} k_{n+1} \sigma^2$$ where $k_i$ is the number of neurons on the $i$-th layer, and $\sigma^2$ is the variance of the weights between the $n$-th and $n+1$-th layers. So to keep either of the growth factors from being too crazy, we would want $\sigma^2$ to be equal to both $1/k_n$ and $1/k_{n+1}$. We can compromise by setting it equal to the harmonic mean or the geometric mean or something like that. 
This stops the activations from exploding out of control, and stops the gradients with respect to activations from exploding out of control, which by step (2) stops the gradients with respect to the weights (which at the end of the day are the only things we really care about) from growing out of control.

However, when I look at the paper on He initialization, it seems like almost every step in this logic breaks down. First of all, the math, if I understand correctly, shows that He initialization can control the pre-activations, not the activations. Therefore, the logic from step (2) above that this tells us something about the gradients with respect to the weights fails. Second of all, the activation values in a ReLU network like the authors are considering are not zero-mean, as they point out themselves, which means that even the reasoning as to why we should care about the variances, from step (3), fails. The variance is only relevant for Xavier initialization because in that setting the mean is always zero, so the variance is a reasonable proxy for "bigness". So while I can see how the authors show that He initialization controls the variances of the pre-activations in a ReLU network, for me the entire reason why we should care about doing this has fallen apart.

deep-learning weights-initialization
Jack M
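To convince myself that the variance bookkeeping itself is right, here is a quick numerical check I put together (my own toy NumPy example, not taken from either paper; the layer width, depth and batch size are arbitrary). It only verifies the variance claims, it does not resolve the gradient question above.

```python
import numpy as np

rng = np.random.default_rng(0)
fan, depth, batch = 512, 20, 2000
x = rng.standard_normal((fan, batch))   # zero-mean, unit-variance inputs

h_lin, h_he, h_bad = x.copy(), x.copy(), x.copy()
for _ in range(depth):
    W_xav = rng.normal(0.0, np.sqrt(1.0 / fan), size=(fan, fan))  # Var(w) = 1/fan (Xavier, fan_in = fan_out)
    W_he  = rng.normal(0.0, np.sqrt(2.0 / fan), size=(fan, fan))  # Var(w) = 2/fan (He)
    h_lin = W_xav @ h_lin                 # linear net + Xavier: activation variance should stay roughly constant
    z_he  = W_he @ h_he                   # ReLU net + He: pre-activation variance should stay roughly constant
    h_he  = np.maximum(z_he, 0.0)
    z_bad = W_xav @ h_bad                 # ReLU net + Xavier: variance roughly halves every layer
    h_bad = np.maximum(z_bad, 0.0)

print(f"linear + Xavier : activation var     = {h_lin.var():.4g}")
print(f"ReLU   + He     : pre-activation var = {z_he.var():.4g}")
print(f"ReLU   + Xavier : pre-activation var = {z_bad.var():.4g}  # shrinks geometrically with depth")
```

With Xavier weights the linear network's activation variance stays near 1, with He weights the ReLU network's pre-activation variance settles at a depth-independent constant, and with Xavier weights in the ReLU network the variance collapses, which is exactly what the formulas predict, whatever that does or doesn't imply about the gradients.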
Fluids and Barriers of the CNS LC–MS/MS-based in vitro and in vivo investigation of blood–brain barrier integrity by simultaneous quantitation of mannitol and sucrose Behnam Noorani1,4, Ekram Ahmed Chowdhury1,4, Faleh Alqahtani2, Yeseul Ahn1,4, Dhavalkumar Patel1, Abraham Al-Ahmad1,4, Reza Mehvar3 & Ulrich Bickel ORCID: orcid.org/0000-0002-2721-299X1,4 Fluids and Barriers of the CNS volume 17, Article number: 61 (2020) Cite this article Understanding the pathophysiology of the blood brain–barrier (BBB) plays a critical role in diagnosis and treatment of disease conditions. Applying a sensitive and specific LC–MS/MS technique for the measurement of BBB integrity with high precision, we have recently introduced non-radioactive [13C12]sucrose as a superior marker substance. Comparison of permeability markers with different molecular weight, but otherwise similar physicochemical properties, can provide insights into the uptake mechanism at the BBB. Mannitol is a small hydrophilic, uncharged molecule that is half the size of sucrose. Previously only radioactive [3H]mannitol or [14C]mannitol has been used to measure BBB integrity. We developed a UPLC–MS/MS method for simultaneous analysis of stable isotope-labeled sucrose and mannitol. The in vivo BBB permeability of [13C6]mannitol and [13C12]sucrose was measured in mice, using [13C6]sucrose as a vascular marker to correct for brain intravascular content. Moreover, a Transwell model with induced pluripotent stem cell-derived brain endothelial cells was used to measure the permeability coefficient of sucrose and mannitol in vitro both under control and compromised (in the presence of IL-1β) conditions. We found low permeability values for both mannitol and sucrose in vitro (permeability coefficients of 4.99 ± 0.152 × 10−7 and 3.12 ± 0.176 × 10−7 cm/s, respectively) and in vivo (PS products of 0.267 ± 0.021 and 0.126 ± 0.025 µl g−1 min−1, respectively). Further, the in vitro permeability of both markers substantially increased in the presence of IL-1β. Corrected brain concentrations (Cbr), obtained by washout vs. vascular marker correction, were not significantly different for either mannitol (0.071 ± 0.007 and 0.065 ± 0.009 percent injected dose per g) or sucrose (0.035 ± 0.003 and 0.037 ± 0.005 percent injected dose per g). These data also indicate that Cbr and PS product values of mannitol were about twice the corresponding values of sucrose. We established a highly sensitive, specific and reproducible approach to simultaneously measure the BBB permeability of two classical low molecular weight, hydrophilic markers in a stable isotope labeled format. This method is now available as a tool to quantify BBB permeability in vitro and in vivo in different disease models, as well as for monitoring treatment outcomes. The blood–brain barrier (BBB) maintains the homeostatic environment of the CNS by separating circulating blood from the central nervous system [1]. It encompasses specialized endothelial cells with a basal lamina that supports the abluminal surface of the endothelium along with other supporting cells, such as pericytes, astrocytes, and neurons [1]. The brain microvascular endothelial cells with tight junctions and transporter proteins are the primary and main gatekeepers for the transportation of nutrients and metabolites, and for the efflux of neurotoxins [2, 3]. 
BBB dysfunction and breakdown contribute to neurological disorders through the transfer of harmful blood components into the brain, irregular transport, and dysregulated clearance of metabolites associated with reduced cerebral blood flow [4]. Therefore, the functional integrity of the BBB is frequently measured in in vitro and in vivo studies, for example with paracellular markers. There are many technical and conceptual pitfalls associated with the experimental application of supposedly paracellular markers and the subsequent interpretation of the data. One important aspect that deserves mention is that these markers can serve two distinct purposes. The first purpose is that, due to their characteristically low BBB permeability, these substances are often used as so-called vascular markers. This is commonly the case when other, more permeable agents are measured in the same study. When a substance is used as a vascular marker, it is assumed that neglecting the extent of its brain uptake during a short experimental time period (1 minute or less) does not significantly compromise the study. Therefore, any concentration measured in whole brain tissue presumably represents brain intravascular space (with reference to concentration in whole blood), or brain plasma volume (with reference to plasma concentration). Such intravascular space values can then be used to correct brain concentrations of other substances, before calculating their BBB permeability. The second purpose is to determine the genuine permeability values of the BBB markers themselves, which are not zero. The latter measurement, of course, also requires proper correction for intravascular volume. Major damage to the BBB, caused by severe disease processes, such as stroke or a relapse phase of multiple sclerosis, may be readily detected using various imaging techniques and a range of markers. However, for the quantification of subtle BBB impairment, markers with naturally low permeability, such as sucrose or mannitol, are used, because even a minor degree of barrier damage is expected to have a noticeable effect on their permeability. Such damage has been observed in acute situations, for instance caused by peripheral inflammatory pain [5, 6], and after major surgery, where it has been connected to the occurrence of postoperative cognitive dysfunction in animal studies [7] and in patients [8]. Subtle BBB damage has also been postulated to play a role in the pathophysiology of chronic diseases like Alzheimer's dementia [9] or small vessel disease [10]. However, there is still uncertainty, as functional BBB changes related to drug transport could not be confirmed in animal models of Alzheimer's disease [11, 12]. Radiolabeled versions of sucrose, in particular [14C]sucrose, have long been used as low molecular weight, hydrophilic markers. We have recently introduced [13C12]sucrose as a superior marker substance, which is non-radioactive and can be quantified by a sensitive and highly specific LC–MS/MS technique [13, 14]. The disaccharide sucrose may be considered the most widely accepted standard for the precise measurement of paracellular BBB permeability due to its properties, such as being uncharged, absence of protein binding, and metabolic stability in the circulation [15]. Our lab has focused on understanding the BBB uptake mechanism of markers of different molecular weight that have otherwise similar physicochemical properties.
Mannitol is a small molecule that is about half the size of sucrose and otherwise has characteristics similar to sucrose as a marker for the BBB. It also has a molecular weight (182 Da) in the range of many small-molecule drugs. Furthermore, mannitol has been widely used over the last 30 years in the lactulose/mannitol (L/M) test, a common dual-sugar test to assess intestinal barrier function [16]. The radiotracer version of mannitol has been used for measurement of BBB integrity, but it requires a radioactive license and special handling skills [17,18,19,20,21,22]. We have also shown that using the radiolabeled version of a marker, in particular [14C]sucrose, might result in a substantial overestimation of the true BBB permeability due to the presence of low-level lipid-soluble impurities in the radiolabeled material. The first objective of the present study was to develop a UPLC–MS/MS method that allows simultaneous analysis of stable isotope-labeled sucrose and mannitol. The second objective was to demonstrate the application of these markers in in vitro and in vivo BBB models. The application of a stable isotope-labeled version of mannitol as a marker for the BBB has not been reported yet. Different stable isotope-labeled versions of mannitol and sucrose, respectively, are commercially available. The isotopic variants of each marker coelute from a BEH-amide UPLC column, while mannitol and sucrose are chromatographically separated from each other. This allowed the simultaneous use of both markers for BBB permeability analysis. Furthermore, for the first time, one variant of [13C]sucrose was used to correct the vascular space for mannitol and sucrose simultaneously. Thus, we selected a suitable combination of mass transitions and settings of the mass detector to detect and quantify [13C6]mannitol and [13C12]sucrose as permeability markers, [2H8]mannitol and [2H2]sucrose as internal standards, and [13C6]sucrose as a vascular marker. Our method offers novel, accurate markers of different sizes for preclinical permeability measurements of the BBB.

Chemicals and reagents

[13C6]mannitol, [2H8]mannitol, [13C12]sucrose, [13C6]sucrose, and [2H2]sucrose were obtained from Omicron Biochemicals (South Hill Street, South Bend, IN, USA). LC–MS grade water was purchased under the brand name J.T. Baker from Avantor Performance Materials, Inc. (Center Valley, PA). LC–MS/MS grade acetonitrile, water, and analytical grade ammonium hydroxide were purchased from Fisher Scientific (Fair Lawn, NJ, USA). For anesthesia, isoflurane was purchased from Lloyd Laboratories (Shenandoah, IA, USA). Heparin solution was purchased from APP Pharmaceuticals (Schaumburg, IL, USA). All other chemicals were analytical grade and obtained from commercial sources.

Mass spectrometric and chromatographic conditions

Analytes were detected using an AB SCIEX QTRAP® 5500 triple quadrupole mass spectrometer (MS) attached to a Nexera UPLC system (Shimadzu Corporation). The UPLC system contained an autosampler (Sil-30AC), pumps (LC-30AD), a controller (CBM-20A), a degasser (DGA-20A5), and a column oven (CTO-30A). Analyst software was used for data acquisition and quantification. Chromatographic separation was performed using an Acquity BEH amide column (2.1 mm × 50 mm, 1.7 μm; Waters, Milford, MA, USA), attached to an inline filter with a pore size of 0.2 μm as a pre-column. The isocratic mobile phase was acetonitrile:water:ammonium hydroxide (73:27:0.1, v/v) at a flow rate of 0.2 mL/min. The column temperature was maintained at 45 °C, and the autosampler was kept at 4 °C.
The total run time was 6 min. However, MS data were collected from 1 to 4.5 min only, and the flow was diverted to waste before and after that window. Electrospray ionization in negative mode with multiple reaction monitoring was used. The mass spectrometer parameters for [13C12]sucrose, [13C6]sucrose and [2H2]sucrose were optimized in our previous study [13]; however, the [13C6]sucrose and [2H2]sucrose parameters were changed here due to the presence of an interfering peak in the blank plasma and brain samples at the same retention time when combined with the mannitol transitions. The mass spectrometer conditions for [13C6]mannitol and [2H8]mannitol were optimized to obtain the optimum [M−H]− signal by continuous infusion of a 100 ng/mL mannitol solution with an infusion pump. The optimized mass spectrometer parameters were as follows: ion spray voltage, − 4500 V; collision gas, high; curtain gas, 30 psi; temperature, 600 °C; ion source gas 1 (nebulizer gas), 55 psi; and ion source gas 2 (turbo gas), 55 psi. For [13C6]mannitol and [2H8]mannitol, the m/z transitions 187 → 92 and 189 → 73 were selected, respectively. The transitions 353 → 92, 347 → 179 and 343 → 71 were used for [13C12]sucrose, [13C6]sucrose and [2H2]sucrose, respectively.

Standard curve preparation

Stock solutions of the three analytes, [13C6]mannitol, [13C12]sucrose, and [13C6]sucrose, were prepared in water at a concentration of 10 mg/mL. Plasma standard curves were made by spiking the stock solutions into blank mouse plasma to obtain plasma concentrations of 1–100 μg/mL. Then, each concentration was diluted 100-fold in water to obtain plasma calibration standards of 10–1000 ng/mL. For the brain standard curve, blank brain tissue was homogenized in water (1:19), and the three analytes were spiked into the homogenate. Homogenate concentrations ranging from 5 to 400 ng/mL were prepared by serial dilution. For protein precipitation, all samples were diluted tenfold in acetonitrile:water (80:20) containing 20 ng/mL of [2H2]sucrose and [2H8]mannitol as internal standards. Then, the precipitated samples were vortexed and centrifuged at 20,000g for 10 min. The supernatant was transferred into autosampler inserts, and a sample volume of 5 μL was injected onto the UPLC column. Blank matrix samples from mice containing no analyte were run to assess the selectivity of the method. Also, to ensure that there was no interference between analyte transitions, neat samples of single analytes without matrix were run.

Accuracy and precision

Inter- and intra-day runs were performed to determine the accuracy and precision of the method. The quality control samples (low, medium, and high concentrations) were evaluated against calibration curves. Accuracy was calculated as the percentage of the measured concentration over the nominal concentration. Precision was expressed as the percent relative standard deviation (RSD). The acceptable inter- and intra-run limits for accuracy were set at 85–115% for the middle and high concentrations and 80–120% for the low concentration. The acceptance limits for precision were 15% (medium and high concentrations) or 20% (low concentration). The linearity of the calibration curves was evaluated by the coefficient of determination (r2) of the linear regression analysis of the concentration–response data using a weight of 1/x, where x is the concentration. Weighting by 1/x is superior to analysis with equal weights because it yields higher accuracy and less variability at low concentrations.
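As an illustration of the 1/x-weighted calibration regression (this sketch is not part of the validated method; the nominal concentrations and peak-area ratios below are invented for demonstration), the weighted calibration line and back-calculated accuracies can be computed as follows in Python:

```python
import numpy as np

# Hypothetical calibration data: nominal concentrations (ng/mL) and analyte/IS peak-area ratios.
conc  = np.array([10, 25, 50, 100, 250, 500, 1000], dtype=float)
ratio = np.array([0.021, 0.052, 0.104, 0.205, 0.515, 1.02, 2.06])

w = 1.0 / conc                                   # 1/x weighting of the squared residuals
Sw, Swx, Swy = w.sum(), (w * conc).sum(), (w * ratio).sum()
Swxx, Swxy = (w * conc**2).sum(), (w * conc * ratio).sum()
slope = (Sw * Swxy - Swx * Swy) / (Sw * Swxx - Swx**2)   # weighted least-squares slope
intercept = (Swy - slope * Swx) / Sw                      # weighted least-squares intercept

back_calc = (ratio - intercept) / slope          # back-calculated concentrations
accuracy = 100.0 * back_calc / conc              # percent of nominal
print(f"slope = {slope:.5f}, intercept = {intercept:.5f}")
print("back-calculated accuracy (%):", np.round(accuracy, 1))
```

The back-calculated accuracies can then be checked against the 85–115% (80–120% at the lowest level) acceptance windows described above.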
The recovery of the three analytes was determined in diluted plasma and brain homogenate. We expected recovery of the sucrose analytes similar to the results described previously [13, 14], since all of the analytes are stable isotope-labeled versions of the same chemical entity. Three concentrations representing low, medium, and high levels were selected from the calibration curve: 10, 100, and 1000 ng/mL for the plasma matrix, and 5, 50, and 400 ng/mL for the brain homogenate. Five replicate samples were prepared in each of the respective matrices, as well as samples with equivalent concentrations in water as reference. The samples and references were subjected to the sample preparation method described above, and the peak areas of the analytes were determined. Recovery was calculated as the percent ratio of peak areas (sample/reference), where sample refers to the matrix preparation and reference to the corresponding neat sample in water.

Freeze–thaw stability

Freeze–thaw stability was assessed by subjecting two neat concentrations of the analytes (50 and 500 ng/mL) to three freeze–thaw cycles (n = 3). Prepared samples were stored at − 80 °C and thawed at room temperature for 1 h, in order to replicate the experimental conditions. The concentration of the analytes in the neat samples was compared to the standard curve.

Long-term stability

Long-term storage stability of the different sucrose isotopes was evaluated in our previous study [13]. Here, the long-term stability of [13C6]mannitol was evaluated in diluted plasma samples and brain homogenate at − 80 °C. Quality control samples at low, medium and high concentrations of analyte in the brain and plasma (n = 3) were stored at − 80 °C for 2 months. The stored samples were then compared against standards in order to assess stability.

In vivo application of the method

Two groups of anesthetized C57BL/6J mice were used to perform the pharmacokinetic study. Female C57BL/6J mice (8–10 weeks old, 23–27 g body weight) were purchased from Jackson Laboratories (Bar Harbor, ME, USA). The experimental protocols were approved by the Institutional Animal Care and Use Committee at Texas Tech University Health Sciences Center and followed current NIH guidelines. A silicone face mask was used to apply isoflurane (4% for induction, 1.5–2% v/v for maintenance) in 70% nitrous oxide/30% oxygen at a flow rate of 1 L/min. The jugular veins were exposed bilaterally through skin incisions at the neck, one side for IV injections and the other for blood sampling. [13C6]mannitol and [13C12]sucrose (10 mg/kg) were co-injected as an IV bolus dose into the jugular vein. Then, at 1, 5, 10, 20, and 30 min after injection, blood samples (40 µL) were collected from the contralateral jugular vein. The samples were used to generate plasma concentration–time curves in each individual animal. Two groups of animals were used to compare vascular space correction by [13C6]sucrose administration with correction by transcardiac perfusion (washout group). In the washout group (n = 6), the thorax was opened immediately after the last sampling time point (30 min), and 20 mL of phosphate-buffered saline (pH 7.4) at room temperature was used to perform the vascular perfusion via the left ventricle of the heart (flow rate of 2 mL/min) using a Harvard syringe pump.
In order to facilitate the outflow of blood from the brain and to visually confirm complete blood removal following perfusion, both jugular veins were cut open at the start of perfusion. In the second group of animals (vascular marker group, n = 6), a bolus dose of the vascular marker [13C6]sucrose (10 mg/kg in saline) was injected intravenously 30 s before the last sampling time point. Afterwards, the animals were euthanized by decapitation. Collected blood samples were centrifuged, and the supernatant plasma was separated for further analysis. Meninges were removed from the collected brains, and the forebrains were weighed without olfactory bulbs, cerebellum, or brain stem. Then, the brain samples were homogenized and the plasma samples diluted according to the sample preparation steps described in the UPLC–MS/MS section, and the homogenized brain and diluted plasma were stored at − 80 °C until measurement by the UPLC–MS/MS system.

The corrected brain concentration (\(C_{br-corr}^{analyte}\)) in the vascular marker group, which received [13C6]sucrose, was determined as follows:

$$ C_{br-corr}^{analyte} = \frac{(V_{d} - V_{0}) \times C_{pl}^{analyte}}{1 - V_{0}} $$

Here, \(V_{d}\) is the apparent volume of distribution of the BBB permeability marker ([13C6]mannitol or [13C12]sucrose), \(V_{0}\) is the apparent volume of distribution of the vascular marker [13C6]sucrose, and \(C_{pl}^{analyte}\) is the terminal (30 min) plasma concentration of [13C6]mannitol or [13C12]sucrose. The \(V_{d}\) and \(V_{0}\) values were obtained using the following two equations:

$$ V_{d} = C_{br}^{analyte}/C_{pl}^{analyte} $$

$$ V_{0} = C_{br}^{vascular\,marker}/C_{pl}^{vascular\,marker} $$

where \(C_{br}^{analyte}\) is the total uncorrected brain concentration of [13C6]mannitol or [13C12]sucrose and \(C_{br}^{vascular\,marker}\) is the total (uncorrected) brain concentration of [13C6]sucrose, both at the terminal sampling time (30 min), and \(C_{pl}^{vascular\,marker}\) is the terminal plasma concentration of the vascular marker at 30 min. Brain tissue concentration values in the washout group, which had undergone buffer washout, were considered as corrected for intravascular content. Values for brain uptake clearance, Kin, also known as the permeability–surface area product, were calculated using the following equations based on either uncorrected (\(C_{br}^{analyte}\)) or corrected (\(C_{br-corr}^{analyte}\)) brain concentrations of mannitol and sucrose:

$$ K_{in} = C_{br}^{analyte}/AUC_{0}^{T} $$

$$ K_{in-corr} = C_{br-corr}^{analyte}/AUC_{0}^{T} $$

where \(AUC_{0}^{T}\) denotes the area under the plasma concentration–time curve from time point 0 to the terminal sampling time (30 min) for [13C6]mannitol and [13C12]sucrose. \(AUC_{0}^{T}\) was estimated via the linear-logarithmic trapezoidal method. For a comparison between in vitro and in vivo models, the Kin values or permeability–surface area products (PS) were converted to permeability coefficients, taking 120 cm2/g of brain as the surface area of the BBB in vivo [23].
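As a worked illustration of the vascular-space correction and Kin calculation above (not part of the original analysis), the following minimal Python sketch uses made-up single-animal numbers for mannitol; the linear-up/log-down trapezoidal rule is used for the AUC over the sampled interval, and the conversion to a permeability coefficient assumes the 120 cm2/g surface area cited above. The same bookkeeping applies to sucrose by substituting its concentrations.

```python
import numpy as np

# Hypothetical single-animal numbers (ng/mL for plasma, ng/g for brain); not data from this study.
t       = np.array([1, 5, 10, 20, 30], dtype=float)              # sampling times, min
cp_man  = np.array([9000, 5200, 3400, 1900, 1200], dtype=float)  # [13C6]mannitol plasma concentrations
cbr_man = 14.4      # total (uncorrected) brain concentration of mannitol at 30 min
cbr_vm  = 400.0     # brain concentration of the vascular marker [13C6]sucrose at 30 min
cp_vm   = 50000.0   # terminal plasma concentration of the vascular marker

def auc_lin_log(t, c):
    """Linear-up/log-down trapezoidal AUC over the sampled interval (the 0-1 min segment is ignored here)."""
    auc = 0.0
    for t1, t2, c1, c2 in zip(t[:-1], t[1:], c[:-1], c[1:]):
        if 0 < c2 < c1:
            auc += (c1 - c2) * (t2 - t1) / np.log(c1 / c2)   # declining segment: log trapezoid
        else:
            auc += 0.5 * (c1 + c2) * (t2 - t1)               # rising or flat segment: linear trapezoid
    return auc

v_d = cbr_man / cp_man[-1]                         # apparent distribution volume of the permeability marker, mL/g
v_0 = cbr_vm / cp_vm                               # brain vascular space from the vascular marker, mL/g
cbr_corr = (v_d - v_0) * cp_man[-1] / (1.0 - v_0)  # corrected brain concentration, ng/g
kin_corr = cbr_corr / auc_lin_log(t, cp_man)       # brain uptake clearance, mL g^-1 min^-1
p_coeff  = kin_corr / (120.0 * 60.0)               # permeability coefficient, cm/s (S = 120 cm2/g, min -> s)

print(f"V_d = {v_d*1e3:.1f} µL/g, V_0 = {v_0*1e3:.1f} µL/g, corrected C_br = {cbr_corr:.2f} ng/g")
print(f"K_in(corr) = {kin_corr*1e3:.4f} µL g^-1 min^-1, P = {p_coeff:.2e} cm/s")
```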
In vitro application of the method

iPSC differentiation to BMECs

The IMR90-c4 induced pluripotent stem cell line was obtained from the WiCell cell repository (WiCell, Madison, WI, USA). iPSCs were differentiated into brain microvascular endothelial cells (BMECs) following the established protocol [24, 25]. Undifferentiated stem cells were seeded on six-well tissue culture-treated plates coated with Matrigel (C-Matrigel; Corning, Corning, MA, USA) in Essential 8 medium (E8; Thermo Fisher, Waltham, MA, USA) containing 10 μM Y-27632 (Tocris, Minneapolis, MN, USA) at a density of 100,000 cells/mL. Cells were maintained in E8 for 3 days prior to differentiation. Then, differentiation was initiated using unconditioned medium [UM: Dulbecco's modified Eagle's medium/F12 with 15 mM HEPES (Thermo Fisher, Waltham, MA, USA), 20% knockout serum replacement (Thermo Fisher, Waltham, MA, USA), 1% non-essential amino acids (Thermo Fisher, Waltham, MA, USA), 0.5% Glutamax (Thermo Fisher, Waltham, MA, USA) and 0.1 mM β-mercaptoethanol (Sigma-Aldrich, St. Louis, MO, USA)] and maintained for 6 days. After 6 days, cells were incubated for two days with EC++ medium [human serum-free endothelial medium (hESFM, Thermo Fisher, Waltham, MA, USA) supplemented with 1% bovine platelet-poor plasma-derived serum (PDS, Alfa Aesar, Ward Hill, MA, USA), 10 ng/mL bFGF and 10 μM retinoic acid (Sigma-Aldrich)]. After eight days of differentiation, cells were detached by Accutase (Corning) treatment and seeded as single cells on 24-well Transwells (polyester, 0.4 μm pore size; filter area 0.33 cm2, Corning) coated with a solution of collagen IV from human placenta (Sigma-Aldrich) and bovine plasma fibronectin (Sigma-Aldrich) (400 μg/mL collagen IV and 100 μg/mL fibronectin) at a density of 1,000,000 cells/cm2. Twenty-four hours after seeding, EC− medium (hESFM supplemented with 1% platelet-poor plasma-derived serum) was added. Purified endothelial monolayers were formed on day 10 of the experiment, and permeability barrier function tests were performed 48 h after seeding on the Transwell system.

Measurement of barrier function

Barrier integrity of the BMEC monolayer was assessed by measuring transendothelial electrical resistance (TEER) using a Millicell ERS electrode (Millipore, Bedford, MA, USA). After conducting three measurements for each insert (n = 3), the average resistance was obtained. Paracellular permeability was assessed by adding 1 mg/mL of [13C6]mannitol and [13C12]sucrose to the donor side of the Transwell system. Then, 50 μL aliquots were collected from the acceptor compartment (basolateral chamber) at 10, 20, 30, 60, and 120 min. At the end of the experiment, the donor and acceptor samples were diluted in water to fall within the range of the standard curve (10–1000 ng/mL), and the aforementioned preparation steps were performed to measure the concentrations with the UPLC–MS/MS system. The clearance, or permeability–surface area product (PS), for mannitol and sucrose was calculated in two steps. First, the cleared volume up to each time point was calculated from the following equation. Then, linear regression was applied to the cleared volume plotted versus time, for both sample and blank inserts, to obtain the PS of the Transwell system from the slope.

$$ \text{Cleared volume} = \left( C_{acceptor} \times V_{acceptor} \right)/C_{donor} $$

Here, \(C_{acceptor}\) is the measured concentration in the acceptor compartment at a given sampling time point, \(V_{acceptor}\) is the volume of the acceptor compartment, and \(C_{donor}\) is the concentration in the donor compartment.
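As a minimal numerical sketch of this clearance calculation (the acceptor concentrations, acceptor volume, and sampling times below are hypothetical, not data from this study), the cleared volumes and the PS of an insert can be obtained as follows; the subsequent conversion to a permeability coefficient and the blank-filter correction are described in the next paragraph.

```python
import numpy as np

# Hypothetical Transwell data (not from this study): acceptor concentrations over time.
t_min    = np.array([10, 20, 30, 60, 120], dtype=float)   # sampling times, min
c_donor  = 1_000_000.0                                     # ng/mL (1 mg/mL dosing solution)
c_accept = np.array([180, 390, 610, 1250, 2600], dtype=float)  # ng/mL in the basolateral chamber
v_accept = 0.9                                             # mL, volume of the acceptor compartment

cleared_uL = (c_accept * v_accept / c_donor) * 1e3   # cleared volume at each time point, µL
ps_uL_per_min = np.polyfit(t_min, cleared_uL, 1)[0]  # slope of cleared volume vs. time = PS

s_filter = 0.33                                      # cm2, insert area of a 24-well Transwell
p_total = (ps_uL_per_min * 1e-3 / 60.0) / s_filter   # cm/s; repeat the same fit for a blank filter
print(f"PS = {ps_uL_per_min:.4f} µL/min, P_total = {p_total:.2e} cm/s")
# The monolayer permeability then follows from 1/P_cells = 1/P_total - 1/P_blank (next paragraph).
```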
Afterwards, the permeability coefficient (P) was obtained from the following equations:

$$ P = PS/S $$

$$ \frac{1}{P_{cells}} = \frac{1}{P_{total}} - \frac{1}{P_{blank}} $$

The permeability coefficient (P) was obtained by dividing the PS by the insert surface area (S, cm2) (Eq. 7); the permeability coefficient of the cell monolayer (Pcells) was then obtained by subtracting the reciprocal permeability of the coated blank filter (Pblank) from that of the complete Transwell (Ptotal) (Eq. 8).

Measurement of the permeability coefficient of mannitol and sucrose in the presence of an inflammatory cytokine

The effect of interleukin-1 beta (IL-1β) on the permeability of mannitol and sucrose in the in vitro BBB model (iPSC-derived BMECs) was also measured. To mimic inflammatory conditions, the Transwell model was exposed to medium supplemented with 10 or 100 ng/mL IL-1β (Peprotech, Cranbury, NJ) for 24 h (n = 3). Then, the medium was removed, and fresh medium containing 1 mg/mL of sucrose and mannitol was added to the apical side of the Transwell. The permeability coefficients of the markers were measured as described above. Also, the TEER values of the iPSC-derived BMECs were measured before and after exposure to IL-1β.

Measurement of partition coefficients of mannitol and sucrose

Using an established method, the partition coefficients of [13C6]mannitol and [13C12]sucrose between 1-octanol and water were determined [15]. For this purpose, equal volumes of 1-octanol and water were mixed together at room temperature overnight with continuous stirring. Then [13C6]mannitol and [13C12]sucrose (100 μg/mL each) were added to 5 mL of the octanol-saturated water, and the mixture was then added to 5 mL of water-saturated 1-octanol in a glass scintillation vial. Subsequently, the glass vial was placed in a rotary machine, and the contents were mixed for 30 min. Samples (500 μL) were taken from both the water and the 1-octanol phase for further analysis with LC–MS/MS. The water samples were diluted 100-fold, while the 1-octanol samples remained undiluted.

Statistical analysis of data was performed using Prism 8 (GraphPad Software, La Jolla, CA). All experimental metrics were collected across at least three biological replicates. Student's paired t-test was used for comparison of uncorrected and corrected vascular space within the same mice. An unpaired two-tailed t-test was used for comparison of two groups. Data with three groups were analyzed by one-way ANOVA, followed by Tukey's multiple comparison test. In all cases, a p value < 0.05 was considered significant. Data are presented as mean ± SD or individual values.

Method development and validation

The mass spectra of [13C12], [13C6], and [2H2]sucrose have been reported in our previous study [13]. The best m/z transition for the stable isotopes of mannitol was selected based on signal-to-noise ratio and sensitivity (see Fig. S1 in Additional file 1). Figure 1 depicts the chromatograms of single-analyte neat samples of mannitol and sucrose prepared in water; no cross-channel interference between transitions was observed. We also show the lowest calibration standard, blank matrix, and internal standard in plasma and brain matrix (Fig. 2). We found no interference in the matrix samples. However, the [13C6]sucrose transition (347 → 179) showed a peak at a retention time of approximately 1.7 and 2.6 min in plasma and brain matrix, respectively.
Also, the [2H2]sucrose transition (343 → 71) displayed similar peaks at retention times of 1.7 and 2.7 min for plasma and brain matrix, respectively. These peaks do not interfere with the sucrose peak at a retention time of 2.2 min.

Fig. 1: Chromatograms of single-analyte neat samples of [13C6]mannitol, [13C12]sucrose, [13C6]sucrose, [2H8]mannitol, and [2H2]sucrose, prepared in LC–MS/MS grade water, with all considered transitions.

Fig. 2: Chromatograms of blank matrices, lowest calibration standard, and internal standard in plasma (a) and brain matrix (b).

The data for inter- and intra-run accuracy and precision in plasma and brain samples are included in Additional file 1: Tables S1 and S2. Both plasma and brain values were within the limits of Food and Drug Administration (FDA) guidelines for method validation. Moreover, the calibration curves generated in the ranges of 10–1000 ng/mL and 5–400 ng/mL for plasma and brain, respectively, were linear with r2 > 0.99 across all intra- and inter-assay runs.

Recovery and stability

Recoveries of [13C6]mannitol, [13C12]sucrose, and [13C6]sucrose, the analytes of the method, were determined for plasma and brain matrix at low, medium, and high concentrations. As shown in Table S3, the recoveries of the analytes were high (≥ 95%) in all tested matrices. In addition, the recovery of both sucrose analytes was similar to that obtained with our previously developed method [13, 14]. Together with the high recovery, these data suggest minimal or no matrix effect on the analyte signal intensity. The freeze–thaw stability of [13C6]mannitol was determined at 50 and 500 ng/mL in water (Additional file 1: Fig. S2). The results confirmed that mannitol remains stable over three freeze–thaw cycles, similar to the sucrose analytes in our previous study [13]. Results from the long-term storage stability study also showed that mannitol was stable in plasma and brain matrix. Regarding accuracy, the values of [13C6]mannitol in plasma at nominal concentrations of 10, 100, and 1000 ng/mL were 96.1%, 109%, and 97.6%, respectively. In the brain matrix, the accuracy values for 5, 50 and 400 ng/mL nominal concentrations were 105%, 97.2%, and 97.5%, respectively.

A comparative pharmacokinetic study was performed in two groups of anesthetized C57BL/6J mice to demonstrate the application of the method. The results of the pharmacokinetic study are shown in Figs. 3 and 4. The plasma profiles of both groups (vascular marker group and washout group) were similar for both mannitol and sucrose, and the areas under the curve from 0 to 30 min were not significantly different. Moreover, the plasma profile of mannitol was similar to that of sucrose, showing a biexponential decline (Fig. 3).

Fig. 3: Pharmacokinetic profiles for [13C6]mannitol and [13C12]sucrose in mouse plasma up to 30 min after IV bolus (mean ± SD, n = 6).

Fig. 4: a, c Differences in brain concentration and brain uptake clearance (Kin) of [13C6]mannitol with or without correction by vascular marker. b, d Cbr and Kin of [13C12]sucrose with or without correction by vascular marker. ***p < 0.001 (n = 6), analyzed by Student's paired t-test (two-tailed). N.S. Student's unpaired t-test.

Comparison of the corrected brain concentrations (washout vs. vascular marker correction) showed no significant difference for either mannitol or sucrose (unpaired, two-tailed t-test) (Fig. 4).
Cbr (%ID/g) of mannitol was 0.071 ± 0.007 and 0.065 ± 0.009 for the vascular marker and washout groups, respectively, whereas the Cbr of sucrose was almost half of the mannitol values (0.035 ± 0.003 and 0.037 ± 0.005 for vascular marker and washout, respectively). Similarly, comparison of the brain uptake clearance (Kin) between these two groups for each marker showed no significant difference (unpaired, two-tailed t-test) (Fig. 4). For example, the Kin value of mannitol was 0.267 ± 0.021 µL g−1 min−1 and 0.245 ± 0.013 µL g−1 min−1 for the vascular marker and washout groups, respectively. Comparing the two markers, the Kin of mannitol (0.267 ± 0.021 µL g−1 min−1) was more than two times higher than that of sucrose (0.126 ± 0.025 µL g−1 min−1).

In vitro application of the method (in vitro–in vivo correlation)

The Transwell system is widely used for in vitro models of the BBB in drug development and screening [26]. We evaluated the permeability of our novel markers in iPSC-derived brain endothelial cells cultured on Transwell membranes. The barrier function of the monolayer was confirmed by measuring the TEER. The average TEER value was 1812 ± 54 Ω cm2, which is similar to values reported in the literature [24, 27]. For a comparison between in vitro and in vivo models, the Kin values or permeability surface area products (PS) were converted to permeability coefficients, taking 0.33 cm2/well as the surface area of the Transwell membranes, and 120 cm2/g of brain as the surface area of the BBB in vivo [23]. The in vitro permeability coefficient of mannitol and sucrose was 4.99 ± 0.152 × 10−7 and 3.12 ± 0.176 × 10−7 cm/s, respectively. Figure 5a depicts the permeability values of the two markers, with mannitol showing higher permeability compared to sucrose (p < 0.0001, unpaired, two-tailed t-test). The PS value of mannitol and sucrose in vivo was 0.267 ± 0.021 and 0.126 ± 0.025 µL g−1 min−1, respectively, which corresponds to permeability coefficients of 3.71 ± 0.296 × 10−8 and 1.75 ± 0.355 × 10−8 cm/s for mannitol and sucrose, respectively. Figure 5c shows the in vitro–in vivo correlation of the markers. Interestingly, the P values for mannitol and sucrose in vitro were only about 13-fold and 18-fold higher than the corresponding permeability coefficients in vivo.

Fig. 5: a Permeability coefficient (P) of mannitol and sucrose in the Transwell model with a TEER value of 1812 ± 54 Ω cm2 (n = 3). b Permeability coefficient (P) of mannitol and sucrose in the in vivo model (n = 6). ****p < 0.0001, analyzed by Student's unpaired t-test (two-tailed). c In vitro–in vivo correlation of mannitol and sucrose based on the permeability coefficients.

The effect of an inflammatory cytokine on BBB permeability in the in vitro model was examined. As shown in Fig. 6, the permeability coefficients of mannitol and sucrose significantly increased from 6.90 ± 0.689 × 10−7 and 4.74 ± 0.314 × 10−7 cm/s to 1.67 ± 0.188 × 10−6 and 1.23 ± 0.163 × 10−6 cm/s, respectively, with 100 ng/mL IL-1β. Moreover, the TEER values of iPSC-derived BMECs decreased by 38% after 1 day of exposure to 100 ng/mL IL-1β. However, the decrease in the TEER values and the increases in the permeability coefficients of sucrose and mannitol in the presence of 10 ng/mL IL-1β were not statistically significant (Fig. 6).

Fig. 6: Permeability coefficient of a mannitol and b sucrose in iPSC-BMECs following treatment with different concentrations of IL-1β. c The effect of IL-1β on TEER of iPSC-BMECs.
**p < 0.01 and ***p < 0.001, one-way ANOVA followed by Tukey's multiple comparisons test (n = 3).

The correlations between log P (partition coefficient) and the in vitro and in vivo permeability coefficients are shown in Fig. 7. We found that mannitol and sucrose have log P values of − 2.98 ± 0.033 and − 3.62 ± 0.056, respectively. The permeability coefficient of sucrose is lower than that of mannitol, reflecting its lower log P value.

Fig. 7: a Correlation of in vitro permeability coefficient (n = 3) and log P (n = 5). b Correlation of in vivo permeability coefficient (n = 6) and log P (n = 5).

The results of our study show that we can accurately quantify the stable isotope-labeled versions of mannitol and sucrose simultaneously in brain and plasma by LC–MS/MS. Our method not only can replace the radioactive tracers of mannitol and sucrose for permeability studies, but also detects two markers of different molecular weight in a single run. The radioactive version of mannitol has been used widely in measurements of the BBB [17,18,19,20, 27]. Moreover, mannitol is also used in the lactulose/mannitol (L/M) ratio test, a widespread dual-sugar test to assess intestinal barrier function [16]. [13C]mannitol has recently been presented as a novel biomarker for quantifying intestinal permeability [28, 29], but its validation as a marker for the BBB has not been reported. Hence, the mass spectrometry conditions for [13C6]mannitol were optimized by continuous infusion of a mannitol solution to obtain the optimum [M−H]− signal. According to our findings, the most robust m/z transition for [13C6]mannitol was 187 → 92, based on the signal-to-noise ratio. A dual-analyte method with a sucrose vascular marker was previously developed by our group. In the present study, three analytes (one mannitol and two sucrose variants) plus two stable isotope-labeled internal standards were readily detected, owing to co-elution from a suitable stationary phase and mass-resolved detection of each analyte based on its molecular weight. Figure 2 shows the simultaneous detectability of mannitol and sucrose in the various matrices. Co-administration of mannitol and sucrose could provide information on the uptake mechanism at the BBB of markers that have similar physicochemical properties over a range of molecular weights that covers the vast majority of marketed drugs [30]. The method can also be run for a single analyte by removing the transitions of the other analytes. To demonstrate the application of our method in in vitro studies, we used the Transwell system, the in vitro platform most commonly used for BBB permeability studies. iPSC-derived BMECs were used, which provide high TEER values and, consequently, low paracellular permeability. iPSC-derived BMECs are considered an ideal cell model for drug screening and permeability studies [31]. Various BBB permeability markers are currently being used for Transwell models and advanced microfluidic models, including sodium fluorescein, radiolabeled sucrose, and dextrans of different molecular weights [32]. Our method quantifies BBB integrity with high sensitivity and accuracy, in contrast to the radiolabeled versions of sucrose or mannitol and the fluorescent dye sodium fluorescein, which all have drawbacks.
Comparing studies that used radioactive versions of mannitol and sucrose measured by liquid scintillation counting with their stable isotopes analyzed by LC–MS/MS, we have previously shown that [14C]sucrose had a 6- to 7-fold higher Kin value in vivo than [13C12]sucrose [15]. We also found, by chromatographic fractionation of [14C]sucrose after in vivo administration, that the majority of the measured 14C radioactivity in brain belonged to compounds other than intact [14C]sucrose [15]. Here, we found the Kin of mannitol (0.267 ± 0.021 µL g−1 min−1) to be 3–7-fold lower than published values obtained with radioactive versions of mannitol ([14C]- and [3H]mannitol) [18, 19, 33, 34]. Moreover, Preston and Haas reported 30–40% lower permeability–surface area products for chromatographically purified [3H]mannitol compared to the stock solution of the same tracer lot [33, 35]. Comparing permeability values across different in vitro studies is challenging due to major differences in experimental design in the published studies, including different sources of the endothelial cells and different culture conditions, which also result in a range of different TEER values. Recent iPSC-derived BMEC in vitro models reported permeability values for mannitol and sucrose (in the range of 10−6 to 10−7 cm/s) similar to ours [24, 36]. With respect to the small molecule fluorescent dye marker, sodium fluorescein, we have shown in previous work that, in order to avoid erroneous interpretation of brain uptake data, it is mandatory to perform a chromatographic analysis of the unmetabolized (non-glucuronidated) substance, and to measure the free fraction in plasma [37, 38]. Both are often neglected in publications using fluorescein in studies of BBB permeability. In addition, the potential role of efflux transporters for fluorescein at the BBB has not been conclusively ruled out [39,40,41]. Recent advanced microfluidic BBB models (BBB-on-a-chip) have reported general barrier restrictiveness by measuring the paracellular flux of dextrans of different molecular weights (ranging from 3 to 70 kDa). Reporting barrier function for large molecular weight markers may not accurately predict the integrity of BBB models for small, drug-like molecules [42, 43]. Furthermore, permeability quantifications with such markers are not reliable in in vivo experiments and result in inaccurate comparisons between these advanced in vitro models and in vivo models. We obtained permeability coefficients of 4.99 ± 0.152 × 10−7 and 3.12 ± 0.176 × 10−7 cm/s for mannitol and sucrose, respectively. The permeability was very low, showing that the human in vitro model has very tight barrier properties. Moreover, the precision and accuracy of the method support its use for in vitro–in vivo correlation studies of permeability properties under healthy and disease conditions. We also found that 100 ng/mL IL-1β resulted in a change of barrier function in the in vitro model. This observation is consistent with previous in vitro reports [44,45,46]. Additionally, there was a trend towards a decrease in the TEER value accompanied by an increase in the permeability coefficients for both markers at the 10 ng/mL concentration of IL-1β. However, these changes did not reach statistical significance (Fig. 6), most likely due to the small sample size used in our study (n = 3).
The developed LC–MS/MS method was successfully applied to measurements of plasma and brain concentrations of mannitol and sucrose after injection of the markers into mice at a dose of 10 mg/kg. We previously showed that correcting the vascular space using [13C6]sucrose was as effective as buffer perfusion for determination of the BBB permeability of [13C12]sucrose [13]. Interestingly, similar results were obtained when we used [13C6]sucrose for correcting the vascular space of the mannitol analyte in the brain (Fig. 4). The corrected Kin and Cbr values showed no significant difference between the vascular marker and washout groups for either mannitol or sucrose. However, the uncorrected Kin and brain concentration values were overestimated, almost two times higher than the corrected values, which demonstrates the impact of the intravascular content. In this context, it is also apparent that correction for intravascular volume needs to be performed in each individual animal, rather than with a value determined in a separate experimental series. Correction by vascular marker administration may be practically more advantageous than the washout method in several respects: technically, it is easier to perform, and brain tissue can be collected within seconds after the terminal blood sampling, as opposed to delays of several minutes for thoracotomy and perfusion (e.g., over 10 min in the present study). Furthermore, rapid sampling gains importance when, apart from measuring BBB permeability, parts of the brain samples are needed for measurement of other analytes, such as neurotransmitters or metabolites, that may undergo rapid degradation. By comparing the PK profiles of the two markers, we found that the plasma profiles of mannitol and sucrose were similar. However, the brain concentrations and Kin of mannitol were almost two-fold higher than those for sucrose, which could be related to its lower molecular weight and higher paracellular diffusibility. An alternative explanation is the slightly higher lipid solubility of mannitol, with a log P of − 2.98 ± 0.033, which is half a log order higher than that of sucrose (− 3.62 ± 0.055). In conclusion, the newly developed method allows the measurement of three analytes of mannitol and sucrose in the same sample in a single run. This technique simplifies correction for intravascular plasma space in brain uptake experiments with sucrose or mannitol and makes a vascular washout step dispensable. In addition, non-radiolabeled [13C6]mannitol was introduced as a BBB marker for the first time in this study. Last but not least, this method can now be considered a very useful tool for quantifying BBB permeability in different in vitro and in vivo disease models, as well as for monitoring treatment outcomes.

All data generated or analyzed during this study are included in this published article.

BBB: Blood–brain barrier
UPLC: Ultra-performance liquid chromatography
LC–MS: Liquid chromatography–mass spectrometry
iPSC: Induced pluripotent stem cells
BMECs: Brain microvascular endothelial cells
CNS: Central nervous system

Cecchelli R, Berezowski V, Lundquist S, Culot M, Renftel M, Dehouck MP, et al. Modelling of the blood–brain barrier in drug discovery and development. Nat Rev Drug Discov. 2007;6(8):650–61. Liebner S, Czupalla CJ, Wolburg H. Current concepts of blood–brain barrier development. Int J Dev Biol. 2011;55(4–5):467–76. Abbott NJ, Patabendige AA, Dolman DE, Yusof SR, Begley DJ. Structure and function of the blood–brain barrier.
Neurobiol Dis. 2010;37(1):13–25. Sweeney MD, Zhao Z, Montagne A, Nelson AR, Zlokovic BV. Blood–brain barrier: from physiology to disease and back. Physiol Rev. 2019;99(1):21–78. Ronaldson PT, Demarco KM, Sanchez-Covarrubias L, Solinsky CM, Davis TP. Transforming growth factor-beta signaling alters substrate permeability and tight junction protein expression at the blood–brain barrier during inflammatory pain. J Cereb Blood Flow Metab. 2009;29(6):1084–98. Huber JD, Witt KA, Hom S, Egleton RD, Mark KS, Davis TP. Inflammatory pain alters blood–brain barrier permeability and tight junctional protein expression. Am J Physiol Heart Circ Physiol. 2001;280(3):H1241–8. Yang S, Gu C, Mandeville ET, Dong Y, Esposito E, Zhang Y, et al. Anesthesia and surgery impair blood–brain barrier and cognitive function in mice. Front Immunol. 2017;8:902. Abrahamov D, Levran O, Naparstek S, Refaeli Y, Kaptson S, Abu Salah M, et al. Blood–brain barrier disruption after cardiopulmonary bypass: diagnosis and correlation to cognition. Ann Thorac Surg. 2017;104(1):161–9. Nation DA, Sweeney MD, Montagne A, Sagare AP, D'Orazio LM, Pachicano M, et al. Blood–brain barrier breakdown is an early biomarker of human cognitive dysfunction. Nat Med. 2019;25(2):270–6. Thrippleton MJ, Backes WH, Sourbron S, Ingrisch M, van Osch MJP, Dichgans M, et al. Quantifying blood–brain barrier leakage in small vessel disease: review and consensus recommendations. Alzheimers Dement. 2019;15(6):840–58. Gustafsson S, Lindstrom V, Ingelsson M, Hammarlund-Udenaes M, Syvanen S. Intact blood–brain barrier transport of small molecular drugs in animal models of amyloid beta and alpha-synuclein pathology. Neuropharmacology. 2018;128:482–91. Gustafsson S, Gustavsson T, Roshanbin S, Hultqvist G, Hammarlund-Udenaes M, Sehlin D, et al. Blood–brain barrier integrity in a mouse model of Alzheimer's disease with or without acute 3D6 immunotherapy. Neuropharmacology. 2018;143:1–9. Chowdhury EA, Alqahtani F, Bhattacharya R, Mehvar R, Bickel U. Simultaneous UPLC-MS/MS analysis of two stable isotope labeled versions of sucrose in mouse plasma and brain samples as markers of blood–brain barrier permeability and brain vascular space. J Chromatogr B Analyt Technol Biomed Life Sci. 2018;1073:19–26. Miah MK, Bickel U, Mehvar R. Development and validation of a sensitive UPLC-MS/MS method for the quantitation of [(13)C]sucrose in rat plasma, blood, and brain: its application to the measurement of blood–brain barrier permeability. J Chromatogr B Analyt Technol Biomed Life Sci. 2016;1015–1016:105–10. Miah MK, Chowdhury EA, Bickel U, Mehvar R. Evaluation of [(14)C] and [(13)C]sucrose as blood–brain barrier permeability markers. J Pharm Sci. 2017;106(6):1659–69. Camilleri M, Nadeau A, Lamsam J, Nord SL, Ryks M, Burton D, et al. Understanding measurements of intestinal permeability in healthy humans with urine lactulose and mannitol excretion. Neurogastroenterol Motil. 2010;22(1):e15–26. Sisson WB, Oldendorf WH. Brain distribution spaces of mannitol-3H, inulin-14C, and dextran-14C in the rat. Am J Physiol. 1971;221(1):214–7. Amtorp O. Estimation of capillary permeability of inulin, sucrose and mannitol in rat brain cortex. Acta Physiol Scand. 1980;110(4):337–42. Preston E, Haas N, Allen M. Reduced permeation of 14C-sucrose, 3H-mannitol and 3H-inulin across blood–brain barrier in nephrectomized rats. Brain Res Bull. 1984;12(1):133–6. Preston JE, Al-Sarraf H, Segal MB.
Permeability of the developing blood–brain barrier to 14C-mannitol using the rat in situ brain perfusion technique. Brain Res Dev Brain Res. 1995;87(1):69–76. Iliff JJ, Wang M, Liao Y, Plogg BA, Peng W, Gundersen GA, et al. A paravascular pathway facilitates CSF flow through the brain parenchyma and the clearance of interstitial solutes, including amyloid beta. Sci Transl Med. 2012;4(147):147ra11. Daniel PM, Lam DK, Pratt OE. Comparison of the vascular permeability of the brain and the spinal cord to mannitol and inulin in rats. J Neurochem. 1985;45(2):647–9. Pardridge WM. The isolated brain microvessel: a versatile experimental model of the blood–brain barrier. Front Physiol. 2020;11:398. Lippmann ES, Al-Ahmad A, Azarin SM, Palecek SP, Shusta EV. A retinoic acid-enhanced, multicellular human blood–brain barrier model derived from stem cell sources. Sci Rep. 2014;4:4160. Nozohouri S, Noorani B, Al-Ahmad A, Abbruscato TJ. Estimating brain permeability using in vitro blood–brain barrier models. Methods Mol Biol. 2020. https://doi.org/10.1007/7651_2020_311. Helms HC, Abbott NJ, Burek M, Cecchelli R, Couraud PO, Deli MA, et al. In vitro models of the blood–brain barrier: an overview of commonly used brain endothelial cell culture models and guidelines for their use. J Cereb Blood Flow Metab. 2016;36(5):862–90. Patel R, Alahmad AJ. Growth-factor reduced Matrigel source influences stem cell derived brain microvascular endothelial cell barrier properties. Fluids Barriers CNS. 2016;13:6. Gervasoni J, Primiano A, Graziani C, Scaldaferri F, Gasbarrini A, Urbani A, et al. Validation of UPLC-MS/MS method for determination of urinary lactulose/mannitol. Molecules. 2018;23(10):2705. Grover M, Camilleri M, Hines J, Burton D, Ryks M, Wadhwa A, et al. (13) C mannitol as a novel biomarker for measurement of intestinal permeability. Neurogastroenterol Motil. 2016;28(7):1114–9. Blunt JW, Carroll AR, Copp BR, Davis RA, Keyzers RA, Prinsep MR. Marine natural products. Nat Prod Rep. 2018;35(1):8–53. Aday S, Cecchelli R, Hallier-Vanuxeem D, Dehouck MP, Ferreira L. Stem cell-based human blood–brain barrier models for drug discovery and delivery. Trends Biotechnol. 2016;34(5):382–93. Oddo A, Peng B, Tong Z, Wei Y, Tong WY, Thissen H, et al. Advances in microfluidic blood–brain barrier (BBB) models. Trends Biotechnol. 2019;37(12):1295–314. Hladky SB, Barrand MA. Elimination of substances from the brain parenchyma: efflux via perivascular pathways and via the blood–brain barrier. Fluids Barriers CNS. 2018;15(1):30. Murakami H, Takanaga H, Matsuo H, Ohtani H, Sawada Y. Comparison of blood–brain barrier permeability in mice and rats using in situ brain perfusion technique. Am J Physiol Heart Circ Physiol. 2000;279(3):H1022–8. Preston E, Haas N. Defining the lower limits of blood–brain barrier permeability: factors affecting the magnitude and interpretation of permeability-area products. J Neurosci Res. 1986;16(4):709–19. Martinez A, Al-Ahmad AJ. Effects of glyphosate and aminomethylphosphonic acid on an isogeneic model of the human blood–brain barrier. Toxicol Lett. 2019;304:39–49. Miah MK, Shaik IH, Bickel U, Mehvar R. Effects of Pringle maneuver and partial hepatectomy on the pharmacokinetics and blood–brain barrier permeability of sodium fluorescein in rats. Brain Res. 2015;1618:249–60. Shaik IH, Miah MK, Bickel U, Mehvar R. Effects of short-term portacaval anastomosis on the peripheral and brain disposition of the blood–brain barrier permeability marker sodium fluorescein in rats.
Brain Res. 2013;1531:84–93. Hawkins BT, Ocheltree SM, Norwood KM, Egleton RD. Decreased blood–brain barrier permeability to fluorescein in streptozotocin-treated rats. Neurosci Lett. 2007;411(1):1–5. Sun H, Miller DW, Elmquist WF. Effect of probenecid on fluorescein transport in the central nervous system using in vitro and in vivo models. Pharm Res. 2001;18(11):1542–9. Sun H, Johnson DR, Finch RA, Sartorelli AC, Miller DW, Elmquist WF. Transport of fluorescein in MDCKII-MRP1 transfected cells and mrp1-knockout mice. Biochem Biophys Res Commun. 2001;284(4):863–9. Park TE, Mustafaoglu N, Herland A, Hasselkus R, Mannix R, FitzGerald EA, et al. Hypoxia-enhanced blood–brain barrier chip recapitulates human barrier function and shuttling of drugs and antibodies. Nat Commun. 2019;10(1):2621. Campisi M, Shin Y, Osaki T, Hajal C, Chiono V, Kamm RD. 3D self-organized microvascular model of the human blood–brain barrier with endothelial cells, pericytes and astrocytes. Biomaterials. 2018;180:117–29. Vatine GD, Barrile R, Workman MJ, Sances S, Barriga BK, Rahnama M, et al. Human iPSC-derived blood–brain barrier chips enable disease modeling and personalized medicine applications. Cell Stem Cell. 2019;24(6):995–1005. Labus J, Hackel S, Lucka L, Danker K. Interleukin-1beta induces an inflammatory response and the breakdown of the endothelial cell layer in an improved human THBMEC-based in vitro blood–brain barrier model. J Neurosci Methods. 2014;228:35–45. Wong D, Dorovini-Zis K, Vincent SR. Cytokines, nitric oxide, and cGMP modulate the permeability of an in vitro model of the human blood–brain barrier. Exp Neurol. 2004;190(2):446–55. This study was funded by Texas Tech University Health Sciences Center (US) (Fund #122710). Department of Pharmaceutical Sciences, Jerry H. Hodge School of Pharmacy, Texas Tech University Health Sciences Center, Amarillo, TX, 79106, USA Behnam Noorani, Ekram Ahmed Chowdhury, Yeseul Ahn, Dhavalkumar Patel, Abraham Al-Ahmad & Ulrich Bickel Department of Pharmacology and Toxicology, College of Pharmacy, King Saud University, Riyadh, 11451, Saudi Arabia Faleh Alqahtani Department of Biomedical and Pharmaceutical Sciences, Chapman University, School of Pharmacy, Irvine, CA, USA Reza Mehvar Center for Blood–Brain Barrier Research, School of Pharmacy, Texas Tech University Health Sciences Center, Amarillo, TX, 79106, USA Behnam Noorani, Ekram Ahmed Chowdhury, Yeseul Ahn, Abraham Al-Ahmad & Ulrich Bickel Behnam Noorani Ekram Ahmed Chowdhury Yeseul Ahn Dhavalkumar Patel Abraham Al-Ahmad Ulrich Bickel BN, RM, and UB conceived and designed the study. BN, EC, FA, YA and DP performed the experiments. AA was involved in the design of in vitro study and provided IMR90. BN, RM and UB analyzed and interpreted the data. BN, RM and UB wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Ulrich Bickel. The authors declare that they have no potential competing interests. Additional file 1. Figure S1. Mass spectra of [13C6]mannitol and [2H8]mannitol. Figure S2. Freeze thaw stability of [13C6] mannitol. Table S1. Inter-run and Intra-run accuracy and precision values of analytes for plasma. Table S2. Inter-run and Intra-run accuracy and precision values of analytes for brain. Table S3. Recoveries of analytes in plasma and brain matrix. Noorani, B., Chowdhury, E.A., Alqahtani, F. et al. LC–MS/MS-based in vitro and in vivo investigation of blood–brain barrier integrity by simultaneous quantitation of mannitol and sucrose. Fluids Barriers CNS 17, 61 (2020). 
https://doi.org/10.1186/s12987-020-00224-1
Keywords: Blood–brain barrier; Vascular space correction; Permeability coefficient; Brain uptake clearance; In vitro and in vivo correlation
Dynamics of collective modes in an unconventional charge density wave system BaNi2As2
We studied the T- and F-dependence of the photoinduced near-infrared reflectivity dynamics in undoped BaNi2As2 using an optical pump-probe technique. The single crystals were cleaved along the a-b plane with the pump and the probe beams at near-normal incidence (they were cross-polarized for a higher signal-to-noise ratio). We also performed pump- and probe-polarization dependent measurements of the photoinduced reflectivity, with no significant variation being observed (see Supplementary Note 4). The reported temperature dependence measurements were performed upon warming. Continuous laser heating was experimentally determined to be about 3 K for F = 0.4 mJ cm−2, and has been taken into account (see Supplementary Note 6).
Photoinduced reflectivity dynamics in the near-infrared
Figure 1a presents the T-dependence of photoinduced reflectivity transients, ΔR/R(t), recorded upon increasing the temperature from 10 K, with F = 0.4 mJ cm−2. This fluence was chosen such that the response is still linear, yet it enables a high enough dynamic range to study collective dynamics (see also the section on excitation density dependence).
Fig. 1: Photo-induced in-plane reflectivity traces on undoped BaNi2As2 single crystal. a Transient reflectivity traces, ΔR/R(t), between 13 and 149 K, measured with fluence F = 0.4 mJ cm−2, upon increasing the temperature. b Decomposition of the reflectivity transient at 13 K (black dotted line) into overdamped (solid red line) and oscillatory (solid blue line) components. Inset shows the individual overdamped components (dotted and dashed blue lines). c Oscillatory response at selected temperatures, together with fits using a sum of four damped oscillators (black dashed lines). Signals at 138 and 149 K are multiplied by a factor of 2 and 20, respectively.
A clear oscillatory response is observed up to ≈ 150 K, with the magnitude displaying a strong decrease near and above TS. Similarly to the oscillatory signal, the overdamped response is also strongly T-dependent. As shown in Fig. 1b the response can be decomposed into an overdamped and an oscillatory component. To analyze the dependence of the oscillatory response on T, we first subtract the overdamped components. These can be fit by
$$\frac{\Delta R}{R}=H\left(\sigma ,t\right)\left[{A}_{1}{e}^{-t/{\tau }_{1}}+B+{A}_{2}\left(1-{e}^{-t/{\tau }_{2}}\right)\right],$$
where H(σ, t) represents the Heaviside step function with an effective rise time σ. The terms in brackets represent the fast decaying process with A1, τ1 and the resulting quasi-equilibrium value B, together with the slower buildup process with A2 and τ2, taking place on a 10 ps timescale (see inset to Fig. 1b). Figure 1c presents the oscillatory part of the signal, obtained after subtraction of the overdamped response, at selected temperatures together with the fit (black dashed lines) using a sum of four damped oscillators (discussed below).
Collective modes in BaNi2As2
Figure 2 presents the results of the analysis of the oscillatory response. Figure 2a shows the T-dependence of the Fast Fourier Transform spectra in the contour plot, where several modes up to ≈ 6 THz can be resolved, with the low-T mode frequencies depicted by red arrows.
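As a concrete illustration of the decomposition described above, the sketch below fits a measured transient to Eq. (1) plus a sum of damped cosines. It is a minimal example, not the authors' analysis code: the error-function smoothing of the step, the array names (t in ps, dR_R dimensionless) and the starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def smooth_step(t, sigma):
    # Heaviside step broadened by a finite rise time sigma (error function)
    return 0.5 * (1.0 + erf(t / (np.sqrt(2.0) * sigma)))

def overdamped(t, sigma, A1, tau1, B, A2, tau2):
    # Eq. (1): fast decay (A1, tau1), quasi-equilibrium offset B,
    # and slower build-up (A2, tau2), all switched on by the step
    return smooth_step(t, sigma) * (A1 * np.exp(-t / tau1) + B
                                    + A2 * (1.0 - np.exp(-t / tau2)))

def damped_oscillators(t, params):
    # Sum of damped cosines: S_i * cos(2*pi*nu_i*t + phi_i) * exp(-Gamma_i*t)
    out = np.zeros_like(t, dtype=float)
    for S, nu, phi, gamma in np.reshape(np.asarray(params), (-1, 4)):
        out += S * np.cos(2.0 * np.pi * nu * t + phi) * np.exp(-gamma * t)
    return out

def model(t, sigma, A1, tau1, B, A2, tau2, *osc_params):
    return overdamped(t, sigma, A1, tau1, B, A2, tau2) \
        + smooth_step(t, sigma) * damped_oscillators(t, osc_params)

# illustrative starting values: one oscillator near 1.45 THz (times in ps)
p0 = [0.1, 1e-4, 0.5, 1e-5, 1e-5, 10.0,   # sigma, A1, tau1, B, A2, tau2
      1e-5, 1.45, 0.0, 0.2]                # S, nu (THz), phi, Gamma (1/ps)
# popt, pcov = curve_fit(model, t, dR_R, p0=p0)   # t, dR_R: measured transient
```

With the pump-probe delay expressed in picoseconds, frequencies in THz and decay rates in 1/ps combine consistently, so the same parameterization can be reused for each temperature or fluence.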
To analyze the temperature dependence of the modes' parameters we fit the oscillatory response to a sum of damped oscillators, \({\sum }_{i}{S}_{i}\cos \left(2\pi {\widetilde{\nu }}_{i}t+{\phi }_{i}\right){e}^{-{{{\Gamma }}}_{i}t}\). Fig. 2: Analysis of the oscillatory response. a Temperature dependence of the Fast Fourier Transform spectra, FFT, demonstrating the presence of several modes at low temperatures. The extracted mode frequencies, νi in the low-temperature limit are denoted by red arrows (see also Supplementary Note 3). The top axis presents the energy scale in wavenumbers, ω (cm−1). Insert presents the FFT of the data recorded at 13 K, with white arrows pointing at the modes. The temperature dependence of the parameters of the four strongest low-frequency modes, obtained by fitting the oscillatory response with the sum of four damped oscillators: b central frequencies νi, c linewidths Γi, and d spectral weights Si. The triclinic phase transition temperature TS is denoted by vertical dashed lines. The dashed red line in c presents the expected T-dependence of the linewidth of 1.45 THz mode for the case, when damping is governed by the anharmonic phonon decay33. e–g present the zoom-in of the b–d, emphasizing the evolution of the parameters across the triclinic transition at TS = 138 K. The error bars are obtained from the standard deviation of the least-squared fit. Figure 2b–d presents T-dependences of the extracted mode frequencies νi (here \({\nu }_{i}^{2}={\widetilde{\nu }}_{i}^{2}+{({{{\Gamma }}}_{i}/2\pi )}^{2}\)—see ref. 30), dampings Γi, and spectral weights (Si) of the four dominant modes (see also Supplementary Notes 2 and 3). Noteworthy, all these low-frequency modes are observed up to ≈ 150 K, well above TS = 138 K and \({T}_{{{{{{{{\rm{S}}}}}}}}^{\prime} }=142\) K. While their spectral weights are dramatically reduced upon increasing the temperature through TS, their frequencies and linewidths remain nearly constant through TS and \({T}_{{{{{{{{\rm{S}}}}}}}}^{\prime} }\). In Fig. 3 we present the result of the phonon dispersion calculations for the high-temperature tetragonal structure. None of the experimentally observed low-frequency modes matches the calculated q = 0 mode frequencies. Therefore, and based on their T– and F-dependence, discussed below, we attribute these modes to collective amplitude modes of the CDW order19,21,27,28,31. In particular, we argue that these low-temperature q = 0 amplitude modes are a result of linear (or higher order19,21) coupling of the underlying electronic modulation with phonons at the wavevector qCDW (or n ⋅ qCDW for the n-th order coupling19,21) of the high-T phase. Within this scenario,19,21,27,28,31,32 the low-T frequencies of amplitude modes should be comparable to frequencies of normal state phonons at qCDW (or n⋅qCDW for the higher-order coupling), with renormalizations depending on the coupling strengths. Moreover, T-dependences of modes' parameters νi, Γi, and Si should reflect the temperature variation of the underlying electronic order parameter19,21,27,28,31,32. Fig. 3: Phonon dispersion calculation of the high-temperature tetragonal structure. Phonon dispersion along the a [100] and b [101] directions. The dashed red vertical line in a signifies the CDW wave-vectors of the incommensurate CDW (I-CDW) while the line in b corresponds to the CDW wave-vectors of the commensurate CDW (C-CDW) order. The dashed horizontal lines indicate the low-temperature frequencies of the observed modes. 
Note that calculations show an instability in an optical branch quite close to the critical wavevector of the I-CDW (see also Methods). The first support for the assignment of these modes to amplitude modes follows from calculations of the phonon dispersion, presented in Fig. 3. Note that, since these modes appear already above TS, their frequencies must be compared to phonon dispersion calculations in the high-temperature tetragonal phase. Figure 3 presents the calculated phonon dispersion in the [100] and [101] directions, along which the modulation of the I-CDW and C-CDW, respectively, is observed. In Fig. 3, frequencies of the experimentally observed modes are denoted by the dashed horizontal lines (the line thicknesses reflect the modes' strengths). Indeed, the frequencies of the strong 1.45 THz and 1.9 THz modes match surprisingly well with the calculated phonon frequencies at the I-CDW modulation wavevector (given by the vertical dashed line in Fig. 3a), supporting the linear-coupling scenario. The corresponding (calculated) frequencies of phonons at the C-CDW wavevector, shown in Fig. 3b, are quite similar. As shown in Fig. 2, both modes display a pronounced softening upon increasing temperature, much as the dominant amplitude modes in the prototype quasi-1D CDW system K0.3MoO3,19,21 as well as a dramatic drop in their spectral weights at high temperatures19. Finally, the particular T-dependence of Γ for the 1.45 THz mode clearly cannot be described by an anharmonic phonon decay model, given by \(\Gamma (\omega ,T)={\Gamma }_{0}+{\Gamma }_{1}\left(1+\frac{2}{{e}^{h\nu /2{k}_{B}T}-1}\right)\)33. Instead, the behavior is similar to prototype CDW systems, where damping is roughly inversely proportional to the order parameter19,21. Given the fact that the structural transition at TS is of the first order, such a strong T-dependence of frequencies and dampings at T < TS may sound surprising. However, as amplitude modes are a result of coupling between the electronic order and phonons at the CDW wavevector,19,21,28 the T-dependence of the mode frequencies and dampings reflects the T-dependence of the electronic order parameter19,21. Indeed, the PLD10 as well as the charge/orbital order15 do display a pronounced T-dependence within the C-CDW phase. A strongly damped mode at 0.6 THz also matches the frequency of the calculated high-temperature optical phonon at qI−CDW. We note, however, that the calculations imply this phonon to have an instability near qI−CDW, thus the matching frequencies should be taken with a grain of salt. The extracted mode frequency does show a pronounced softening (Fig. 2b), though large damping and rapidly decreasing spectral weight result in a large scatter of the extracted parameters at high temperatures. We further note the anomalous reduction in damping of the 0.6 THz mode upon increasing the temperature (Fig. 2c). Such behavior has not been observed in conventional Peierls CDW systems,19,21 and may reflect the unconventional nature of the CDW order in this system. We note that phonon broadening upon cooling was observed for selected modes in Fe1+yTe1−xSex34,35 and NaFe1−xCoxAs36 above and/or below the respective structural phase transitions. Several interpretations have been put forward for these anomalous anharmonic behaviors, which can have distinct origins34,35,36. A weak narrow mode at 1.65 THz is also observed, which does not seem to have a high temperature phonon counterpart at the qI−CDW.
Its low spectral weight may reflect the higher-order coupling nature of this mode. Finally, several much weaker modes are also observed (see Fig. 2a). Comparison with phonon calculations suggest 3.3 THz and 5.4 THz modes are likely regular q = 0 phonons, the 5.9 THz mode could also be the amplitude collective mode, while the nature of 0.17 THz mode is unclear (see Supplementary Note 3 for further discussion and Supplementary Note 5 for complementary data obtained by simultaneous Raman spectroscopy). We note that, as the pump-probe technique is mostly sensitive to Ag symmetry modes that couple directly to carrier density22,30, the stronger the coupling to the electronic system, the larger the spectral weight of the mode. Correspondingly, in time-resolved experiments the spectral weights of amplitude modes are much higher than regular q = 0 phonons. Overdamped modes in BaNi2As2 Further support for the CDW order in BaNi2As210,15 is provided by the T-dependence of overdamped components. Figure 4a presents the T-dependence of signal amplitudes A1 + B, which corresponds to the peak value, and A2 extracted by fitting the transient reflectivity data using Eq. (1). In CDW systems the fast decay process with τ1 has been attributed to an overdamped (collective) response of the CDW condensate,19,21 while the slower process (A2, τ2) has been associated to incoherently excited collective modes21. As both are related to the CDW order, their amplitudes should reflect this. Indeed, both components are strongly reduced at high temperatures, with a pronounced change in slope in the vicinity of TS—see Fig. 4a. Component A2 displays a maximum well below TS, similar to the observation in K0.3MoO337. Above ≈ 150 K the reflectivity transient shows a characteristic metallic response, with fast decay on the 100 fs timescale. Fig. 4: Extracted fit parameters of the overdamped components. Temperature dependence of a amplitudes and b relaxation times, τ1 and τ2, obtained by fitting reflectivity transients using Eq. (1). The triclinic phase transition temperature, Ts, is denoted by the black vertical dashed line. The error bars are the standard deviation of the least-squared fit. The evolution of timescales τ1 and τ2 is shown in Fig. 4b. In the C-CDW phase, up to ≈110−120 K, the two timescales show qualitatively similar dependence as in prototype 1D CDWs:19,20,21τ1 increases with increasing temperature while τ2 decreases19,20,21. As τ1 is inversely proportional to the CDW strength,19,21 its T-dependence is consistent with the observed softening of the amplitude modes. Its increase with increasing temperature is, however, not as pronounced as in CDW systems with continuous phase transitions, where timescales can change by an order of magnitude when gap is closing in a mean-field fashion18,19,20,21. From about 130 K τ1 remains nearly constant up to ≈150 K. On the other hand, for T ≳ 120 K τ2 displays a pronounced increase, though the uncertainties of the extracted parameters start to diverge as signals start to faint. Importantly, all of the observables seem to evolve continuously through TS, despite the pronounced changes in the electronic and structural properties that are observed, e.g., in the c-axis transport38 or the optical conductivity16,17. Excitation density dependence Valuable information about the nature of CDW order can be obtained from studies of dynamics as a function of excitation fluence, F. 
In conventional Peierls CDW systems a saturation of the amplitude of the overdamped response is commonly observed at excitation fluences of the order of 0.1–1 mJ cm−220,24,25,26. The corresponding absorbed energy density, at which saturation is reached, is comparable to the electronic part of the CDW condensation energy24,26. Similarly, the spectral weights of amplitude modes saturate at this saturation fluence. The modes are still observed up to excitation densities at which the absorbed energy density reaches the energy density required to heat up the excited volume up to the CDW transition temperature24. The reason for this is an ultrafast recovery of the electronic order on a timescale τ1, which is faster than the collective modes' periods24. We performed F-dependence study at 10 K base temperature, with F varied between 0.4 and 5.6 mJ cm−2. The reflectivity transients are presented in Fig. 5a. Unlike in prototype CDWs, no saturation of the fast overdamped response is observed up to the highest F (inset to Fig. 5b). The absence of spectroscopic signature of the CDW induced gap in BaNi2As217 suggest that most of the Fermi surface remains unaffected by the CDW order. Thus, the photoexcited carriers can effectively transfer their energy to the lattice,39 just as in the high-T metallic phase. Nevertheless, the fact that the excitation densities used here do exceed saturation densities in conventional CDW systems by over an order of magnitude suggests an unconventional mechanism driving the CDW in BaNi2As2. We note that signal A2 displays a super-linear dependence for F > 2 mJ cm−2. Fig. 5: Excitation density dependence of collective dynamics recorded at 10 K. a Reflectivity transients, ΔR/R(t), normalized to the excitation fluence, F. b The extracted relaxation timescales τ1 and τ2 as a function of F. Inset presents the F-dependence of amplitudes, with dashed lines presenting linear fits. c–e F-dependence of the collective mode parameters νi, Γi, Si. The error bars are obtained from the standard deviation of the least-squared fit. Figure 5b presents τ1(F) and τ2(F) for the data recorded at 10 K. Qualitatively, the F-dependence of the two timescales resembles their temperature dependence, similar to observations in Peierls CDW systems24. Since τ1 reflects the recovery of the electronic part of the order parameter, Δ, and follows τ1 ∝ 1/Δ,19,21 this observation supports a continuous suppression of the electronic order with increasing F. However, in Ni-122 no discontinuous drop in τ1(F) is observed up to the highest fluences. In K0.3MoO324 such a drop in τ1(F) is observed at the fluence corresponding to the full suppression of the electronic order. Figure 5c–e presents the F-dependence of the extracted amplitude mode parameters. A softening upon increasing the fluence is observed for all four modes (Fig. 5c). However, above ≈ 3 mJ cm−2 the values reach a plateau. Such an unusual behavior is not observed in Peierls CDWs19,21 and may hold clues to the interplay between the periodic lattice distortion and the underlying electronic instability. An indication of suppression of the underlying electronic order is observed also as saturation of spectral weights of some of the amplitude modes near F ≈ 3 mJ cm−2, see Fig. 5e. On the other hand, the mode at 1.45 THz, which is the most similar to main modes in K0.3MoO3, shows no such saturation up to the highest fluences. 
While the observed anomalies seen near F ≈ 3 mJ cm−2 may be linked to the underlying microscopic mechanism of CDW order in Ni-122, one could also speculate that the anomalies are related to a photoinduced suppression of commensurability. To put the observed robustness of the CDW against optical excitation into perspective, we note that F = 1 mJ cm−2 corresponds to an absorbed energy density of about 180 J cm−3 (110 meV per formula unit). Assuming rapid thermalization between electrons and the lattice, and no other energy decay channels, the resulting temperature of the excited sample volume would reach ≈ 160 K (see also Supplementary Notes 6 and 7). However, with high conductivity also along the c-axis38 and an estimated electronic mean free path of 7 nm40, transport of hot carriers into the bulk on the (sub)picosecond timescale cannot be excluded. Nevertheless, the fact that even at 5.6 mJ cm−2 (0.6 eV per formula unit) the CDW order has not collapsed underscores the unconventional nature of the CDW order in BaNi2As210,15.
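The lattice-heating estimate quoted above can be checked with a simple energy-balance integration. The sketch below is a rough back-of-the-envelope illustration, not the authors' calculation: it assumes a Debye-type lattice heat capacity with an illustrative Debye temperature and atomic density, and finds the final temperature at which the integrated heat capacity has absorbed the deposited energy density (≈180 J cm−3 for F = 1 mJ cm−2). The ≈160 K quoted in the text corresponds to the actual heat capacity of BaNi2As2, which differs from this toy model.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative material parameters (assumptions, not measured values)
THETA_D = 250.0          # Debye temperature in K (assumed)
N_ATOM_DENSITY = 8.5e22  # atoms per cm^3 (assumed)
K_B = 1.380649e-23       # Boltzmann constant, J/K

def debye_heat_capacity(T):
    """Lattice heat capacity per unit volume (J cm^-3 K^-1), Debye model."""
    x_d = THETA_D / T
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    integral, _ = quad(integrand, 0.0, x_d)
    return 9.0 * N_ATOM_DENSITY * K_B * (T / THETA_D)**3 * integral

def final_temperature(T0, absorbed_energy_density, dT=0.5):
    """Raise T from T0 until the integrated heat capacity matches the deposited energy."""
    T, deposited = T0, 0.0
    while deposited < absorbed_energy_density:
        deposited += debye_heat_capacity(T) * dT
        T += dT
    return T

# final lattice temperature (K) for 180 J cm^-3 deposited at a 10 K base, under these assumptions
print(final_temperature(10.0, 180.0))
```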
Different applications of isosbestic points, normalized spectra and dual wavelength as powerful tools for resolution of multicomponent mixtures with severely overlapping spectra
Ekram H. Mohamed1, Hayam M. Lotfy3, Maha A. Hegazy2 & Shereen Mowaka1,4
Chemistry Central Journal volume 11, Article number: 43 (2017)
Analysis of complex mixtures containing three or more components represents a challenge for analysts. New smart spectrophotometric methods that overcome this limitation have recently evolved. A study of different novel and smart spectrophotometric techniques for the resolution of severely overlapping spectra is presented in this work, utilizing isosbestic points present in different absorption spectra, normalized spectra as divisors and dual wavelengths. A quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PCT) and para-aminophenol (PAP) was taken as an example for application of the proposed techniques without any separation steps. The adopted techniques consist of successive and progressive steps manipulating zero-order, ratio and/or derivative spectra. The proposed techniques include eight novel and simple methods, namely direct spectrophotometry after applying derivative transformation (DT) via multiplying by a decoding spectrum, spectrum subtraction (SS), advanced absorbance subtraction (AAS), advanced amplitude modulation (AAM), simultaneous derivative ratio (S1DD), advanced ratio difference (ARD), induced ratio difference (IRD) and finally double divisor–ratio difference–dual wavelength (DD-RD-DW) methods. The proposed methods were assessed by analyzing synthetic mixtures of the studied drugs. They were also successfully applied to commercial pharmaceutical formulations without interference from other dosage form additives. The methods were validated according to the ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. The proposed procedures are accurate, simple, reproducible and yet economic. They are also sensitive and selective and could be used for routine analysis of most binary, ternary and quaternary mixtures, and even more complex ones.
Drotaverine (DRO) hydrochloride, 1-[(3,4-diethoxyphenyl)methylene]-6,7-diethoxy-1,2,3,4-tetrahydroisoquinoline hydrochloride [1, 2], is a non-anticholinergic antispasmodic drug. Caffeine (CAF), 1,3,7-trimethylpurine-2,6-dione, is an adenosine receptor antagonist and adenosine 3′,5′-cyclic monophosphate (cAMP) phosphodiesterase inhibitor; thus levels of cAMP increase in cells following treatment with caffeine [2, 3]. Paracetamol (PCT), N-(4-hydroxyphenyl)acetamide, also known as acetaminophen, is widely used as an analgesic and antipyretic for the relief of fever, headaches and minor pains. It is a major ingredient in numerous cold and flu remedies [4, 5]. Para-aminophenol (PAP) is the primary impurity of PCT; it occurs in PCT pharmaceutical preparations as a consequence of both synthesis and degradation during storage [6, 7]. The quantity of PAP must be strictly controlled as it is reported to have nephrotoxic and teratogenic effects [7]. The structures of the studied drugs are presented in Fig. 1.
Structural formulae for a drotaverine, b caffeine, c paracetamol, d para-aminophenol
The analysis of mixtures containing DRO, CAF and PCT has been described in a few analytical reports. These reports proposed spectrophotometric [8, 9], TLC [9] and high performance liquid chromatography (HPLC) [8, 10, 11] methods.
A literature survey reveals, however, that no methods have been reported for the simultaneous determination of the four components under study. The aim of this work was to develop novel spectrophotometric methods based on smart original mathematical techniques for resolving the quaternary mixture of DRO, CAF, PCT and PAP despite their severe spectral interference. Derivative transformation [12], spectrum subtraction [13], amplitude factor [14], advanced absorbance subtraction method (AAS) [15], advanced amplitude modulation method (AAM) [15] and simultaneous derivative ratio (S1DD) [16] are well-developed methods that have been successfully adopted for the resolution of overlapped spectra of binary mixtures. For simultaneous determination of ternary mixtures, two novel methods are proposed, namely ratio difference-isosbestic points (RD-ISO) and induced ratio difference (IRD). Ratio difference-isosbestic points (RD-ISO) is considered an extension of the ratio difference method [17]. The method requires the presence of two isosbestic points (λiso1 and λiso2) between two drugs for its successful application, as briefly discussed below. In a ternary mixture of X, Y and Z where X and Y show two isoabsorptive points, Z can be determined by dividing the spectrum of the ternary mixture by the normalized spectrum of X (X′). The ratio spectrum obtained using X′ as a divisor contains the concentration of X as a constant contribution along the whole spectrum. Suppose the amplitudes of the ratio spectra of the ternary mixture at the two selected wavelengths (λiso1 and λiso2 between X and Y) are P1 and P2, respectively; then:
$$P_{1} = C_{X} + C_{Y} + \frac{a_{Z1}C_{Z}}{a_{X}}$$
$$P_{2} = C_{X} + C_{Y} + \frac{a_{Z2}C_{Z}}{a_{X}}$$
By subtraction
$$P_{1} - P_{2} = \left(\frac{a_{Z}C_{Z}}{a_{X}}\right)_{1} - \left(\frac{a_{Z}C_{Z}}{a_{X}}\right)_{2}$$
The concentration of Z is calculated using the regression equation representing the linear correlation between the differences of the ratio-spectra amplitudes at the two selected wavelengths and the corresponding concentrations of drug Z. The IRD method, in turn, is a combination of the induced dual-wavelength method [18] and amplitude modulation theory. All it requires is that the spectrum of one of the three drugs extends beyond those of the other two, as briefly summarized below. The ratio spectrum obtained using the normalized spectrum of the more extended component Z′ as a divisor yields its concentration as a constant value that can be measured in the extended region as a plateau parallel to the x-axis. The constant value of Z is then subtracted from the total ratio spectrum of the ternary mixture to obtain the ratio spectra of the other two components X and Y. For determination of X, two wavelengths are selected in the ratio spectra of the resolved binary mixture. A remarkable amplitude difference between the two selected wavelengths in the ratio spectra of pure X should be present. To cancel the contribution of Y at the two selected wavelengths upon obtaining the ratio difference, the equality factor of pure ratio spectra of Y at these wavelengths (FY) is calculated.
$${\text{Pm}}_{ 1} = {\text{P}}_{\text{X1}} \; + \;{\text{P}}_{\text{Y1}} \quad {\text{at }}\lambda_{ 1}$$ $${\text{F}}_{\text{Y}} = {\text{P}}_{\text{Y1}} /{\text{P}}_{\text{Y2}}$$ $$\therefore {\text{P}}_{\text{Y1}} = {\text{F}}_{\text{Y}} {\text{P}}_{\text{Y2}}$$ By substituting in Eq. (4) $${\text{Pm}}_{ 1} = {\text{P}}_{\text{X1}} \; + \;{\text{F}}_{\text{Y}} {\text{P}}_{\text{Y2}}$$ By multiply Eq. (5) by FY $${\text{F}}_{\text{Y}} {\text{Pm}}_{ 2} = {\text{F}}_{\text{Y}} {\text{P}}_{\text{X2}} \; + \;{\text{F}}_{\text{Y}} {\text{P}}_{\text{Y2}}$$ And by calculating the difference, Eqs. (6, 7), FY PY2 will be cancelled: $$\Delta {\text{P }}\left( {{\text{Pm}}_{ 1} \;{-}\;{\text{F}}_{\text{Y}} {\text{Pm}}_{ 2} } \right) = {\text{A}}_{\text{X1}} \; - \;{\text{F}}_{\text{Y}} {\text{A}}_{\text{X2}}$$ Equation (8) indicated that the amplitude difference of the ratio spectra of the resolved binary mixture X, Y is dependent only on X and independent on Y. The concentration of Y is calculated using the same procedure after calculating the equality factor of pure X (FX) at the two chosen wavelengths for Y. Finally another novel method for simultaneous determination of quaternary mixtures was proposed and named double divisor-ratio difference-dual wave length (DD-RD-DW). It considered as one of the new applications of double divisor [19] and an extension to the double divisor-ratio difference method (DD-RD) [20] by coupling it with dual wavelength method. For the determination of concentration of component of interest by the DD-RD-DW method, the component of interest shows a significant amplitude difference at two selected wavelengths λ1 and λ2 where the two interfering substances used as double divisor give constant amplitude as while the third one shows the same amplitude values at these two selected wavelengths. This can be summarized in the following equations. If we have a mixture of four drugs (X, Y, Z and W), dividing the spectrum of the quaternary mixture by the sum of the normalized spectra of Z and W (Z′ + W′) as a divisor, a constant value is generated in a certain region of wavelengths. $${\text{Pm}} = \frac{{a_{X} C_{X} }}{{a_{Z} + a_{W} }}\; + \,\frac{{a_{Y} C_{Y} }}{{a_{Z} + a_{W} }}\; + \;{\text{constant}}$$ Suppose the amplitudes at the two selected wavelength are P1 and P2 at λ1 and λ2 (where Y has the same amplitude), respectively, then; $${\text{P}}_{ 1} = \frac{{a_{X} C_{X} }}{{[a_{Z} + a_{W} ]1}} + \frac{{a_{Y} C_{Y} }}{{[a_{Z} + a_{W} ]1}} + {\text{constant}}$$ $$\frac{{a_{Y} C_{Y} }}{{[a_{Z}\,+\,a_{W} ]1}} = \frac{{a_{Y} C_{Y} }}{{[a_{Z}\,+\,a_{W} ]2}}$$ Then by subtraction $${\text{P}}_{ 1} - {\text{P}}_{ 2} = \left( {\frac{{a_{X} C_{X} }}{{a_{Z + } a_{W} }}} \right)1 - \left( {\frac{{a_{X} C_{X} }}{{a_{Z + } a_{W} }}} \right)2$$ The concentration of X is calculated using the regression equation representing the linear correlation between the differences of ratio spectra amplitude at the two selected wavelengths to the corresponding concentrations of drug (X). Reagents and chemicals Pure samples—drotaverine (DRO) was kindly supplied by Alexandria Pharmaceuticals and Chemical Industries, Alexandria, Egypt. CAF and PCT were kindly supplied by Minapharm Pharmaceutical Company, Cairo, Egypt. Para-aminophenol was purchased from Sigma Aldrich, Germany. The purities were found to be 100.25 ± 0.39, 99.56 ± 0.59, 99.98 ± 0.25 and 99.99 ± 0.39 for DRO, CAF, PCT and PAP respectively. 
Market sample—Petro tablets, labelled to contain 40 mg (DRO)/400 mg (PCT)/60 mg (CAF), Soumadril Compound tablets labelled to contain 200 mg Carisopradol (CAR)/160 mg (PCT)/32 mg (CAF) and Panadol Extra tablets labelled to contain 500 mg (PCT)/65 mg (CAF), were purchased from the Egyptian market. Solvents—Spectroscopic analytical grade methanol (S.d.fine-chem limited-Mumbai). Stock standard solutions—(1 mg/mL) stock solution of each of DRO, CAF, PCT and PAP in methanol were prepared. The prepared solutions were found to be stable without any degradation when stored in the dark in the refrigerator at 4° C for 1 week except for PAP which should be freshly prepared. Working standard solutions—(50 μg/mL) working solutions for DRO, CAF, PCT and PAP were prepared from (1 mg/mL) stock solutions by appropriate dilutions with methanol. Spectrophotometric measurements were carried out on JASCO V-630 BIO Double-beam UV–Vis spectrophotometer (S/N C367961148), using 1.00 cm quartz cells. Scans were carried out in the range from 200 to 400 nm at 0.1 nm intervals. Spectra Manager II software was used. Construction of calibration graphs Aliquots equivalent to 10–260 μg DRO, 15–260 μg CAF, 10–240 μg PCT and 10–300 μg PAP were accurately transferred from their working standard solutions into four separate series of 10-mL volumetric flasks then completed to volume with the same solvent. The spectra of the prepared standard solutions were scanned from 200 to 400 nm and stored in the computer against methanol as a blank. For DRO A calibration graph was constructed relating the absorbance of zero order spectra (D0) of DRO at 228.5 nm versus the corresponding concentrations. The stored (D0) spectra of DRO were divided by (a) the normalized spectrum of CAF, (b) the normalized spectrum of DRO, (c) sum of normalized spectrum of CAF and PAP, separately. Calibration graphs were constructed by plotting (a) the difference between the amplitudes at [263.6 and 291.8 nm], (b) the constant values measured from 310–400 nm, (c) the difference between the amplitudes at [315 and 336 nm] versus the corresponding DRO concentrations, respectively. For CAF Two calibration graphs were constructed using the zero order spectra (D0). The first one related the absorbance at 263.6 nm versus the corresponding CAF concentrations. While the second one related the difference between the absorbance at 231.5 and 263.6 nm versus the absorbance at 263.6 nm. The (D0) spectra of CAF were divided by the normalized spectrum of PCT, and then two calibration graphs were constructed. The first was plotted between the amplitudes difference at [240 and 263.6 nm] versus amplitudes at 263.6 nm where as the second graph between the amplitudes difference at [233.8 and 273.7 nm] versus the corresponding CAF concentrations. The stored (D0) spectra of CAF were also divided by the normalized spectrum of DRO and the obtained ratio spectra were manipulate for construction of another 2 calibration graphs. A graph was directly constructed between the amplitude difference at 265 and 295 nm multiplied by (5.58) versus the corresponding CAF concentrations and the regression equations were computed. The first derivative of the above ratio spectra was then recorded using scaling factor = 1 and ∆λ = 8 and a calibration graph between the amplitude at 219 nm versus the corresponding concentrations of CAF was constructed. 
For PCT A calibration graph was constructed relating the absorbance of zero order spectra (D0) of CAF or PCT at 263.6 nm versus the corresponding concentrations. The stored (D0) spectra of PCT were divided by (a) the normalized spectrum of CAF, (b) normalized spectrum of DRO and (c) the sum of normalized spectrum of DRO and CAF, separately. Three calibration graphs were constructed by plotting (a) the amplitude differences between 219.2 and 252 nm, (b) amplitude differences between 257 and 230 nm multiplied by (4.73), (c) amplitude differences between 261.2 and 277.2 nm versus the corresponding PCT concentrations, respectively. For PAP The zero order spectra (D0) of PAP were scanned and manipulated to obtain two calibration graphs. Firstly, they were divided by the sum of normalized spectrum of DRO and CAF, to construct a calibration graph was constructed between the amplitude differences at 311 and 318 nm versus the corresponding PAP concentrations. Then their first derivative spectra (D1) were recorded using scaling factor = 10 and ∆λ = 8 and a calibration graph was constructed relating the amplitude of the obtained (D1) spectra of PAP at 314.5 nm versus the corresponding concentrations. Application to laboratory prepared mixtures Into a series of 10 mL volumetric flask, accurate aliquots of DRO, CAF, PCT and PAP were transferred from their working standard solutions to prepare five mixtures containing different ratios of the cited drugs. The volumes were completed with methanol. Each drug in the quaternary mixture can be determined and analysed by more than one method using different approaches. DRO was determined by four different methods; direct spectrophotometric method after derivative transformation, ratio difference-isosbestic points, induced ratio difference and double divisor-ratio difference-dual wavelength; CAF was determined by five different methods; advanced absorbance subtraction, advanced amplitude modulation, simultaneous derivative ratio, ratio difference-isosbestic points and induced ratio difference. PCT was determined using six different methods; advanced absorbance subtraction, advanced amplitude modulation, simultaneous derivative ratio, ratio difference-isosbestic points, induced ratio difference and double divisor-ratio difference-dual wavelength. While PAP was determined adopting two methods; first derivative spectrophotometric method and double divisor-ratio difference-dual wavelength. Application to pharmaceutical dosage form Ten tablets of each of Petro®, Soumadril Compound® and Panadol Extra® formulations were accurately weighed, finely powdered and homogenously mixed. A portion of the powder equivalent to 5 mg PCT were separately weighed from Petro® (A), Soumadril Compound® (B) and Panadol Extra® (C), respectively and dissolved in methanol by shaking in ultrasonic bath for about 30 min. The solution was filtered into a 100 mL measuring flask and the volume was completed with the same solvent. 2 mL were accurately transferred from the above prepared solutions of formulations (A, B) and 4 mL were accurately transferred from the solution of formulation (C), to three separate 10-mL volumetric flasks. The concentration of each drug was calculated using its specified methods. When carrying out the standard addition technique, different known concentrations of pure standard of each drug were added to the pharmaceutical dosage form before proceeding in the previously mentioned procedure. 
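For readers who want to reproduce these manipulations numerically, the sketch below shows how one of the ratio-difference calibrations described above could be computed from digitized spectra. It assumes the spectra are available as NumPy arrays on a common wavelength grid; the array names, the chosen wavelength pair (233.8/273.7 nm, the second CAF calibration above) and the simple least-squares line are illustrative assumptions, not a transcription of the authors' software.

```python
import numpy as np

# Assumed inputs (not provided here):
#   wl             : 1-D wavelength grid in nm
#   caf_standards  : 2-D array, one CAF standard spectrum per row
#   caf_conc       : 1-D array of CAF concentrations (µg/mL)
#   pct_normalized : normalized (1 µg/mL) PCT spectrum used as divisor

def amplitude_at(wl, spectrum, target_nm):
    """Amplitude of a spectrum at the grid point closest to target_nm."""
    return spectrum[np.argmin(np.abs(wl - target_nm))]

def ratio_difference(wl, spectrum, divisor, lam1, lam2):
    """Divide by the normalized divisor spectrum and take P(lam1) - P(lam2)."""
    ratio = spectrum / divisor
    return amplitude_at(wl, ratio, lam1) - amplitude_at(wl, ratio, lam2)

def build_calibration(wl, standards, concentrations, divisor, lam1, lam2):
    """Least-squares line relating the ratio-difference signal to concentration."""
    signals = np.array([ratio_difference(wl, s, divisor, lam1, lam2)
                        for s in standards])
    slope, intercept = np.polyfit(concentrations, signals, 1)
    return slope, intercept

# Example usage with the assumed arrays:
# slope, intercept = build_calibration(wl, caf_standards, caf_conc,
#                                      pct_normalized, 233.8, 273.7)
# unknown_caf = (ratio_difference(wl, mixture_spectrum, pct_normalized,
#                                 233.8, 273.7) - intercept) / slope
```

The same two helper functions can be reused for the other calibration graphs by swapping the divisor spectrum and the wavelength pair.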
By scanning the absorption spectra of DRO, CAF, PCT and PAP in the solution of dosage forms in methanol, severely overlapped spectral bands were observed in the wavelength region of 200–300 nm, which hindered their direct determination (Fig. 2). DRO shows an extension beyond PAP, but with low absorptivity, and PAP may contribute to the extended region of DRO at high concentrations. Although PAP is more extended than CAF and PCT above 315 nm, it can only be measured at a shoulder, which could decrease sensitivity, especially at high concentrations of PCT, the major component in all the proposed dosage forms. Zero order absorption spectra of 10 μg/mL DRO (solid line), 10 μg/mL PCT (dotted line), 10 μg/mL CAF (dashed line) and 10 μg/mL PAP (dashed dotted line) Upon derivatization using scaling factor = 10 and ∆λ = 8 nm, the contribution of PAP at the extended region of DRO was completely cancelled as shown in Fig. 3, but it was difficult to accurately measure the amplitude of DRO at its extended region due to its low absorptivity, so derivative transformation was adopted to overcome this problem. The derivative transformation was applied to obtain the (D0) of DRO by dividing the spectrum of the quaternary mixture by the first derivative of the normalized spectrum of DRO (d/dλ) [aDRO], and then the constant generated in the region 360–380 nm was multiplied by the normalized spectrum of DRO [aDRO], where the absorbance of DRO can be measured at its λmax of 228.5 nm, giving maximum sensitivity and minimum error as shown in Fig. 4. First order absorption spectra of 10 μg/mL DRO (solid line), 10 μg/mL PCT (dotted line), 10 μg/mL CAF (dashed line) and 10 μg/mL PAP (dashed dotted line) Zero order absorption spectra of DRO in mixtures (2, 6, 10, 12, and 20 μg/mL) Also, when the generated constant was multiplied by the first derivative of the normalized spectrum of DRO used as divisor, the (D1) spectrum of DRO in the mixture was obtained and then subtracted from the total (D1) of the quaternary mixture via the spectrum subtraction technique. The first-derivative spectrum of the resolved ternary mixture of CAF, PCT and PAP was thus obtained, and PAP was determined by measuring the peak amplitude at 314.3 nm, where CAF and PCT show no contribution, as shown in Fig. 3. Similarly, the derivative transformation technique was adopted to obtain the D0 of PAP by dividing the spectrum of the above resolved ternary mixture by the first derivative of the normalized spectrum of PAP (d/dλ) [aPAP], and then the constant generated in the region 310–330 nm was multiplied by the normalized spectrum of PAP [aPAP]. The obtained D0 of PAP was successively subtracted from the D0 spectrum of the resolved ternary mixture to get the D0 spectrum of the binary mixture of CAF and PCT. Three different novel, simple and accurate methods were adopted for simultaneous determination of CAF and PCT in the presence of each other, either in bulk, in different dosage forms as a binary mixture, or in the presence of other components after their resolution. Advanced absorbance subtraction The absorption spectra of CAF and PCT are severely overlapped in the wavelength region of 200–300 nm and intersect at three isoabsorptive points, 226.9, 263.6 and 292 nm, where the mixture of the drugs acts as a single component and gives the same absorbance value as the pure drug. The absorption spectra of the standard solutions of CAF with different concentrations were recorded in the wavelength range of 200–400 nm.
Two wavelengths are selected (λiso of CAF 263.6 nm and λ2 = 231.5 nm) where PCT shows equal absorbance at these wavelengths. The absorbance difference ∆A (Aiso – A231.5) between two selected wavelengths on the mixture spectra is directly proportional to the concentration of CAF; while for PCT the absorbance difference inherently equals to zero. A calibration graph is constructed for pure CAF representing the relationship between (Aiso – A2) and Aiso and a regression equation was computed. By substituting the absorbance difference ∆A (Aiso – A2) between the two selected wavelengths of the mixture spectrum in the above equation, the absorbance Apostulated corresponding to the absorbance of CAF only at Aiso was obtained. Subtracting the postulated absorbance of CAF at Aiso from the practically recorded absorbance [ARecorded] at Aiso to get that corresponding to PCT. The concentrations of CAF and PCT were calculated using the corresponding unified regression equation (obtained by plotting the absorbance of the zero order spectra of CAF or PCT at λiso 263.6 nm against the corresponding concentrations). Advanced amplitude modulation method (AAM) As shown in (Fig. 5), the absorption spectra of CAF and PCT in methanol shows isoabsorptive point at 263.6 nm (aCAF = aPCT) which is retained at the same place in the ratio spectrum of CAF using the normalized spectrum of PCT as a divisor (Fig. 6a). Zero order absorption spectra of 10 μg/mL PCT (dotted line) and CAF (dashed line) showing 3 isoabsorptive points at 226.9 263.6 and 292 nm and the binary mixture of CAF and PCT 10 μg/mL of each a Ratio absorption spectra of 10 μg/mL PCT (dotted line), 10 μg/mL CAF (dashed line) and the binary mixture of CAF and PCT 5 μg/mL of each (dotted straight line) obtained after division by the normalized spectra of PCT. b Ratio absorption spectra of 10 μg/mL PCT (dotted line), 10 μg/mL CAF (dashed line) and the binary mixture of CAF and PCT 10 μg/mL of each (dotted straight line) obtained after division by the normalized spectra of PCT At first a regression equation was formulated representing the linear relationship between the amplitudes difference of different pure CAF concentrations at (263.6–240 nm) versus its corresponding amplitude 263.6 nm. The AAM method was applied by dividing the spectrum of the binary mixture by the normalized divisor of PCT to obtain the ratio spectra (Fig. 6b). The amplitudes difference of the obtained ratio spectrum at 263.6 nm (λiso) and 240 nm were recorded (∆Pm). And by substituting in the above regression equation previously formulated postulated amplitude of CAF alone at 263.6 nm (λiso). Subtracting the postulated amplitude of CAF at λiso from the practically recorded amplitude [PRecorded] of the binary mixture at λiso we get that corresponding to PCT. The advantage of this method over the advanced absorbance subtraction method is the complete cancelling of the interfering component in the form of constant where the difference at any two points along its ratio spectrum will be equal to zero. So there is no need for critical selection of wavelengths which leads to highly reproducible and robust results. Simultaneous derivative ratio Salinas et al. [21] developed derivative ratio spectrophotometry (1DD) method to remove the interference of one component and to determine the other. This method was then modulated to be simultaneous by coupling with amplitude modulation theory to generate simultaneous derivative ratio method (S1DD) [16]. 
In S1DD after division by the normalized spectra of PCT and before the derivatization step took place, the amplitude at isoabsorptive point (263.6 nm) was determined representing the actually concentration of CAF and/or PCT. Then derivative of these ratio spectra was obtained to remove the constant generated of PCT concentration in the division spectrum. Figure 7 shows the obtained derivative ratio spectra of different concentrations of CAF using scaling factor = 1 and ∆λ = 8 nm. A correlation between the peak amplitudes at 219 nm and the corresponding CAF concentration was plotted from which its concentration could be determined. The concentration of PCT was progressively determined by subtraction of the obtained CAF concentration from the total concentration at isosbestic point (λiso 263.6 nm) recorded before derivatization. First derivative of ratio spectra of CAF (2–26 μg/mL) using normalized PCT spectrum as a divisor For simultaneous determination of ternary mixture Ratio difference-isosbestic points The zero order of the studied drugs showed the presence of three isoabsorpative points between CAF and PCT as shown in Fig. 5, three isoabsorptive points are between DRO and CAF (Fig. 8a) while another two isoabsorptive points are between DRO and PCT (Fig. 8b). a Zero order absorption spectra of DRO (solid line) and CAF (dashed line) showing three isoabsorptive points at 219.2, 252 and 288 nm 10 μg/mL of each. b Zero order absorption spectra of DRO (solid line) and PCT (dotted line) showing two isoabsorptive points at 233.8 and 273.7 nm 10 μg/mL of each For determination of DRO the absorption spectrum of the mixture was divided by the absorption spectrum of the normalized spectra of CAF, the obtained ratio spectrum is shown in Fig. 9a. a Ratio spectra of DRO (solid line), CAF (dashed line), PCT (dotted line) and their ternary mixture (dashed dotted line) containing 10 μg/mL of each using normalized CAF spectrum as a divisor. b Ratio spectra of DRO (solid line), CAF (dashed line), PCT (dotted line) and their ternary mixture (dashed dotted line) containing 10 μg/mL of each using normalized PCT spectrum as a divisor Then the difference between the amplitudes at the two selected isosbestic points between CAF and PCT (263.6 and 291.8 nm) was directly proportional to DRO concentration only. For determination of PCT, the difference between the amplitude of the above ratio spectra obtained after dividing the spectrum of the ternary mixture by the normalized spectrum of CAF at the two selected isosbestic points (219.2 and 252 nm) between CAF and DRO was corresponding to PCT concentration only as shown in Fig. 9a. The same procedures were applied for determination of CAF where the absorption spectrum of the mixture was divided by the absorption spectrum of the normalized spectra of PCT as divisor and the difference between the amplitude at the two selected isosbestic points (233.8 and 273.7 nm) between DRO and PCT was corresponding to CAF concentration only as shown in Fig. 9b. Induced ratio difference method The concentration of DRO was determined using amplitude modulation method from the straight line parallel to the x-axis in the extended region at 310–400 nm for DRO as shown in Fig. 10a. The obtained constants of DRO are then subtracted from the total ratio spectra of the mixture obtaining the ratio spectra of binary mixtures of both CAF and PCT divided by normalized spectra of DRO as shown in Fig. 10b. 
Ratio spectra of DRO (solid line), CAF (dashed line), PCT (dotted line) and their ternary mixture (dashed dotted line) containing 10 μg/mL of each using normalized DRO spectrum as a divisor. b Ratio spectra of CAF (dashed line), PCT (dotted line) and their resolved binary mixture (dashed dotted line) containing 10 μg/mL of each using normalized DRO spectrum as a divisor after subtraction of the obtained constant
By screening the ratio spectra of pure CAF divided by the normalized spectra of DRO, two wavelengths were selected, 265 and 295 nm, where 265 nm showed the maximum peak in order to obtain maximum sensitivity. To cancel the contribution of PCT at both selected wavelengths, the induced dual wavelength method was adopted by calculating an equality factor for pure PCT at the two selected wavelengths of CAF \(\left(F = [P_{265}/P_{295}] = 5.58\right)\) as shown in Fig. 10b. In order to determine PCT, the same procedures were applied as described for CAF. The two selected wavelengths were 257 nm (maximum peak amplitude) and 230 nm. The factor that equalizes the amplitude of CAF at the selected wavelengths was calculated \(\left(F = [P_{257}/P_{230}] = 4.73\right)\).
For simultaneous determination of quaternary mixture
Double divisor-ratio difference-dual wavelength
For the successful application of the proposed method, it is a must to obtain a constant region in the ratio spectra resulting after dividing the total spectrum of any two drugs by the sum of their normalized spectra. For determination of DRO, the spectra of quaternary mixtures of DRO, CAF, PCT and PAP were divided by the sum of the normalized spectra of both CAF and PAP, where a constant region from 300–340 nm was generated for CAF and PAP as shown in Fig. 11a. A correlation was obtained between the amplitude difference at 315 and 336 nm, at which PCT has the same amplitude \(\left(\Delta P_{PCT} = P_{1} - P_{2} = \text{zero}\right)\), and the corresponding DRO concentration was plotted, from which its concentration could be determined as shown in Fig. 11b.
a Ratio spectra of three binary mixtures of CAF and PAP in different concentrations using the sum of normalized spectra of CAF and PAP as double divisor showing the obtained constant region. b Ratio spectra of DRO (solid line), binary mixture of CAF and PAP (dashed line), PCT (dotted line) and their quaternary mixture (dashed dotted line), 5 μg/mL each using the sum of normalized spectra of CAF and PAP as double divisor
For determination of PCT and PAP, the spectra of quaternary mixtures were divided by the sum of the normalized spectra of both DRO and CAF, where constant regions at 260–280 nm and at 307–325 nm for DRO and CAF were obtained as shown in Fig. 12a. A correlation was obtained between the amplitude difference at 261.2 and 277.2 nm, at which PAP has the same amplitude \(\left(\Delta P_{PAP} = P_{1} - P_{2} = \text{zero}\right)\), and the corresponding PCT concentration was plotted, from which its concentration could be determined as shown in Fig. 12b.
While for PAP the correlation was obtained between the amplitude difference at 311 and 318 nm at which PCT have the same amplitude \(\left( {\Delta {\text{P}}_{\text{PCT}} = {\text{P}}_{ 1} - {\text{P}}_{ 2} = {\text{zero}}} \right)\) and the corresponding PAP concentration was plotted from which its concentration could be determined as shown in Fig. 12b. a Ratio spectra of 4 binary mixtures of DRO and CAF in different concentrations using the sum of normalized spectra of DRO and CAF as double divisor showing the obtained constant regions. b Ratio spectra of binary mixture of DRO and CAF (solid line), PCT (dotted line), PAP (dashed single dotted line) and their quaternary mixture (dashed double dotted line), 5 μg/mL each using the sum of normalized spectra of CAF and PAP as double divisor The method failed in determination of CAF. The main disadvantage of this method is the restriction in the choice of the selected wavelengths which are restricted to those wavelengths with constant absorbance of the interfering substance. The proposed spectrophotometric methods were compared to a recently reported HPLC method [10] in which a separation was achieved on a C18 column (250 mm × 4.6 mm, 5 μm particle size), using methanol and 0.02 M phosphate buffer, pH 4.0 (50:50, v/v) as a mobile phase and UV detection at 220 nm. The chromatographic method showed better sensitivity where concentrations up to 0.5 µg/mL of each of DRO, CAF and PCT could be quantified. While the proposed novel spectrophotometric methods showed wider range. In addition the presented methods were capable to determine the concentration of PAP which is the main degradation products and synthetic impurity of PCT and thus could be considered as stability indicating methods. Also it needs no tedious conditions optimization as that required for the chromatographic method. The proposed spectrophotometric methods are also considered to be fast and time saving where the analysis of the quaternary or the ternary mixture takes few seconds once calibration graphs were constructed and regression equations are computed where all the reported chromatographic techniques needs at least 10 min in a single run to resolve the ternary mixture. Method validation The proposed spectrophotometric methods were validated in compliance with the ICH guidelines [22], as shown in Table 1. Table 1 Assay parameters and method validation obtained by applying the proposed spectrophotometric methods for determination of DRO, CAF, PCT and PAP The specificity of the proposed methods was assessed by the analysis of laboratory prepared mixtures containing different ratios of the drugs, where satisfactory results were obtained over the calibration range as shown in Table 2. The proposed methods were also applied for the determination of the drugs in Petro, Soumadril Compound and Panadol Extra tablets. The validity of the proposed methods was further assessed by applying the standard addition technique as presented in Table 3. In Soumadril Compound, Carisopradol which is an open aliphatic structure doesn't show any interference, therefore the mixture acts as a binary mixture of CAF and PCT. 
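Before turning to the tabulated results, the double divisor–ratio difference–dual wavelength measurement described above can be summarized in a short numerical sketch. It parallels the previous snippet and again assumes digitized spectra on a common wavelength grid; the array names are assumptions, while the wavelength pairs are those quoted in the text (315/336 nm for DRO, 261.2/277.2 nm for PCT, 311/318 nm for PAP).

```python
import numpy as np

# Assumed inputs: wl (wavelength grid, nm), mixture (quaternary mixture spectrum),
# dro_norm, caf_norm, pct_norm, pap_norm (normalized 1 µg/mL spectra on the same grid)

def dd_rd_dw_signal(wl, mixture, divisor_a, divisor_b, lam1, lam2):
    """Divide by the sum of two normalized spectra (double divisor) and take
    the amplitude difference at two wavelengths (ratio difference)."""
    ratio = mixture / (divisor_a + divisor_b)
    idx1 = np.argmin(np.abs(wl - lam1))
    idx2 = np.argmin(np.abs(wl - lam2))
    return ratio[idx1] - ratio[idx2]

# DRO: double divisor = CAF + PAP; wavelengths chosen where PCT gives equal amplitude
# dro_signal = dd_rd_dw_signal(wl, mixture, caf_norm, pap_norm, 315.0, 336.0)
# PCT and PAP: double divisor = DRO + CAF
# pct_signal = dd_rd_dw_signal(wl, mixture, dro_norm, caf_norm, 261.2, 277.2)
# pap_signal = dd_rd_dw_signal(wl, mixture, dro_norm, caf_norm, 311.0, 318.0)
# Each signal is then converted to a concentration through its own regression
# line built from standard solutions, as in the previous calibration sketch.
```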
Table 2 Determination of DRO, CAF, and PCT and PAP in laboratory prepared mixtures and pharmaceutical dosage forms by the proposed methods and results obtained by standard addition technique Table 3 Determination of DRO, CAF, and PCT and PAP in pharmaceutical dosage forms by the proposed methods and results obtained by standard addition technique Table 4 showed statistical comparisons of the results obtained by the proposed methods and reported method for DRO [23], and official methods for CAF [24] and PCT [25]. The calculated t and F values were less than the theoretical ones indicating that there was no significant difference between them with respect to accuracy and precision. Table 4 Statistical comparison for the results obtained by the proposed spectrophotometric methods, the reported method [23] for the analysis of DRO and official methods [24, 25] for analysis of CAF and PCT In this work more than eight novel and smart spectrophotometric methods were developed and validated for the resolution of the quaternary mixtures either successively or progressively. Drotaverine, caffeine, paracetamol and para-aminophenol, the main degradation product and synthetic impurity of Paracetamol quaternary mixture was taken as a model for application of the proposed methods. It could be concluded that the proposed procedures are accurate, simple and reproducible and yet economic. They are also sensitive and selective and could be used for routine analysis of complex most of the binary, ternary and quaternary mixtures and even more complex mixtures. The proposed methods also showed the significance of isoabsorptive point, normalized spectra as divisors and dual wavelengths as powerful tools that could either be used alone or in combination with each other for the resolution of severely overlapped spectra without preliminary separation. Budavari S (2003) The merck index, 13th edn. Merck and Co Inc, Whitehouse Station Sweetman SC (2009) Martindale: the complete drug reference, 33rd edn. Pharmaceutical Press, London Shafer SH, Phelps SH, Williams CL (1998) Reduced DNA synthesis and cell viability in small cell lung carcinoma by treatment with cyclic AMP phosphodiesterase inhibitors. Biochem Pharmacol 56:1229–1236 Fredholm BB, Battig K, Holmen J, Nehlig A, Zvatau EE (1999) Actions of caffeine in the brain with special reference to factors that contribute to its widespread use. Pharmacol Rev 51:83–88 Gilman AC, Rall TW, Nies AS, Tayor P (2001) Goodman and Gilman's the pharmacological basis of therapeutics, 10th edn. Pergamon Press, New York Nemeth T, Jankovics P, Nemeth-Palotas J, Koszegi-Szalai H (2008) Determination of paracetamol and its main impurity 4-aminophenol in analgesic preparations by micellar electrokinetic chromatography. J Pharm Biomed Anal 47:746–749 Dejaegher B, Bloomfield MS, Smeyers-Verbeke J, Vander Heyden Y (2008) Validation of a flourimetric assay for 4-aminophenol in paracetamol formulations. Talanta 75:258–265 El-Gindy A, Emara S, Shaaban H (2010) Validation and application of chemometrics-assisted spectrophotometry and liquid chromatography for simultaneous determination of two ternary mixtures containing drotaverine hydrochloride. J AOAC Int 93:536–548 Metwally FH, El-Saharty YS, Refaat M, El-Khateeb SZ (2007) Application of derivative, derivative ratio, and multivariate spectral analysis and thin-layer chromatography–densitometry for determination of a ternary mixture containing drotaverine hydrochloride, caffeine, and paracetamol. 
J AOAC Int 90:391–404 Belal FF, Sharaf El-Din MK, Tolba MM, Elmansi H (2015) Determination of two ternary mixtures for migraine treatment using HPLC method with ultra violet detection. Sep Sci Tech 50:592–603 Issa YM, Hassoun ME, Zayed AG (2012) Simultaneous determination of paracetamol, caffeine, domperidone, ergotamine tartrate, propyphenazone, and drotaverine HCl by high performance liquid chromatography. J Liq Chromat Rel Tech 35:2148–2161 Hegazy MA, Lotfy HM, Rezk MR, Omran YR (2015) Novel spectrophotometric determination of chloramphenicol and dexamethasone in the presence of non labeled interfering substances using univariate methods and multivariate regression model updating. Spectrochim Acta A Mol Biomol Spectrosc 140:600–613 Lotfy HM, Hegazy MA, Mowaka S, Mohamed EH (2015) Novel spectrophotometric methods for simultaneous determination of amlodipine, valsartan and hydrochlorothiazide in their ternary mixture. Spectrochim Acta A Mol Biomol Spectrosc 140:495–508 Lotfy HM, Tawakkol SM, Fahmy NM, Shehata MA (2013) Validated stability indicating spectrophotometric methods for the determination of lidocaine hydrochloride, calcium dobesilate, and dexamethasone acetate in their dosage forms. Anal Chem Lett 3:208–225 Lotfy HM, Hegazy MA, Rezk MR, Omran YR (2015) Comparative study of novel versus conventional two-wavelength spectrophotometric methods for analysis of spectrally overlapping binary mixture. Spectrochim Acta A Mol Biomol Spectrosc 148:328–337 Lamie NT, Yehia AM (2015) Development of normalized spectra manipulating spectrophotometric methods for simultaneous determination of dimenhydrinate and cinnarizine binary mixture. Spectrochim Acta A Mol Biomol Spectrosc 150:142–150 Lotfy HM, Hegazy MA (2013) Simultaneous determination of some cholesterol-lowering drugs in their binary mixture by novel spectrophotometric methods. Spectrochim Acta Part A Mol Biomol Spectrosc 113:107–114 Lotfy HM, Saleh SS, Hassan NY, Salem H (2015) Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra. Spectrochim Acta A Mol Biomol Spectrosc 136:1786–1796 Dinç E (1999) The spectrophotometric multicomponent analysis of a ternary mixture of ascorbic acid, acetylsalicylic acid and paracetamol by the double divisor-ratio spectra derivative and ratio spectra-zero crossing methods. Talanta 48:1145–1157 Hegazy MA, Eissa MS, Abd El-Sattar OI, Abd El-Kawy MM (2014) Determination of a novel ACE inhibitor in the presence of alkaline and oxidative degradation products using smart spectrophotometric and chemometric methods. J Pharm Anal 4:132–143 Salinas F, Nevado JJ, Mansilla AE (1990) A new spectrophotometric method for quantitative multicomponent analysis resolution of mixtures of salicylic and salicyluric acids. Talanta 37:347–351 International conference on harmonization (ICH), Q2B: validation of analytical procedures: methodology, vol 62. US FDA, Federal Register (1997) Jain J, Patadia R, Vanparia D, Chauhan R, Shah S (2010) Dual wavelength spectrophotometric method for simultaneous estimation of drotaverine hydrochloride and aceclofenac in their combined tablet dosage form. Int J Pharm Pharm Sci 2:76–79 British pharmacopoeia, vol. 11. Her Majesty's Stationery Office, London (2003) The United States pharmacopeia, the national formulary: USP 29 NF 24 (2007) EHM collected the data, wrote the manuscript, made the practical work in the lab, and put the theoretical background. 
HML put the theoretical background, revised the manuscript, revised the results, and collected the data. MAH revised the manuscript and the data, collected the data, and put the theoretical background. SHM revised the manuscript, revised the data and the figures, and collected the data. All authors read and approved the final manuscript. Pharmaceutical Analytical Chemistry Department, Faculty of Pharmacy, The British University in Egypt, El-Sherouk City, 11837, Egypt Ekram H. Mohamed & Shereen Mowaka Pharmaceutical Analytical Chemistry Department, Faculty of Pharmacy, Cairo University, Kasr El-Aini Street, Cairo, 11562, Egypt Maha A. Hegazy Pharmaceutical Chemistry Department, Faculty of Pharmaceutical Science & Pharmaceutical Industries, Future University, Cairo, 12311, Egypt Hayam M. Lotfy Pharmaceutical Analytical Chemistry Department, Faculty of Pharmacy, Helwan University, Ein Helwan, Cairo, 11795, Egypt Shereen Mowaka Correspondence to Ekram H. Mohamed. Mohamed, E.H., Lotfy, H.M., Hegazy, M.A. et al. Different applications of isosbestic points, normalized spectra and dual wavelength as powerful tools for resolution of multicomponent mixtures with severely overlapping spectra. Chemistry Central Journal 11, 43 (2017) doi:10.1186/s13065-017-0270-8 Derivative transformation Advanced ratio difference Induced ratio difference Normalized spectra Isosbestic point Dual wavelength
CommonCrawl
A die versus a quantum experiment Suppose you roll a die, and it falls into a hidden place, for example under furniture. Then, although the experiment has already been performed (the die already has a number to show), that value cannot be known, so the experiment was not fully realized. Until you see the die's top side, the probability remains p = 1/6. I see no difference between this and the wave function collapse, at least as an analogy. Can someone explain a deeper difference? quantum-mechanics statistical-mechanics HDE You're absolutely right: the probabilities predicted by quantum mechanics are conceptually fully analogous to probabilities predicted by classical statistical mechanics, or statistical mechanics with a somewhat undetermined initial state - just like your metaphor with dice indicates. In particular, the predicted probability is a "state of our knowledge" about the system and no object has to "collapse" in any physical way in order to explain the measurements. There are two main differences between the classical and quantum probabilities which are related to one another: In classical physics - i.e. in the case of dice assuming that it follows classical mechanics - one may imagine that the dice already has a particular value before we look. This assumption isn't useful to predict anything but we may assume that the "sharp reality" exists prior to and independently of any observations. In quantum mechanics, one is not allowed to assume this "realism". Assuming it inevitably leads to wrong predictions. The quantum probabilities are calculated as $|c|^2$ where $c$ are complex numbers, the so-called probability amplitudes, which may interfere with other contributions to these amplitudes. So the probabilities of outcomes, whenever some histories may overlap, are not given as the sum over probabilities but the squared absolute value of the sum of the complex probability amplitudes: in quantum mechanics, we first sum the complex numbers, and then we square the result to get the total probability. On the other hand, there is no interference in classical physics; in classical physics, we would surely calculate the probabilities of individual histories, by any tools, and then we would sum the probabilities. Of course, there is a whole tower of differences related to the fact that the observable (quantities) in quantum mechanics are given by operators that don't commute with each other: this leads to new logical relationships between statements and their probabilities that would be impossible in classical physics. A closely related question to yours: Why is quantum entanglement considered to be an active link between particles? The reason why people often misunderstand the analogy between the odds for dice and the quantum wave function is that they imagine that the wave function is a classical wave that may be measured in a single repetition of the situation. In reality, the quantum wave function is not a classical wave and it cannot be measured in a single case of the situation, not even in principle: we may only measure the values of the quantities that the wave function describes, and the result is inevitably random and dictated by a probability distribution extracted from the wave function.
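As a quick numerical check of the second point (sum the amplitudes and then square, versus sum the probabilities), here is a minimal sketch in Python; the two path amplitudes are arbitrary illustrative values, not taken from any particular experiment:

# Two alternative, indistinguishable histories leading to the same outcome.
# The amplitudes below are arbitrary illustrative values.
import numpy as np
a1 = 0.5 * np.exp(1j * 0.0)       # amplitude of path 1
a2 = 0.5 * np.exp(1j * np.pi)     # amplitude of path 2, opposite phase
p1, p2 = abs(a1) ** 2, abs(a2) ** 2
p_classical = p1 + p2             # classical rule: add probabilities -> 0.5
p_quantum = abs(a1 + a2) ** 2     # quantum rule: add amplitudes, then square -> 0.0
print(p_classical, p_quantum)     # destructive interference makes the quantum value vanish

With equal phases the quantum value would instead be 1.0 (constructive interference); classical probabilities, being non-negative, can never cancel one another the way interfering amplitudes do.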
Luboš MotlLuboš Motl $\begingroup$ @Luboš Motl, question understood +1 good answer, however I still can't see difference, rolling two dice and taking sum as result, the distribution is a triangle with the peak at 7, and you can't directly measure that "wave" neither, and if you unhide (and know the value) of one dice, then the final sum result is altered, the whole experiment is altered, there is some kind of interference. $\endgroup$ – HDE Feb 23 '11 at 14:37 $\begingroup$ @Luboš Motl comment II, why do you say realism drive to wrong predictions? imagine a pair of entangled particles A/B, measuring one, the other state is determined, but if no one else knows the measure was done for A, the B state is still indetermined till a measure is done, then the realism suposition(or even having a Real [but unknown] deterministic value!), doesn't change the non-determinism $\endgroup$ – HDE Feb 23 '11 at 14:40 $\begingroup$ Dear HDE, well, I am agreeing that it is analogous. Just concerning your first comment above, it's not true that there is any interference of probability amplitudes in classical physics. Classical physics only knows the real, non-negative probabilities and they may only be added in the normal way which means that there's no interference. In quantum mechanics, probability amplitudes may be added both constructively and destructively and they may even cancel, which is impossible for classical probabilities that are non-negative. $\endgroup$ – Luboš Motl Feb 23 '11 at 15:19 $\begingroup$ Concerning your second comment, nope, it is not just determinism that fails in quantum mechanics: it's realism that fails. All the fun experiments with entanglement - such as quantum eraser, Bell's inequalities, GHZM state, and Hardy's paradox - see physics.stackexchange.com/questions/4040/… for the latter - imply that the very assumption of realism fails in the real world (governed by quantum mechanics). Just by assuming that the objects have properties before they're measured, you inevitably deduce predictions that are experimentally refuted. $\endgroup$ – Luboš Motl Feb 23 '11 at 15:21 $\begingroup$ @Luboš Motl, Wow! (despite the fact that I am deep against realism), It would surprise me that there is already a way to show that it fails!, I will try hard to understand what Hardy's paradox says, thanks for the reference! $\endgroup$ – HDE Feb 23 '11 at 15:49 A somewhat similar (yet inverted) question was posed, infamously, by Einstein, Podolsky, and Rosen as an argument against the same phenomenon you are challenging. The basic argument was that such quantum wave collapses are indistinguishable from dice falling under couches, but under opposite grounds as yours -- both must be "determined" by what were later named "hidden variables". That a quantum wave collapse, like a falling dice, is completely deterministic by the nature of its initial conditions, and its apparent randomness is due to our current inability to measure these initial conditions. It was John S. Bell who thought up of an experiment to test this. The basic idea was statistical, and involved quantum entanglement -- if hidden variables existed (and the particles already having a determined, yet unknown state at their inception), statistics should yield a certain correlation between entangled particles. If there are no hidden variables, then this correlation would not show up. Surprisingly, this correlation did not manifest by even a large margin of experimental error. 
This proves an inherent difference in the randomness of quantum wave collapse as opposed to simply not knowing the landing of a dice. The result culminated in Bell's theorem. David Z♦ Justin L.Justin L. $\begingroup$ I am tempted to think that any value showed just after a measure, was "a hidden -value-" before measurement, perhaps no hidden -variables- could explain measured values, but themselves could be "hidden values", why we can't think that way? I mean that they were already "defined" by "universe" before the measurement, I see no difference, as hidden dice, "true random" could be defined at measurement or billion years before, I see no difference. I've put two dice sum and an entanglement examples as a comment in Luboš Motl answer, then he guided me to Hardy's paradox, I am trying to understand it $\endgroup$ – HDE Feb 24 '11 at 11:55 $\begingroup$ Luboš said (as I understood) that "hidden value" supposition can't be done, because realism fails in QM, this is a very surprising to me, more examples of this are welcome, @Justin L. perhaps bell theorem is speaking exactly about this, but I think hidden variables to explain values is not the same that hidden values, thanks for any further comment $\endgroup$ – HDE Feb 24 '11 at 12:03 $\begingroup$ @HDE I guess my answer didn't directly address the question at hand. Indeed, isolated quantum randomness (without entanglement), is more or less indistinguishable. What I was meaning to say was that there is something fundamentally, inherently different about determination in the context of quantum mechanics than just a simple dice roll. $\endgroup$ – Justin L. Feb 24 '11 at 23:43 The other answers provide an account of quantum theory as a source of the answer. But I would like to consider the question from a more general perspective: that of the meaning of "probability" especially in the classical context. Specifically what is meant by "probability" of the dice value in this question and where has the value 1/6 come from? This may seem like a trivial point, but there are (at least) three different perspectives on the answer: (1) Frequentist. This is the view that probability is a number determined empirically by a series of tests. So this view says that such tests have been conducted on the dice and in 1/6 of cases (subject to some statistical factor) a given number was returned. (2) Absolutist. This is the view (although I am not sure anyone will promote it) that the probability (with its value here of 1/6) is a derived physical property of the dice (like temperature, say). As a physical property it has been determined by some detailed analysis. One possible such analysis might be on the phase space of all possible motions, which has been appropriately partitioned and it is found that 1/6 of the volume is associated with each resultant value. This analysis might be done for that dice, or for some idealised dice. So the absolutist would claim that physics determines (classical) probabilities, and would view the frequentist data as simply experimental (dis-)confirmation of a specific model. (3) Relativist. This is really the Bayesian viewpoint as espoused by Jaynes. This is using conditional probability $P(A|C)$ to define a probability in a relative way, with Bayes Theorem - which also allows us to construct $P(C|A)$ - this allows discussion of Inference and Prior Probabilities. 
The Bayesian view (especially as promoted by Jaynes) would be that there is no absolute probability, but only levels of knowledge, "observer inferencing" and (changing) prior assumptions. So the 1/6 value would be determined subjectively, as a measure of your "expectation" for example. It would have nothing to do with any intrinsic physics of the dice. So to return to Quantum Mechanics. It too deals with probabilities, which are calculated from $|\Psi(x)*\Psi(x)|$. So what do these probabilities mean now that we have done some analysis? Jaynes liked to promote his Bayesian view (ie 3 above) that he claimed reconciled Bohr with Einstein, thus interpreting $\Psi$ as somewhat epistemological. Phrases I have seen in other Stack answers like "the $\Psi$ collapse is all in your head" when discussing $\Psi$ might suggest that some physicists share an aspect of this view. However an alternative view might be that $\Psi$ really solves the Absolutists problem: namely it provides a well validated physical basis for calculating probabilities in an absolute sense. If this were true then the only physical way to get to 1/6 in the dice would be to do some giant quantum calculation from its atoms! Roy SimpsonRoy Simpson In QM one can also deal with probabilities instead of probability amplitudes when one describes the prepared beam in terms of density matrix. For example (I assume there is six states $|n>$): $\rho = (1/6)\sum_n {|n><n|}$. (1) Mixture is generally different from a pure (superposition) state with the same probabilities: $\psi = (1/6)^{1/2}\sum_n {|n>}$. (2) The dice experiment corresponds to (1). It would be nice if you could read something about difference between mixtures and superpositions in QM (see sources of coherent and non coherent light, for example). Vladimir KalitvianskiVladimir Kalitvianski The deeper difference between the classical and quantum dice is this - quantum dice obey the superposition principle. Consequently you can create entangled states of the form: $$|\Psi \rangle = \frac{1}{\sqrt{2}}(|x \rangle_{D1} |y \rangle_{D2} \pm |y \rangle_{D1} |x \rangle_{D2} ) $$ where the subscripts denote the particular die (1 or 2), $|x\rangle_{D}$ and $|y\rangle_{D}$ are two different possible states of the die. The simplest case is when the role of the "die" is played by qubits. Then the above expression becomes: $$|\Psi \rangle = \frac{1}{\sqrt{2}}(|\uparrow \rangle_{D1} |\downarrow \rangle_{D2} \pm |\downarrow \rangle_{D1} |\uparrow \rangle_{D2} ) $$ Entanglement appears to have no classical analog and it actually plays the role of a usable information resource in quantum computation. $\begingroup$ Entanglement has a classical analog. Given two quantum systems with state spaces S and T, the joint system has state space S⊗T. Any joint state that isn't a product x⊗y is entangled. Similarly, if two probabilistic systems have PDFs in vector spaces P and Q, the joint distribution lives in P⊗Q. You can then have states analogous to entanglement that are linear combinations of products of vectors in P and Q. But QM allows linear combinations with complex coefficients and that is what has no classcial analogue. 
(en.wikipedia.org/wiki/Quantum_entanglement#Pure_states) $\endgroup$ – Dan Piponi Aug 25 '11 at 0:03 I was going to make this a comment but here it comes, since maybe it is a simpler formulation of a fundamental difference than all the rest of the answers: The difference is that the value of the classical die under the couch can be seen/measured by many people. They will not tell you, and you still will not know, but the value that was thrown will not change. In quantum mechanics each observation/measurement changes the system, from one measurement to the other. The die's throw will not change from one look of a hidden-to-you observer to the next. anna v $\begingroup$ @anna your wording in the second para seems odd. I don't see how having $N$ observers, rather than just one, helps explain the difference between classical and quantum ... on second thoughts, I get your point. Perhaps you could clean up the language a bit. $\endgroup$ – user346 Feb 24 '11 at 6:43 $\begingroup$ @Deepak Vaid Thanks, I tried to make it clearer. $\endgroup$ – anna v Feb 24 '11 at 7:03 $\begingroup$ @anna v, observation is part of the experiment, if you add more observation those are different experiments (unless that with observation you mean to re read already made results) then different experiments are like roll dice again. $\endgroup$ – HDE Feb 24 '11 at 13:12 $\begingroup$ Classically, observation is not part of the experiment. Something is or is not. You can read off what the face of the die says, as many times as you want and it will not change if you do not throw it again. Quantum mechanically, each "reading" will change the value, and yes observation becomes part of the experiment. So one could rephrase what I said more simply: the difference in probability measurements between classical and qm view points depends on the fact that the observer is part of the experiment in qm and not in cm. $\endgroup$ – anna v Feb 24 '11 at 13:30 $\begingroup$ a second observer would only measure something different if you sequentially measured two values of non-commuting operators. If you leave the system in a diagonal state instead, the measurement remains constant $\endgroup$ – Tobias Kienzler Feb 25 '11 at 14:04
CommonCrawl
August 15, 2016 Wireless/SDR Additive White Gaussian Noise (AWGN) By Qasim Chaudhari The performance of a digital communication system is quantified by the probability of bit detection errors in the presence of thermal noise. In the context of wireless communications, the main source of thermal noise is the addition of random signals arising from the vibration of atoms in the receiver electronics. You can also watch the video below. The term additive white Gaussian noise (AWGN) originates due to the following reasons: [Additive] The noise is additive, i.e., the received signal is equal to the transmitted signal plus noise. This gives the most widely used equality in communication systems. \begin{equation}\label{eqIntroductionAWGNadditive} r(t) = s(t) + w(t) \end{equation} which is shown in the figure below. Moreover, this noise is statistically independent of the signal. Remember that the above equation is highly simplified due to neglecting every single imperfection a Tx signal encounters, except the noise itself. [White] Just like the white colour which is composed of all frequencies in the visible spectrum, white noise refers to the idea that it has uniform power across the whole frequency band. As a consequence, the Power Spectral Density (PSD) of white noise is constant for all frequencies ranging from $-\infty$ to $+\infty$, as shown in the figure below. Nyquist investigated the properties of thermal noise and showed that its power spectral density is equal to $k \times T$, where $k$ is a constant and $T$ is the temperature in Kelvin. As a consequence, the noise power is directly proportional to the equivalent temperature at the receiver frontend, hence the name thermal noise. Historically, this constant value indicated in the figure above is denoted as $N_0/2$ Watts/Hz. When we view the constant spectral density (we do not discuss random sequences here, so this discussion is just for a general understanding) as a rectangular sequence, its iDFT must be a unit impulse. Furthermore, we saw that the iDFT of the spectral density is the auto-correlation function of the signal. Combining these two facts, an implication of a constant spectral density is that the auto-correlation of the noise in the time domain is a unit impulse, i.e., it is zero for all non-zero time shifts. This is drawn in the figure below. In words, each noise sample in a sequence is uncorrelated with every other noise sample in the same sequence. Moreover, the mean value of white noise is zero. [Gaussian] The probability distribution of the noise samples is Gaussian with a zero mean, i.e., in the time domain, the samples can acquire both positive and negative values and, in addition, the values close to zero have a higher chance of occurrence while the values far away from zero are less likely to appear. This is shown in the figure below. As a result, the time-domain average of a large number of noise samples tends to zero. In reality, the ideal flat spectrum from $-\infty$ to $+\infty$ is true for frequencies of interest in wireless communications (a few kHz to hundreds of GHz) but not for higher frequencies. Nevertheless, every wireless communication system involves filtering that removes most of the noise energy outside the spectral band occupied by our desired signal. Consequently, after filtering, it is not possible to distinguish whether the spectrum was ideally flat or partially flat outside the band of interest.
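A minimal numerical sketch of these three properties, using NumPy; the sample rate, noise density $N_0$, and test signal below are arbitrary illustrative choices, and the relation $P_w = N_0 F_S/2$ used to set the noise variance is the sampled noise power derived in the next section:

# Illustrative sketch of r = s + w with white Gaussian noise (arbitrary example values).
import numpy as np
rng = np.random.default_rng(0)
Fs = 1_000_000                                # sample rate in Hz (arbitrary)
N0 = 2e-6                                     # noise power spectral density in W/Hz (arbitrary)
n = np.arange(100_000)
s = np.cos(2 * np.pi * 50_000 * n / Fs)       # example transmitted signal
sigma2 = N0 * Fs / 2                          # sampled noise power P_w = N0 * Fs / 2
w = rng.normal(0.0, np.sqrt(sigma2), n.size)  # zero-mean Gaussian noise samples
r = s + w                                     # additive model: received = signal + noise
print(np.mean(w))                             # near 0: zero-mean (Gaussian property)
print(np.var(w), sigma2)                      # measured noise power matches N0 * Fs / 2
print(np.corrcoef(w[:-1], w[1:])[0, 1])       # near 0: adjacent samples uncorrelated (white)

Averaging or correlating over many such samples is exactly how the receiver suppresses this noise, as discussed under "Fooling the Randomness" below.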
To help in mathematical analysis of the underlying waveforms resulting in closed-form expressions — a holy grail of communication theory — it can be assumed to be flat before filtering. For a discrete signal with sampling rate $F_S$, the sampling theorem dictates that the bandwidth of a signal is constrained by a lowpass filter within the range $\pm F_S/2$ to avoid aliasing. For the purpose of calculations, this filter is an ideal lowpass filter with \begin{equation*} H(F) = \begin{cases} 1 & -F_S/2 < F < +F_S/2 \\ 0 & \text{elsewhere} \end{cases} \end{equation*} The resulting in-band power is shown in red in the figure below, while the rest is filtered out. As with all densities, the value $N_0$ is the amount of noise power $P_w$ per unit bandwidth $B$. \begin{equation}\label{eqIntroductionNoisePower1} N_0 = \frac{P_w}{B} \end{equation} For the case of real sampling, we can plug $B=F_S/2$ in the above equation and the noise power in a sampled bandlimited system is given as \begin{equation} P_w = N_0\cdot \frac{F_S}{2} \label{eqIntroductionNoisePower2} \end{equation} Thus, the noise power is directly proportional to the system bandwidth at the sampling stage. Fooling the Randomness As we will see in other articles, a communication signal has a lot of structure in its construction. However, due to Gaussian nature of noise acquiring both positive and negative values, the result of adding a large number of noise-only samples tends to zero. \begin{equation*} \sum _n w[n] \rightarrow 0 \qquad \text{for large}~ n \end{equation*} Therefore, when a noisy signal is received, the Rx exploits this structure through correlating with what is expected. In the process, it estimates various unknown parameters and detects the actual message through averaging over a large number of observations. This correlation brings the order out while averaging drives the noise towards zero. An AWGN channel is the most basic model of a communication system. Some examples of systems operating largely in AWGN conditions are space communications with highly directional antennas and some point-to-point microwave links. While several imperfections are still present (e.g., carrier and timing offsets), the underlying signal structure stays intact leading to a simpler Rx design because no channel equalizer needs to be incorporated. Aliasing, AWGN, Synchronization Monali says: Nice & quality information about AWGN. Very useful for Electronics student & PG students also. Thanks again for information. Qasim Chaudhari says: Thanks Monali. Glad you liked it. ABHISHEK SHAILENDRA KHANDARE says: Even My teacher wasn't explain this properly thx for such wonderful piece of knowledge. nice explanation very helpful Oscor says: I love the way it is explained (like for instance the interpretation of the unit pulse autocorrelation). Thanks! Udit says: How is PSD=k*T….it should be PSD=k*B It is the noise power spectral density which is directly proportional to the equivalent temperature at the receiver frontend. Multiply with $B$ (the bandwidth) to find the actual noise power. For example, as of 2019, Australia's population density (the number of people per unit area) is $\pi$ people/$\text{km}^2$ and its area is 7.692 million $\text{km}^2$. From here, you can easily get the actual Australian population. Hridoy Saidy says: what Actual value Additive White Gaussian Noise (AWGN)…? Since it is a random process, we talk about it in a statistical manner.
The actual values vary for each sample sequence. Moe Daya says: how the power noise inside the bandwidth between 0 and Fs/2 will be removed ? i did not understand this part enough The noise power inside the desired bandwidth cannot be removed. This is a constraint set by nature. Vamshi says: If a discrete time signal(sequence) is corrupted by AWGN,then how can we derive the actual signal(sequence) after corruption? If it is a digitally modulated communication signal, you should read about the minimum distance rule. The underlying intuition is that AWGN having a Gaussian distribution has a squared distance in its exponent with a negative sign. Hence, the maximum is achieved through minimizing the distance between the received and target constellation points. saad sami says: Classify thermal noise as Energy or Power Signal. Justify your answer with mathematical proof. Mukesh Kumar Srivastav says: Very well explained. ABHI says: Very useful information, thanks. Alao Peter Olufemi says: Explicit, concise and educative
CommonCrawl
What are the relative risks of mortality and injury for fish during downstream passage at hydroelectric dams in temperate regions? A systematic review Dirk A. Algera1 na1, Trina Rytwinski1,2,3 na1, Jessica J. Taylor1,2,3, Joseph R. Bennett2,3, Karen E. Smokorowski4, Philip M. Harrison1,5, Keith D. Clarke6, Eva C. Enders7, Michael Power5, Mark S. Bevelhimer8 & Steven J. Cooke1,2,3 Environmental Evidence volume 9, Article number: 3 (2020) Cite this article Fish injury and mortality resulting from entrainment and/or impingement during downstream passage over/through hydropower infrastructure has the potential to cause negative effects on fish populations. The primary goal of this systematic review was to address two research questions: (1) What are the consequences of hydroelectric dam fish entrainment and impingement on freshwater fish productivity in temperate regions?; (2) To what extent do various factors like site type, intervention type, and life history characteristics influence the consequences of fish entrainment and impingement? The review was conducted using guidelines provided by the Collaboration for Environmental Evidence and examined commercially published and grey literature. All articles found using a systematic search were screened using a priori eligibility criteria at two stages (title and abstract, and full-text, respectively), with consistency checks being performed at each stage. The validity of studies was appraised and data were extracted using tools explicitly designed for this review. A narrative synthesis encompassed all relevant studies and a quantitative synthesis (meta-analysis) was conducted where appropriate. Review findings A total of 264 studies from 87 articles were included for critical appraisal and narrative synthesis. Studies were primarily conducted in the United States (93%) on genera in the Salmonidae family (86%). The evidence base did not allow for an evaluation of the consequences of entrainment/impingement on fish productivity per se; therefore, we evaluated the risk of freshwater fish injury and mortality owing to downstream passage through common hydropower infrastructure. Our quantitative synthesis suggested an overall increased risk of injury and immediate mortality from passage through/over hydropower infrastructure. Injury and immediate mortality risk varied among infrastructure types. Bypasses resulted in decreased injury risk relative to controls, whereas turbines and spillways were associated with the highest injury risks relative to controls. Within turbine studies, those conducted in a lab setting were associated with higher injury risk than field-based studies, and studies with longer assessment time periods (≥ 24–48 h) were associated with higher risk than shorter duration assessment periods (< 24 h). Turbines and sluiceways were associated with the highest immediate mortality risk relative to controls. Within turbine studies, lab-based studies had higher mortality risk ratios than field-based studies. Within field studies, Francis turbines resulted in a higher immediate mortality risk than Kaplan turbines relative to controls, and wild sourced fish had a higher immediate mortality risk than hatchery sourced fish in Kaplan turbines. No other associations between effect size and moderators were identified. 
Taxonomic analyses revealed a significant increased injury and immediate mortality risk relative to controls for genera Alosa (river herring) and Oncorhynchus (Pacific salmonids), and delayed mortality risk for Anguilla (freshwater eels). Our synthesis suggests that hydropower infrastructure in temperate regions increased the overall risk of freshwater fish injury and immediate mortality relative to controls. The evidence base confirmed that turbines and spillways increase the risk of injury and/or mortality for downstream passing fish compared to controls. Differences in lab- and field-based studies were evident, highlighting the need for further studies to understand the sources of variation among lab- and field-based studies. We were unable to examine delayed mortality, likely due to the lack of consistency in monitoring for post-passage delayed injury and mortality. Our synthesis suggests that bypasses are the most "fish friendly" passage option in terms of reducing fish injury and mortality. To address knowledge gaps, studies are needed that focus on systems outside of North America, on non-salmonid or non-sportfish target species, and on population-level consequences of fish entrainment/impingement. Worldwide over 58,000 dams (> 15 m height) have been constructed for various uses including irrigation, flood control, navigation, and hydroelectric power generation [1]. As the number of dams continues to increase worldwide, so too have concerns for their effects on fish populations. Dams can act as a barrier to migratory (i.e., anadromous, catadromous, potamodromous) and resident fish (i.e., those that complete their life cycle within a reservoir or section of the river), fragmenting rivers and degrading habitats. The negative impacts of dams on upstream migration of diadromous fish are widely acknowledged, and the installation of various types of fishways to facilitate upstream passage are commonplace [2]. However, downstream migration of fish at dams remains a challenge [3, 4]. Depending upon the life history of a given migratory fish, mature adults seeking spawning grounds (catadromous species) or juveniles or post-spawn adults (iteroparous species) seeking rearing and feeding habitats (anadromous species) may all need to move downstream past dams. Resident species may also move considerable distances throughout a riverine system for reproduction, rearing, and foraging (e.g., Kokanee Oncorhynchus nerka; White Sucker Catostomus commersonii; Walleye Sander vitreus) or simply move throughout reservoirs where they may traverse forebay areas. Injury and mortality resulting from entrainment, when fish (non-)volitionally pass through hydropower infrastructure, or impingement, when fish become trapped against infrastructure, associated with hydroelectric facilities may have serious consequences for fish populations [5, 6]. Sources of entrainment or impingement-related injury or mortality include the following: (1) fish passage through hydroelectric infrastructure (i.e., turbines, spillways, sluiceways, and other passage routes) during downstream migration for migratory fish; (2) the entrainment of resident fish; and (3) the impingement of adult or large fish (migratory or resident) against screens/trash racks. Some hydropower facilities are equipped with fish collection and bypass systems, primarily for juvenile salmonids, to facilitate downstream passage. 
Migrating fish will use existing dam structures such as spillways and outlet works, used to release and regulate water flow, for downstream passage. When no bypass is available and there are no spills occurring owing to low reservoir water levels, both resident and facultative migrant fish can be attracted to the turbine intake tunnels, often the only other source of downstream flow present in the forebay area of the dam. Entrainment, occurring when fish travel through a hydro dam to the tailraces, can result in physical injury and mortality from fish passing through turbines and associated components [7, 8]. Injury and mortality can occur through several means from hydroelectric components. Freefall from passing over a spillway, abrasion, scrapes, and mechanical strikes from turbine blades are well known causes of physical injury and mortality (reviewed in [6,7,8]). Injuries from turbulence and shear owing to water velocity differentials across the body length, occurs when passing over a spillway or through turbine components [7, 9]. Water pressure associated injuries and mortality can occur from low pressure, rapid changes in pressure, shear stress, turbulence, cavitation (extremely low water pressures that cause the formation of bubbles which subsequently collapse violently), strikes, or grinding when fish become entrained in turbine components [5, 10, 11]. Injury and mortality can also occur from fish being impinged against screens or trash racks that are intended to prevent debris, or in some cases fish, from being drawn into water intakes [12]. Since downstream migrants are not often observed (e.g., juvenile fish), historically far less consideration has been afforded to downstream passage, such that management strategies and/or structures specifically designed to accommodate downstream passage were not implemented nearly as frequently [13]. To date, literature on downstream passage largely focuses on juvenile survival, particularly in Pacific salmonids Oncorhynchus spp., popular commercial and recreational species in which the adults senesce after spawning. Minimal research exists on downstream passage and entrainment risk of resident fish species [6]. However, research on adult downstream passage in migratory fish is growing in popularity in temperate Europe and North America, particularly for species of conservation interest such as eels Anguilla spp. [14,15,16,17,18,19] and sturgeons Acipenser spp. [20,21,22]. To enhance downstream passage and reduce mortality, management strategies have included selectively timing spills to aid juvenile fish, the installation of "fish friendly" bypass systems and screens directing fish to these systems, and retrofitting dams with low-volume surface flow outlets [23] or removable spillway structures designed to minimize fish harm [24]. The use of light, sound, bubble curtains, and electrical currents to act as repellent from harmful paths or potentially an attractant to more desirable (fish friendly) paths have been explored [25,26,27]. Given that the timing of downstream migration differs among life stages and is species-dependent [6], mitigating injury and mortality during downstream passage in a multispecies system could prove challenging and disruptive to power generation operations. Furthermore, operational strategies can be complicated by environmental regulations such as water quality requirements. 
From a fish productivity perspective, minimizing impacts during downstream passage for migratory fish, unintended entrainment of resident species, and/or fish impingement, is an integral part of managing fish productivity. Downstream passage mortality from a single hydropower dam may appear low (i.e., 5–10%), but system-wide cumulative mortalities may be considerable in systems greatly fragmented by multiple dams [28]. Adult survival affects population dynamics (e.g., effective population size), and thus fisheries yields (e.g., sustainable yield, maximum sustainable yield). Juvenile survival affects recruitment (i.e., fish reaching an age class considered part of a fishery), ultimately contributing to fisheries productivity. Literature reviews and technical reports compiled to date have primarily focused on how fish injury and mortality occurs, and/or evaluate the effectiveness of various management strategies used to mitigate harm during downstream passage [6,7,8]. Given the contributions of migratory and resident adults and juveniles to fish production, a natural extension would be evaluating the impacts of fish injury and mortality from hydropower dam entrainment and impingement on fish productivity. Here, we use a 'systematic review' approach [29] to evaluate the existing literature base to assess the consequences of hydroelectric dam entrainment and impingement on freshwater fish productivity, and to identify to what extent factors like site type, intervention type, and life history characteristics influence the impact of different hydroelectric infrastructure on fish entrainment and impingement. Topic identification and stakeholder input During the formulation of the question for this review, an Advisory Team made up of stakeholders and experts was established and consulted. This team included academics, staff from the Oak Ridge National Laboratory (U.S. Department of Energy) and staff from Fisheries and Oceans Canada (DFO), specifically the Fish and Fish Habitat Protection Program (FFHPP) and Science Branch. The Advisory Team guided the focus of this review to ensure the primary question was both answerable and relevant, and suggested search terms to capture the relevant literature. The Advisory Team was also consulted in the development of the inclusion criteria for article screening and the list of specialist websites for searches. The objective of the systematic review was to evaluate the existing literature base to assess the consequences of fish entrainment and impingement associated with hydroelectric dams in freshwater temperate environments. Primary question What are the consequences of hydroelectric dam fish entrainment and impingement on freshwater fish productivity in temperate regions? Components of the primary question The primary study question can be broken down into the study components: Subject (population): Freshwater fish, including diadromous species, in temperate regions. Intervention: Infrastructure associated with hydroelectric facilities (i.e., turbines, spillways, sluiceways, outlet works, screens, water bypasses, louvers, fish ladders, penstocks, trash racks, etc.,). Comparator: No intervention or modification to intervention. Outcomes: Change in a component of fish productivity (broadly defined in terms of: mortality, injury, biomass, yield, abundance, diversity, growth, survival, individual performance, migration, reproduction, population sustainability, and population viability). 
Secondary question To what extent do factors such as site type, intervention type, life history characteristics influence the impact of fish entrainment and impingement? The search strategy for this review was structured according to the guidelines provided by the Collaboration for Environmental Evidence [30] and followed that published in the a priori systematic review protocol [31]. Note, no deviations were made from the protocol. Search terms and languages The following search string was used to query publication databases, Google Scholar, and specialist websites. Population terms [Fish* AND (Reservoir$ OR Impoundment$ OR Dam$ OR "Hydro electric*" OR Hydroelectric* OR "Hydro dam*" OR Hydrodam* OR "Hydro power" OR Hydropower OR "Hydro")] Intervention terms (Turbine$ OR Spill* OR Outlet* OR Overflow* OR Screen$ OR Tailrace$ OR "Tail race" OR Diversion OR Bypass* OR Tailwater$ OR Penstock$ OR Entrain* OR Imping* OR Blade$ OR In-take$ OR "Trash rack$" OR "Draft tube$") Outcome terms (Productivity OR Growth OR Performance OR Surviv* OR Success OR Migrat* OR Passag* OR Reproduc* OR Biomass OR Stress* OR Mortalit* OR Abundance$ OR Densit* OR Yield$ OR Injur* OR Viability OR Sustainability OR "Vital rate$" OR Persistence OR "Trauma") Search terms were limited to English language due to project resource restrictions. The search string was modified depending on the functionality of different databases, specialist websites and search engine (see Additional file 1). Full details on search settings and subscriptions can be found in Additional file 1. To ensure the comprehensiveness of our search, the search results were checked against a benchmark list of relevant papers provided by the Advisory Team. We also searched the reference lists of papers, until the number of relevant returns significantly decreased. This increased the likelihood that relevant articles not captured by the literature search were still considered. Publication databases The following bibliographic databases were searched in December 2016 using Carleton University's institutional subscriptions: ISI Web of Science core collection. Scopus. ProQuest Dissertations and Theses Global. WAVES (Fisheries and Oceans Canada). Science.gov. Note, the Fisheries and Oceans Canada database (WAVES) became a member of the Federal Science Library (FSL) in 2017 after this search was conducted (see Additional file 1). Internet searches were conducted in December 2016 using the search engine Google Scholar (first 500 hits sorted by relevance). Potentially useful documents that had not already been found in publication databases were recorded and screened for the appropriate fit for the review questions. Specialist websites Specialist organization websites listed below were searched in February 2017 using abbreviated search terms [i.e., search strings (1) fish AND hydro AND entrainment; (2) fish AND hydro AND impingement; (3) fish AND hydro AND mortality; and (4) fish AND hydro AND injury]. Page data from the first 20 search results for each search string were extracted (i.e., 80 hits per website), screened for relevance, and searched for links or references to relevant publications, data and grey literature. Potentially useful documents that had not already been found using publication databases or search engines were recorded. Alberta Hydro (https://www.transalta.com/canada/alberta-hydro/). British Columbia Hydro (https://www.bchydro.com/index.html). Centre for Ecology and Hydrology (https://www.ceh.ac.uk/). 
Centre for Environment, Fisheries and Aquaculture Science (https://www.cefas.co.uk/). Commonwealth Scientific and Industrial Research Organisation (https://www.csiro.au/). Electric Power Research Institute (https://www.epri.com/). EU Water Framework Directive (https://ec.europa.eu/environment/water/water-framework/index_en.html). Federal Energy Regulatory Commission (https://www.ferc.gov). Fisheries and Oceans Canada (https://www.dfo-mpo.gc.ca/index-eng.htm). Fisheries Research Service (https://www.gov.scot). Food and Agriculture Organization of the United Nations (http://www.fao.org/home/en/). Hydro Québec (http://www.hydroquebec.com/). Land and Water Australia (http://lwa.gov.au/). Manitoba Hydro (https://www.hydro.mb.ca/). Ministry of Natural Resources and Environment of the Russian Federation (http://www.mnr.gov.ru/). Ministry of the Environment New Zealand (https://www.mfe.govt.nz/). National Institute of Water and Atmospheric Research New Zealand (https://niwa.co.nz/). Natural Resources Canada (https://www.nrcan.gc.ca/home). Natural Resources Wales (https://naturalresources.wales/?lang=en). Newfoundland and Labrador Hydro (https://nlhydro.com/). Northern Ireland Environment Agency (https://www.daera-ni.gov.uk/northern-ireland-environment-agency). Office of Scientific and Technical Information (U.S. Department of Energy) (https://www.osti.gov/). Pacific Fisheries Environmental Laboratory (https://oceanview.pfeg.noaa.gov/projects). Parks Canada (https://www.pc.gc.ca/en/index). The Nature Conservancy (https://www.nature.org/en-us/). Trout Unlimited (https://www.tu.org/). United Nations Environment Programme (https://www.unenvironment.org/). US Fish and Wildlife Service (https://www.fws.gov/). Other literature searches Reference sections of accepted articles and 168 relevant reviews were hand searched to evaluate relevant titles that were not found using the search strategy (see Additional file 2 for a list of relevant reviews). Stakeholders were consulted for insight and advice for new sources of information. We also issued a call for evidence to target sources of grey literature through relevant mailing lists (Canadian Conference for Fisheries Research, American Fisheries Society), and through social media (e.g., Twitter, Facebook) in February and November 2017. The call for evidence was also distributed by the Advisory Team to relevant networks and colleagues. Estimating comprehensiveness of the search We did not undertake an explicit test of the comprehensive of our search by checking our search results against a benchmark list of relevant papers. This was largely because we knew that most of the evidence base on this topic was going to be considered grey literature sources, making estimation of comprehensiveness challenging. However, as mentioned above, we screened bibliographies of: (1) a large number of relevant reviews identified at title and abstract (84 reviews) or full-text screening (30 reviews); (2) additional relevant reviews identified from within the bibliographies of the reviews (54 reviews); and (3) included articles. We searched these reference lists of papers until the reviewer deemed that the number of relevant returns had significantly decreased. This increased the likelihood that relevant articles not captured by the literature search were still considered. Assembling a library of search results All articles generated by publication databases and Google Scholar were exported into separate Zotero databases. 
After all searches were complete and references found using each different strategy were compiled, the individual databases were exported into EPPI-reviewer (eppi.ioe.ac.uk/eppireviewer4) as one database. Due to restrictions on exporting search results, the Waves database results were screened in a separate Excel spreadsheet. Prior to screening, duplicates were identified using a function in EPPI Reviewer and then were manually removed by one reviewer (TR). One reviewer manually identified and removed any duplicates in the Waves spreadsheet (TR). All references regardless of their perceived relevance to this systematic review were included in the database. Article screening and study eligibility criteria Articles found by database searches and the search engine were screened in two distinct stages: (1) title and abstract, and (2) full text. Articles or datasets found by other means than database or search engine searches (i.e., specialist website or other literature searches) were entered at the second stage of this screening process (i.e., full text) but were not included in consistency checks. Prior to screening all articles, a consistency check was done at title and abstract stage where two reviewers (DAA and TR) screened 233/2324 articles (10% of the articles included in EPPI Reviewer which did not include grey literature, other sources of literature, or the articles in the Waves excel spreadsheet). The reviewers agreed on 86.30% of the articles. Any disagreements between screeners were discussed and resolved before moving forward. If there was any further uncertainty, the Review Team discussed those articles as a group to come up with a decision. Attempts were made to locate full-texts of all articles remaining after title and abstract in the Carleton University library and by using interlibrary loans. Reviewers did not screen studies (at title and abstract or full-text) for which they were an author. A consistency check was done again at full-text screening with 51/500 articles (10% of the articles included in EPPI Reviewer which did not include grey literature, other sources of literature, or the articles in the Waves excel spreadsheet). Reviewers (DAA and TR) agreed on 90.2% of articles. After discussing and resolving inconsistencies, the screening by a single reviewer (DAA) was allowed to proceed. A list of all articles excluded on the basis of full-text assessment is provided in Additional file 2, together with the reasons for exclusion. Each article had to pass each of the following criteria to be included: Eligible populations The relevant subjects of this review were any fish species, including diadromous species, in North (23.5° N to 66.5° N) or South (23.5° S to 66.5° S) temperate regions. Only articles located in freshwater ecosystems, including lakes, rivers, and streams that contain fish species that are associated with a hydroelectric dam system were included. Eligible interventions Articles that described infrastructure associated with hydroelectric facilities that may cause fish to be entrained or impinged (i.e., turbines, spillways, sluiceways, outlet works, screens, tailraces, water bypasses, tailwaters, penstocks, trash racks, etc.) were included. Articles that examined "general infrastructure", where entrainment or impingement was examined but no specific infrastructure component was isolated, were also included for data extraction. See Table 1 for definitions of the intervention types considered in the review. 
Only articles that describe water that moves via gravity were included. Articles were excluded where water was actively pumped for: (1) power generation (e.g., storage ponds [32]); (2) irrigation; or (3) cooling-water in-take structures for thermoelectric power plants. Other studies excluded described infrastructure associated with other operations: (1) nuclear facilities; (2) dams without hydro; (3) hydrokinetic systems (i.e., energy from waves/currents); or (4) general water withdrawal systems (e.g., for municipal drinking, recreation). Table 1 Intervention, fish injury/impact, and general hydropower terms and definitions used in the systematic review Eligible comparators This review compared outcomes based on articles that used Control-Impact (CI) and Controlled Trials (randomized or not). Before-After (BA) and studies that combined BA and CI designs, Before-After-Control-Impact (BACI), were considered for inclusion but none were found (i.e., there were no studies that collected before intervention data within same waterbody pre-installation/modification). Relevant comparators included: (1) no intervention (e.g., control experiments whereby each phase of a test procedure was examined for sources of mortality/injury other than passage through infrastructure such as upstream introduction and/or downstream recovery apparatus); (2) an unmodified version of the intervention on the same or different study waterbody, or (3) controlled flume study. Studies that only reported impact (i.e., treatment) data (i.e., no control site data) were excluded from this review. Note, at the request of stakeholders, studies that only reported impact-only data were included through the full-text screening stage but were excluded during the initial data extraction stage to obtain an estimate of the number of studies that used this type of study design in this area of study. Simulation studies, review papers, and policy discussions were also excluded from this review. Eligible outcomes Population-level assessments of entrainment and impingement impacts on fish productivity outcomes were considered for inclusion but were rarely conducted. Most metrics used to evaluate consequences of fish entrainment and impingement were related to fish mortality and injury. Any articles that used a metric related to: (1) lethal impact: direct fish mortality or indirect mortality (e.g., fish are disoriented after passage through hydroelectric dam and then predated upon), and (2) sublethal impacts: external and/or internal injury assessments (e.g., signs of scale loss, barotrauma, blade strike, etc.,)—were included. These metrics could include, but were not limited to, reported mortality rate (%, number), survival rate (%), recovery rate (%, number), the number of fish impinged or entrained (i.e., used as a measure of risk of impingement/entrainment and not mortality/injury per se), injury rate (% of population) with particular types of injuries (e.g., signs of blade strike), all injury types combined, or numbers of injuries. Furthermore, linkages between intervention and outcome needed to have been made clear to allow for the effects of fish mortality/injury from entrainment and impingement to be isolated from other potential impacts of hydroelectric power production such as barriers to migration and/or habitat degradation. 
Studies were excluded where no clear linkage between intervention and outcome were identified (e.g., if fish density was surveyed up-and down-stream of a hydro dam but any difference or change in fish density could not be clearly attributed to impingement or entrainment in isolation of other effects). Fish passage/guidance efficiency studies that determined the number of fish that passed through a particular hydropower system, typically through a bypass or under differing operating conditions, were excluded if there was no explicit entrainment/impingement or injury/mortality assessment. Studies that investigated passage route deterrence and/or enhanced passage efficiency facilitated via behavioural guidance devices and techniques (e.g., bubble screens, lights, sound; reviewed in [25]) were excluded, except where mortality or injury was assessed. Only English-language literature was included during the screening stage. Study validity assessment All studies included on the basis of full-text assessment were critically appraised for internal validity (susceptibility to bias) using a predefined framework (see Table 2 for definitions of terms such as study). If a study contained more than one project (i.e., differed with respect to one or more components of critical appraisal; see Table 3), each project received an individual validity rating and was labelled in the data extraction table with letters (e.g., "Ruggles and Palmeter 1989 A/B/C" indicating that there are three projects within the Ruggles and Palmeter article). For example, sample size (i.e., the total number of fish released) was an internal validity criterion (Table 3). If a study conducted a project with a sample size of > 100 fish it received a different internal validity assessment label than a project that used < 50 fish. The critical appraisal framework (see Table 3) developed for this review considered the features recommended by Bilotta et al. [36] and was adapted to incorporate components specific to the studies that answer our primary question. The framework used to assess study validity was reviewed by the Advisory Team to ensure that it accurately reflected the characteristics of a well-designed study. The criteria in our critical appraisal framework refer directly to internal validity (methodological quality), whereas external validity (study generalizability) was captured during screening or otherwise noted as a comment in the critical appraisal tool. The framework was based on an evaluation of the following internal validity criteria: study design (controlled trial or gradient of intervention intensity including "zero-control", or CI), replication, measured outcome (quantitative, quantitative approximation, semi-quantitative), outcome metric (a metric related to mortality, injury, productivity, or the number of fish entrained), control matching (how well matched the intervention and comparator sites were in terms of habitat type at site selection and/or study initiation, and sampling), confounding factors [environmental or other factors that differ between intervention and comparator sites and/or times, that occur after site selection and/or study initiation (e.g., flood, drought, unplanned human alteration)], and intra-treatment variation (was there variation within treatment and control samples). Each criterion was scored at a "High", "Medium", or "Low" study validity level based on the predefined framework outlined in Table 3. 
The study was given an overall "Low" validity if it scored low for one or more of the criteria. If the study did not score low for any of the criteria, it was assigned an overall "Medium" validity. If the study scored only high for all of the criteria, it was assigned an overall "High" validity. This approach assigns equal weight to each criterion, which was carefully considered during the development of the predefined framework. Reviewers did not critically appraise studies for which they were an author. Table 2 Definitions of terms used throughout the systematic review Table 3 Critical appraisal tool for study validity assessment Study validity assessments took place at the same time as data extraction and were performed by two reviewers (DAA and W. Twardek). For each study, one reviewer would assess study validity and extract the meta-data. However, a consistency check was first undertaken on 7.8% (8/104) of articles by three reviewers (DAA, WT, and TR). Validity assessments and meta-data on these studies were extracted by all three reviewers. Before DAA and WT proceeded independently and on their own subsets of the included studies, discrepancies were discussed and, when necessary, refinements to the validity assessment and meta-data extraction sheets were made to improve clarity on coding. Reviewers did not critically appraise studies for which they were an author. No study was excluded based study validity assessments. However, a sensitivity analysis was carried out to investigate the influence of study validity categories (see "Sensitivity analyses" below). Data coding and extraction strategy General data-extraction strategy All articles included on the basis of full-text assessment, regardless of their study validity category, underwent meta-data extraction. Data extraction was undertaken using a review-specific data extraction form given in Additional file 3. Extracted information followed the general structure of our PICO framework (Population, Intervention, Comparator, Outcome) and included: publication details, study location and details, study summary, population details, intervention and comparator details, outcome variables, etc. The number of fish injured, the number of fish killed, and the number of fish entrained/impinged were treated as continuous outcome variables. We further subgrouped the mortality outcome into immediate mortality (i.e., mortality was assessed ≤ 1 h after recapture was in the tailrace i.e., immediately below intervention), and delayed mortality [i.e., mortality was (re)assessed > 1 h after recapture and/or recapture was beyond the tailrace, i.e., further downstream of intervention]. Immediate mortality was used to capture the direct, lethal impact of the intervention, while delayed mortality allowed understanding of the potential indirect, lethal impacts (e.g., mortality as a result of infection or disease following injury from intervention some time later). In some cases, post-passage delayed mortality can be indirectly attributed to factors other than the hydropower infrastructure itself (e.g., predation after injury). When explicitly reported, delayed mortality from sources not directly attributed to hydropower infrastructure was excluded at the data extraction stage. 
Supplementary articles (i.e., articles that reported data that could also be found elsewhere or contained portions of information that could be used in combination with another more complete source) were identified and combined with the most comprehensive article (i.e., primary study source) during data extraction (Additional file 3). Data on potential effect modifiers and other meta-data were extracted from the included primary study source or their supplementary articles whenever available. In addition, all included articles on the basis of full-text assessment, regardless of their study validity category, underwent quantitative data extraction. Sample size (i.e., total number of fish released) and outcome (number of fish injured, killed, or entrained/impinged), where provided, were extracted as presented from tables or within text. When studies reported outcomes in the form of percentages, we converted this metric into a number of fish killed or injured, when the total number of fish released was provided. For studies that reported survival (e.g., number of fish that successfully passed through intervention) or detection histories from telemetry studies (i.e., number of detections), we converted these into the number of fish killed (assumed mortality) by subtracting the reported response from the total number of fish released. For fish injury, we extracted the total number of fish injured, regardless of injury type [i.e., if data were provided for > 1 injury type (e.g., descaled, bruising, eye injuries, etc.) the number of fish with any injury was extracted]. When multiple injuries were reported separately, we extracted the most comprehensive data available for a single injury type and noted the relative proportions/frequencies in the data extraction form (see Additional file 3). For delayed mortality responses, a cumulative outcome value was computed (i.e., the total number of fish killed from the entire assessment period—immediate time period + delayed time period). Data from figures were extracted using the data extraction software WebPlotDigitizer [37] when necessary. Data extraction considerations We found defining a 'study' in our review challenging as there was no clear distinction in the evidence base between studies and experiments (see Table 2 for definitions of terms). This was often because a single article could report multiple investigations within a single year [e.g., various changes in operational conditions (alone or in combination), various life stages or sources of released fish for the same or different species], or over multiple years. Often, at any one site, investigations conducted over multiple years could be reported within the same article, within different articles by the same authors, or by different authors in different articles (e.g., results from a technical report for a given time period are included in another publication by different authors conducting a similar updated study at the same site). In such cases, it was not always easy to discern whether the same investigations were repeated across years or whether the investigations were in fact changed (e.g., slight modifications in operational conditions were made). During data extraction, we diligently removed many duplicate sources of data when we were able to identify this information (i.e., overlapping data). However, this was an inherently challenging task due to the lack of detail in the study reports. 
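A minimal sketch of the outcome conversions described above (percentages to counts, survival or detection histories to assumed mortality, and the cumulative delayed mortality total). This is not the authors' extraction code; the function names and example values are hypothetical.

```r
# Percentage killed/injured -> count of fish, given the total number released
count_from_percent <- function(percent, n_released) round(percent / 100 * n_released)

# Reported survivors (or telemetry detections) -> assumed number of fish killed
killed_from_survivors <- function(n_survived_or_detected, n_released) {
  n_released - n_survived_or_detected
}

# Cumulative delayed mortality over the entire assessment period
cumulative_mortality <- function(killed_immediate, killed_delayed) {
  killed_immediate + killed_delayed
}

count_from_percent(12.5, 200)      # 25 fish killed or injured
killed_from_survivors(180, 200)    # 20 fish (assumed mortality)
cumulative_mortality(20, 7)        # 27 fish killed over the full assessment period
```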
As such, during data extraction there were a number of considerations made in defining our database of information. Each hydroelectric facility and research laboratory/testing facility (i.e., where lab studies were conducted), were given a "Site ID". If a single article reported data separately for different hydroelectric facilities within the same or different waterbodies, we regarded these data as independent and assigned each study a separate "Site ID". If at a given site (i.e., hydroelectric facility or laboratory), evaluations of responses were conducted for different: (1) operational conditions (e.g., turbine discharge, wicket gate opening width, dam height); (2) modifications of a specific intervention (e.g., number of turbine runner blades); or (3) depth at fish release; we considered these separate studies and each were given a "Study ID". We regarded these as separate studies since independent releases of fish were used i.e., different fish were released in each release trial (if more than one trial conducted) within each study. If at a given site, evaluations of responses were conducted for different interventions (e.g., mortality at turbines and at spillways), we only considered these separate studies if the fish were released separately for each intervention (i.e., different release points immediately above the intervention under evaluation, within the same or different years). When studies released a group of fish at a single location above all interventions, and the outcomes came from route-specific evaluations, these were considered the same study and received the same Study ID. A single study could report separate relevant comparisons (i.e., multiple non-independent data sets that share the same Site ID) for different species, and/or the same species but responses for different outcomes (i.e., mortality, injury, number of fish entrained/impinged). Furthermore, a single study could report a mortality response for the same species but separately for: (1) immediate mortality [i.e., spatial assessment was conducted just after intervention (in the tailrace) and/or the mortality assessment was conducted ≤ 1 h after release], and (2) delayed mortality (i.e., spatial assessment was conducted beyond the tailrace and/or the mortality assessment was conducted > 1 h after release) but otherwise the same for all other meta-data. For quantitative synthesis, we treated these comparisons as separate data sets (i.e., separate rows in the database that share the same Site ID). If authors reported responses for the same species for the same outcome category in a single study but separately for different: (1) life stages (e.g., the mortality of juveniles for species A, and the mortality of adults for species A); and/or (2) sources of fish (i.e., hatchery, wild, stocked sourced) and otherwise the same for all other meta-data, we extracted these as separate data sets for the database. Furthermore, if the same study (e.g., same operating condition) was conducted in multiple years at the same site, meta-data (and quantitative data when available) were extracted separately for each and given the same Study ID. For quantitative analyses, we aggregated these data sets to reduce non-independence and data structure complexity (see Additional file 4: Combining data across subgroups within a study). 
For all articles included on the basis of full-text assessment, we recorded, when available, the following key sources of potential heterogeneity: site type (laboratory or field-based studies), intervention type [i.e., turbine, spillway, sluiceway, water bypass, dam, general infrastructure, exclusionary/diversionary installations (e.g., screens, louvers, trash racks), and any combination of these interventions; see Table 1 for definitions], turbine type (e.g., Kaplan, Francis, S-turbine, Ossberger), hydro dam head height (m), fish taxa (at the genus and species level), life stage [egg (zygotes, developing embryos, larvae), age-0 (fry, young-of-the-year), juvenile (age-1), adult, mixed stages)], fish source [i.e., hatchery (fish raised in a hatchery environment and released into system), wild (fish captured/released that originate from the source waterbody), stocked (fish captured/released that were from the source waterbody but originated from a hatchery)], sampling method [i.e., telemetry, mark-recapture, net samples, visual, in-lab, passive integrated transponder tags (PIT tags)], and assessment time (h). Potential effect modifiers were selected with consultation with the Advisory Team. After consultation with the Advisory Team, there were effect modifiers that were originally identified in our protocol that were removed from data extraction for this review. Due to limitations in time and resources, we did not search external to the article for life history strategies, fish body size/morphology, or turbine size, as they were often not reported within the primary articles. Also, we did not include study design or comparator type since there was little variation across these variables [(e.g., all studies either used a control trial or CI study design (i.e., there were no BA or BACI study designs]. When sufficient data were reported and sample size allowed, these potential modifiers were used in meta-analyses (see "Quantitative synthesis" below) to account for differences between data sets via subgroup analyses or meta-regression. Descriptive statistics and a narrative synthesis All relevant studies included on the basis of full-text assessments, were included in a database which provides meta-data on each study. All meta-data were recorded in a MS-Excel database (Additional file 3) and were used to generate descriptive statistics and a narrative synthesis of the evidence, including figures and tables. Quantitative synthesis Eligibility for quantitative synthesis Relevant studies that were included in the database were considered unsuitable for meta-analysis (and were therefore not included in quantitative synthesis) if any of the following applied: Quantitative outcome data were not reported for the intervention and/or comparator group(s); The total number of fish released was not reported for the intervention and/or comparator group(s); For route specific outcomes (i.e., studies that release a single group of fish upstream of hydroelectric infrastructure whereby fish can take different routes through/over such infrastructure), the total number of fish that took a specific route through hydroelectric infrastructure was zero. The outcomes for both intervention and control groups were zero resulting in an undefined effect size (see "Effect size calculation" below). 
For both intervention and control groups, all fish released were killed or injured, resulting in an estimated sampling variance of zero (i.e., a division by zero in the equation used to calculate the typical within-study variance—see "Effect size calculation" below).
Quantitative synthesis—data preparation
Where zero values for outcomes were encountered (168 of 569 data sets) for either the intervention or control group, data were imputed by adding one to each cell in the 2 × 2 matrix to permit calculation of the risk ratio [i.e., a value of one was added to each of the event (number of fish killed or injured) and non-event (number of fish that survived or were uninjured) cells in each of the two groups] [38]. Note that we performed a sensitivity analysis to investigate the influence of the value of the imputation by comparing results using a smaller value of 0.5 [39, 40] (see "Sensitivity analyses" below). Exceptions occurred when mortality/injury was zero for both the intervention (A) and control group (C) within a data set (i.e., A = C = 0; risk ratios are undefined) (73 data sets) or when mortality/injury was 100% for both the intervention and control group within a data set (4 data sets from a single study) [39] (see Additional file 5: Quantitative synthesis database). To reduce multiple effect size estimates from the same study—which is problematic because it would give studies with multiple estimates more weight in analyses—data sets were aggregated (see Additional file 4 for a full description) in three instances, when studies reported: (1) responses from multiple life stages separately within the same outcome and intervention subgroup (e.g., mortality of species A age-0 and juveniles separately) (20 studies); (2) responses from multiple sources of released fish separately within the same outcome and intervention subgroup for the same species (e.g., mortality of species A hatchery-reared individuals and wild-sourced individuals separately) (8 studies); and (3) when the same study (e.g., same operating condition) was conducted in multiple years at the same site, and all other meta-data were the same (22 studies). Furthermore, there were a number of instances of multiple group comparisons whereby studies used a single control group and more than one treatment group within a single study or across studies within an article. In such cases, the control group was used to compute more than one effect size, and consequently the estimates of these effect sizes are correlated. This lack of independence needed to be accounted for when computing variances (see Additional file 4: handling dependence from multiple group comparisons, for a full description and the number of cases).
Effect size calculation
Studies primarily reported outcomes in the form of the number of events (e.g., number of fish killed or injured) and non-events (e.g., number of fish that survived or were uninjured). Thus, to conduct a meta-analysis of the quantitative data we used the risk ratio (RR) as the effect size metric [41]:
$$RR = \frac{A/n_{1}}{C/n_{2}}$$
Risk ratios compare the risk of having an event (i.e., fish mortality or injury) between two groups, where A is the number of events in waterbodies or simulated lab settings in which fish are exposed to infrastructure associated with hydroelectric facilities (intervention group), C is the number of events in waterbodies/simulated settings without this intervention (control group), and n1 and n2 are the sample sizes of the intervention and control groups, respectively. If an intervention has an identical effect to the control, the risk ratio will be 1.
If the chance of an event is reduced by the intervention, the risk ratio will be < 1; if the intervention increases the chance of having the event, the risk ratio will be > 1. Therefore, a risk ratio > 1 means that fish are more likely to be killed or injured during passage through/over hydroelectric infrastructure than killed or injured by sources other than contact with hydroelectric infrastructure. Risk ratios were log-transformed to maintain symmetry in the analysis, with variance calculated as [41]:
$$V_{LogRiskRatio} = \frac{1}{A} - \frac{1}{n_{1}} + \frac{1}{C} - \frac{1}{n_{2}}$$
We acknowledge that risk can be expressed in relative terms (e.g., risk ratio) as well as absolute terms [i.e., risk difference (RD)]. Relative risk provides a measure of the strength of the association between an exposure (e.g., fish exposed to infrastructure associated with hydroelectric facilities) and an outcome (e.g., fish injury/mortality), whereas absolute risk provides the actual difference in the observed risk of events between intervention and control groups. A concern with relative risk ratios is that they may obscure the magnitude of the effect of the intervention [42], making the effect of the intervention seem, in some situations, worse than it actually is. For instance, the same risk ratio of 1.67 (i.e., the risk of fish mortality was 67% higher in the intervention group than in the control group) can result from two different scenarios: (1) an increase in mortality from 40% in the control group to approximately 67% in the intervention group (i.e., RD ≈ 27%), or (2) an increase from 3% in the control group to 5% in the intervention group (i.e., RD = 2%). From these examples, we can see that absolute risk (i.e., RD) provides insight into the actual size of a risk and can, in some situations, provide additional context for hydropower managers and regulators to help inform their decisions. Therefore, we chose to base our quantitative synthesis on pooled estimates using the risk ratio as our effect size measure; however, to provide additional insight into the magnitude of risk to help inform decision making, we also calculated the absolute risk difference for individual comparisons, carried out in raw units [41]:
$$RD = \frac{A}{n_{1}} - \frac{C}{n_{2}}$$
with variance calculated as [41]:
$$V_{RiskDifference} = \frac{AB}{n_{1}^{3}} + \frac{CD}{n_{2}^{3}}$$
where B and D are the number of non-events (e.g., number of fish that survived or were uninjured) for the intervention and control groups, respectively. Note that only those studies that were considered suitable for meta-analysis using the risk ratio were used to calculate summary effects using the risk difference. However, where zero values for outcomes were encountered for either the intervention or control group (as described under "Quantitative synthesis—data preparation" above), data were not imputed by adding a value of one (or 0.5), since this was not necessary for risk difference calculations.
Quantitative synthesis—meta-analysis
To determine whether fish passing through/over infrastructure associated with hydroelectric facilities increased, on average, the risk of mortality or injury compared to controls, we first conducted random-effects meta-analyses using restricted maximum likelihood (REML) to compute weighted average risk ratios for each outcome separately [i.e., injury (k = 104 effect sizes), immediate mortality (k = 162), and delayed mortality (k = 256)].
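The risk ratio, risk difference, and variance formulas above, together with the zero-cell adjustment described under data preparation, are standard 2 × 2 effect size computations and can be reproduced with the escalc() helper of the metafor package used for these analyses. The sketch below is illustrative rather than the authors' code; the column names (A, n1, C, n2) and example values are hypothetical.

```r
library(metafor)

# Hypothetical data sets: events (fish killed/injured) and fish released per group.
# Data sets with A = C = 0, or with 100% mortality/injury in both groups, are
# assumed to have been excluded beforehand, as described above.
dat <- data.frame(A  = c(12, 0, 30),    # events, intervention group
                  n1 = c(100, 80, 150), # fish released, intervention group
                  C  = c(2, 1, 5),      # events, control group
                  n2 = c(100, 80, 150)) # fish released, control group

# Log risk ratio and its variance (1/A - 1/n1 + 1/C - 1/n2);
# add = 1, to = "only0" adds 1 to every cell of any 2 x 2 table containing a zero.
dat_rr <- escalc(measure = "RR", ai = A, n1i = n1, ci = C, n2i = n2,
                 add = 1, to = "only0", data = dat)

# Risk difference and its variance (AB/n1^3 + CD/n2^3); no imputation needed here.
dat_rd <- escalc(measure = "RD", ai = A, n1i = n1, ci = C, n2i = n2,
                 add = 0, data = dat)

dat_rr$yi  # log risk ratios
dat_rr$vi  # sampling variances
```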
In each model, data from all intervention types and all temperate freshwater fish were combined. To further account for multiple data sets from the same study site (i.e., different studies or species), Study ID nested within Site ID was considered a random factor in each analysis. All summary effects (and associated 95% confidence intervals) were converted back to, and reported as, risk ratios [i.e., RR = exp(LogRiskRatio)]. Heterogeneity in effects was calculated using the Q statistic, which was compared against the χ2 distribution, to test whether the total variation in observed effect sizes (QT) was significantly greater than that expected from sampling error (QE) [43]. A larger Q indicates greater heterogeneity in effects sizes (i.e., individual effect sizes do not estimate a common population mean), suggesting there are differences among effect sizes that have some cause other than sampling error. We also produced forest plots to visualize mean effect sizes and 95% confidence intervals from individual comparisons. Mean effect sizes were considered statistically significant if their confidence intervals did not include an RR = 1. We also analyzed the impacts of fish entrainment and impingement associated with hydroelectric dams separately on outcomes for the select few taxonomic groups (at the genus and species level) when there were sufficient sample size to do so. As risk ratios may not be easily interpretable, we also calculated the percent relative effect (i.e., the percent change in the treatment group), whereby the control group was regarded as having a 100% baseline risk and the treatment group was expressed relative to the control: % increase (when RR > 1) = (RR − 1) × 100. For example, fish passing through turbines had a 320% increase in risk of mortality versus the risk of mortality in control fish released downstream of any hydroelectric infrastructure (100%). Also, as noted above, to provide additional context on the magnitude of risk, we report weighted average absolute risk differences, estimated following the same methods outlined in the paragraph immediately above as for estimating weighted average risk ratios. Because complex analyses beyond estimating summary effects using the risk difference are not recommended (i.e., investigating heterogeneity with moderators e.g., meta-regression) [38], we accompany pooled risk ratios with pooled absolute risk differences and 95% confidence intervals for main summary effects only (i.e., for each outcome, intervention type, and genus separately). We examined the robustness of our models by analyzing for publication biases in two ways. First, we used visual assessments of funnel plots (i.e., scatter plots of the effect sizes of the included studies versus a measure of their precision e.g., sample size, standard error, or sampling variance) [44]. Here, we produced funnel plots using 1/standard error. In the absence of publication bias, the funnel plot should resemble an inverted funnel. In the presence of publication bias, some smaller (less precise) studies with smaller effect sizes will be absent resulting in an asymmetrical funnel plot [45]. Second, we used Egger's regression test to provide more quantitative examinations of funnel plot asymmetry [46]. To test for associations between effect size and moderators, we used mixed-effects models for categorical moderators and meta-regression for continuous moderators, estimating heterogeneity using REML. 
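A compact sketch of the model just described: a random-effects meta-analysis of log risk ratios with Study ID nested within Site ID, back-transformation of the pooled effect to the risk-ratio scale, the percent relative effect, and the two publication-bias checks. It is not the authors' script: the data are simulated, the column names are hypothetical, and, because the classical regtest() applies to rma() fits, Egger's test is approximated here by including the standard error as a moderator.

```r
library(metafor)

# Simulated stand-in for the prepared effect size data (log RRs and variances)
set.seed(1)
dat_rr <- data.frame(Site_ID  = rep(paste0("site", 1:10), each = 3),
                     Study_ID = paste0("study", 1:30),
                     yi = rnorm(30, mean = 1.0, sd = 0.6),   # log risk ratios
                     vi = runif(30, 0.02, 0.25))             # sampling variances

# Random-effects model (REML) with Study ID nested within Site ID
m <- rma.mv(yi, vi, random = ~ 1 | Site_ID / Study_ID,
            method = "REML", data = dat_rr)
summary(m)   # pooled log RR, 95% CI, and the Q test for heterogeneity

# Back-transform to the risk-ratio scale and express as a percent relative effect
pooled <- predict(m, transf = exp)          # pooled RR with 95% CI
pct_increase <- (pooled$pred - 1) * 100     # e.g., RR = 3.17 -> 217% increase

# Publication bias: funnel plot with 1/SE on the y-axis, and an Egger-type test
# approximated by adding the standard error (sqrt(vi)) as a moderator
funnel(m, yaxis = "seinv")
egger_mv <- rma.mv(yi, vi, mods = ~ sqrt(vi),
                   random = ~ 1 | Site_ID / Study_ID, method = "REML", data = dat_rr)
summary(egger_mv)   # a significant sqrt(vi) coefficient suggests funnel asymmetry
```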
We first evaluated the influence of intervention type on each outcome subgroup separately. Then, we tested for associations between other moderators (i.e., turbine type, hydro dam head height, site type, life stage, fish source, sampling method, assessment time) and effect sizes within intervention type subsets. We tested for associations within intervention subsets for two reasons. First, many moderators of interest were related to specific intervention types (e.g., turbine type, hydro dam head height). To reduce potential confounding effect of intervention type, associations between other moderators and effect sizes were evaluated separately for different interventions. Second, since information on all moderators was not always provided in articles (e.g., assessment time was not reported in all studies) and the distribution of moderators varied substantially between intervention types, we removed effect sizes with missing information and tested for associations within intervention type subsets. Before examining the influence of moderators within intervention subsets, we made the following modifications to our coding to reduce the number of studies we needed to exclude. First, since there was only a single case where juveniles and adult life stages were used together, we added this category to the mixed life stage category (applicable for the immediate mortality analysis only). Second, we combined studies that used mark-recapture sampling gear and methods (e.g., fin clips, balloon tags, or PIT tags for identification only, with or without netting) with netting alone methods (e.g., a known number of unmarked fish were released and recaptured in netting downstream of intervention(s)) into a single category (i.e., recapture). For studies that used telemetry (radio, acoustic, or PIT tags for remote tracking) either alone or in combination with any other category, we combined them into a single category (i.e., telemetry). Third, assessment time was categorized into three time periods: (1) < 24 h; (2) ≥ 24–48 h; and (3) > 48 h. Fourth, we included data sets that evaluated impacts of turbines + trash racks into the turbine intervention category (for immediate fish mortality only). We conducted χ2 tests to assess independence of moderators for each intervention separately. When moderators within an intervention subset were confounded, and/or the distribution between moderator categories was uneven, we avoided these problems by constructing independent subsets of data in a hierarchical approach. For example, within the immediate mortality outcome subgroup, there were no wild sourced fish used in studies conducted in a lab setting; therefore, the influence of fish source on effect size was investigated within the subset of field-based studies only. Where there was sufficient sample size within each of the subsets to include a moderator, we included the moderator into the model individually, and in combination when possible. We restricted the number of fitted parameters (j) in any model such that the ratio k/j, where k is the number of effect sizes, was > 5, which is sufficient in principle to ensure reasonable model stability and sufficient precision of coefficients [47]. 
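In metafor terms, the moderator analyses described above amount to adding fixed-effect terms (mods) to the same multilevel model: a factor for a categorical moderator, or a log-transformed covariate for hydro dam head height, with candidate models compared against the null (intercept-only) model by AICc as outlined in the next paragraph. The sketch below uses simulated data and hypothetical column names and is not the authors' script.

```r
library(metafor)

# Simulated stand-in for a within-intervention (e.g., turbine) subset
set.seed(2)
dat_turb <- data.frame(Site_ID   = rep(paste0("site", 1:8), each = 4),
                       Study_ID  = paste0("study", 1:32),
                       yi        = rnorm(32, 1.2, 0.6),
                       vi        = runif(32, 0.02, 0.25),
                       site_type = rep(c("lab", "field"), 16),
                       head_m    = runif(32, 5, 90))  # hydro dam head height (m)

m_null <- rma.mv(yi, vi, random = ~ 1 | Site_ID / Study_ID, data = dat_turb)
m_site <- rma.mv(yi, vi, mods = ~ site_type,                 # categorical moderator
                 random = ~ 1 | Site_ID / Study_ID, data = dat_turb)
m_head <- rma.mv(yi, vi, mods = ~ log(head_m),               # meta-regression
                 random = ~ 1 | Site_ID / Study_ID, data = dat_turb)

# Rule of thumb from the text: keep k / j > 5 (effect sizes per fitted parameter)
k <- nrow(dat_turb); j <- length(coef(m_site))
stopifnot(k / j > 5)

# QM (moderator test) and QE (residual heterogeneity) for the moderated model,
# and AICc comparison against the null model (fitstats() reports AICc).
c(QM = m_site$QM, QM_p = m_site$QMp, QE = m_site$QE, QE_p = m_site$QEp)
fitstats(m_null, m_site, m_head)
# Note: the review compares REML fits by AICc; refitting with method = "ML" is a
# common alternative when models differ in their fixed effects.
```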
Selection between the models (including the null model, i.e., a random-effects model with no moderator) was evaluated using sample-size-corrected Akaike Information Criterion (AICc) (i.e., based on whether the mixed-effects model(s) had a lower AICc than the null model) and accompanied by corresponding QE (test statistic of residual heterogeneity) and QM (heterogeneity explained by the model). The statistical significance of QM and QE were tested against a χ2 distribution. We only performed analyses on categorical moderators where there were sufficient combinable data sets (i.e., > 2 data sets from ≥ 2 sites). Thus, in some cases, we either combined similar categories to increase the sample size (detailed in results below) or deleted the categories that did not meet the sample size criteria. The single continuous moderator variable, hydro dam head height, was log-transformed to meet test assumptions. Sensitivity analyses Sensitivity analyses were carried out to investigate the influence of: (1) study validity categories; (2) imputing data (i.e., a value of one) to each cell in the matrix to permit calculation of the risk ratio where zero values for outcomes were encountered; (3) imputing a different value (i.e., 0.5) to each cell in the matrix to permit calculation of the risk ratio where zero values for outcomes were encountered; (4) multiple group comparisons where a single control group was compared to more than one intervention type within the same study and outcome subgroup, and (5) converting studies that reported survival (e.g., number of fish that successfully passed through intervention) or detection histories from telemetry studies (i.e., number of detections) into the number of fish killed (assumed mortality). First, models were fit using just those studies assessed as being "Medium" or "High" validity. Given that there were only two criteria for which a "Medium" score could be applied, and the relatively small differences between a "Medium" and "High" score for these criteria, we merged these two categories for the sensitivity analysis i.e., we assigned an overall "Medium/High" category all studies that did not score low for any criteria. Second, separate models were fit using only those studies that did not require computational adjustments during initial data preparation. Third, separate models were fit using all data sets calculated from imputing a value of 0.5 rather than one for risk ratios where zero values for outcomes were encountered. Fourth, separate models were fit using data sets that did not include multiple group comparisons. Lastly, models were fit using only those studies that did not require a conversion from fish survival or detection to assumed mortality by subtracting the reported response from the total number of fish released (only applicable for immediate and delayed mortality outcomes). In all five sets of analyses, the results were compared to the overall model fit to examine differences in pooled effect sizes. All meta-analyses were conducted in R 3.4.3 [48] using the rma.mv function in the metafor package [49]. Review descriptive statistics Literature searches and screening Searching five databases and Google Scholar resulted in finding 3121 individual records, of which 2418 articles remained after duplicate removal (Fig. 1). Title and abstract screening removed 1861 articles, leaving 557 articles for full-text screening. 
Full-text screening removed 418 articles, and 32 articles were unobtainable, either because insufficient citation information was provided within the search hit or because they could not be located through internet, library, or inter-library loan sources. Unobtainable articles and articles excluded at full-text screening are listed with an exclusion decision in Additional file 2. A total of 107 articles were included for data extraction from database and Google Scholar searches. Screening bibliographies of relevant reviews identified at title and abstract or full-text screening resulted in an additional 99 articles being included (~ 85% of which were grey literature sources that were not picked up by our database searches, e.g., government reports and theses). Full-text screening of grey literature sources from website searches and submissions via social media/email resulted in no additional articles for data extraction.
ROSES flow diagram [50] showing literature sources and inclusion/exclusion process
A total of 206 articles were initially included for data extraction. During data extraction, one article was excluded for an irrelevant intervention and 89 articles were excluded for having an impact-only study design (i.e., treatment-only, no comparator; Fig. 1 and Additional file 2). Further, 29 articles were identified as having overlapping data and/or projects (listed as Supplementary Articles in Additional file 3), resulting in a total of 87 articles with 264 studies included in the narrative synthesis. Of these, 75 articles with 222 studies were included in the quantitative synthesis.
Sources of articles used for data extraction
A total of 60 grey literature articles (i.e., government/consultant reports, conference proceedings, book chapters) and 27 commercially published articles, published between 1952 and 2016, were included for data extraction and quality assessment (Fig. 2). Grey literature accounted for a higher frequency of included articles in all decades with the exception of the current decade. Grey and commercially published literature published between 2000 and 2009 represented the greatest proportion of articles (29%), followed by those published in the 1990s (23%) and the 1980s (16%).
Frequency of grey and commercially published literature included for data extraction and critical assessment in each decade
Validity assessments were conducted for 128 individual projects identified from the 264 included studies (Additional file 6). Over half of the projects were assigned an overall "Low" validity (53%), whereas projects assigned overall "High" and "Medium" validity accounted for 30% and 17%, respectively. All projects critically appraised employed a CI design. Most projects (93%) reported quantitative data on fish mortality/injury relative to an appropriate control (98%) and satisfied the various performance bias criteria (Table 4). Many projects were assigned a "High" ranking in one (or several) categories but received a "Low" ranking for confounding sampling, habitat, and environmental factors, which accounts for the high proportion of overall "Low" ranked projects (see Table 4; Additional file 6). For example, a project could meet the criteria for a "High" ranking in every category except performance and sample bias, where it received a "Low" ranking because there was heterogeneity within treatment and control samples (e.g., environmental conditions or operating conditions varied during turbine releases).
Table 4 Results of study validity assessment using the critical appraisal tool (see Table 3) The frequencies of overall "High", "Medium", and "Low" ranked studies varied over time (Fig. 3). The 1960s, 1990s, and 2000–2009 decades produced the most "High" and "Medium" ranked studies, and "High" and "Medium" ranked studies accounted for most of the studies conducted in these decades (77%, 75%, and 62%, respectively). The 1980s, 2000–2009, and 2010–2016 decades produced the most overall "Low" ranked studies. Within the 1970s, 1980s and 2010–2016, "Low" ranked studies accounted for most of the studies conducted in these decades (75%, 71%, and 75%, respectively). Frequency of studies within a given time-period in relation to study validity. Critical assessment criteria are outlined in Table 4 Narrative synthesis The narrative synthesis was based on 264 studies from 87 articles. Descriptive meta-data, coding, and quantitative data extracted from these studies can be found in Additional file 3. Study location Studies included in the narrative were conducted in five countries in the north temperate zone and two countries in the south temperate zone. The vast majority of studies were conducted in North America (97%), with the United States (93%) and Canada (4%) accounting for the highest and second highest number of studies. The remaining 3% of studies were conducted in European (France, Germany, Sweden) and Oceania (Australia and New Zealand) regions. Most studies were field based (75%), conducted at 46 sites (i.e., dams), with most sites located in the United States (78%; Table 5). Lab studies, conducted at four research centers based in the United States, accounted for 24% of the studies. Table 5 Site name, location, setting, and number of included studies Mortality/injury from entrainment/impingement was investigated in 35 species spanning 24 genera and 15 families (Fig. 4). The majority of studies were conducted on the Salmonidae family from genera Oncorhynchus (259 studies), Salmo (6 studies), and Salvelinus (6 studies). Anadromous fish represented just under 30% of the species included in the narrative but accounted for the bulk of the studies. Numerous resident (47% of species studied) and other migratory species (e.g., catadromous, potamodromous, 26% of species studied) were included but contributed far fewer studies. The most frequently studied species were Pacific salmonids (Oncorhynchus spp.) including Chinook Salmon (O. tshawytscha, 142 studies), Rainbow Trout/steelhead (O. mykiss, 76 studies), and Coho Salmon (O. kisutch, 42 studies). The most common non-salmonid species studied were American Shad (Alosa sapidissima, 11 studies), Pacific Lamprey (Entosphenus tridentatus, 10 studies), Bluegill (Lepomis macrochirus, 9 studies) American Eel (Anguilla rostrata, 6 studies), and Blueback Herring (Alosa aestivalis, 5 studies). Most species (25 species) contributed < 5 studies. Frequency of studies contributed by 11 families and 15 genera Most studies were conducted on juvenile fish (e.g., yearlings, smolts, 224 studies; Fig. 5). Hatchery and wild juvenile fish (179 and 34 studies, respectively) were the most commonly studied. Wild fish accounted for most studies of adult fish (8 of 10 studies), and very few studies were conducted on larval stages (3 studies). The frequency of studies in relation to the life history stage and source of fish used. 
Fish used in the studies were wild-type (Wild), originated from a hatchery (Hatchery), or were from the source waterbody but originated from a hatchery (Stocked). Age-0 less than 1 year old, Juvenile greater than 1 year old or when specified as juveniles, Larval egg and larval development stages, Mixed a mixture of life history stages Fish entrainment/impingement was studied for a variety of hydropower intervention types including turbines, spillways, bypasses, and exclusionary/diversionary installations (e.g., screens, louvers, trash racks). The most common intervention type studied was turbines (173 studies), followed by spillways (34 studies; Fig. 6). The "general" intervention type (i.e., where specific infrastructure was not isolated but entrainment/impingement was attributable to hydropower infrastructure) accounted for 33 studies. Intervention types included in the narrative but not commonly studied in isolation were exclusionary/diversionary installations, the dam, fish ladders, and outlet works. Some studies applied an intervention in combination with one or more other interventions. A combination of interventions (e.g., turbine and trash rack, spillway and removable weir) was used in six turbine studies, eight spillway studies, and seven bypass studies. Frequency of intervention types used in studies. Combination: when a study assessed entrainment/impingement using additional intervention types (e.g., screen, sluice, trash rack) in combination with the single intervention type Several turbine types were studied, with Kaplan turbines being the most common (81 studies) followed by Francis turbines (41 studies) (Fig. 7). Other turbines [Advanced Hydro Turbine System (AHTS), bulb, S-turbine, and Ossberger] were used in six studies. Very low head (VLH) hydraulic and rim-drive turbines were only used in a single study each. Pressure chambers that simulate passage through Kaplan or Francis turbines were used in 14 studies. Frequency of turbine type. Simulated: pressure chamber simulating turbine passage through a Kaplan or Francis turbine; AHTS: Advanced Hydro Turbine System. Note: some studies with turbine as the intervention type did not specify the turbine type used (34 studies) Study design and comparator All 264 studies from the 87 articles included in the narrative used a CI design. Impact-only articles (i.e., those with no comparator; I-only) were included at full text screening but excluded during data extraction (89 articles; see Additional file 3). Some articles included both CI and I-only datasets; I-only datasets were removed during data extraction. Comparator types included fish released downstream of an intervention (e.g., tailrace releases), and handling/holding (e.g., fish handled and placed into a holding tank). Downstream comparators, the most frequently used comparators, were most commonly used in field-based studies (194 studies). Only 15 field studies used handling/holding comparators, whereas all lab-based studies used handling/holding comparators (70 studies). The most frequently reported measured outcome was mortality (252 studies). Injury was reported in 128 studies, and number of fish entrained/impinged was reported in 3 studies. Delayed mortality (210 studies) was more frequently reported than immediate mortality (assessed < 1 h after recapture; 159 studies). 
Mark-recapture sampling gear and methods (e.g., nets, fin clips) were the most frequently used for assessing mortality (114 studies) and injury (44 studies) compared to tagging gear (e.g., telemetry) which was used in 21 and 15 studies for mortality and injury assessment, respectively. The most common injury type reported was descaling. When not specified, injuries were reported as mechanical, pressure, shear, major or minor. Lab studies most frequently investigated barotrauma injuries. For relative proportions of injury types reported in the studies see Additional file 3. Delayed mortality assessment time varied from 2 h to several days. Delayed mortality was most frequently assessed between 24 and 48 h (91 studies) or greater than 48 h (66 studies; Fig. 8). Injury assessment time also varied but was typically assessed within 48 h. Study frequency for immediate mortality, delayed mortality, and injury in relation to common post-recapture assessment times Description of the data Of the 264 studies (from 87 articles) included in the narrative synthesis, 222 studies (from 75 articles) with 522 data sets after aggregation were included in developing our quantitative synthesis database (Additional file 5). Of the 522 data sets used in Global meta-analyses below, 55% were assessed as having 'High' overall validity, 12% as having 'Medium' overall validity, and 33% as 'Low' overall validity. Data sets included in the quantitative synthesis were largely from North America (494), predominately from USA (475 of 494 data sets), followed by some from Oceania (18) and Europe (10). The majority of studies were field-based studies in rivers (72% of data sets), and the remaining were lab-based studies conducted in research facilities (28%). Among the 522 data sets, 104 data sets reported fish injuries, 162 data sets reported immediate fish mortality, and 256 reported delayed fish mortality (Table 6). The majority of studies on the impacts of fish entrainment and impingement were evaluations of turbines (67% of data sets), followed by general infrastructure, spillways, and turbines with trash racks (9%, 7%, and 6% of data sets respectively; Table 6). For all other interventions, impacts on fish responses were evaluated in ≤ 5% of data sets (Table 6). Table 6 The number of data sets for the three different outcomes by interventions Within the quantitative synthesis database, 31 species from 22 genera and 14 families were evaluated for impacts of fish entrainment and impingement. The most commonly evaluated species were from the Salmonidae family and included Chinook Salmon (203 data sets), Rainbow Trout/steelhead (133), and Coho Salmon (52). Studies reporting outcomes using juveniles (age 1 to smolt) as the life stage made up the largest portion (82.3% of data sets), whereas all other life stages were evaluated less frequently (eggs, age 0, age 0 + juveniles, juveniles + adults, adults, and mixed life stages, made up 3%, 4%, 2%, 0.2%, 3%, and 6% of data sets, respectively). Fish used in study evaluations of intervention impacts were primarily sourced from hatcheries (77% of data sets), followed by wild, mixed (i.e., a mixture of wild and hatchery), and stocked sourced fish (16%, 4%, and 2% of data sets, respectively). Information on the type of turbine used in evaluations was reported in 89% of turbine data sets, with the majority being Kaplan (43% of data sets) and Francis (37% of data sets) turbines. Hydro dam head height was reported in 54% of data sets involving spillways and ranged from 15.2 to 91.4 m. 
Various sampling methods were used to evaluate fish responses to interventions. All lab-based studies used visual methods (134 data sets), though some included mark-recapture methods (e.g., use of PIT tags for fish identification only; 13 data sets). For field-based studies, the majority used mark-recapture sampling gear and methods (e.g., fin clips, balloon tags, or PIT tags for identification only, with or without netting; 224 data sets) or telemetry methods (e.g., acoustic, radio, or PIT tags used for remote tracking; 115 data sets). Netting alone was also used but less frequently (36 data sets). Information on the assessment time for evaluating fish responses was reported in 84% of the data sets. Most data sets were short-term evaluations of the impacts of fish entrainment and impingement on fish responses, with 46% of the available data sets reporting assessment times < 24 h after fish were released. We found data sets reporting longer-term evaluations, with 32% of the available data sets reporting fish responses within ≥ 24–48 h after fish were released, and 22% of data sets reported data more than 48 h after fish were released. Global meta-analyses Fish injury The pooled risk ratio for fish injury was 3.17 (95% CI 1.74, 5.78; Fig. 9, Table 7A, and Additional file 7: Figure S1) indicating an overall increase in risk of fish injuries with passage through/over hydroelectric infrastructure relative to controls (i.e., 217% increase in risk over and above the risk in the control group). The forest plot for this meta-analysis suggested that a large number of cases (85 of 104 data sets) showed increased chances of fish injury relative to controls (i.e., 82% of studies had RRs > 1), with many of these individual comparisons being statistically significant (53 out of 85 cases had confidence intervals that did not include 1; Additional file 7: Figure S1). The Q test for heterogeneity suggested that there was substantial variation in effect sizes (Q = 2796.31, p < 0.0001). There was no obvious pattern of publication bias in either the funnel plot of asymmetry, or the Egger's regression test (z = 0.31, p = 0.741; Additional file 7: Figure S2). Summary flow chart of meta-analyses and results addressing our two main research questions and appropriate subsets (dashed boxes). Boxes indicate potential effect modifiers or subset categories under consideration. Grayed effect modifiers were associated with fish injury or mortality responses. Underlined value indicates statistically significant effect. Subset categories in red indicate an overall average increase in risk of fish injury or mortality with passage through/over hydroelectric infrastructure relative to controls; green indicates an overall average decrease in risk of fish injury or mortality with passage through/over hydroelectric infrastructure relative to controls. k: number of data sets (i.e., effect sizes); RR: mean effect size; CI: 95% confidence interval Table 7 Summary statistics from main analyses based on the risk ratio (RR) and the risk difference (RD) The sensitivity analysis for medium/high validity studies indicated a higher pooled risk ratio compared to the overall meta-analysis [RR = 4.15 (95% CI 2.42, 7.11), k = 72, p < 0.0001], suggesting that this result may not be robust to differences in study validity as assessed by critical appraisal, i.e., higher validity studies may result in higher risk ratio estimates (Additional file 7: Figure S3). 
Studies that did not require zero cell adjustments, as well as studies that did not include multiple group comparisons had similar results to the overall meta-analysis; [RR = 2.61 (95% CI 1.57, 4.33), k = 71, p = 0.0002; RR = 3.68 (95% CI 2.12, 6.39), k = 102, p < 0.0001, respectively]. Furthermore, using a value of 0.5 for zero cell adjustments yielded similar results to the overall meta-analysis using a data imputation of one [RR = 3.31 (95% CI 1.83, 5.99), k = 104, p < 0.0001]. These sensitivity analyses suggested that this result may be robust to computational adjustments made in initial data preparation, and the inclusion of a single study that compared two intervention types with a single control group (Additional file 7: Figures S4–S6). Immediate fish mortality The pooled risk ratio for immediate mortality was 3.35 (95% CI 2.38, 4.69; Fig. 9 and Table 7A), indicating an overall increase in risk of fish mortality immediately following passage through/over hydroelectric infrastructure relative to controls (i.e., 235% increase in risk over and above the risk in the control group). The forest plot for this meta-analysis suggested that 90% of studies (145 of 162) showed increased chances of fish mortality relative to controls (i.e., RRs > 1), with many of these studies having significant effect sizes (106 out of 145 cases) (Additional file 7: Figure S7). However, the Q test for heterogeneity suggested that there was significant heterogeneity between effect sizes (Q = 11,684.88, p < 0.0001). Funnel plots of asymmetry suggested possible evidence of publication bias towards studies showing increased chances of fish mortality relative to controls (Additional file 7: Figures S8, S9). Egger's regression test further supported this assessment (z = 4.58, p < 0.0001). Removing two outliers did not improve bias estimates (z = 4.51, p < 0.0001). Interestingly, when separating commercially published studies from grey literature studies, evidence of publication bias was only present in the latter (z = 0.74, p = 0.458, k = 18, and z = 4.65, p < 0.0001, k = 144, respectively). The meta-analysis based only on medium/high validity studies had a similar result to the overall meta-analysis [RR = 3.26 (95% CI 2.25, 4.73); k = 123, p < 0.0001], suggesting that this result may be robust to differences in study validity (Additional file 7: Figure S10). Furthermore, no evidence of bias was apparent from sensitivity analysis of studies that did not require computational adjustments in initial data preparation [RR = 3.03 (95% CI 2.08, 4.40); k = 108, p < 0.0001], as well as studies that did not include multiple group comparisons [RR = 3.01 (95% CI 2.17, 4.16); k = 155, p < 0.0001; Additional file 7: Figures S11, S12]. We could not obtain a pooled risk ratio using a value of 0.5 for zero cell adjustments due to instability of model results, because the ratio of the largest to smallest sampling variance was very large. The analysis based on studies that did not require a conversion from fish survival or detection to assumed mortality showed a higher pooled risk ratio compared to the overall meta-analysis [RR = 4.52 (95% CI 3.08, 6.63), k = 119, p < 0.0001]. Thus, this result may not be robust to conversions made to outcome metrics i.e., studies that measure actual fish mortality, instead of inferred mortality from survival estimates or detection histories, may result in higher risk ratio estimates (Additional file 7: Figure S13). 
Delayed fish mortality A pooled risk ratio for delayed fish mortality was not obtained due to instability of model results, because the ratio of the largest to smallest sampling variance was very large. Model instability also precluded our ability to test for associations between pooled risk ratios for delayed fish mortality and moderators. Effects of moderators on fish injury To address the question, to what extent does intervention type influence the impact of fish entrainment and impingement, there were only sufficient sample sizes (i.e., > 2 data sets from ≥ 2 sites) to include the following interventions for fish injury: (1) Turbines; (2) General infrastructure; (3) Bypasses; and (4) Spillways (Fig. 9). Intervention type was associated with pooled risk ratios (Table 8A), with spillways and turbines associated with higher risk ratios than general infrastructure and water bypasses for fish injury (792% and 406% increase vs. 250% increase and 82% decrease, respectively; Figs. 9 and 10, and Table 7B). Table 8 Associations between moderators and effect sizes for the subset of studies for fish injury Weighted pooled risk ratios by interventions for fish injury responses. Values in parentheses are the number of effect size estimates. Error bars indicate 95% confidence intervals. A mean RR value > 1 (right of the dashed line) indicates an overall increase in risk of fish injury with passage through/over hydroelectric infrastructure relative to controls. 95% confidence intervals that do not overlap with the dashed line indicate a significant effect. General: general infrastructure associated with more than one component of a hydroelectric facility There were only sufficient sample sizes and variation to permit meaningful tests of the influence of the following moderators: (1) Site type; (2) Fish source; (3) Assessment time. None of the factors were found to be confounded (Additional file 8: Table S1A). Site type was associated with average risk ratios (Table 8B), with studies conducted in a lab setting associated with higher risk ratios than field-based studies relative to controls (718% vs. 182% increase, respectively; Figs. 9 and 11). Assessment time was marginally associated with average risk ratios (Table 8B), with longer assessment time periods (≥ 24–48 h) associated with higher risk ratios than shorter duration assessment periods (< 24 h) (890% vs. 268% increase, respectively; Figs. 9 and 11). No detectable association was found between fish source and average effect sizes. The model including both site type and assessment time was more informative than any univariate model (Table 8B). However, there was still significant heterogeneity remaining in all moderated models (Table 8B). Weighted pooled risk ratios for fish injury for different site types and assessment times for studies involving turbines. See Fig. 10 for explanations General infrastructure For the quantitative synthesis, "general infrastructure" primarily included studies that simulated the effects of shear pressure during fish passage through turbines, spillways, and other infrastructure in a lab setting (e.g., [51, 52]). There was only sufficient sample size within life stage (eggs or juveniles) and assessment time (≥ 24–48 or > 48 h) to investigate the influence of modifiers on the impact of general infrastructure for fish injury. 
We only found a detectable association between average effect sizes and life stage (Table 8C), with the juvenile life stage associated with higher risk ratios than the egg life stage relative to controls (312% vs. 9% increase, respectively; Figs. 9 and 12).
Weighted pooled risk ratios for fish injury for different life stages for studies involving general infrastructure. See Fig. 10 for explanations
Bypasses
The influence of factors was not investigated owing to inadequate sample sizes (Fig. 9).
Spillways
The influence of factors was not investigated owing to inadequate sample sizes (Fig. 9). The majority of spillway studies included chute and freefall designs and tended to focus on enumerating mortality rather than injury.
Effects of moderators on immediate fish mortality
To address the question of the extent to which intervention type influences the impact of fish entrainment and impingement, there were only sufficient sample sizes (i.e., > 2 data sets from ≥ 2 sites) to include the following interventions for immediate mortality: (1) turbines; (2) general infrastructure; (3) bypasses; (4) spillways; and (5) sluiceways (Fig. 9). Intervention type was associated with pooled risk ratios for immediate fish mortality (Table 9A), with general infrastructure, turbines, and sluiceways associated with higher risk ratios than spillways and water bypasses (371%, 283%, and 261% increase vs. 101% and 11% increase, respectively) (Figs. 9 and 13, and Table 7B).
Table 9 Associations between moderators and effect sizes for the subset of studies for immediate fish mortality
Weighted pooled risk ratios by interventions for immediate fish mortality responses. See Fig. 10 for explanations. General: general infrastructure associated with more than one component of a hydroelectric facility
There were only sufficient sample sizes to permit meaningful tests of the influence of the following factors: (1) site type; (2) fish source; (3) life stage; and (4) sampling method. Due to uneven distributions between fish source and sampling method categories, the influence of fish source and sampling method on effect size was investigated within the subset of field-based studies only (see below). Site type was associated with average risk ratios (Table 9B), with lab-based studies having higher risk ratios than field-based studies (1776% vs. 247% increase, respectively) (Figs. 9 and 14). No detectable association was found between life stage and average risk ratios (Table 9B). There was still significant heterogeneity remaining in all moderated models (Table 9B).
Weighted pooled risk ratios for immediate fish mortality for different site types for studies involving turbines. See Fig. 10 for explanations
Within the subset of field-based turbine studies, there were adequate sample sizes to evaluate the influence of turbine type, sampling method, and fish source. Due to uneven distributions within sampling methods and fish source for different turbine types (i.e., no telemetry sampling methods or wild-sourced fish were used with Francis turbines) (Additional file 8: Table S2B), the influence of sampling method and fish source was evaluated within Kaplan turbines only (below). However, within the field-based subset, there was a detectable association between turbine type and average risk ratios (Table 9C), with Francis turbines having higher risk ratios than Kaplan turbines (522% vs. 144% increase, respectively; Figs. 9 and 15a).
Weighted pooled risk ratios for immediate fish mortality for studies conducted in the field using different a turbine types and b sources of fish for Kaplan turbines. See Fig. 10 for explanations For the subset of Kaplan turbine studies, the magnitude of immediate mortality responses to turbines relative to controls varied with fish source (Table 9D), with wild sourced fish having higher risk ratios than hatchery sourced fish (Figs. 9; 15b). No detectable association was found between sampling method and average risk ratios (Table 9B). A model including fish source and sampling method was only slightly more informative than the univariate model including fish source (Table 9D). Sluiceways The influence of factors was not investigated owing to inadequate sample sizes (Fig. 9). Although small sample sizes precluded testing potential reasons for variation in fish mortality from spillways, other variables not tested in our analyses such as spillway height and design, use of energy dissipators, downstream water depth, and presence of rock outcrops at the base of the spillway outflow are known to be important for spillway related mortality [53, 54]. Taxonomic analyses There were only sufficient sample sizes to investigate impacts of hydroelectric infrastructure on outcomes of five temperate freshwater fish genera: (1) Alosa (river herring; injury, immediate and delayed mortality outcomes); (2) Anguilla (freshwater eels; delayed mortality only); (3) Lepomis (sunfish; injury only); (4) Salmo (Atlantic Salmon Salmo salar; delayed mortality only); and (5) Oncorhynchus (Pacific salmon and trout; injury, immediate and delayed mortality outcomes). Forest plots for all analyses are presented in Additional file 9. Alosa Overall, there was a similar increase in risk of injury and immediate mortality following passage through/over hydroelectric infrastructure relative to controls for river herrings (127% and 144% increase in risk over and above the risk in the control group, respectively) (Fig. 16a, b, and Table 7C). In contrast, there was no statistically significant effect of delayed mortality for this group (Fig. 16c and Table 7C). In all outcomes, either all or the majority of the data sets were from turbine studies (i.e., injury: all data sets; immediate mortality: 11 of 12; delay mortality: 7 of 9). Sample sizes were too small to evaluate the influence of moderator variables within outcome subsets for this genus. Weighted pooled risk ratios by fish genera (a–b) and interventions within Oncorhynchus fish (d, e) for responses to hydroelectric infrastructure. See Fig. 13 for explanations. General: general infrastructure associated with more than one component of a hydroelectric facility For freshwater eels, the overall risk of delayed mortality following passage through/over hydroelectric infrastructure was high relative to controls (1275% increase in risk over and above the risk in the control group; Fig. 16c and Table 7C). Two species of freshwater eels were represented, European (Anguilla anguilla) and American (Anguilla rostrata) eels, with 80% of the individual comparisons using adult eels and focusing on turbine impacts. Sample sizes were too small in this group as well to evaluate the influence of moderator variables within outcome subsets for this genus. Lepomis For sunfish, there was sufficient data available to evaluate the impact of turbines on injury. There was no statistically significant effect of turbines on sunfish injury as a whole (Fig. 16a, and Table 7C). 
Salmo
There were adequate data available to evaluate the impact of turbines on delayed mortality, with all comparisons representing a single species, the Atlantic Salmon. We found no overall significant effect of turbines on Atlantic Salmon mortality (Fig. 16c and Table 7C), with evident variation in delayed mortality responses (i.e., a large upper confidence interval).

Oncorhynchus
Within the Pacific salmon and trout group, there was a similar overall increase in risk of injury and immediate mortality following passage through/over hydroelectric infrastructure relative to controls (323% and 237% increase in risk over and above the risk in the control group, respectively; Fig. 16a and b, and Table 7C). A pooled risk ratio for delayed mortality was not obtained for this group of fish due to instability of model results. Intervention type was associated with pooled risk ratios for both injury and immediate mortality outcomes (QM = 40.66, p < 0.0001, k = 43; QM = 10,881, p < 0.0001, k = 125, respectively). Spillways and turbines were associated with higher risk ratios than water bypasses for injury (1241% and 613% increase vs. 80% decrease, respectively; Fig. 16d) and immediate mortality (260% and 261% increase vs. 225% increase, respectively; Fig. 16e). However, there was still significant heterogeneity remaining in moderated models (QE = 1869.55, p < 0.0001, k = 43; QE = 214.69, p < 0.0001, k = 125, respectively). Furthermore, although pooled risk ratios for both spillways and turbines were significant (i.e., 95% CIs did not overlap with 1) in both outcome subsets, upper confidence intervals were large for injury responses, indicating substantial variation in the magnitude of negative injury responses among individual comparisons. To further explore reasons for heterogeneity in responses, we tested the influence of species type on effect sizes within the turbine subset of studies for all outcome subsets (i.e., the intervention with the largest sample size to permit meaningful analyses). No detectable association was found between species (i.e., Rainbow Trout and Chinook Salmon) and average risk ratios for Pacific salmon and trout injury (QM = 1.63, p = 0.201, k = 33). However, species was associated with average risk ratios for immediate mortality (QM = 89.93, p < 0.0001, k = 97), with studies on Rainbow Trout associated with higher risk ratios relative to controls than either Coho or Chinook salmon (539% vs. 279% and 246% increase in risk over and above the risk in the control group, respectively; Fig. 17a).

Weighted pooled risk ratios by (a) fish species for immediate mortality of Oncorhynchus fish from turbines, and (b) turbine type for immediate mortality of Coho Salmon (O. kisutch) from field-based studies. See Fig. 13 for explanations

Within Pacific salmon and trout species subsets for immediate mortality responses to turbines, there were sufficient sample sizes to investigate the influence of the following moderators: (1) turbine type within field studies for both Coho and Chinook salmon; (2) sampling method within Kaplan turbine types for Chinook Salmon; and (3) site type for Rainbow Trout. Coho Salmon: Within the field-based subset, a detectable association was found between turbine type and average risk ratios (QM = 4.14, p = 0.042, k = 10), with Francis turbines having a much higher pooled risk ratio relative to controls than Kaplan turbines (1658% vs. 285% increase, respectively; Fig. 17b).
There was little variation among data sets with respect to other moderators, i.e., all data sets used hatchery-sourced fish, telemetry sampling methods, and juvenile fish. Chinook Salmon: Within the field-based subset, no detectable association was found between turbine type and average risk ratios (QM = 0.54, p = 0.461, k = 38). Within Kaplan turbines, no detectable association was found between sampling method (recapture vs. telemetry) and average risk ratios (QM = 0.17, p = 0.684, k = 25). Here as well, there was little variation among data sets with respect to other moderators, i.e., all field-based data sets used juvenile fish and mostly hatchery-sourced fish. Rainbow Trout: There was no detectable association between site type and average risk ratios (QM = 0.64, p = 0.425, k = 45). Otherwise, there was little variation among data sets with respect to other moderators, i.e., all data sets used hatchery-sourced fish (or source was not reported), recapture sampling methods, and juvenile fish, and 26 of 27 field-based studies evaluated Francis turbines.

Review limitations

Addressing fish productivity
Although our research question pertains to fish productivity, owing to how the studies were conducted and the data typically reported in the commercially published and grey literature, it was not feasible to evaluate the consequences of entrainment/impingement on fish productivity per se as a measure of the elaboration of fish flesh per unit area per unit time. Rather, we evaluated the risk of freshwater fish injury and mortality owing to downstream passage through common hydropower infrastructure. Productivity is a broad term often represented more practically by various components of productivity (e.g., growth, survival, individual performance, migration, reproduction), which, if negatively affected by human activities, would have a negative effect on productivity [55]. In terms of the consequences of entrainment for fish productivity in the upstream reservoir, all entrained fish no longer contribute, regardless of the outcome of their passage (i.e., survival or mortality), if no upstream passage is possible. In the case of mortality, fish are permanently removed from the whole river system and thus cannot contribute to reproduction/recruitment. To estimate the impact of entrainment on fish productivity, knowledge is required of fish mortality in the context of population vital rates. Both of these metrics are extremely difficult and costly to measure in the field and are thus rarely quantified. However, since injury and mortality would directly impact components of fish productivity, we contend that evaluating injury and mortality contributes to addressing the impacts of entrainment and/or impingement on fish productivity.

Poor data reporting
In total, 166 data sets from 96 studies were excluded from quantitative synthesis, largely (53% of these data sets) for two main reasons: (1) quantitative outcome data (e.g., number of fish injured or killed) were not reported for the intervention and/or comparator group(s); or (2) the total number of fish released was either not reported at all for the intervention and/or comparator group(s), or only an approximate number was reported. In either case, an effect size could not be calculated, excluding the study from the meta-analysis. We did not attempt to contact authors for the missing data due to time constraints.
Data availability through online data depositories and open-source databases has improved dramatically over the years. Reporting fish outcomes as well as the total number of fish released for both treatment and control groups in publications (or through Additional files) would benefit future (systematic) reviews.

Potential biases
We attempted to limit any potential biases throughout the systematic review process. The collaborative systematic review team encompassed a diversity of stakeholders, minimizing familiarity bias. There was no apparent evidence of publication bias for fish injury studies (Additional file 7: Figure S2), but there was possible evidence of publication bias towards studies showing increased chances of fish mortality relative to controls (Additional file 7: Figures S8, S9). Interestingly, when separating commercially published studies from grey literature studies (i.e., reports and conference proceedings), evidence of publication bias was only present in the latter, which represented 87% of the immediate mortality data sets. A possible explanation for this observation could be that these technical reports are often commissioned by hydropower operators to quantify known injury and mortality issues at their facilities. The commercially published literature in this evidence base was typically more question-driven and exploratory in design, whereas the technical reports were largely driven by specific objectives (i.e., typically placing an empirical value on fish mortality known to occur at a given facility). This also highlights another important finding from our review: nearly 70% (i.e., 60/87 articles) of the evidence base came from grey literature sources. Again, while we made every effort to systematically search for sources of evidence, we received limited response from our calls for evidence targeting sources of grey literature through relevant mailing lists, social media, and communication with the broader stakeholder community. As such, we believe there is still relevant grey literature that could have been included had it been more broadly available from those conducting the research (i.e., consultant groups or industry rendering reports easily accessible, or at least not proprietary). Geographical and taxonomic biases were evident in the quantitative synthesis: the majority of included studies were from the United States (91%) and a large percentage (81%) evaluated salmonid responses to hydroelectric infrastructure, potentially limiting interpretation of review results for other geographic regions and taxa. These biases were previously noted by other hydropower-related reviews (e.g., [56]). To limit availability bias, extensive efforts were made to obtain all relevant materials through our resource network; however, several reports/publications (n = 32) were unobtainable. A number of unpublished reports, older (e.g., pre-1950s) preliminary/progress reports, and other unofficial documents were cited in the literature but were unavailable because they were never published. This review was limited to English-language sources, presenting a language bias. Other countries such as France, Germany, and China have hydropower developments and research on the impacts on temperate fish species, but the relevant hydropower literature (32 reports/articles) was excluded at full-text screening due to language.
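The publication-bias assessment mentioned above is typically done with funnel plots and asymmetry tests (an Egger-type test is cited in the reference list). Purely as an illustration of that workflow in R with metafor, and assuming simulated placeholder effect sizes rather than the review's data:

```r
library(metafor)

# Simulated log risk ratios (yi) and sampling variances (vi); placeholder values only.
set.seed(1)
dat <- data.frame(yi = rnorm(20, mean = 1, sd = 0.5),
                  vi = runif(20, min = 0.02, max = 0.3))

res <- rma(yi, vi, data = dat)   # random-effects model

funnel(res)                      # funnel plot: effect sizes vs. standard errors
regtest(res, model = "lm")       # Egger-type regression test for funnel asymmetry
trimfill(res)                    # trim-and-fill check for potentially "missing" studies
```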
Reasons for heterogeneity
Several moderators were tested in our quantitative synthesis; however, considerable residual heterogeneity remained in the observed effects of hydropower infrastructure on fish injury and immediate mortality. In some cases, metadata were extracted from studies within the evidence base but were not included in quantitative analyses owing to small sample sizes. Four main factors were noted as contributing to heterogeneity in fish injury and mortality. First, a top priority of hydropower operators is to identify trade-offs in facility operations and fish passage, attempting to balance fish passage requirements while maximizing power generation. Variation in geomorphology and hydrology among hydropower sites results in site-specific conditions; thus, site-specific studies across a variety of operating conditions are required to determine the most favourable conditions for fish passage while maintaining power generation output. The facility or intervention characteristics (e.g., dam height, water levels, turbine model, etc.) are a major factor in the resulting operating conditions of a hydropower facility at a given time. Some site characteristics would have direct implications for fish injury and mortality. For example, spillways with a freefall drop exceeding 50 m are known to result in higher injury and/or mortality compared to spillways with a shorter drop [53]. The present quantitative synthesis encompassed 42 field sites, resulting in considerable variability in site characteristics and operating conditions of the facilities or interventions (e.g., turbine wicket gate opening, spillway gate opening), which would have a measurable impact on injury and mortality. Owing to this variability, we were unable to achieve sufficient sample sizes to effectively include site-specific characteristics or operating conditions as effect modifiers. Second, environmental factors affect migration/emigration and physiological processes and could thereby have a measurable impact on fish injury and mortality. Water temperature affects locomotor activity and fatigue time [57,58,59], and thus may affect a fish's ability to avoid or navigate through infrastructure. Since fish are unable to regulate their body temperature, water temperature also affects many important physiological processes that are implicated in post-passage injury and/or mortality, such as body condition and wound healing [60, 61]. For example, within the salmonid family there is variability in the emigration time of juveniles, even within the same species [62], such that there are numerous emigration events throughout the year. Juveniles emigrating during the summer may be more susceptible to injury and mortality owing to higher water temperatures at the time of emigration relative to emigrants in other seasons. Owing to the variability in environmental conditions during passage, it is unlikely that we would have been able to achieve sufficient sample sizes to effectively include environmental factors as effect modifiers. Third, behaviour is recognized as paramount to fish passage [56, 63] and would have a measurable effect on injury and/or mortality. Throughout the screening process, many studies that had a fish behaviour component were excluded from the evidence base because there was no relevant injury and/or mortality outcome.
The majority of these excluded studies examined various mechanisms to attract fish towards or deter fish from entering certain infrastructure (e.g., lights to attract fish to bypasses, strobe lights to deter them from entering turbine intakes) (see [25, 64]) or focused on fish passage efficiency and route choice under various environmental conditions (e.g., flow regimes). Behaviour is difficult to incorporate into conservation science because there is high variation in behavioural data, and behaviour studies have an individual-level focus that often proves difficult to scale up to the population level [65, 66]. For example, fish have species-specific swimming behaviours that influence positional approaches to infrastructure (e.g., rheotaxis in juvenile salmonids; [67]), which may lead to increased entrainment risk. Behavioural commonalities do exist within and among species, so some behaviour-related heterogeneity was likely accounted for when species was included in our analyses. However, owing to the small sample size of behavioural studies with injury and/or mortality outcomes within the evidence base, we were unable to explicitly include any specific behavioural factors as a moderator in our analyses. Finally, fish passage issues are complex, so the studies in the evidence base employed a wide variety of assessment methodologies depending on research objectives, site characteristics, and target species. Combining data from studies that use different methodologies to assess fish injury and mortality can be problematic for meta-analyses because the data provided are not necessarily comparable among studies. Our evidence base encompasses several decades of fish passage research (1950 to 2016; Fig. 3), and vast improvements in fish tracking technology, experimental design, and statistical analyses have occurred over that timeframe. Early fish passage research employed rudimentary methodologies and lacked standardization compared to modern research, which could lead to measurable differences between older and more recent studies in the evidence base. Some tracking/marking techniques are more invasive than others, which could ultimately influence fish behaviour during downstream passage events. For example, surgically implanting an acoustic telemetry transmitter typically involves sedation, and the implanted transmitter can produce an immune response, both of which may impair fish behaviour [68]. Conversely, PIT tags typically do not require sedation and are minimally invasive to implant in the fish. Furthermore, assessing mortality among the different fish identification techniques (physical marking, PIT tags, telemetry) requires varying levels of extrapolation. Injury and mortality can be directly observed and enumerated in studies that pass fish through a turbine and recapture them at the downstream turbine outlet. Releasing fish implanted with a transmitter relies on subsequent detection of the animal to determine the outcome, and the fate of the fish is inferred from these detections rather than directly observed. Several factors can affect fish detection, such as noisy environments (e.g., turbine generation, spilling water), technical issues related to different tracking infrastructure (e.g., multipath, signal collisions), and water conditions (e.g., turbidity [69]).
A sensitivity analysis revealed that studies inferring fish mortality from detection histories (or survival estimates) produced lower risk ratio estimates than studies that directly measured mortality (e.g., release upstream and recapture downstream with a net), suggesting disparities in mortality estimates between these two methods.

Review conclusions
Entrainment and impingement can occur during downstream passage at hydropower operations, causing fish injury and mortality, and these hydropower-related fish losses have the potential to contribute to decreased fish productivity [70, 71]. Even if fish survive an entrainment event, they are moved from one reach to another, influencing reach-specific productivity. Hydropower facilities differ dramatically in their infrastructure configuration and operations, and each type of infrastructure presents different risks regarding fish injury and/or mortality [72]. Quantifying injury and mortality across hydropower projects and intervention types is fundamental for characterizing and either mitigating or offsetting the impact of hydropower operations on fish productivity. Here, we present what we believe to be the first comprehensive review that systematically evaluated the quality and quantity of the existing evidence base on the consequences of entrainment and impingement associated with hydroelectric dams for fish. We were unable to specifically address productivity per se in the present systematic review; rather, our focus was on injury and mortality from entrainment/impingement during downstream passage (see "Review limitations" section above). With an exhaustive search effort, we assembled an extensive database encompassing various intervention types (i.e., infrastructure types), locations (lab, field studies), species, life stages (e.g., juveniles, adults), and sources (e.g., hatchery, wild). We identified 264 relevant studies (from 87 articles), 222 of which were eligible for quantitative analysis.

Implications for policy/management
The synthesis of available evidence suggests that hydropower infrastructure entrainment increased the overall risk of freshwater fish injury and immediate mortality in temperate regions, and that injury and immediate mortality risk varied among intervention types. The overall impact of hydroelectric infrastructure on delayed mortality was not evaluated due to model instability, likely because sampling variances of individual effect sizes were extremely large. Owing to variation among the study designs encompassed within the overall analysis, there may be high uncertainty associated with the injury and immediate mortality risk estimates revealed in our analysis. Despite the wide range of studies included in our analyses contributing to high variability, and our use of two different effect size metrics, the conclusions are consistent: downstream passage via hydropower infrastructure results in a greater risk of injury and mortality to fish than controls (i.e., non-intervention downstream releases). Bypasses were found to be the safest fish passage intervention, resulting in decreased fish injury and little difference in risk of immediate mortality relative to controls, a somewhat expected result given that bypasses are specifically designed as a safe alternative to spillway and turbine passage [13, 73].
In agreement with findings highlighted in earlier non-systematic reviews (i.e., [33, 63, 74, 75]), spillway and turbine passage resulted in the highest injury and immediate mortality risk on average, and Francis turbines had a higher mortality risk relative to controls than Kaplan turbines ([56, 76, 77] but see Eicher Associates [78]). General infrastructure posed an increased risk of injury; however, this category encompassed testing on a diversity of hydropower infrastructure types (turbines, spillways, outlets) and thus is of limited use in addressing our secondary research question. Lab-based turbine studies resulted in a higher risk of injury than field-based studies, suggesting that field trials may be underestimating fish injury from turbines. Taxonomic analyses for three economically important fish genera revealed that hydropower infrastructure increased injury and immediate mortality risk relative to controls for Alosa (river herring) and Pacific salmonids (salmon and trout), and delayed mortality risk for Anguilla (freshwater eels). Owing to small sample sizes within the evidence base, we were unable to include resident (and other underrepresented) species in our taxonomic analyses. However, we stress that the absence of these species from our evidence base and analysis does not suggest that injury and mortality risk is lower for these species, just that there is insufficient information to quantify such impacts. Furthermore, the lack of a statistically significant overall effect on injury or mortality from hydropower infrastructure for the two other genera that had 'sufficient' sample sizes for inclusion in our analyses (i.e., Lepomis and Salmo) does not imply they are not affected by hydropower infrastructure, only that we were not able to detect an effect (i.e., there could be an effect but we did not detect it, possibly due to low power). Our analyses also demonstrate that the relative magnitude of hydropower infrastructure impacts on fish appears to be influenced by study validity and the type of mortality metric used in studies. Higher risk ratios were estimated for analyses based on studies with lower susceptibility to bias and on those that measured actual fish mortality, rather than mortality inferred from survival estimates or detection histories. Overall, placing an empirical value (whether relative or absolute) on the overall injury and mortality risk to fish is valuable to hydropower regulators, with the caveat that our analyses encompass a broad range of hydrological variables (e.g., flow), operating conditions, and biological variables.

Implications for research
The evidence base of this review encompasses a small fraction of temperate freshwater fish and is particularly biased towards economically valuable species such as salmonids in the Pacific Northwest of North America. As previously noted by others [56, 79], research on the impacts of hydropower infrastructure on resident fish and/or fish with no perceived economic value is underrepresented in the commercially published and grey literature. Several imperiled fishes also occupy systems with hydropower development, although they have rarely been studied in the context of entrainment [80]. Therefore, studies that focus on systems outside of North America, on non-salmonid or non-sportfish target species, and on population-level consequences of fish entrainment/impingement are needed to address knowledge gaps.
Aside from immediate (direct) mortality outcomes, which are more easily defined and measured using recapture-release methods [81], no clear guidelines or standardized metrics for assessing injuries and delayed mortality outcomes (e.g., temporal and/or spatial measurement) were overtly evident in our literature searches and screening. Consistency in monitoring and measuring fish injury and immediate mortality has been reached to some degree, but monitoring fish post-passage for delayed injury and mortality is lacking in general [74, 79]. The "gold standard" of examining the impacts of hydropower on fish should presumably include delayed mortality, which we were unable to assess in the present review. Drawing from issues we encountered during quantitative synthesis and commonalities among studies in our evidence base, some clear recommendations for standards pertaining to delayed mortality outcomes and general data analysis include: (1) assessing delayed mortality between 24 to 48 h; (2) using a paired control group (downstream release) for each treatment group (e.g., instead of a common control comparator among several treatment release groups); (3) using quantitative outcomes (instead of qualitative descriptors e.g., of the 50 fish released, most survived); (4) to the extent possible, use similar sampling methods and sampling distances between release and recapture (or survey) among treatment and control groups. Results of literature searches, a list of articles excluded at full-text, as well as the systematic review and critical appraisal databases, data preparation, and analysis details are included as additional files with this report. A ROSES form [82] for this systematic review report is included as Additional file 10. International Commission on Large Dams. Register of dams—general synthesis; 2015. http://www.icold-cigb.net/GB/World_register/general_synthesis.asp. Accessed 24 Nov 2016. Bunt CM, Castro-Santos T, Haro R. Performance of fish passage structures at upstream barriers to migration. Riv Res Appl. 2012;28:457–78. Calles O, Karlsson S, Hebrand M, Comoglio C. Evaluating technical improvements for downstream migrating diadromous fish at a hydroelectric plant. Ecol Eng. 2012;48:30–7. Buysse D, Mouton AM, Baeyens R, Coeck J. Evaluation of downstream migration mitigation actions for eel at an Archimedes screw pump pumping station. Fish Manag Ecol. 2015;22:286–94. Čada G. The development of advanced hydroelectric turbines to improve fish passage survival. Fisheries. 2001;26:14–23. Larinier M, Travade F. Downstream migration: problems and facilities. Bull Fr Pêche Piscic. 2002;364(suppl):181–207. Čada GF. Shaken, not stirred: the recipe for a fish friendly turbine. Oak Ridge National Laboratory; 1997. Contract No. DE-AC05-96OR22464. https://www.osti.gov/servlets/purl/510550. Accessed 24 Nov 2016. [EPRI] Electric Power Research Institute. Fish passage through turbines: application of conventional hydropower data to hydrokinetic technologies. Final Report. 2011. Report No. 1024638. Čada G, Loar J, Garrison L, Fisher R Jr, Neitzel D. Efforts to reduce mortality to hydroelectric turbine-passed fish: locating and quantifying damaging shear stress. Environ Manage. 2006;37:898–906. Čada G, Ahlgrimm J, Bahleda M, Bigford T, Stavrakas SD, Hall D, et al. Potential impacts of hydrokinetic and wave energy conversion technologies on aquatic environments. Fisheries. 2007;32:174–81. Brown RS, Colotelo AH, Pflugrath BC, Boys CA, Baumgartner LJ, Deng D, et al. 
Understanding barotrauma in fish passing hydro structures: a global strategy for sustainable development of water resources. Fisheries. 2014;39:108–22. Barnthouse LW. Impacts of entrainment and impingement on fish populations: a review of the scientific evidence. Environ Sci Policy. 2013;31:149–56. Katopodis C, Williams JG. The development of fish passage research in a historical context. Ecol Eng. 2012;48:8–18. Jansen HM, Winter HV, Bruijs MCM, Polman HJG. Just go with the flow? Route selection and mortality during downstream migration of silver eels in relation to river discharge. ICES J Mar Sci. 2007;64:1437–43. Carr JW, Whoriskey FG. Migration of silver American eels past a hydroelectric dam and through a coastal zone. Fish Manag Ecol. 2008;15:393–400. Travade F, Larinier M, Subra S, Gomes P, De-Oliveira E. Behaviour and passage of European silver eels (Anguilla anguilla) at a small hydropower plant during their downstream migration. Knowl Manag Aquat Ec. 2010;398(01):1–19. Besson ML, Trancart T, Acou A, Charrier F, Mazel V, Legault A, et al. Disrupted downstream migration behaviour of European silver eels (Anguilla anguilla, L.) in an obstructed river. Environ Biol Fish. 2016;99:779–91. Eyler SM, Welsh SA, Smith DR, Rockey MM. Downstream passage and impact of turbine shut-downs on survival of silver American eels at five hydroelectric dams on the Shenandoah River. Tran Am Fish Soc. 2016;145:964–76. Haro A, Watten B, Noreika J. Passage of downstream migrant American eels through an airlift-assisted deep bypass. Ecol Eng. 2016;91:545–52. Acolas ML, Rochard E, Le Pichon C, Rouleau E. Downstream migration patterns of one-year-old hatchery-reared European sturgeon (Acipenser sturio). J Exp Mar Biol Ecol. 2012;430–431:68–77. McDougall CA, Blanchfield PJ, Peake SJ, Anderson WG. Movement patterns and size-class influence entrainment susceptibility of lake sturgeon in a small hydroelectric reservoir. Tran Am Fish Soc. 2013;142:1508–21. McDougall CA, Anderson WG, Peake SJ. Downstream passage of lake sturgeon through a hydroelectric generating station: route determination, survival, and fine-scale movements. N Am J Fish Manage. 2014;34:546–58. Johnson GE, Dauble DD. Surface flow outlets to protect juvenile salmonids passing through hydropower dams. Rev Fish Sci. 2006;14:213–44. Adams NS, Plumb JM, Perry RW, Rondorf DW. Performance of a surface bypass structure to enhance juvenile steelhead passage and survival at Lower Granite Dam, Washington. N Am J Fish Manage. 2014;34:576–94. Popper AN, Carlson TJ. Application of sound and other stimuli to control fish behavior. Tran Am Fish Soc. 1998;127:673–707. Ostrand KG, Simpson WG, Suski CD, Bryson AJ. Behavioural and physiological response of White Sturgeon to an electrical Sea Lion barrier system. Mar Coast Fish. 2009;1:363–77. Zielinksi DP, Sorensen PW. Field test of a bubble curtain deterrent system for common carp. Fish Manag Ecol. 2015;22:181–4. Marohn L, Prigge E, Reinhold H. Escapement success of silver eels from a German river system is low compared to management-based estimates. Freshw Biol. 2014;59:64–72. Pullin AS, Stewart GB. Guidelines for systematic review in conservation and environmental management. Conserv Biol. 2006;20:1647–56. Collaboration for Environmental Evidence. In: Pullin AS, Frampton GK, Livoreil B, Petrokofsky G, editors. Guidelines and standards for evidence syn-thesis in environmental management. Version 5; 2018. http://www.environmentalevidence.org/information-for-authors. Accessed 31 May 2018. 
Rytwinski T, Algera DA, Taylor JJ, Smokorowski KE, Bennett JR, Harrison PM, et al. What are the consequences of fish entrainment and impingement associated with hydroelectric dams on fish productivity? A systematic review protocol. Environ Evid. 2017;6:8. Robbins TW, Mathur D. The muddy run pumped storage project: a case history. Tran Am Fish Soc. 1976;105:165–72. [OTA] Office of Technology Assessment. Fish passage technologies: protection at hydropower facilities. U.S. Government Printing Office, Washington, D.C.; 1995. Report No. OTA-ENV-641. [ASCE] American Society for Civil Engineers. Guidelines for design of intakes for hydroelectric plants. Committee on Hydropower Intakes; 1995. https://ascelibrary.org/doi/pdf/10.1061/9780784400739.bm. Accessed 12 July 2018. Čada GF, Coutant CC, Whitney RR. Development of biological criteria for the design of advanced hydropower turbines; 1997. Report No. DOE/ID-10578. Prepared for United States Department of Energy. Bilotta GS, Milner AM, Boyd IL. Quality assessment tools for evidence from environmental science. Environ Evid. 2014;3:14. Rohatgi A. WebPlotDigitalizer: HTML5 based online tool to extract numerical data from plot images. Version 4.1; 2015. https://automeris.io/WebPlotDigitizer/. Accessed 10 Jan 2018. Lipsey MW, Wilson DB. Practical meta-analysis. Thousand Oaks: SAGE Publications; 2001. Deeks JJ, Higgins JPT. Statistical Algorithms in Review Manager 5; 2010. http://tech.cochrane.org/revman. Accessed 6 Mar 2018. Deeks JJ, Higgins JPT, Altman DG. Analysing data and undertaking meta-analyses. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0; 2011. http://www.handbook.cochrane.org. Accessed 6 Mar 2018. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Effect sizes based on binary data (2x2 Tables). In: Borenstein M, Hedges LV, Higgins JPT, Rothstein HR, editors. Introduction to meta-analysis. Hoboken: Wiley; 2009. p. 33–9. Noordzij M, van Diepen M, Caskey FC, Jager KJ. Relative risk versus absolute risk: one cannot be interpreted without the other. Nephrol Dial Transplant. 2017;32:ii13–8. Hedges LV, Olkin I. Statistical methods for meta-analysis. Cambridge: Academic Press; 1985. Light RJ, Pillemer DB. Summing up: the science of reviewing research. Cambridge: Harvard University Press; 1984. Sterne JA, Egger M, Smith GD. Systematic reviews in health care: investigating and dealing with publication and other biases in meta-analysis. Br Med J. 2001;323:101–5. Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. Br Med J. 1997;315:629–34. Vittinghoff E, Glidden DV, Shiboski SC, McCulloch CE. Regression methods in biostatistics. New York: Springer; 2005. R Development Core Team. R: a language and environment for statistical computing. R foundation for Statistical Computing, Vienna, Austria; 2017. http://www.R-project.org. Accessed 25 Jan 2018. Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw. 2010;36(3):1–48. Haddaway NR, Macura B, Whaley P, Pullin AS. 2017. ROSES flow diagram for systematic reviews. Version 1.0. Figshare. https://doi.org/10.6084/m9.figshare.5897389. Deng Z, Guensch GR, McKinstry CA, Mueller RP, Dauble DD, Richmond MC. Evaluation of fish-injury mechanisms during exposure to turbulent shear flow. Can J Fish Aquat Sci. 2005;62:1513–22. Boys CA, Robinson W, Miller B, Pflugrath B, Baumgartner LJ, Navarro A, et al. How low can they go when going with the flow? 
Tolerance of egg and larval fishes to rapid decompression. Biol Open. 2016;5:786–93. Bell MC, Copp HD, Delacy AC. A compendium on the survival of fish passing through spillways and conduits. U.S. Army Corps of Engineers; 1972. Contract No. DACW-57-67-C-0105. Ruggles CP, Murray DG. A review of fish response to spillways. Ottawa: Canada Department of Fisheries and Oceans; 1983. p. 153. Bradford MJ, Randall RG, Smokorowski KE, Keatley BE, Clarke KD. A framework for assessing fisheries productivity for the Fisheries Protection Program. DFO Can Sci Advis Sec Res Doc. 2014;2013/067. v + 25 p. Pracheil BM, DeRolph CR, Schramm MP, Bevelhimer MS. A fish-eye view of riverine hydropower systems: the current understanding of the biological response to turbine passage. Rev Fish Biol Fish. 2016;26:153–67. Brett JR. Swimming performance of sockeye salmon (Oncorhynchus nerka) in relation to fatigue time and temperature. J Fish Res Bd Can. 1967;24:1731–41. Bernatchez L, Dodson JJ. Influence of temperature and current speed on the swim-ming capacity of lake whitefish (Coregonus clueaformis) and cisco (C. artedii). Can J Fish Aquat Sci. 1985;42:1522–9. Claireaux G, Couturier C, Groison A. Effect of temperature on maximum swim-ming speed and cost of transport in juvenile European sea bass (Dicentrarchus labrax). J Exp Biol. 2006;209:3420–8. Anderson CD, Roberts RJ. A comparison of the effects of temperature on wound healing in a tropical and a temperate teleost. J Fish Biol. 1975;7:173–82. Clarke A, Johnston NM. Scaling of metabolic rate with body mass and temperature in teleost fish. J Anim Ecol. 1999;68:893–905. Bennett TR, Wissmar RC, Roni P. Fall and spring emigration of juvenile coho salmon from East Twin River, Washington. Northwest Sci. 2011;85:562–70. Coutant CC, Whitney RR. Fish behavior in relation to passage through hydropower turbines: a review. Tran Am Fish Soc. 2000;129:351–80. Enders EC. Behavioral responses of migrating fish to environmental changes: implications to downstream fish passage. In: SE 2012 Vienna: 9th international symposium on ecohydraulics; 2012. Caro T. The behaviour—conservation interface. Trends Ecol Evol. 1999;14:366–9. Cooke SJ, Blumstein DT, Buchholz R, Caro T, Fernández-Juricic E, Franklin CE, et al. Physiology, behavior, and conservation. Phys Biochem Zool. 2014;87:1–14. Enders EC, Gessel MH, Williams JG. Development of successful fish passage structures for downstream migrants requires knowledge of their behavioural response to accelerating flow. Can J Fish Aquat Sci. 2009;66:2109–17. Semple SL, Mulder IM, Rodriguez-Ramos T, Power M, Dixon B. Long-term implantation of acoustic transmitters induces chronic inflammatory cytokine expression in adult rainbow trout (Oncorhynchus mykiss). Vet Immunol Immunopathol. 2018;205:1–9. Kessel ST, Cooke SJ, Heupel MR, Hussey NE, Simpfendorfer CA, Vagle S, et al. A review of detection range testing in aquatic passive acoustic telemetry studies. Rev Fish Biol Fisheries. 2014;24:199–218. Rosenberg DM, Berkes F, Bodaly RA, Hecky RE, Kelly CA, Rudd JWM. Large-scale impacts of hydroelectric development. Environ Rev. 1997;5:27–54. Hall CJ, Jordaan A, Frisk MG. Centuries of anadromous forage fish loss: consequences for ecosystem connectivity and productivity. Bioscience. 2012;62:723–31. Muir WD, Smith SG, Williams JG, Sandford BP. Survival of juvenile salmonids passing through bypass systems, turbines, and spillways with and without flow deflectors at Snake River Dams. N Am J Fish Manage. 2001;21(1):135–46. Mighetto L, Ebel WJ. 
Saving the salmon: a history of the U.S. Army Corps of Engineers' efforts to protect anadromous fish on the Columbia and Snake Rivers. Washington: U.S. Army Corps of Engineers, Portland and Walla Walla Districts; 1994. Federal Energy Regulatory Commission. Preliminary assessment of fish entrainment at hydropower projects—a report on studies and protective measures; 1995. Volume 1. Report No. DPR-10. Schilt CR. Developing fish passage and protection at hydropower dams. Appl Anim Behav Sci. 2007;104:295–325. Electric Power Research Institute. Fish entrainment and turbine mortality review and guidelines. Final Report; 1992. Report No. TR-101231. Larinier M. Environmental issues, dams and fish migration. Dams, fish and fisheries: opportunities, challenges and conflict resolution. Quebec City: Food and Agriculture Organization of the United Nations; 2001. p. 45–90. Eicher Associates. Turbine-related fish mortality: review and evaluation of studies. 1987; Report No. EPRI AP-5480. Prepared for Electric Power Research Institute. Roscoe DW, Hinch SG. Effectiveness monitoring of fish passage facilities: historical trends, geographic patterns and future directions. Fish Fish. 2010;11:12–33. Limburg KE, Waldman JR. Dramatic declines in North Atlantic diadromous fishes. Bioscience. 2009;59:955–65. Burnham KP, Anderson DR, White GC, Brownie C, Pollock KH. Design and analysis methods for fish survival experiments based on release-recapture. Bethesda: American Fisheries Society; 1987. Haddaway N, Macura B, Whaley P, Pullin A. ROSES for systematic review reports. Version 1.0. Figshare. 2017. https://doi.org/10.6084/m9.figshare.5897272. The authors would like to thank Daniel Struthers for help with website searches and pdf retrieval and William Twardek for help with meta-data extraction. We also thank several collaborators who provided valuable insights and comments including: DFO staff including Brent Valere, David Gibson, and Richard Kavanagh. We also thank the reviewers for their constructive comments. The study was primarily supported by Fisheries and Oceans Canada. Additional support is provided by the Natural Science and Engineering Research Council of Canada, The Canada Research Chairs Program, and Carleton University. Dirk A. Algera and Trina Rytwinski—Shared first authorship Fish Ecology and Conservation Physiology Laboratory, Department of Biology, Carleton University, 1125 Colonel By Drive, Ottawa, ON, Canada Dirk A. Algera, Trina Rytwinski, Jessica J. Taylor, Philip M. Harrison & Steven J. Cooke Canadian Centre for Evidence-Based Conservation, Institute of Environmental and Interdisciplinary Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, Canada Trina Rytwinski, Jessica J. Taylor, Joseph R. Bennett & Steven J. Cooke Department of Biology and Institute of Environmental and Interdisciplinary Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, Canada Great Lakes Laboratory for Fisheries and Aquatic Science, Fisheries and Oceans Canada, 1219 Queen Street East, Sault Ste. Marie, ON, Canada Karen E. Smokorowski Department of Biology, University of Waterloo, 200 University Ave., Waterloo, ON, Canada Philip M. Harrison & Michael Power Northwest Atlantic Fisheries Centre, Fisheries and Oceans Canada, 80 East White Hills, St. John's, NF, Canada Keith D. Clarke Freshwater Institute, Fisheries and Oceans Canada, 501 University Crescent, Winnipeg, MB, Canada Eva C. Enders Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN, USA Mark S. Bevelhimer Dirk A. 
Algera Trina Rytwinski Jessica J. Taylor Joseph R. Bennett Philip M. Harrison This review is based on a draft written by DAA and TR. TR performed searches. DAA and TR screened identified records, and DAA, TR, and WT extracted data. TR performed all quantitative analyses. All authors assisted in editing and revising the draft. All authors read and approved the final manuscript. Correspondence to Trina Rytwinski. Ethical approval and consent to participate Search strategy and results. A description of our search strategy and the results of the literature searches. For each source, we provide full details on the search date(s), search strings used, search settings/restrictions, and subscriptions (if applicable), and the number of returns. List of articles excluded on the basis of full-text assessment and reasons for exclusion. Separate lists of articles excluded on the basis of full-text assessment, articles that were unobtainable, and relevant reviews. Data-extraction sheet. Contains the coding (extracted data) for all articles/studies included in the narrative synthesis. Includes a description of the coding form, the actual coding of all articles/studies, a codes sheet, and a list of supplementary articles. Data preparation for quantitative synthesis. A description of data preparation for quantitative synthesis in relation to reducing multiple effect sizes estimates from the same study and handling of multiple group comparisons. Quantitative synthesis database. This file provides the full narrative synthesis database, a database of all data sets that were excluded from quantitative synthesis, and individual databases for all global and intervention type analyses (i.e., for fish injury, immediate and delayed mortality). Study validity assessments. Description of the study validity assessment tool and results of assessments for each article/study included in the narrative synthesis. Global meta-analyses and publication bias. All forest (i.e., summary plot of all effect size estimates) and funnel (i.e., visual assessment for publication bias using a scatter plot of the effects sizes versus a measure of their precision) plots from global and sensitivity analyses. Tests of independence of factors. Results of contingency analyses for independence of fish injury and immediate mortality moderators. Forest plots for taxonomic analyses. Forest plots from taxonomic analyses for the genera: Alosa, Anguilla, Lepomis, Salmo, and Oncorhynchus. Additional file 10. ROSES Systematic review checklist. Algera, D.A., Rytwinski, T., Taylor, J.J. et al. What are the relative risks of mortality and injury for fish during downstream passage at hydroelectric dams in temperate regions? A systematic review. Environ Evid 9, 3 (2020). https://doi.org/10.1186/s13750-020-0184-0 Accepted: 03 January 2020 Hydropower infrastructure Injury risk Mortality risk Temperate fish
Dual activity of Serratia marcescens Pt-3 in phosphate-solubilizing and production of antifungal volatiles

Andong Gong1, Gaozhan Wang1, Yake Sun1, Mengge Song1, Cheelo Dimuna1, Zhen Gao1, Hualing Wang2 & Peng Yang1

Soil fertility decline and pathogen infection are severe issues for crop production all over the world. Microbes, as inherent factors in soil, are effective in alleviating fertility decline, promoting plant growth and controlling plant pathogens, among other functions. Thus, screening microbes with fertility-improving and pathogen-controlling properties is of great importance. Bacterial strain Pt-3, isolated from the tea rhizosphere, showed multiple functions: solubilizing insoluble phosphate, promoting plant growth, producing abundant volatile organic compounds (VOCs) and inhibiting the growth of important fungal pathogens in vitro. According to 16S rRNA phylogenetic and biochemical analyses, Pt-3 was identified as Serratia marcescens. The solubilizing zones of Pt-3 on media containing lecithin and Ca3(PO4)2 were 2.1 cm and 1.8 cm, respectively. In liquid medium and soil, the concentration of soluble phosphorus reached 343.9 mg L⁻¹ and 3.98 mg kg⁻¹, respectively, and Pt-3 significantly promoted the growth of maize seedlings. Moreover, Pt-3 produced abundant volatiles and greatly inhibited the growth of seven important phytopathogens, with inhibition rates ranging from 75.51 to 100%. Solid-phase micro-extraction coupled with gas chromatography tandem mass spectrometry proved that the antifungal volatile was dimethyl disulfide. Dimethyl disulfide inhibited the germination of Aspergillus flavus and severely destroyed its cell structures, as observed under scanning electron microscopy. S. marcescens Pt-3, with these multiple functions, will provide a novel agent for the production of bioactive fertilizer with P-solubilizing and fungal pathogen control activity.

China, as the most populous country, is a giant producer and consumer of crops in the world. Improving soil fertility and increasing crop yield are necessary for local people. Phosphorus (P), ranked as the second most important macronutrient, can promote plant growth and facilitate the absorption of N, K, Mg and other nutrients by plants [1]. However, available P is scarce due to its poor solubility in soil [2, 3]. More than 74% of lands are deficient in P [4]. Hence, the P supplement to plants mostly relies on the application of chemical P fertilizer. From 1949 to 1992, it was estimated that the accumulative amount of P fertilizer applied to soils reached 3.4 × 10⁷ tons in China, of which about 2.6 × 10⁷ tons was fixed by metal ions [5]. Chemical fertilizers further aggravate the mineralization pollution of soils, such as acidification, hardening and fertility reduction [6,7,8]. Hence, the most efficient and environmentally friendly method to solve this soil problem is to solubilize phosphorite and increase the content of available P. Microorganisms with P-solubilizing activity are considered major agents in alleviating soil mineralization problems [7, 9]. They are capable of turning insoluble P minerals into soluble P and increasing the P content in soil [10]. To date, several kinds of microorganisms have been proved efficient in promoting P solubilization, such as Aspergillus niger [11], Pseudomonas sp. [12], Actinomycetes sp., Bacillus sp. [13], Serratia sp. [7] and Burkholderia pyrrocinia [14]. Among these microbes, Serratia species, as gram-negative bacteria, are widely distributed in soil [15].
They are well known for the degradation of chitin by releasing chitinase [16], as well as for activity in controlling soilborne pathogens [17]. Additionally, Serratia marcescens CTM50650, NBRI1213 and GPS-5 showed active P-solubilizing activity in medium and liquid suspension [7, 18, 19]. However, these strains only showed a single activity, which may limit their broad application. In the current study, we isolated the bacterium Serratia marcescens Pt-3 with dual functions of P solubilization and antifungal activity. The objectives of our study were to 1) evaluate the P-solubilizing activity of Pt-3 in media, liquid solution and soil; 2) determine the antagonistic activity against different pathogens; and 3) identify the predominant volatile antifungal compounds from Pt-3 and elucidate the inhibitory mechanism.

Screening of bacteria with phosphate solubilizing activity
To screen microbes with P-solubilizing activity, 893 bacteria were isolated from the tea rhizosphere by the serial dilution method. Among these bacteria, 9 strains could solubilize inorganic phosphate (Ca3(PO4)2) and 5 strains could solubilize organic phosphate (lecithin) in solid medium. Among them, strain Pt-3 showed marked activity in both the organic and inorganic phosphate media. A clear and wide halo zone formed around the colony of Pt-3 in each medium (Fig. 1). The diameter of the halo zone in the organic P medium was 2.1 cm, and the P solubilization index (PSI) was 1.4. In the inorganic medium, the halo zone diameter was 1.8 cm and the PSI was 3.6.

Solubilizing activity of Pt-3 in organic and inorganic phosphate medium. CK, phosphate medium without bacterial inoculation; Pt-3 & Organic P, bacteria Pt-3 inoculated in the organic phosphate (soybean lecithin) medium; Pt-3 & Inorganic P, Pt-3 inoculated in the inorganic phosphate (Ca3(PO4)2) medium

Phosphate solubilizing activity of Pt-3 in liquid and soil conditions
Pt-3 could also promote the solubilization of P in liquid medium. In the treatment without Pt-3, little PO₄³⁻ was detected in the suspension, and the concentration showed no change over 20 days. When bacteria Pt-3 were added to the solution, insoluble Ca3(PO4)2 was transformed into soluble PO₄³⁻ and the concentration increased dramatically over 20 days. In the first 6 days, the concentration of PO₄³⁻ increased quickly from 0.7 to 343 mg L⁻¹, then remained stable at around 300 mg L⁻¹ (Fig. 2).

Phosphate solubilizing activity of Pt-3 against Ca3(PO4)2 under liquid and soil conditions. a liquid medium of insoluble Ca3(PO4)2 inoculated with Pt-3 and cultured at 30 °C and 150 rpm for 18 days; b soil containing Ca3(PO4)2 inoculated with Pt-3 and placed at 30 °C for 24 days

Under soil conditions, Pt-3 also showed marked P-solubilizing activity. The concentration of soluble PO₄³⁻ increased from 2.2 to 3.9 mg kg⁻¹ and stabilized at 3.8 mg kg⁻¹ by day 18. In the control treatment, the soluble PO₄³⁻ showed less change, ranging between 2.3 and 3.2 mg kg⁻¹, and stabilized at 3.2 mg kg⁻¹ by day 18. Compared to the control, the concentration of PO₄³⁻ in the Pt-3 inoculation was equal to the control treatment at the beginning, and then became higher in the Pt-3 treatment from day 6 to day 24 (Fig. 2). These results showed that Pt-3 can solubilize insoluble Ca3(PO4)2 in both liquid and soil conditions.

Pt-3 promoting the growth of maize seedlings
Based on the high P-solubilizing efficiency of Pt-3, we conducted a 30-day maize growing experiment under atmospheric conditions.
The results clearly showed that Pt-3 could promote the growth of maize seedlings compared to the control treatment. Shoot length, leaf length and dry weight in the Pt-3 treatment were significantly higher than those in the control treatment (Table 1). Additionally, leaf width (2.55 ± 0.08 cm), root length (4.95 ± 0.15 cm) and number of leaves (5 per plant) for Pt-3 were also better than in the control treatment.

Table 1 Effect of Serratia marcescens Pt-3 on the growth of maize seedlings

Molecular and biochemical characters of strain Pt-3
A single colony of Pt-3 was picked and used for biochemical analysis with the MicroStation™ system. The results showed that strain Pt-3, a gram-negative bacterium, gave positive reactions with ten carbon sources, including glucose, mannitol, maltose and sucrose, but could not grow in the presence of lactose or phenylalanine. It also showed positive reactions under high-salt conditions (1 to 8% NaCl), at pH 5.0 and 7.0, and in the presence of different antibiotics (streptomycin, lincomycin, vancomycin, etc.) (Table 2). These results were consistent with the biochemical characters of S. marcescens in the MicroStation database.

Table 2 Biochemical and physiological analysis of strain Pt-3

The 16S rRNA sequence was amplified from the genome of Pt-3. The sequence was aligned against the GenBank database and showed high similarity to Serratia spp. such as S. plymuthica, S. liquefaciens, S. entomophila and S. marcescens. The 16S rRNA sequences of homologous strains and Pt-3 were used to construct a phylogenetic tree (Fig. 3). The phylogenetic analysis showed that Pt-3 was most closely related to S. marcescens, clustering in the same clade as S. marcescens strains (KT438729.1, KY379049.1, KY992555.1). Hence, we could deduce that strain Pt-3 is S. marcescens based on 16S rRNA and biochemical analyses.

Phylogenetic tree of Pt-3 and homologous strains based on 16S rRNA sequences

Antifungal activity of VOCs from Pt-3
Pt-3 showed broad antifungal activity against different fungi without direct contact. In face-to-face dual culture tests, the growth of seven important fungal pathogens was greatly inhibited by Pt-3. The inhibitory rate against F. graminearum was 100%, and against the other six pathogens (Magnaporthe oryzae, B. cinerea, A. flavus, A. fumigatus, Colletotrichum graminicola, A. alternata) the inhibitory rate ranged from 75.51 to 97.83% (Table 3). We can deduce that Pt-3 produces antifungal VOCs that spread quickly and inhibit the growth of co-cultured fungal strains.

Table 3 Broad spectrum antifungal activity of volatiles from strain Pt-3

To further prove the production of antifungal VOCs by Pt-3, active charcoal was added to the experiments. The diameter of A. flavus mycelia on the PDA plate was 6 cm at 5 days post-inoculation (dpi). When active charcoal alone was added, the growth of A. flavus showed no difference. When Pt-3 was added, the growth of A. flavus was greatly inhibited, with a colony diameter of 0.7 cm. However, the inhibitory activity of Pt-3 was weakened when active charcoal was added to the Pt-3 and A. flavus treatment. These results proved that Pt-3 produces volatiles that inhibit the growth of fungal strains, and that active charcoal, as an adsorbent, can adsorb some of the VOCs from Pt-3 and weaken its inhibitory activity (Fig. 4).

Inhibitory activity of volatiles from Pt-3 against A. flavus affected by active charcoal in sealed airspace. Mycelia of A. flavus cultured on PDA medium (A. flavus) challenged with bacteria Pt-3 spread on NA medium (A. flavus + Pt-3) with the presence of active charcoal (A. flavus + C + Pt-3). A. flavus on PDA challenged with active charcoal was used as control (A. flavus + C)
Identification of antifungal VOCs from Pt-3
The VOCs produced by strain Pt-3 were enriched with SPME equipment and then injected into a GC-MS/MS system for identification. Only one abundant compound was detected in the chromatogram of Pt-3 VOCs over 35 min (Fig. 5). The molecular weight of the compound was 94 Da, and it showed great similarity (> 95%) to DMDS in the NIST11.0 database. The fragment mass peaks were similar to those of DMDS under the same EI source (Fig. 6). Additionally, the retention time of the detected compound was 2.628 min, the same as that of the DMDS standard. These results proved that DMDS was the predominant VOC produced by Pt-3.

GC-MS analysis of volatiles emitted from strain Pt-3 in NA medium

Comparison of mass spectrum of Pt-3 at Rt = 2.628 min and dimethyl disulfide in NIST 17 MS spectral database

Ultra-structure analysis of fungal strain affected by Pt-3 VOCs
A. flavus conidia inoculated on the peanut surface were challenged with Pt-3 for 5 days, and the fungal cells on the peanut coat were analysed under scanning electron microscopy (SEM). In the control treatment, the conidia germinated into hyphae and formed conidiophores; abundant fresh conidia were produced on the conidiophores and spread over the peanut coats. In contrast, the A. flavus conidia in the Pt-3 treatment were severely damaged: they could not germinate into hyphae and showed severely collapsed structures (Fig. 7).

Ultra-structure analysis of A. flavus cells on peanuts affected by volatiles from Pt-3 under scanning electron microscope

Discussion
Phosphate is one of the most important nutrients for plants throughout the whole growth period, but the content of available phosphate is seriously deficient in the soils of China [4]. Moreover, 95% of phosphate fertilizer applied to soil in a season is fixed by metal ions, resulting in soil compaction and erosion. Hence, the urgent tasks for soil protection are to reduce the use of chemical fertilizer and improve the content of available phosphate. It is reported that microbes in soil play important roles in improving soil fertility [1]. Serratia marcescens, a traditional soil bacterium, shows positive P-solubilizing activity. Mohamed reported that S. marcescens PH1 and HP2 can form halo zones in solid medium with diameters of 1.1 cm (SI = 3.2) and 0.9 cm (SI = 2.8), respectively [20]. S. marcescens CTM 50650, isolated from a phosphate mine, also showed P-solubilizing activity, with the soluble P concentration reaching 500 mg L⁻¹ in liquid medium [7]. S. marcescens NBRI1213 exhibited a maximum P-solubilizing activity of 984 mg L⁻¹ in liquid suspension [19]. These works indicate that S. marcescens has great P-solubilizing activity in medium or liquid conditions, but no P-solubilizing activity of these strains has been reported under soil conditions. In our current work, S. marcescens Pt-3 isolated from the tea rhizosphere not only shows active phosphate-solubilizing activity in medium, solution and soil conditions, it also produces abundant antifungal volatiles and greatly inhibits the growth of seven important fungal pathogens. Pt-3 and its associated volatiles will provide novel strategies for the production of effective bioactive microorganism fertilizer.
In the soil inoculated with Pt-3, the soluble P concentration increased from 2.25 to 3.98 mg.kg−1 over 24 d. Although the P-solubilizing activity of Pt-3 in soil is therefore weak, this is similar to other microbes under soil conditions [12]. Taking B. cereus YL6 as an example, the content of soluble P in soil inoculated with YL6 was 5.50 mg/kg, which is higher than that in the control treatment (4.70 mg/kg); even with this weak P-solubilizing activity in soil, YL6 can still promote the root growth of Chinese cabbage plants in the field [12]. In previous work, we also showed that Burkholderia cepacia WY6-5, which has weak P-solubilizing activity in soil, can promote the growth of maize seedlings [14]. Soil is a complex interactive environment. Complex compositions and metal ions in soil may interact with soluble P, which may then be converted back into insoluble forms [2, 21]. Additionally, as the incubation time is extended, the activity and concentration of the microbes change, which can further affect the P-solubilizing activity. Thus, soil composition plays an important role in the P-solubilizing activity of microbes in the field. The interactions among microbe, phosphate and environment should be elucidated in further studies [2, 9].

In our previous work, we showed that volatiles produced by microbes such as Shewanella algae [22], Enterobacter asburiae [23], Alcaligenes faecalis [24] and Staphylococcus saprophyticus [22] can greatly inhibit the growth of important fungal pathogens and may be used to control soil-borne pathogens. The effective volatiles were identified as dimethyl trisulfide, 1-pentanol and phenylethyl alcohol, methyl isovalerate, and 3,3-dimethyl-1,2-epoxybutane respectively. In contrast, the character of the volatiles from Serratia marcescens was still unknown until now. Our current work shows that Serratia marcescens Pt-3 produces the volatile dimethyl disulphide, which effectively inhibited the growth of seven important phyto-pathogens and severely damaged fungal cell structure. Additionally, there is evidence that dimethyl disulfide produced by microbes can spread over long distances and greatly inhibit soil-borne pathogens including nematodes, Verticillium dahliae, Rhizoctonia solani and Cladosporium spp. [25]. The compound can also induce systemic resistance and promote plant growth in the field [26].

Serratia marcescens Pt-3 is an efficient phosphate-solubilizing bacterium as well as a producer of the volatile dimethyl disulfide, which showed broad and effective antifungal activity against seven important fungal pathogens. Therefore, Serratia marcescens Pt-3 and the dimethyl disulfide it produces have multiple functions and can be used as effective bio-active agents for controlling plant disease and increasing soil fertility. Serratia marcescens Pt-3, with its P-solubilizing activity and production of the antifungal volatile dimethyl disulfide, will provide novel agents for addressing fertility reduction and pathogen infection in soil.

Microbes and plants
Bacterium Pt-3 was isolated from the tea rhizosphere in Xinyang, Henan province, China. Seven important phytopathogens including Aspergillus flavus, Fusarium graminearum, Alternaria alternata, Magnaporthe oryzae, Aspergillus fumigatus, Colletotrichum graminicola and Botrytis cinerea were isolated from diseased plants and stored in our lab [23, 24]. Peanut (cultivar Silihong) and maize (cultivar Kunyu) seeds were purchased from a supermarket. The use of these seeds in our tests was permitted in China and complied with local legislation.
Isolation of microbes with P-solubilizing activity
Soil samples collected from the tea rhizosphere at about 10 cm depth were stored at 4 °C and used within 4 days. For bacterial screening, 1 g of soil was placed in a 2 ml tube containing 1 ml sterilized water. The suspension was mixed well by vortexing for 5 min and then serially diluted to 10^−6. One hundred microliters of the dilution was spread on NA medium and cultured at 37 °C for 48 h. Bacterial clones with different phenotypes that appeared on the NA medium were picked out and streaked onto fresh NA medium for further tests. Two kinds of phosphate medium were used in the tests for screening P-solubilizing bacteria. The inorganic P medium contained (NH4)2SO4 0.5 g, MgSO4 0.3 g, NaCl 0.3 g, Ca3(PO4)2 8.0 g, glucose 10.0 g, 11% MnSO4 1 mL, 1% FeSO4 1 mL, agar 20 g, 1000 ml distilled water, pH = 7.2. The organic P medium contained (NH4)2SO4 0.5 g, NaCl 0.3 g, KCl 0.3 g, CaCO3 5 g, 11% MnSO4 1 mL, 1% FeSO4 1 mL, soybean lecithin 0.8 g, yeast extract 0.8 g, agar 20 g, 1000 ml distilled water, pH = 7.2. The media were autoclaved at 121 °C and 1.01 MPa for 30 min, cooled to room temperature and poured into Petri dishes (9 cm in diameter) with 20 ml in each. One hole with a diameter of 5 mm was punched out of the medium with a manual disk puncher. Microbial suspension (20 μL) collected from NA medium was inoculated into the hole in each medium (organic and inorganic P media). All inoculated media were cultured at 30 °C in darkness for 5 days. Bacterial clones with a clear halo zone on the phosphate media were considered to be PSB and selected for further use. The P-solubilizing assay for each microbe was conducted twice, and the phosphate solubilizing index (PSI) was calculated with the following equation [20, 21]. $$\mathrm{PSI}=\left(\mathrm{colony}\ \mathrm{diameter}+\mathrm{halo}\ \mathrm{zone}\ \mathrm{diameter}\right)/\mathrm{colony}\ \mathrm{diameter}\times 100.$$

P-solubilizing activity of Pt-3 in liquid broth and soil
Pt-3 was cultured on NA medium for 48 h. The fresh bacterial cells were collected and adjusted to 10^8 cfu/mL in sterilized water. To analyze the P-solubilizing activity of Pt-3, 100 μL of the bacterial suspension was inoculated into 40 mL of the liquid inorganic P medium ((NH4)2SO4 0.5 g, MgSO4 0.3 g, NaCl 0.3 g, Ca3(PO4)2 8.0 g, glucose 10.0 g, 11% MnSO4 1 mL, 1% FeSO4 1 mL, 1000 ml distilled water with pH 7.2) in a 100 ml flask. Medium without bacteria was used as the control. All flasks were kept at 30 °C and 100 rpm for 20 days. One milliliter of suspension was collected from each flask every 6 days. The suspension was centrifuged at 8000 rpm for 15 min, and the supernatant was transferred into a new tube for PO43− analysis. The soluble PO43− in the supernatant was determined with a Segmented Continuous Flow Analyzer (Futura, Alliance, France) at a wavelength of 420 nm [2]. To determine the P-solubilizing activity of the bacteria in soil, a soil sample was collected from the tea rhizosphere at 10 cm depth, dried at room temperature and sieved through a 40-mesh screen. 1.70 kg of soil was placed in a plastic bag, 15% (v/w) of inorganic P medium was added to each soil sample, and the soil was autoclaved at 121 °C for 20 min (three times). Strain Pt-3 at OD 0.354 was inoculated into the soil samples at 5% (v/w). Sterilized water was used as the control. Each test was conducted three times, and all soil samples were incubated at 28 °C in darkness for 30 days. Soil (10 g) was collected from each bag every 6 days and used for PO43− quantitative analysis.
The soil samples were re-suspended in 10 mL sterilized water, vortexed for 10 min, and centrifuged at 3500 rpm for 10 min. The supernatant was then filtered through a 0.45 μm filter. Released PO43− was measured using the method described above. The P-solubilizing assays of Pt-3 under liquid and soil conditions were each conducted twice with three replicates.

Plant growth promoting activity of Pt-3 on maize seedlings
A soil sample was collected from the rhizosphere of tea plants at about 5-10 cm depth. The soil sample was dried at room temperature and sieved through a mesh with a side length of 2.00 mm. The soil was then placed in plastic bags and autoclaved at 121 °C for 30 min, three times. After cooling to room temperature, the liquid P-solubilizing medium was added to the soil samples (15% v/w relative to the soil). The soil was then divided equally into six parts and placed into six pots (19.2 cm in diameter, 14.2 cm in height) with 1.8 kg in each. Pt-3 suspension was inoculated into three pots and blended well with a stirring rod. The other three pots, inoculated with sterilized water, were used as the control. Maize seeds were surface sterilized in 75% ethanol for 3 min, rinsed and soaked in sterilized water for 30 min, then placed on moistened filter paper for germination. Germinated seeds with the same bud length were picked and sown into the pots filled with soil. Three pots were used for each treatment, and four seeds were planted in each pot. All pots were maintained in the open air and occasionally watered with sterile water for 30 days. Finally, the maize seedlings were harvested, and the parameters (including plant height, leaf and root length, leaf number and width, and dry weight) were measured.

Molecular identification of strain Pt-3
The genomic DNA of Pt-3 was extracted by the Tris-HCl (Amresco) and EDTA (Amresco) method [3]. The 16S rRNA sequences were amplified by PCR using the universal primers 27F (AGAGTTTGATCCTGGCTCAG) and 1541R (AAGGAGGTGATCCAGCCGC) [27, 28]. The PCR conditions were as follows: initial denaturation at 94 °C for 5 min; followed by 30 cycles of 94 °C for 30 s, 55 °C for 30 s, and 72 °C for 40 s; then 72 °C for 10 min. PCR products were analyzed by gel electrophoresis and purified for sequencing. The sequences were aligned against the GenBank database. The 16S rRNA sequences of bacterial strains with similarity over 95% were selected for phylogenetic tree analysis. Twelve strains including Pt-3, Serratia plymuthica BSP25, K-7, UBCF13, T237, ZC06-1, AG2105, SFC20131227, Serratia liquefaciens CPAC53, ATCC27592, and Serratia entomophila Mor4.1, DSM12358 were used for the phylogenetic tree analysis. The tree was constructed using the MEGA software with the neighbor-joining method [2, 29, 30].

Biochemical and biophysical analysis of strain Pt-3
Strain Pt-3 was cultured on commercial BUG medium (Biolog, Hayward, CA) at 28 °C for 24 h. A fresh bacterial clone was inoculated into solution A (Biolog, Hayward, CA) and adjusted to 95% transmittance with a Biolog turbidimeter (Biolog, Hayward, CA). The bacterial suspension was mixed well and distributed into a Microplate (Biolog, Hayward, CA) with 100 μL in each well. The plates were cultured at 28 °C for 12 h. The biochemical results for Pt-3 were recorded with the Biolog Gen III system (Biolog, Hayward, CA) [24].

Broad spectrum antifungal activity of strain Pt-3
The antifungal activity of strain Pt-3 was analyzed through face-to-face (FTF) dual culture tests in two Petri dishes.
Pt-3 was cultured on the surface of an NA plate placed at the bottom. Fresh fungal strains were inoculated at the center of a PDA plate placed on top. The PDA plate was placed face-to-face on top of the NA plate. A PDA plate with the fungus co-cultured with an NA plate without Pt-3 was used as the control. All plate pairs were sealed with tape and cultured at 28 °C in darkness for 7 d. The inhibition assay of Pt-3 against each pathogenic fungus was conducted twice, and the inhibitory rate was calculated with the following formula: Inhibitory rate (%) = (mycelial diameter of the control − mycelial diameter in the Pt-3 treatment) / diameter of the control × 100.

Effect of active charcoal on the inhibitory activity of Pt-3
Pt-3 inhibited fungal growth in the face-to-face culture tests. To further prove the antifungal activity of the volatiles from Pt-3, active charcoal (5 g) was added to the tests to verify the fumigation efficacy of the VOCs from Pt-3. Four pairs of Petri dishes were used: (a) A. flavus inoculated at the center of a PDA plate; (b) Pt-3 on NA against A. flavus inoculated on PDA; (c) active charcoal in a Petri dish against A. flavus on PDA; (d) active charcoal plus Pt-3 on NA against A. flavus on PDA. All plates were incubated at 28 °C for 5 days [22]. The diameters of the fungi on the PDA plates were recorded and the inhibitory rate was calculated as before.

Identification of VOCs from Pt-3 by GC-MS
The VOCs from Pt-3 showed broad antifungal activity. To identify the predominant compounds among the VOCs, the VOCs were enriched by solid-phase micro-extraction (SPME) and identified by gas chromatography tandem mass spectrometry (GC-MS/MS). Pt-3 (100 μl, 10^8 cfu/ml) was spread on the surface of NA medium (40 ml) in a 100 ml flask. The flask was sealed with plastic membrane. NA medium without bacteria was used as the control. All flasks were cultured at 28 °C in darkness for 48 h [22]. Flasks were then transferred to a pre-heated 40 °C water bath and allowed to equilibrate for 30 min. The SPME adsorption head was inserted into the flask and left to adsorb for 40 min, then used for GC-MS analysis [31]. For GC-MS analysis, the column was a DB-5 MS capillary column (30 m × 0.25 mm ID, 0.25 μm film thickness). Helium was used as the carrier gas at a flow rate of 1 ml/min. The inlet temperature was 250 °C. The oven temperature was programmed as follows: 40 °C for 3 min, increased to 160 °C at a rate of 3 °C/min and held for 2 min, then increased to 220 °C at 8 °C/min and held for 3 min [32, 33]. For MS, the spectrometer was operated in electron-impact (EI) mode with a scan range of 50–550 amu. The inlet, ionization source and quadrupole temperatures were 300, 230 and 150 °C, respectively. Compounds identified in the spectrometry profiles of Pt-3 but not present in the control samples were considered to be the final analytes. The compounds were matched against the National Institute of Standards and Technology (NIST 11) database [32, 33]. The retention times and mass spectra of the identified compounds were compared with commercial standards for the final qualitative analysis.

Scanning electron microscope analysis
A. flavus mycelium was inoculated at the center of PDA medium and cultured at 28 °C for 5 days. Fresh conidia on the PDA surface were washed off with sterilized water, filtered through two layers of gauze, and diluted to a concentration of approximately 10^5/ml for the peanut inoculation tests [22].
Peanut seeds (100 g) of uniform size were placed in 250 ml flasks and autoclaved at 121 °C and 1.01 MPa for 20 min. Ten ml of sterilized water containing A. flavus conidia (10^4 cfu/ml) was added to each flask, mixed well and adjusted to a water activity (aw) of 0.9. The peanut seeds inoculated with A. flavus were divided between two Petri dishes. One dish was challenged with Pt-3 (grown on an NA plate) using the face-to-face method. The other dish, challenged with NA medium alone, was used as the control. All Petri dishes were incubated at 28 °C for 5 days. The peanut seeds were fixed in 0.1% osmic acid for 1 h. Then, a small piece of the peanut coat of about 3 × 3 mm was peeled off, affixed to stubs, coated with gold and examined under the scanning electron microscope [23, 31].

All experiments were carried out at least twice and results are reported as means ± standard deviations. Significant differences were determined using Student's t-tests (p < 0.05) following one-way analysis of variance (ANOVA). The statistical analysis was performed using SPSS 16.0 software (SPSS Inc., Chicago, USA).

All data generated or analyzed during this study are included in this published article. The 16S rRNA sequence of Pt-3 generated and analysed during the current study is available in the GenBank repository with accession number OL636198.

Zaidi A, Khan MS, Amil M. Interactive effect of rhizotrophic microorganisms on yield and nutrient uptake of chickpea (Cicer arietinum L.). Eur J Agron. 2003;19(1):15–21. https://0-doi-org.brum.beds.ac.uk/10.1016/S1161-0301(02)00015-1. Lee CC, Sharon JA, Hathwaik LT, Glenn GM, Imam SH. Isolation of efficient phosphate solubilizing bacteria capable of enhancing tomato plant growth. J Soil Sci Plant Nutr. 2016;16(2):525–36. https://0-doi-org.brum.beds.ac.uk/10.4067/s0718-95162016005000043. Yarzábal LA, Pérez E, Sulbarán M, Ball MM. Isolation and characterization of mineral phosphate-solubilizing bacteria naturally colonizing a limonitic crust in the south-eastern Venezuelan region. Soil Biol Biochem. 2007;39(11):2905–14. https://0-doi-org.brum.beds.ac.uk/10.1016/j.soilbio.2007.06.017. Shi GY, Mo YM, Cen ZL, Zeng Q, Yu GM, Yang LT, et al. Identification of an inorganic phosphorus-dissolving bacterial strain BS06 and analysis on its phosphate solubilization ability. Microbiol China. 2015;42(7):1271–8. https://0-doi-org.brum.beds.ac.uk/10.13344/j.microbiol.china.140721. Yang J, Ruan XH. Soil circulation of phosphorus and its effects on the soil loss of phosphorus. Soil Environ Sci. 2001;10(3):256–8. Islam MT, Deora A, Hashidoko Y, Rahman A, Ito T, Tahara S. Isolation and identification of potential phosphate solubilizing bacteria from the rhizoplane of Oryza sativa L. cv. BR29 of Bangladesh. Z Naturforsch C. J Biosci. 2007;62(1-2):103–10. https://0-doi-org.brum.beds.ac.uk/10.1515/znc-2007-1-218. Farhat MB, Farhat A, Bejar W, Kammoun R, Bouchaala K, Fourati A, et al. Characterization of the mineral phosphate solubilizing activity of Serratia marcescens CTM 50650 isolated from the phosphate mine of Gafsa. Arch Microbiol. 2009;191(11):815–24. https://0-doi-org.brum.beds.ac.uk/10.1007/s00203-009-0513-8. Kumari P, Sagervanshi A, Nagee A, Kumar A. Media optimization for inorganic phosphate solubilizing bacteria isolated from Anand agriculture soil. Int J Pharm Sci Res. 2012;2(3):245–55. Goldstein AH, Krishnaraj PU.
Cloning of a Serratia marcescens DNA fragment that induces quinoprotein glucose dehydrogenase-mediated gluconic acid production in Escherichia coli in the presence of stationary phase Serratia marcescens. FEMS Microbiol Lett. 2001;205(2):215–20. https://0-doi-org.brum.beds.ac.uk/10.1016/S0378-1097(01)00472-4. Viruel E, Erazzú LE, Martínez Calsina ML, Ferrero MA, Lucca ME, Siñerizet F. Inoculation of maize with phosphate solubilizing bacteria: effect on plant growth and yield. J Soil Sci Plant Nutr. 2014;14(4):819–31. https://0-doi-org.brum.beds.ac.uk/10.4067/s0718-95162014005000065. Li XL, Luo LJ, Yang JS, Li BZ, Yuan HL. Mechanisms for solubilization of various insoluble phosphates and activation of immobilized phosphates in different soils by an efficient and salinity-tolerant Aspergillus niger strain An2. Appl Biochem Biotechnol. 2015;175(5):2755–68. https://0-doi-org.brum.beds.ac.uk/10.1007/s12010-014-1465-2. Wang Z, Xu GY, Ma PD, Lin YB, Yang XN, Cao CL. Isolation and characterization of a phosphorus-solubilizing bacterium from rhizosphere soils and its colonization of Chinese cabbage (Brassica campestriss ssp.chinensis). Front Microbiol. 2017;8:1270. https://0-doi-org.brum.beds.ac.uk/10.3389/fmicb.2017.01270. Hanif MK, Hameed S, Imran A, Naqqash T, Shahid M, Van Elsas JD. Isolation and characterization of a β-propeller gene containing phosphobacterium Bacillus subtilis strain KPS-11 for growth promotion of potato (Solanum tuberosum L.). Front Microbiol. 2015;6:583. https://0-doi-org.brum.beds.ac.uk/10.3389/fmicb.2015.00583. Gong AD, Zhu ZY, Lu YN, Wan HY, Wu NN, Cheelo D, et al. Functional analysis of Burkholderia pyrrocinia WY6-5 on phosphate solubilizing, antifungal and growth-promoting activity of maize. J Integr Agric. 2019c;52(9):1574–86. https://0-doi-org.brum.beds.ac.uk/10.3864/j.issn.0578-1752.2019.09.009. Abdelhay A, Ferdaouss EHA, Saida A, Abderrazak R, Amin L, Arakrak M, et al. Screening of phosphate solubilizing bacterial isolates for improving growth of wheat. Eur J Biotechnol Biosci. 2017;5(6):07–11. https://0-doi-org.brum.beds.ac.uk/10.22271/bioscience. Saeed A, Zarei M, Aminzadeh S, Zolgharnein H, Safahieh A, Daliri M, et al. Characterization of a chitinase with antifungal activityfrom a native Serratia marcescens B4A. Braz J Microbiol. 2011;42(3):1017–29. https://0-doi-org.brum.beds.ac.uk/10.1590/S1517-83822011000300022. Someya N, Kataoka N, Komagata T, Hirayae K, Hibi T, Akutsu K. Biological control of cyclamen soilborne diseases by Serratia marcescens strain B2. Plant Dis. 2000;84(3):334–40. https://0-doi-org.brum.beds.ac.uk/10.1094/PDIS.2000.84.3.334. Tripura C, Sashidhar B, Podile AR. Ethyl methanesulfonate mutagenesis–enhanced mineral phosphate solubilization by groundnut-associated Serratia marcescens GPS-5. Curr Microbiol. 2007;54(2):79–84. https://0-doi-org.brum.beds.ac.uk/10.1007/s00284-005-0334-1. Lavania M, Nautiyal CS. Solubilization of tricalcium phosphate by temperature salt tolerant Serratia marcescens NBRI1213 isolated from alkaline soils. Afr J Microbiol Res. 2013;7(34):4403–13. https://0-doi-org.brum.beds.ac.uk/10.5897/AJMR2013.5773. Mohamed EAH, Farag AG, Youssef SA. Phosphate solubilization by Bacillus subtilis and Serratia marcescens isolated from tomato plant rhizosphere. J Environ Prot. 2018;9:266–77. https://0-doi-org.brum.beds.ac.uk/10.4236/jep.2018.93018. Archana K. Molecular characterization of mineral phosphate solubilization in Serratia marcescens and Methylobacterium sp. Mol Biol Biotechnol. 2011;1:47–50. 
Gong AD, Li HP, Shen L, Zhang JB, Wu AB, He WJ, et al. The Shewanella algae strain YM8 produces volatiles with strong inhibition activity against Aspergillus pathogens and aflatoxins. Front Microbiol. 2015;6:1091. https://0-doi-org.brum.beds.ac.uk/10.3389/fmicb.2015.01091. Gong AD, Dong FY, Hu MJ, Kong XW, Wei FF, Gong SJ, et al. Antifungal activity of volatile emitted from Enterobacter asburiae Vt-7 against Aspergillus flavus and aflatoxins in peanuts during storage. Food Control. 2019a. https://0-doi-org.brum.beds.ac.uk/10.1016/j.foodcont.2019.106718. Gong AD, Wu NN, Kong XW, Zhang YM, Hu MJ, Gong SJ, et al. Inhibitory effect of volatiles emitted from Alcaligenes faecalis N1-4 on Aspergillus flavus and aflatoxins in storage. Front Microbiol. 2019b. https://0-doi-org.brum.beds.ac.uk/10.3389/fmicb.2019.01419. Papazlatani C, Rousidou C, Katsoula A, Kolyvas M, Genitsaris S, Papadopoulou KK, et al. Assessment of the impact of the fumigant dimethyl disulfide on the dynamics of major fungal plant pathogens in greenhouse soils. Eur J Plant Pathol. 2016;146(2):391–400. https://0-doi-org.brum.beds.ac.uk/10.1007/s10658-016-0926-6. Piechulla B, Lemfack MC, Kai M. Effects of discrete bioactive microbial volatiles on plants and fungi. Plant Cell Environ. 2017;40(10):2042–67. https://0-doi-org.brum.beds.ac.uk/10.1111/pce.13011. Woo PCY, Teng JLL, Wu JKL, Leung FPS, Tse H, Fung AMY, et al. Guidelines for interpretation of 16S rRNA gene sequence-based results for identification of medically important aerobic Gram-positive bacteria. J Med Microbiol. 2009;58(8):1030–6. https://0-doi-org.brum.beds.ac.uk/10.1099/jmm.0.008615-0. Jenkins C, Ling CL, Ciesielczuk LH, Lockwood J, Hopkins S, McHugh TD, et al. Detection and identification of bacteria in clinical samples by 16S rRNA gene sequencing: comparison of two different approaches in clinical practice. J Med Microbiol. 2012;61(4):483–488.10. https://0-doi-org.brum.beds.ac.uk/10.1099/jmm.0.030387-0. Abel E, Ibrahim N, Huyop F. Identification of Serratia marcescens SE1 and determination of its herbicide 2,2-dichloropropionate (2,2-DCP) degradation potential. Malays J Microbiol. 2012;8(4):259–65. https://0-doi-org.brum.beds.ac.uk/10.21161/mjm.44412. Liu M, Liu X, Cheng BS, Ma XL, Lyu XT, Zhao XF, et al. Selection and evaluation of phosphate-solubilizing bacteria from grapevine rhizospheres for use as biofertilizers. Span J Agric Res. 2016;14(4):10–1. https://0-doi-org.brum.beds.ac.uk/10.5424/sjar/2016144-9714. Gao X, Massawe VC, Hanif A, Farzand A, Mburu DK, Ochola DK, et al. Volatile compounds of endophytic Bacillus spp. have biocontrol activity against Sclerotinia sclerotiorum. Phytopathology. 2018;108(12):1373–85. https://0-doi-org.brum.beds.ac.uk/10.1094/PHYTO-04-18-0118-R. Yuan J, Raza W, Shen Q, Huang Q. Antifungal activity of Bacillus amyloliquefaciens NJN-6 volatile compounds against Fusarium oxysporum f. sp. cubense. Appl Environ Microbiol. 2012;78(16):5942–4. https://0-doi-org.brum.beds.ac.uk/10.1128/AME.01357-12. Huang R, Li GQ, Zhang J, Yang L, Che HJ, Jiang DH, et al. Control of postharvest Botrytis fruit rot of strawberry by volatile organic compounds of Candida intermedia. Phytopathology. 2011;101(7):859–69. https://0-doi-org.brum.beds.ac.uk/10.1094/PHYTO-09-10-025. This work was supported by the National Natural Science Foundation of China (31701740, 31800074), Scientific and Technological Frontiers in Project of Henan Province (212102110447), and Nanhu Scholars Program for Young Scholars of XYNU. 
Henan Key Laboratory of Tea Plant Biology, College of Life Science, Xinyang Normal University, Xinyang, 464000, People's Republic of China: Andong Gong, Gaozhan Wang, Yake Sun, Mengge Song, Cheelo Dimuna, Zhen Gao & Peng Yang. College of Forestry, Hebei Agricultural University, Baoding, 071000, People's Republic of China: Hualing Wang. Experiments were designed by Andong Gong and Peng Yang. Andong Gong, Gaozhan Wang, Cheelo Dimuna and Mengge Song performed the experiments. Data were analyzed by Zhen Gao, Yake Sun and Hualing Wang. The manuscript was written by Andong Gong and Hualing Wang. All authors read and approved the final manuscript. Correspondence to Andong Gong or Peng Yang. Gong, A., Wang, G., Sun, Y. et al. Dual activity of Serratia marcescens Pt-3 in phosphate-solubilizing and production of antifungal volatiles. BMC Microbiol 22, 26 (2022). https://0-doi-org.brum.beds.ac.uk/10.1186/s12866-021-02434-5 Keywords: Phosphate solubilizing; Antifungal activity
Molecular de-novo design through deep reinforcement learning
Marcus Olivecrona (ORCID: orcid.org/0000-0002-8177-2787)1, Thomas Blaschke1, Ola Engkvist1 & Hongming Chen1
This work introduces a method to tune a sequence-based generative model for molecular de novo design that, through augmented episodic likelihood, can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model or the activity prediction model.
Drug discovery is often described using the metaphor of finding a needle in a haystack. In this case, the haystack comprises on the order of \(10^{60}{-}10^{100}\) synthetically feasible molecules [1], out of which we need to find a compound which satisfies a plethora of criteria such as bioactivity, drug metabolism and pharmacokinetic (DMPK) profile, synthetic accessibility, etc. The fraction of this space that we can synthesize and test at all—let alone efficiently—is negligible. By using algorithms to virtually design and assess molecules, de novo design offers ways to reduce the chemical space to something more manageable for the search for the needle. Early de novo design algorithms [1] used structure based approaches to grow ligands to sterically and electronically fit the binding pocket of the target of interest [2, 3]. A limitation of these methods is that the molecules created often possess poor DMPK properties and can be synthetically intractable. In contrast, the ligand based approach is to generate a large virtual library of chemical structures, and to search this chemical space using a scoring function that typically takes into account several properties such as DMPK profiles, synthetic accessibility, bioactivity, and query structure similarity [4, 5]. One way to create such a virtual library is to use known chemical reactions alongside a set of available chemical building blocks, resulting in a large number of synthetically accessible structures [6]; another possibility is to use transformational rules based on the expertise of medicinal chemists to design analogues to a query structure. For example, Besnard et al. [7] applied a transformation rule approach to the design of novel dopamine receptor type 2 (DRD2) active compounds with specific polypharmacological profiles and appropriate DMPK profiles for a central nervous system indication. Although using either transformation or reaction rules can reliably and effectively generate novel structures, these approaches are limited by the inherent rigidity and scope of the predefined rules and reactions.
A third approach, known as inverse Quantitative Structure Activity Relationship (inverse QSAR), tackles the problem from a different angle: rather than first generating a virtual library and then using a QSAR model to score and search this library, inverse QSAR aims to map a favourable region in terms of predicted activity to the corresponding molecular structures [8,9,10]. This is not a trivial problem: first the solutions of molecular descriptors corresponding to the region need to be resolved using the QSAR model, and these then need to be mapped back to the corresponding molecular structures. The fact that the molecular descriptors chosen need to be suitable both for building a forward predictive QSAR model and for translation back to molecular structure is one of the major obstacles for this type of approach.
The Recurrent Neural Network (RNN) is commonly used as a generative model for data of a sequential nature, and has been used successfully for tasks such as natural language processing [11] and music generation [12]. Recently, there has been increasing interest in using this type of generative model for the de novo design of molecules [13,14,15]. By using a data-driven method that attempts to learn the underlying probability distribution over a large set of chemical structures, the search over the chemical space can be reduced to only molecules seen as reasonable, without introducing the rigidity of rule based approaches. Segler et al. [13] demonstrated that an RNN trained on the canonicalized SMILES representation of molecules can learn both the syntax of the language as well as the distribution in chemical space. They also show how further training of the model using a focused set of actives towards a biological target can produce a fine-tuned model which generates a high fraction of predicted actives. In two recent studies, reinforcement learning (RL) [16] was used to fine tune pre-trained RNNs. Yu et al. [15] use an adversarial network to estimate the expected return for state-action pairs sampled from the RNN, and by increasing the likelihood of highly rated pairs improve the generative network for tasks such as poem generation. Jaques et al. [17] use Deep Q-learning to improve a pre-trained generative RNN by introducing two ways to score the sequences generated: one is a measure of how well the sequences adhere to music theory, and one is the likelihood of sequences according to the initial pre-trained RNN. Using this concept of prior likelihood they reduce the risk of forgetting what was initially learnt by the RNN, compared to a reward based only on the adherence to music theory. The authors demonstrate significant improvements over both the initial RNN as well as an RL only approach. They later extend this method to several other tasks including the generation of chemical structures, and optimize toward molecular properties such as cLogP [18] and QED drug-likeness [19]. However, they report that the method is dependent on a reward function incorporating handwritten rules to penalize undesirable types of sequences, and even then it can lead to exploitation of the reward, resulting in unrealistically simple molecules that are more likely to satisfy the optimization requirements than more complex structures [17]. In this study we propose a policy based RL approach to tune RNNs for episodic tasks [16], in this case the task of generating molecules with given desirable properties.
Through learning an augmented episodic likelihood, which is a composite of the prior likelihood [17] and a user defined scoring function, the method aims to fine-tune an RNN pre-trained on the ChEMBL database [20] towards generating desirable compounds. Compared to maximum likelihood estimation finetuning [13], this method can make use of negative as well as continuous scores, and may reduce the risk of catastrophic forgetting [21]. The method is applied to several different tasks of molecular de novo design, and an investigation was carried out to illustrate how the method affects the behaviour of the generative model on a mechanistic level.

Recurrent neural networks
A recurrent neural network is an architecture of neural networks designed to make use of the symmetry across steps in sequential data while simultaneously, at every step, keeping track of the most salient information of previously seen steps, which may affect the interpretation of the current one [22]. It does so by introducing the concept of a cell (Fig. 1). For any given step t, the \(cell_{t}\) is a result of the previous \(cell_{t-1}\) and the current input \(x^{t-1}\). The content of \(cell_t\) will both determine the output at the current step and influence the next cell state. The cell thus enables the network to have a memory of past events, which can be used when deciding how to interpret new data. These properties make an RNN particularly well suited for problems in the domain of natural language processing. In this setting, a sequence of words can be encoded into one-hot vectors the length of our vocabulary X. Two additional tokens, GO and EOS, may be added to denote the beginning and end of the sequence respectively. Learning the data. Depiction of maximum likelihood training of an RNN. \(x^t\) are the target sequence tokens we are trying to learn by maximizing \(P(x^t)\) for each step

Learning the data
Training an RNN for sequence modeling is typically done by maximum likelihood estimation of the next token \(x^{t}\) in the target sequence given the tokens for the previous steps (Fig. 1). At every step the model will produce a probability distribution over what the next character is likely to be, and the aim is to maximize the likelihood assigned to the correct token: $$J(\Theta ) = -\sum _{t=1}^T {\log P{(x^{t}\mid x^{t-1},\ldots ,x^{1})}}$$ The cost function \(J(\Theta )\), often applied to a subset of all training examples known as a batch, is minimized with respect to the network parameters \(\Theta\). Given a predicted log likelihood \(\log P\) of the target at step t, the gradient of the prediction with respect to \(\Theta\) is used to make an update of \(\Theta\). This method of fitting a neural network is called back-propagation. Due to the architecture of the RNN, changing the network parameters will not only affect the direct output at time t, but will also iteratively affect the flow of information from the previous cell into the current one. This domino-like effect that the recurrence has on back-propagation gives rise to some particular problems, and back-propagation applied to RNNs is referred to as back-propagation through time (BPTT). BPTT deals with gradients that, through the chain rule, contain terms which are multiplied by themselves many times, and this can lead to a phenomenon known as exploding and vanishing gradients. If these terms are not unity, the gradients quickly become either very large or very small. In order to combat this issue, Hochreiter et al.
introduced the Long-Short-Term Memory cell [23], which through a more controlled flow of information can decide what information to keep and what to discard. The Gated Recurrent Unit is a simplified implementation of the Long-Short-Term Memory architecture that achieves much of the same effect at a reduced computational cost [24].

Generating new samples
Once an RNN has been trained on target sequences, it can then be used to generate new sequences that follow the conditional probability distributions learned from the training set. The first input—the GO token—is given, and at every timestep after that we sample an output token \(x^t\) from the predicted probability distribution \(P(X^t)\) over our vocabulary X and use \(x^t\) as our next input. Once the EOS token is sampled, the sequence is considered finished (Fig. 2). Generating sequences. Sequence generation by a trained RNN. Every timestep t we sample the next token of the sequence \(x^{t}\) from the probability distribution given by the RNN, which is then fed in as the next input

Tokenizing and one-hot encoding SMILES
A SMILES [25] represents a molecule as a sequence of characters corresponding to atoms as well as special characters denoting opening and closure of rings and branches. The SMILES is, in most cases, tokenized based on a single character, except for atom types which comprise two characters such as "Cl" and "Br" and special environments denoted by square brackets (e.g. [nH]), where they are considered as one token. This method of tokenization resulted in 86 tokens present in the training data. Figure 3 exemplifies how a chemical structure is translated to both the SMILES and one-hot encoded representations. Three representations of 4-(chloromethyl)-1H-imidazole. Depiction of a one-hot representation derived from the SMILES of a molecule. Here a reduced vocabulary is shown, while in practice a much larger vocabulary that covers all tokens present in the training data is used

There are many different ways to represent a single molecule using SMILES. Algorithms that always represent a certain molecule with the same SMILES are referred to as canonicalization algorithms [26]. However, different implementations of the algorithms can still produce different SMILES.

Consider an Agent that, given a certain state \(s\in {\mathbb {S}}\), has to choose which action \(a\in {\mathbb {A}}(s)\) to take, where \({\mathbb {S}}\) is the set of possible states and \({\mathbb {A}}(s)\) is the set of possible actions for that state. The policy \(\pi (a \mid s)\) of an Agent maps a state to the probability of each action taken therein. Many problems in reinforcement learning are framed as Markov decision processes, which means that the current state contains all information necessary to guide our choice of action, and that nothing more is gained by also knowing the history of past states. For most real problems, this is an approximation rather than a truth; however, we can generalize this concept to that of a partially observable Markov decision process, in which the Agent can interact with an incomplete representation of the environment. Let \(r(a \mid s)\) be the reward, which acts as a measurement of how good it was to take an action at a certain state, and let the long-term return \(G(a_t, s_t) = \sum _{t}^{T}{r_t}\) be the cumulative reward collected from step t up to time T. Since molecular desirability in general is only sensible for a completed SMILES, we will refer only to the return of a complete sequence.
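To make the tokenization and one-hot encoding scheme just described concrete, a minimal Python sketch is shown below. The reduced vocabulary, the regular expression and the function names are illustrative assumptions on our part; the actual vocabulary used in this work comprises 86 tokens derived from the training data.

```python
import re
import numpy as np

# Illustrative reduced vocabulary (assumption); the real one is built from the training data.
VOCAB = ["GO", "EOS", "C", "N", "O", "F", "Cl", "Br", "c", "n", "o", "s",
         "1", "2", "(", ")", "=", "[nH]"]
TOKEN_TO_IDX = {token: idx for idx, token in enumerate(VOCAB)}

# Bracketed environments and two-character atoms are kept as single tokens.
TOKEN_PATTERN = re.compile(r"(\[[^\]]+\]|Cl|Br|.)")

def tokenize(smiles: str) -> list:
    """Split a SMILES string into tokens and append the EOS token."""
    return TOKEN_PATTERN.findall(smiles) + ["EOS"]

def one_hot(tokens: list) -> np.ndarray:
    """Encode a token sequence as a (sequence length, vocabulary size) one-hot matrix."""
    encoding = np.zeros((len(tokens), len(VOCAB)), dtype=np.float32)
    for position, token in enumerate(tokens):
        encoding[position, TOKEN_TO_IDX[token]] = 1.0
    return encoding

tokens = tokenize("c1cc[nH]c1")  # ['c', '1', 'c', 'c', '[nH]', 'c', '1', 'EOS']
print(one_hot(tokens).shape)     # (8, 18)
```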
What reinforcement learning concerns, given a set of actions taken from some states and the rewards thus received, is how to improve the Agent policy in such a way as to increase the expected return \({\mathbb {E}}[G]\). A task which has a clear endpoint at step T is referred to as an episodic task [16], where T corresponds to the length of the episode. Generating a SMILES is an example of an episodic task, which ends once the EOS token is sampled. The states and actions used to train the agent can be generated either by the agent itself or by some other means. If they are generated by the agent itself the learning is referred to as on-policy, and if they are generated by some other means the learning is referred to as off-policy [16]. There are two different approaches often used in reinforcement learning to obtain a policy: value based RL, and policy based RL [16]. In value based RL, the goal is to learn a value function that describes the expected return from a given state. Having learnt this function, a policy can be determined in such a way as to maximize the expected value of the state that a certain action will lead to. In policy based RL, on the other hand, the goal is to directly learn a policy. For the problem addressed in this study, we believe that policy based methods are the natural choice for three reasons: (1) policy based methods can explicitly learn an optimal stochastic policy [16], which is our goal; (2) the method used starts with a prior sequence model, and the goal is to finetune this model according to some specified scoring function; since the prior model already constitutes a policy, learning a finetuned policy might require only small changes to the prior model; (3) the episodes in this case are short and fast to sample, reducing the impact of the variance in the estimate of the gradients. In the "Target activity guided structure generation" section the change in policy between the prior and the finetuned model is investigated, providing justification for the second point.

The prior network
Maximum likelihood estimation was employed to train the initial RNN composed of 3 layers with 1024 Gated Recurrent Units (forget bias 5) in each layer. The RNN was trained on the RDKit [27] canonical SMILES of 1.5 million structures from ChEMBL [20] where the molecules were restrained to containing between 10 and 50 heavy atoms and elements \(\in \{H, B, C, N, O, F, Si, P, S, Cl, Br, I\}\). The model was trained with stochastic gradient descent for 50,000 steps using a batch size of 128, utilizing the Adam optimizer [28] (\(\beta _1 = 0.9\), \(\beta _2 = 0.999\), and \(\epsilon = 10^{-8}\)) with an initial learning rate of 0.001 and a 0.02 learning rate decay every 100 steps. Gradients were clipped to \([-3, 3]\). Tensorflow [29] was used to implement the Prior as well as the RL Agent.

The agent network
We now frame the problem of generating a SMILES representation of a molecule with specified desirable properties via an RNN as a partially observable Markov decision process, where the agent must make a decision of what character to choose next given the current cell state. We use the probability distributions learnt by the previously described prior model as our initial prior policy. We will refer to the network using the prior policy simply as the Prior, and the network whose policy has since been modified as the Agent. The Agent is thus also an RNN with the same architecture as the Prior. The task is episodic, starting with the first step of the RNN and ending when the EOS token is sampled.
The sequence of actions \(A = {a_1, a_2,\ldots ,a_T}\) during this episode represents the SMILES generated and the product of the action probabilities \(P(A) = \prod _{t = 1}^T{\pi (a_t \mid s_t)}\) represents the model likelihood of the sequence formed. Let \(S(A)\in [-1, 1]\) be a scoring function that rates the desirability of the sequences formed using some arbitrary method. The goal now is to update the agent policy \(\pi\) from the prior policy \(\pi _{Prior}\) in such a way as to increase the expected score for the generated sequences. However, we would like our new policy to be anchored to the prior policy, which has learnt both the syntax of SMILES and distribution of molecular structure in ChEMBL [13]. We therefore denote an augmented likelihood \(\log P(A)_{\mathbb {U}}\) as a prior likelihood modulated by the desirability of a sequence: $$\log P(A)_{\mathbb {U}} = \log P(A)_{Prior} + \sigma S(A)$$ where \(\sigma\) is a scalar coefficient. The return G(A) of a sequence A can in this case be seen as the agreement between the Agent likelihood \(\log P(A)_{\mathbb {A}}\) and the augmented likelihood: $$G(A) = -[\log P(A)_{\mathbb {U}} - \log P(A)_{\mathbb {A}}]^2$$ The goal of the Agent is to learn a policy which maximizes the expected return, achieved by minimizing the cost function \(J(\Theta ) = -G\). The fact that we describe the target policy using the policy of the Prior and the scoring function enables us to formulate this cost function. In the Additional file 1 we show how this approach can be described using a REINFORCE [30] algorithm with a final step reward of \(r(t) = [\log P(A)_{\mathbb {U}} - \log P(A)_{\mathbb {A}}]^2 / \log P(A)_{\mathbb {A}}\). We believe this is a more natural approach to the problem than REINFORCE algorithms directly using rewards of S(A) or \(\log P(A)_{Prior} + \sigma S(A)\). In "Learning to avoid sulphur" section we compare our approach to these methods. The Agent is trained in an on-policy fashion on batches of 128 generated sequences, making an update to \(\pi\) after every batch has been generated and scored. A standard gradient descent optimizer with a learning rate of 0.0005 was used and gradients were clipped to \([-3, 3]\). Figure 4 shows an illustration of how the Agent, initially identical to the Prior, is trained using reinforcement learning. The training shifts the probability distribution from that of the Prior towards a distribution modulated by the desirability of the structures. This method adopts a similar concept to Jaques et al. [17], while using a policy based RL method that introduces a novel cost function with the aim of addressing the need for handwritten rules and the issues of generating structures that are too simple. The Agent. Illustration of how the model is constructed. Starting from a Prior network trained on ChEMBL, the Agent is trained using the augmented likelihood of the SMILES generated In all the tasks investigated below, the scoring function is fixed during the training of the Agent. If instead the scoring function used is defined by a discriminator network whose task is to distinguish sequences generated by the Agent from 'real' SMILES (e.g. a set of actives), the method could be described as a type of Generative Adversarial Network [31], where the Agent and the discriminator would be jointly trained in a game where they both strive to beat the other. This is the approach taken by Yu et al. [15] to finetune a pretrained sequence model for poem generation. Guimaraes et al. 
demonstrates how such a method can be combined with a fixed scoring function for molecular de novo design [32]. The DRD2 activity model In one of our studies the objective of the Agent is to generate molecules that are predicted to be active against a biological target. The dopamine type 2 receptor DRD2 was chosen as the target, and corresponding bioactivity data was extracted from ExCAPE-DB [33]. In this dataset there are 7218 actives (pIC50 > 5) and 343204 inactives (pIC50 < 5). A subset of 100,000 inactive compounds was randomly selected. In order to decrease the nearest neighbour similarity between the training and testing structures [34,35,36], the actives were grouped in clusters based on their molecular similarity. The Jaccard [37] index, for binary vectors also known as the Tanimoto similarity, based on the RDKit implementation of binary Extended Connectivity Molecular Fingerprints with a diameter of 6 (ECFP6 [38]) was used as a similarity measure and the actives were clustered using the Butina clustering algorithm [39] in RDKit with a clustering cutoff of 0.4. In this algorithm, centroid molecules will be selected, and everything with a similarity higher than 0.4 to these centroids will be assigned to the same cluster. The centroids are chosen such as to maximize the number of molecules that are assigned to any cluster. The clusters were sorted by size and iteratively assigned to the test, validation, and training sets (assigned 4 clusters each iteration) to give a distribution of \(\frac{1}{6}\), \(\frac{1}{6}\), and \(\frac{4}{6}\) of the clusters respectively. The inactive compounds, of which less than 0.5% were found to belong to any of the clusters formed by the actives, were split randomly into the three sets using the same ratios. A support vector machine (SVM) classifier with a Gaussian kernel was built in Scikit-learn [40] on the training set as a predictive model for DRD2 activity. The optimal C and Gamma values utilized in the final model were obtained from a grid search for the highest ROC-AUC performance on the validation set. Structure generation by the Prior After the initial training, 94% of the sequences generated by the Prior as described in "Generating new samples" section corresponded to valid molecular structures according to RDKit [27] parsing, out of which 90% are novel structures outside of the training set. A set of randomly chosen structures generated by the Prior, as well as by Agents trained in the subsequent examples, are shown in the Additional file 2. The process of generating a SMILES by the Prior is illustrated in Fig. 5. For every token in the generated SMILES sequence, the conditional probability distribution over the vocabulary at this step according to the Prior is displayed. The sequence of distributions are depicted in Fig. 5. For the first step, when no information other than the initial GO token is present, the distribution is an approximation of the distribution of first tokens for the SMILES in the ChEMBL training set. In this case "O" was sampled, but "C", "N", and the halogens were all likely as well. Corresponding log likelihoods were −0.3 for "C", −2.7 for "N", −1.8 for "O", and −5.0 for "F" and "Cl". How the model thinks while generating the molecule on the right. Conditional probability over the next token as a function of previously chosen ones according to the model. 
On the y-axis is shown the probability distribution for the character to be chosen at the current step, and on the x-axis is shown the character that in this instance was sampled. E = EOS
A few (unsurprising) observations: once the aromatic "n" has been sampled, the model has come to expect a ring opening (i.e. a number), since aromatic moieties are by definition cyclic. Once an aromatic ring has been opened, the aromatic atoms "c", "n", "o", and "s" become probable, until 5 or 6 steps later when the model thinks it is time to close the ring. The model has learnt the RDKit canonicalized SMILES format of increasing ring numbers, and expects the first ring to be numbered "1". Ring numbers can be reused, as in the first two rings in this example. Only once "1" has been sampled does it expect a ring to be numbered "2", and so on.

Learning to avoid sulphur
As a proof of principle, the Agent was first trained to generate molecules which do not contain sulphur. The method described in "The agent network" section is compared with three other policy gradient based methods. The first alternative method is the same as the Agent method, with the only difference that the loss is defined on an action basis rather than on an episodic one, resulting in the cost function: $$J(\Theta ) = \left[ \sum _{t=0}^T{(\log \pi _{Prior}(a_t, s_t) - \log \pi _{\Theta }(a_t, s_t))} + \sigma S(A)\right] ^2$$ We refer to this method as 'Action basis'. The second alternative is a REINFORCE algorithm with a reward of S(A) given at the last step. This method is similar to the one used by Silver et al. to train the policy network in AlphaGo [41], as well as to the method used by Yu et al. [15]. We refer to this method as 'REINFORCE'. The corresponding cost function can be written as: $$J(\Theta ) = S(A)\sum _{t=0}^T \log \pi _{\Theta }(a_t, s_t)$$ A variation of this method that considers the prior likelihood is defined by changing the reward from S(A) to \(S(A)+ \log P(A)_{Prior}\). This method is referred to as 'REINFORCE + Prior', with the cost function: $$J(\Theta ) = [\log P(A)_{Prior} + \sigma S(A)]\sum _{t=0}^T \log \pi _{\Theta }(a_t, s_t)$$ Note that the last method by nature strives to generate only the putative sequence with the highest reward. In contrast to the Agent, the optimal policy for this method is not stochastic. This tendency could be restrained by introducing a regularizing policy entropy term; however, it was found that such regularization undermined the model's ability to produce valid SMILES. This method is therefore dependent on training only long enough for the model to reach a point where highly scored sequences are generated, without settling into a local minimum. The experiment aims to answer the following questions: Can the models achieve the task of generating valid SMILES corresponding to structures that do not contain sulphur? Will the models exploit the reward function by converging on naïve solutions such as 'C' if handwritten rules are not imposed? Are the distributions of physicochemical properties for the generated structures similar to those of sulphur free structures generated by the Prior? The task is defined by the following scoring function: $$\begin{aligned} S(A) =\left\{ \begin{array}{ll} 1 &{}\quad \text {if valid and no S} \\ 0 &{}\quad \text {if not valid} \\ -1 &{}\quad \text {if contains S} \end{array}\right. \end{aligned}$$ All the models were trained for 1000 steps starting from the Prior, and 12,800 SMILES sequences were sampled from each of the models as well as from the Prior.
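For reference, the scoring function above translates directly into a few lines of RDKit code; the sketch below is a minimal illustration and the function name is our own.

```python
from rdkit import Chem

def no_sulphur_score(smiles: str) -> float:
    """S(A) as defined above: 1 for a valid, sulphur-free structure,
    0 for an invalid SMILES, and -1 if the structure contains sulphur."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0  # not a valid SMILES
    contains_sulphur = any(atom.GetSymbol() == "S" for atom in mol.GetAtoms())
    return -1.0 if contains_sulphur else 1.0

print(no_sulphur_score("CCO"))   #  1.0
print(no_sulphur_score("CCS"))   # -1.0
print(no_sulphur_score("C1CC"))  #  0.0 (unclosed ring, invalid)
```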
A learning rate of 0.0005 was used for the Agent and 'Action basis' methods, and 0.0001 for the two REINFORCE methods. The values of \(\sigma\) used were 2 for the Agent and 'REINFORCE + Prior', and 8 for 'Action basis'. To explore what effect the training has on the structures generated, relevant properties of the non-sulphur containing structures generated by both the Prior and the other models were compared. The molecular weight, cLogP, the number of rotatable bonds, and the number of aromatic rings were all calculated using RDKit. The experiment was repeated three times with different random seeds. The results are shown in Table 1, and randomly selected SMILES generated by the Prior and the different models can be seen in Table 2. For the 'REINFORCE' method, where the sole aim is to generate valid SMILES that do not contain sulphur, the model quickly learns to exploit the reward function by generating sequences containing predominantly 'C'. This is not surprising, since any sequence consisting only of this token always gets rewarded. For the 'REINFORCE + Prior' method, the inclusion of the prior likelihood in the reward function means that this is no longer a viable strategy (such sequences would be given a low prior probability). The model instead tries to find the structure with the best combination of score and prior likelihood, but as is evident from the SMILES generated and the statistics shown in Table 1, this results in small, simplistic structures being generated. Thus, both REINFORCE algorithms managed to achieve high scores according to the scoring function, but poorly represented the Prior. Both the Agent and the 'Action basis' methods have explicitly specified target policies. For the 'Action basis' method the policy is specified exactly at a stepwise level, while for the Agent the target policy is only specified in terms of the likelihoods of entire sequences. Although the 'Action basis' method generates structures that are more similar to the Prior than the two REINFORCE methods, it performed worse than the Agent despite the higher value of \(\sigma\) used, while also being slower to learn. This may be due to the less restricted target policy of the Agent, which could facilitate optimization. The Agent achieved the same fraction of sulphur free structures as the REINFORCE algorithms, while seemingly doing a much better job of representing the Prior. This is indicated by the similarity of the properties of the generated structures shown in Table 1 as well as by the SMILES themselves shown in Table 2. Table 1 Comparison of model performance and properties for non-sulphur containing structures generated by the two models Table 2 Randomly selected SMILES generated by the different models

Similarity guided structure generation
The second task investigated was that of generating structures similar to a query structure. The Jaccard index [37] \(J_{i, j}\) of the RDKit implementation of FCFP4 [38] fingerprints was used as a similarity measure between molecules i and j. Compared to the DRD2 activity model ("The DRD2 activity model" section), the feature invariant version of the fingerprints and the smaller diameter of 4 were used in order to get a fuzzier similarity measure.
The scoring function was defined as: $$\begin{aligned} S(A) = -1 + 2 \times \frac{\min \{ J_{i, j}, k \}}{k} \end{aligned}$$ This definition means that an increase in similarity is only rewarded up to the point \(k\in [0, 1]\), while also scaling the reward from \(-1\) (no overlap in the fingerprints between query and generated structure) to 1 (at least a degree k of overlap). Celecoxib was chosen as our query structure, and we first investigated whether Celecoxib itself could be generated by using the high values of \(k=1\) and \(\sigma =15\). The Agent was trained for 1000 steps. After 100 training steps the Agent starts to generate Celecoxib, and after 200 steps it predominantly generates this structure (Fig. 6). Celecoxib itself as well as many other similar structures appear in the ChEMBL training set used to build the Prior. An interesting question is whether the Agent could succeed in generating Celecoxib when these structures are not part of the chemical space covered by the Prior. To investigate this, all structures with a similarity to Celecoxib higher than 0.5 (corresponding to 1804 molecules) were removed from the training set and a new, reduced Prior was trained. The prior likelihood of Celecoxib for the canonical and reduced Priors was compared, as well as the ability of the models to generate structures similar to Celecoxib. As expected, the prior probability of Celecoxib decreased from \(\log _e P = -12.7\) to \(\log _e P = -19.2\) when similar compounds were removed from the training set, representing a reduction in likelihood by a factor of 700. An Agent was then trained using the same hyperparameters as before, but on the reduced Prior. After 400 steps, the Agent again managed to find Celecoxib, albeit requiring more time to do so. After 1000 steps, Celecoxib was the most commonly generated structure (about a third of the generated structures), followed by demethylated Celecoxib (also about a third), whose SMILES is more likely according to the Prior with \(\log _e P = -15.2\) but has a lower similarity (\(J = 0.87\)), resulting in an augmented likelihood equal to that of Celecoxib. These experiments demonstrate that the Agent can be optimized using fingerprint based Jaccard similarity as the objective, but making copies of the query structure is hardly useful. A more useful example is that of generating structures that are moderately similar to the query structure. The Agent was therefore trained for 3000 steps, starting from both the canonical and the reduced Prior, using \(k = 0.7\) and \(\sigma = 12\). The Agents based on the canonical Prior quickly converged to their targets, while the Agents based on the reduced Prior converged more slowly. For the Agent based on the reduced Prior where \(k=1\), the fact that Celecoxib and demethylated Celecoxib are given similar augmented likelihoods means that the average similarity converges to around 0.9 rather than 1.0. For the Agent based on the reduced Prior where \(k=0.7\), the lower prior likelihood of compounds similar to Celecoxib translates to a lower augmented likelihood, which lowers the average similarity of the structures generated by the Agent. To explore whether this reduced prior likelihood could be offset by a higher value of \(\sigma\), an Agent starting from the reduced Prior was trained using \(\sigma =15\). Though taking slightly more time to converge than the Agent based on the canonical Prior, this Agent too could converge to the target similarity. The learning curves for the different models are shown in Fig. 6.
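A minimal sketch of this capped similarity score is given below for illustration. The helper names, the 2048-bit hashed form of the FCFP4 fingerprint, the treatment of invalid SMILES and the Celecoxib SMILES string are our own assumptions rather than details taken from the study.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fcfp4(smiles: str):
    """Feature-based Morgan fingerprint of radius 2 (FCFP4), hashed to a bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048, useFeatures=True)

CELECOXIB_SMILES = "Cc1ccc(-c2cc(C(F)(F)F)nn2-c2ccc(S(N)(=O)=O)cc2)cc1"  # standard representation
QUERY_FP = fcfp4(CELECOXIB_SMILES)

def similarity_score(smiles: str, k: float = 0.7) -> float:
    """S(A) = -1 + 2 * min(J, k) / k, with J the Jaccard (Tanimoto) similarity to the query."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return -1.0  # assumption: treat invalid SMILES as having no fingerprint overlap
    j = DataStructs.TanimotoSimilarity(QUERY_FP, fcfp4(smiles))
    return -1.0 + 2.0 * min(j, k) / k

print(similarity_score(CELECOXIB_SMILES))  # 1.0, since the similarity is capped at k
```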
Average similarity \(J\) of generated structures as a function of training steps. Difference in learning dynamics for the Agents based on the canonical Prior, and those based on a reduced Prior where everything more similar than \(J=0.5\) to Celecoxib has been removed

An illustration of how the type of structures generated by the Agent evolves during training is shown in Fig. 7. For the Agent based on the reduced Prior with \(k=0.7\) and \(\sigma =15\), three structures were randomly sampled every 100 training steps from step 0 up to step 400. At first, the structures are not similar to Celecoxib. After 200 steps, some features of Celecoxib have started to emerge, and after 300 steps the model generates mostly close analogues of Celecoxib.

Evolution of generated structures during training. Structures sampled every 100 training steps during the training of the Agent towards similarity to Celecoxib with \(k=0.7\) and \(\sigma =15\)

We have investigated how various factors affect the learning behaviour of the Agent. In real drug discovery applications, we might be more interested in finding structures with modest similarity to our query molecules rather than very close analogues. For example, one of the structures sampled after 200 steps shown in Fig. 7 displays a type of scaffold hopping where the sulphur functional group on one of the outer aromatic rings has been fused to the central pyrazole. The similarity of this structure to Celecoxib is 0.4, which may be a more interesting solution for scaffold-hopping purposes. One can choose hyperparameters and a similarity criterion tailored to the desired output. Other types of similarity measures, such as pharmacophoric fingerprints [42], Tversky substructure similarity [43], or the presence/absence of certain pharmacophores, could also be explored.

Target activity guided structure generation

The third example, perhaps the one most interesting and relevant for drug discovery, is to optimize the Agent towards generating structures with predicted biological activity. This can be seen as a form of inverse QSAR, where the Agent is used to implicitly map high predicted probability of activity to molecular structure. DRD2 was chosen as the biological target. The clustering split of the DRD2 activity dataset as described in "The DRD2 activity model" section resulted in 1405, 1287, and 4526 actives in the test, validation, and training sets respectively. The average similarity to the nearest neighbour in the training set for the test set compounds was 0.53. For a random split of actives into sets of the same sizes this similarity was 0.69, indicating that the clustering had significantly decreased training–test set similarity; this mimics hit finding in drug discovery, where the aim is to identify hits that are structurally diverse from the training set. Most of the DRD2 actives are also included in the ChEMBL dataset used to train the Prior. To explore the effect of not having the known actives included in the Prior, a reduced Prior was trained on a reduced subset of the ChEMBL training set where all DRD2 actives had been removed. The optimal hyperparameters found for the SVM activity model were \(C=2^{7}, \gamma =2^{-6}\), resulting in a model whose performance is shown in Table 3.
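As a rough illustration of the model type described here, the following sketch trains an RBF-kernel SVM with the reported hyperparameters and exposes a predicted probability of activity. The fingerprint dimensionality and the placeholder random data are assumptions; in practice the features would be molecular fingerprints computed from the ExCAPE-DB DRD2 actives and inactives mentioned in the text.

```python
# Sketch of a DRD2-style activity model: an SVM with a Gaussian kernel,
# C = 2**7 and gamma = 2**-6 as quoted above, returning P_active.
# Placeholder random fingerprints/labels stand in for real training data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 2048))   # placeholder 2048-bit fingerprints
y_train = rng.integers(0, 2, size=200)           # placeholder activity labels

clf = SVC(C=2**7, gamma=2**-6, kernel="rbf", probability=True)
clf.fit(X_train, y_train)

def p_active(fp_bits):
    """Predicted (uncalibrated) probability of being active."""
    return clf.predict_proba(np.asarray(fp_bits).reshape(1, -1))[0, 1]

print(p_active(X_train[0]))
```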
The good performance in general can be explained by the apparent difference between active and inactive compounds as seen during the clustering, and the better performance on the test set compared to the validation set could be due to a slightly higher nearest-neighbour similarity to the training actives (0.53 for test actives and 0.48 for validation actives).

Table 3 Performance of the DRD2 activity model

The output of the DRD2 model for a given structure is an uncalibrated predicted probability of being active, \(P_{active}\). This value is used to formulate the following scoring function: $$\begin{aligned} S(A) = -1 + 2 \times P_{active} \end{aligned}$$ The model was trained for 3000 steps using \(\sigma = 7\). After training, the fraction of predicted actives according to the DRD2 model increased from 0.02 for structures generated by the reduced Prior to 0.96 for structures generated by the corresponding Agent network (Table 4). To see how well the structure–activity relationship learnt by the activity model is transferred to the type of structures generated by the Agent RNN, the fraction of compounds with an ECFP6 Jaccard similarity greater than 0.4 to any active in the training and test sets was calculated.

Table 4 Comparison of properties for structures generated by the canonical Prior, the reduced Prior, and corresponding Agents

In some cases, the model recovered exact matches from the training and test sets (cf. Segler et al. [13]). The fractions of test actives recovered by the canonical and reduced Priors were 1.3 and 0.3% respectively. The Agent derived from the canonical Prior managed to recover 13% of the test actives; the Agent derived from the reduced Prior recovered 7%. For the Agent derived from the reduced Prior, where the DRD2 actives were excluded from the Prior training set, this means that the model has learnt to generate "novel" structures that have been seen by neither the DRD2 activity model nor the Prior, and are experimentally confirmed actives. We can formalize this observation by calculating the probability of a given generated sequence belonging to the set of test actives. For the canonical and reduced Priors, this probability was \(0.17\times 10^{-3}\) and \(0.05\times 10^{-3}\) respectively. Removing the actives from the Prior thus resulted in a threefold reduction in the probability of generating a structure from the set of test actives. For the corresponding Agents, the probabilities rose to \(40.2\times 10^{-3}\) and \(15.0\times 10^{-3}\) respectively, corresponding to an enrichment of a factor of about 250 over the Prior models. Again the consequence of removing the actives from the Prior was a threefold reduction in the probability of generating a test set active: the difference between the two Priors is directly mirrored by their corresponding Agents.

Apart from generating a higher fraction of structures that are predicted to be active, both Agents also generate a significantly higher fraction of valid SMILES (Table 4). Sequences that are not valid SMILES receive a score of \(-1\), which means that the scoring function naturally encourages valid SMILES. A few of the test set actives generated by the Agent based on the reduced Prior, along with a few randomly selected generated structures, are shown together with their predicted probability of activity in Fig. 8.
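A quick sanity check of the arithmetic behind these enrichment factors, using the probabilities quoted above (assumed values, for illustration only):

```python
# Enrichment of test-set-active generation: Agent probability / Prior probability,
# and the fold reduction caused by removing the known actives from the Prior.
p_prior = {"canonical": 0.17e-3, "reduced": 0.05e-3}
p_agent = {"canonical": 40.2e-3, "reduced": 15.0e-3}
for key in p_prior:
    print(key, "enrichment Agent/Prior ~ %.0f" % (p_agent[key] / p_prior[key]))
print("fold reduction between Agents: %.1f" % (p_agent["canonical"] / p_agent["reduced"]))
# -> roughly 240x and 300x enrichment (order 250x), and ~2.7-fold (threefold) reduction
```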
Encouragingly, the recovered test set actives vary considerably in their structure, which would not have been the case had the Agent converged to generating only a certain type of very similar predicted active compounds.

Structures designed by the Agent to target DRD2. Molecules generated by the Agent based on the reduced Prior. On the top are four of the test set actives that were recovered, and below are four randomly selected structures. The structures are annotated with the predicted probability of being active

Removing the known actives from the training set of the Prior resulted in an Agent which shows a decrease in all metrics measuring the overlap between the known actives and the structures generated, compared to the Agent derived from the canonical Prior. Interestingly, the fraction of predicted actives did not change significantly. This indicates that the Agent derived from the reduced Prior has managed to find a similar chemical space to that of the canonical Agent, with structures that are equally likely to be predicted as active, but are less similar to the known actives. Whether or not these compounds are active will depend on the accuracy of the target activity model. Ideally, any predictive model to be used in conjunction with the generative model should cover a broad chemical space within its domain of applicability, since it initially has to assess representative structures of the dataset used to build the Prior [13].

Figure 9 shows a comparison of the conditional probability distributions for the reduced Prior and its corresponding Agent when a molecule from the set of test actives is generated. It can be seen that the changes are not drastic, with most of the trends learnt by the Prior being carried over to the Agent. However, a big change in the probability distribution at even a single step can have a large impact on the likelihood of the sequence and could significantly alter the type of structures generated.

A (small) change of mind. The conditional probability distributions when the DRD2 test set active 'COc1ccccc1N1CCN(CCCCNC(=O)c2ccccc2I)CC1' is generated by the Prior and an Agent trained using the DRD2 activity model, for the case where all actives used to build the activity model have been removed from the Prior. E = EOS

To summarize, we believe that an RNN operating on the SMILES representation of molecules is a promising method for molecular de novo design. It is a data-driven generative model that does not rely on pre-defined building blocks and rules, which clearly differentiates it from traditional methods. In this study we extend upon previous work [13, 14, 15, 17] by introducing a reinforcement learning method which can be used to tune the RNN to generate structures with certain desirable properties through augmented episodic likelihood. The model was tested on the task of generating sulphur-free molecules as a proof of principle, and the method using augmented episodic likelihood was compared with traditional policy gradient methods. The results indicate that the Agent can find solutions reflecting the underlying probability distribution of the Prior, representing a significant improvement over both traditional REINFORCE [30] algorithms and previously reported methods [17]. To evaluate whether the model could be used to generate analogues to a query structure, the Agent was trained to generate structures similar to the drug Celecoxib.
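The following is a schematic sketch of the augmented episodic likelihood idea referred to above; the exact loss used by the authors may differ in detail. The Prior's log-likelihood of a generated sequence is shifted by sigma times the score S(A), and the Agent is pulled towards this augmented target. All numbers below are illustrative placeholders.

```python
# Sketch: squared difference between augmented and Agent log-likelihoods,
# where augmented = prior log-likelihood + sigma * S(A).
import numpy as np

def augmented_loss(log_p_prior, log_p_agent, score, sigma=12.0):
    """Per-batch loss to be minimized with respect to the Agent parameters."""
    log_p_aug = log_p_prior + sigma * score           # augmented episodic likelihood
    return np.mean((log_p_aug - log_p_agent) ** 2)

# Toy numbers: one sequence the Prior likes but that scores poorly,
# one it likes less but that scores well.
log_p_prior = np.array([-15.2, -19.2])
log_p_agent = np.array([-14.0, -25.0])
score = np.array([-0.3, 1.0])
print(augmented_loss(log_p_prior, log_p_agent, score))
```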
Even when all analogues of Celecoxib were removed from the Prior, the Agent could still locate the intended region of chemical space, which was no longer part of the Prior. Furthermore, when trained towards generating predicted actives against the dopamine receptor type 2 (DRD2), the Agent generates structures of which more than 95% are predicted to be active, and could recover test set actives even when they were included in neither the activity model nor the Prior. Our results indicate that the method could be a useful tool for drug discovery. It is clear that the qualities of the Prior are reflected in the corresponding Agents it produces. An exhaustive study exploring how parameters such as training set size, model size, regularization [44, 45], and training time influence the quality and variety of structures generated by the Prior would be interesting. Another interesting avenue for future research might be that of token embeddings [46]. The method was found to be robust with respect to the hyperparameter \(\sigma\) and the learning rate. All of the aforementioned examples used single-parameter scoring functions. In a typical drug discovery project, multiple parameters such as target activity, DMPK profile, synthetic accessibility etc. all need to be taken into account simultaneously. Applying such multi-parametric scoring functions to the model is an area requiring further research.

Abbreviations
DMPK: drug metabolism and pharmacokinetics
DRD2: dopamine receptor D2
QSAR: quantitative structure–activity relationship
RNN: recurrent neural network
RL: reinforcement learning
\(\log_e\): natural logarithm
BPTT: back-propagation through time
A: sequence of tokens constituting a SMILES
Prior: an RNN trained on SMILES from ChEMBL, used as a starting point for the Agent
Agent: an RNN derived from a Prior, trained using reinforcement learning
J: Jaccard index
ECFP6: extended connectivity molecular fingerprints with diameter 6
SVM: support vector machine
FCFP4: extended connectivity molecular fingerprints with diameter 4 and feature invariants

References
1. Schneider G, Fechner U (2005) Computer-based de novo design of drug-like molecules. Nat Rev Drug Discov 4(8):649–663. doi:10.1038/nrd1799
2. Böhm HJ (1992) The computer program LUDI: a new method for the de novo design of enzyme inhibitors. J Comput Aided Mol Des 6(1):61–78. doi:10.1007/BF00124387
3. Gillet VJ, Newell W, Mata P, Myatt G, Sike S, Zsoldos Z, Johnson AP (1994) Sprout: recent developments in the de novo design of molecules. J Chem Inf Comput Sci 34(1):207–217. doi:10.1021/ci00017a027
4. Ruddigkeit L, Blum LC, Reymond JL (2013) Visualization and virtual screening of the chemical universe database GDB-17. J Chem Inf Model 53(1):56–65. doi:10.1021/ci300535x
5. Hartenfeller M, Zettl H, Walter M, Rupp M, Reisen F, Proschak E, Weggen S, Stark H, Schneider G (2012) DOGS: reaction-driven de novo design of bioactive compounds. PLOS Comput Biol 8:1–12. doi:10.1371/journal.pcbi.1002380
6. Schneider G, Geppert T, Hartenfeller M, Reisen F, Klenner A, Reutlinger M, Hähnke V, Hiss JA, Zettl H, Keppner S, Spänkuch B, Schneider P (2011) Reaction-driven de novo design, synthesis and testing of potential type II kinase inhibitors. Future Med Chem 3(4):415–424. doi:10.4155/fmc.11.8
7. Besnard J, Ruda GF, Setola V, Abecassis K, Rodriguiz RM, Huang X-P, Norval S, Sassano MF, Shin AI, Webster LA, Simeons FRC, Stojanovski L, Prat A, Seidah NG, Constam DB, Bickerton GR, Read KD, Wetsel WC, Gilbert IH, Roth BL, Hopkins AL (2012) Automated design of ligands to polypharmacological profiles. Nature 492(7428):215–220. doi:10.1038/nature11691
8. Miyao T, Kaneko H, Funatsu K (2016) Inverse QSPR/QSAR analysis for chemical structure generation (from y to x). J Chem Inf Model 56(2):286–299. doi:10.1021/acs.jcim.5b00628
9. Churchwell CJ, Rintoul MD, Martin S Jr, Visco DP, Kotu A, Larson RS, Sillerud LO, Brown DC, Faulon J-L (2004) The signature molecular descriptor: 3. Inverse-quantitative structure–activity relationship of ICAM-1 inhibitory peptides. J Mol Graph Model 22(4):263–273. doi:10.1016/j.jmgm.2003.10.002
10. Wong WW, Burkowski FJ (2009) A constructive approach for discovering new drug leads: using a kernel methodology for the inverse-QSAR problem. J Cheminform 1:4. doi:10.1186/1758-2946-1-4
11. Mikolov T, Karafiát M, Burget L, Černocký J, Khudanpur S (2010) Recurrent neural network based language model. In: Kobayashi T, Hirose K, Nakamura S (eds) 11th annual conference of the international speech communication association (INTERSPEECH 2010), Makuhari, Chiba, Japan. ISCA, 26–30 Sept 2010
12. Eck D, Schmidhuber J (2002) A first look at music composition using LSTM recurrent neural networks. Technical report, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
13. Segler MHS, Kogej T, Tyrchan C, Waller MP (2017) Generating focussed molecule libraries for drug discovery with recurrent neural networks. arXiv:1701.01329
14. Gómez-Bombarelli R, Duvenaud DK, Hernández-Lobato JM, Aguilera-Iparraguirre J, Hirzel TD, Adams RP, Aspuru-Guzik A (2016) Automatic chemical design using a data-driven continuous representation of molecules. CoRR. arXiv:1610.02415
15. Yu L, Zhang W, Wang J, Yu Y (2016) SeqGAN: sequence generative adversarial nets with policy gradient. CoRR. arXiv:1609.05473
16. Sutton R, Barto A (1998) Reinforcement learning: an introduction, 1st edn. MIT Press, Cambridge
17. Jaques N, Gu S, Turner RE, Eck D (2016) Tuning recurrent neural networks with reinforcement learning. CoRR. arXiv:1611.02796
18. Leo A, Hansch C, Elkins D (1971) Partition coefficients and their uses. Chem Rev 71(6):525–616. doi:10.1021/cr60274a001
19. Bickerton GR, Paolini GV, Besnard J, Muresan S, Hopkins AL (2012) Quantifying the chemical beauty of drugs. Nat Chem 4:90–98. doi:10.1038/nchem.1243
20. Gaulton A, Bellis LJ, Bento AP, Chambers J, Davies M, Hersey A, Light Y, McGlinchey S, Michalovich D, Al-Lazikani B, Overington JP (2012) ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res 40:1100–1107. doi:10.1093/nar/gkr777. Version 22
21. Goodfellow IJ, Mirza M, Xiao D, Courville A, Bengio Y (2013) An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv:1312.6211
22. Goodfellow I, Bengio Y, Courville A (2016) Deep learning, 1st edn. MIT Press, Cambridge
23. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780. doi:10.1162/neco.1997.9.8.1735
24. Chung J, Gulcehre C, Cho K, Bengio Y (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555
25. SMILES. http://www.daylight.com/dayhtml/doc/theory/theory.smiles.html. Accessed 7 Apr 2017
26. Weininger D, Weininger A, Weininger JL (1989) SMILES. 2. Algorithm for generation of unique SMILES notation. J Chem Inf Comput Sci 29(2):97–101. doi:10.1021/ci00062a008
27. RDKit: open source cheminformatics. Version: 2016-09-3. http://www.rdkit.org/
28. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. CoRR. arXiv:1412.6980
29. Tensorflow. Version: 1.0.1. http://www.tensorflow.org
30. Williams RJ (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach Learn 8(3):229–256
31. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ (eds) Advances in neural information processing systems 27 (NIPS 2014), Montreal, Quebec, Canada. NIPS Foundation, 8–13 Dec 2014
32. Lima Guimaraes G, Sanchez-Lengeling B, Cunha Farias PL, Aspuru-Guzik A (2017) Objective-reinforced generative adversarial networks (ORGAN) for sequence generation models. arXiv:1705.10843
33. Sun J, Jeliazkova N, Chupakin V, Golib-Dzib J-F, Engkvist O, Carlsson L, Wegner J, Ceulemans H, Georgiev I, Jeliazkov V, Kochev N, Ashby TJ, Chen H (2017) ExCAPE-DB: an integrated large scale dataset facilitating big data analysis in chemogenomics. J Cheminform 9(1):17. doi:10.1186/s13321-017-0203-5
34. Sheridan RP (2013) Time-split cross-validation as a method for estimating the goodness of prospective prediction. J Chem Inf Model 53(4):783–790
35. Unterthiner T, Mayr A, Steijaert M, Wegner JK, Ceulemans H, Hochreiter S (2014) Deep learning as an opportunity in virtual screening. In: Deep learning and representation learning workshop. NIPS, pp 1058–1066
36. Mayr A, Klambauer G, Unterthiner T, Hochreiter S (2016) DeepTox: toxicity prediction using deep learning. Front Environ Sci 3:80
37. Jaccard P (1901) Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bull Soc Vaud Sci Nat 37:547–579
38. Rogers D, Hahn M (2010) Extended-connectivity fingerprints. J Chem Inf Model 50(5):742–754. doi:10.1021/ci100050t
39. Butina D (1999) Unsupervised database clustering based on Daylight's fingerprint and Tanimoto similarity: a fast and automated way to cluster small and large data sets. J Chem Inf Comput Sci 39(4):747–750. doi:10.1021/ci9803381
40. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, Prettenhofer P, Weiss R, Dubourg V, Vanderplas J, Passos A, Cournapeau D, Brucher M, Perrot M, Duchesnay E (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830. Version 0.17
41. Silver D, Huang A, Maddison CJ, Guez A, Sifre L, Van Den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489
42. Reutlinger M, Koch CP, Reker D, Todoroff N, Schneider P, Rodrigues T, Schneider G (2013) Chemically advanced template search (CATS) for scaffold-hopping and prospective target prediction for 'orphan' molecules. Mol Inform 32(2):133–138
43. Senger S (2009) Using Tversky similarity searches for core hopping: finding the needles in the haystack. J Chem Inf Model 49(6):1514–1524
44. Zaremba W, Sutskever I, Vinyals O (2014) Recurrent neural network regularization. CoRR. arXiv:1409.2329
45. Wan L, Zeiler M, Zhang S, LeCun Y, Fergus R (2013) Regularization of neural networks using DropConnect. In: Proceedings of the 30th international conference on machine learning, vol 28. ICML'13, pp 1058–1066
46. Bengio Y, Ducharme R, Vincent P, Janvin C (2003) A neural probabilistic language model. J Mach Learn Res 3:1137–1155

Authors' contributions
MO contributed concept and implementation. All authors co-designed experiments. All authors contributed to the interpretation of results. MO wrote the manuscript. HC, TB, and OE reviewed and edited the manuscript. All authors read and approved the final manuscript.

Acknowledgements
The authors thank Thierry Kogej and Christian Tyrchan for general assistance and discussion, and Dominik Peters for his expertise in LaTeX.
The source code and data supporting the conclusions of this article are available at https://github.com/MarcusOlivecrona/REINVENT, doi:10.5281/zenodo.572576.
Project name: REINVENT
Project home page: https://github.com/MarcusOlivecrona/REINVENT
Archived version: http://doi.org/10.5281/zenodo.572576
Operating system: platform independent
Programming language: Python
Other requirements: Python 2.7, Tensorflow, RDKit, Scikit-learn
License: MIT
MO, HC, and OE are employed by AstraZeneca. TB has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 676434, "Big Data in Chemistry" ("BIGCHEM", http://bigchem.eu). The article reflects only the authors' view and neither the European Commission nor the Research Executive Agency (REA) is responsible for any use that may be made of the information it contains.
Hit Discovery, Discovery Sciences, Innovative Medicines and Early Development Biotech Unit, AstraZeneca R&D Gothenburg, 43183, Mölndal, Sweden: Marcus Olivecrona, Thomas Blaschke, Ola Engkvist & Hongming Chen
Correspondence to Marcus Olivecrona.
Additional files: Equivalence to REINFORCE (proof that the method used can be described as a REINFORCE-type algorithm); Generated structures (structures generated by the canonical Prior and different Agents).
Olivecrona, M., Blaschke, T., Engkvist, O. et al. Molecular de-novo design through deep reinforcement learning. J Cheminform 9, 48 (2017). https://doi.org/10.1186/s13321-017-0235-x
nature communications
Evolution of new regulatory functions on biophysically realistic fitness landscapes
Tamar Friedlander, Roshan Prizak, Nicholas H. Barton & Gašper Tkačik ORCID: orcid.org/0000-0002-6699-1455
Nature Communications volume 8, Article number: 216 (2017)
Gene expression is controlled by networks of regulatory proteins that interact specifically with external signals and DNA regulatory sequences. These interactions force the network components to co-evolve so as to continually maintain function. Yet, existing models of evolution mostly focus on isolated genetic elements. In contrast, we study the essential process by which regulatory networks grow: the duplication and subsequent specialization of network components. We synthesize a biophysical model of molecular interactions with the evolutionary framework to find the conditions and pathways by which new regulatory functions emerge. We show that specialization of new network components is usually slow, but can be drastically accelerated in the presence of regulatory crosstalk and mutations that promote promiscuous interactions between network components.
Phenotypes evolve largely through changes in gene regulation1,2,3,4, and such evolution may be flexible and rapid5, 6. Of particular importance are mutations affecting affinity and specificity of transcription factors (TFs) for their upstream signals or for their binding sites, short fragments of DNA that TFs interact with to activate or repress transcription of specific target genes. Mutations in these binding sites or at sites that alter TF specificity are crucial because of their ability to "rewire" the regulatory network—to weaken or completely remove existing interactions and add new ones, either functional or spurious. Emergence of novel functions in such a network will usually be constrained to evolutionary trajectories that maintain a viable pattern of existing interactions. This raises a fundamental question about the effects of such constraints on the accessibility of different regulatory architectures and the timescales needed to reach them. The case that we focus on here is the divergence of gene regulation, which can give rise to a variety of new phenotypes, e.g., via expansion in TF families.
A regulatory function previously accomplished by a single (or several) TF(s) is now carried out by a larger number of TFs, allowing for additional fine-tuning and precision, or, alternatively, for an expansion of the regulatory scope7,8,9,10,11,12,13,14,15,16,17. The main avenue for such expansions are gene duplications18,19,20. Rapid weakening of expression of the duplicates21 or alternatively selection to increase expression22, 23 facilitate the preservation and fixation of duplicates. Gene duplications generate extra copies of the TFs and thus provide the "raw material" for evolutionary diversification. Subsequent specialization of TFs often involves divergence in both their inputs (e.g., ligands) and outputs (regulated genes)3, 24. Examples range from repressors involved in bacterial carbon metabolism that arose from the same ancestor via a series of duplication–divergence events25, and ancestral TF Lys14 in the metabolism of Saccharomyces cerevisiae, which diverged into three different TFs regulating different subsets of genes in Candida albicans 26, to many variants of Lim and Pou-homeobox genes involved in neural development across different organisms27. In some systems the ligand sensing and gene regulatory functions are distributed across two or more molecules, as for bacterial two-component pathways28 and eukaryotic signaling cascades29; here, too, specialization can occur by a series of mutations in multiple relevant components. Immediately following a duplication event, molecular recognition between TFs, their input signals, and their binding sites is specific but undifferentiated between the two TF copies. Under selection to specialize, recognition sequences and ligand preferences of the two TFs can diverge, but only if some degree of matching between TFs and their binding sites is continually retained to ensure network function. Binding sites are thus forced to coevolve in tandem with the TFs. Yet little is known about the resulting limits to evolutionary outcomes, the likelihood of potential evolutionary trajectories, and the relevant timescales; specifically, it is unclear how these quantities of interest depend on important parameters, such as the number of regulated genes, the length and specificity of the binding sites, the correlations between the input signals, etc. Theoretical understanding of TF duplication is still incomplete, with existing models predominantly belonging to two categories. The first category of gene duplication–differentiation models studies subfunctionalization of isolated proteins (e.g., enzymes) that do not have any regulatory role30. When cis-regulatory mutations that control the expression of the duplicated gene are included31,32,33,34, this is done in a simplified fashion, e.g., by a small number of discrete alleles that represent TF-binding sites appearing and disappearing at fixed rates33, 34. Because this approach ignores the essentials of molecular recognition, it cannot model co-evolution between TFs and their binding sites—the topic of our interest. The second category of studies tracks regulatory sequences explicitly and uses a biophysical description of TF–BS (binding site) interactions, properly accounting for the fact that TFs can bind a variety of DNA sequences with different affinities35,36,37. 
In conjunction with thermodynamic models of gene regulation38,39,40,41, this approach has been used to study the evolution of binding sites given a single TF37, 42,43,44,45, while mostly overlooking the issue of TF duplication and subfunctionalization (but see refs 46, 47). Here we synthesize these two frameworks—the biophysical description of gene regulation and the evolutionary modeling of TF specialization—to construct a realistic description of the fundamental step by which regulatory networks have evolved. A biophysical model of this set-up gives rise to complex fitness landscapes that are markedly different from simple forms considered previously; in what follows, we show that realistic landscapes exert a major influence over the evolutionary outcomes and dynamics. The structure of the paper is as follows. We first introduce the basic model with two TFs and two regulated genes, and analyze its steady state distribution of outcomes, showing that the huge genotypic space maps to very few phenotypes. We next analyze the possible dynamical trajectories and timescales leading to each phenotype. Finally, we extend the basic model to a larger number of regulated genes and study the effect of "promiscuity-promoting mutations," i.e., mutations that render TFs less specific for their binding sites. A biophysically realistic fitness landscape In our model, n TF transcription factors regulate n G genes by binding to sites of length L base pairs; for simplicity, we consider each gene to have one such binding site. The specificity of a TF for any sequence is determined by the TF's preferred (consensus) sequence; sequences matching consensus are assigned lowest energy, E = 0, which corresponds to tightest binding, and every mismatch between the consensus and the binding site increases the energy by \(\epsilon \); this additive 'mismatch' model has a long history in gene regulation literature35, 43, 48, 49. The equilibrium probability that the binding site of gene j (j = 1,…,n G ) is bound by active TFs of any type i (i = 1,…,n TF) is a proxy for the gene expression level and is given by the thermodynamic model of gene regulation38, 50: $${p_{jm}}\left( {\left\{ {{k_{ij}}} \right\},\left\{ {{C_i}(m)} \right\}} \right) = \frac{{{\sum}_i {{C_i}(m){e^{ - \epsilon {k_{ij}}}}} }}{{1 + {\sum}_i {{C_i}(m){e^{ - \epsilon {k_{ij}}}}} }},$$ where C i (m) is dimensionless concentration of active TFs of type i in condition m, k ij is the number of mismatches between the consensus sequence of the i-th TF species and the binding site of the j-th gene, and \(\epsilon \) is the energy per mismatch in units of k B T. Concentration C i (m) of active TFs depends on condition m, which can represent either time or space (e.g., during developmental gene expression programs) or a discrete external environment (e.g., the presence/absence of particular chemical signals). The simplest case considered here assumes the existence of two such signals that can be either present or absent, in any combination, for a total number of four possible environments (m = 00, 01, 10, 11), occurring with probabilities α m ; an important parameter will be the correlation, −1 ≤ ρ ≤ 1, between the two signals. Each TF has two binary alleles, σ i ∈[00, 01, 10, 11], determining its specificity for the two signals. If the TF i is responsive to a signal and that signal is present in environment m, then its active concentration C i (m) = C 0; otherwise, C i (m) = 0. 
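To illustrate Eq. (1) numerically, the sketch below computes the occupancy of a binding site from the mismatch counts and active TF concentrations. The concentration scale C0 = 100 and the per-mismatch energy of 3 k_BT are assumed illustrative values (the text only requires that consensus sites are near saturation while fully mismatched sites are essentially empty).

```python
# Sketch of Eq. (1): probability that the binding site of gene j is bound by
# any active TF, given mismatches k_ij, energy per mismatch eps (in k_B T),
# and dimensionless active TF concentrations C_i(m) in environment m.
import numpy as np

def p_bound(k_j, C_m, eps=3.0):
    """k_j: mismatches of gene j's site with each TF; C_m: active TF concentrations."""
    w = np.asarray(C_m) * np.exp(-eps * np.asarray(k_j))  # Boltzmann weights per TF
    return w.sum() / (1.0 + w.sum())

C0 = 100.0  # assumed saturating concentration scale
print(p_bound([0, 5], [C0, C0]))   # consensus match to TF1 -> near full induction
print(p_bound([5, 5], [C0, C0]))   # both TFs strongly mismatched -> near zero
```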
Given constants C 0, \(\epsilon \), and the genotype \({\cal D}\)—comprising TF consensus and binding site sequences as well as TF sensitivity alleles σ i —the thermodynamic model of Eq. (1) fully specifies expression levels for all genes in all environments (Supplementary Note 1). Figure 1a, b illustrates this set-up for a simple case n TF = n G = 2, assuming that the two copies of the TF emerged through an initial gene duplication event and are fixed in the population. The original TF regulates two downstream genes by binding to their binding sites. It is sensitive to both external signals, which can be present with a varying degree of correlation (Fig. 1a). After duplication, three types of mutation can occur, as shown in Fig. 1c: point mutations in the binding sites (rate μ), mutations in the TF coding sequence that change TF's preferred (consensus) specificity (rate r TF μ) and mutations in the two signal-sensing alleles (rate r S μ), which can give each TF specificity to both signals, to one of them, or to neither. An example in Fig. 1d shows the state of the system after several mutations have affected the degree of (mis)match between the TFs and the binding sites, k ij ; an especially important quantity that tracks the overall divergence of the TF specificity is denoted as M, the match between the two TF consensus sequences. Schematic of the model. a Simplified physiology of signal transduction: external signaling molecules (red and green squares) are sensed by the cell (1), activate transcription factors inside the cell (2), which in turn activate the corresponding downstream genes (3). The temporal/spatial appearance of the two external signals can be correlated to different extent, as measured by correlation coefficient, ρ. These signals can correspond to different time periods in development, spatial regions in the organism or tissue, or external conditions/ligands. b TF, initially responsive to two external signals (red and green 'slots') and regulating two genes, duplicates and the additional copy fixes in the population. Immediately after duplication, the two copies are undifferentiated. c Various mutation types that can occur post-duplication with their associated rates. d After accumulating several mutations, the pattern of mismatches between TF consensus sequences and the binding sites is reflected in new values of {k ij }, which determine the activation levels of the two genes according to Eq. (1). M, the number of matches between the consensus sequences of the two TFs (with a value between 0 and L), keeps track of the overall divergence of the TF specificities. For a list of model parameters and baseline values see Supplementary Table 1 To complete the evolutionary model, a fitness function is required. We assume selection for the genes to acquire distinct expression patterns in response to external signals, and thus define this fully specialized state as having the highest fitness in our model. Specifically, we penalize the deviations in actual gene expression, p jm , from the ideal expression levels, \(p_{jm}^*\): $$F = - s \displaystyle {\sum_j} {\sum_m} \alpha_m \beta_{jm} {\left( p_{jm} - p_{jm}^{*} \right)}^2 ,$$ where the ideal expression level \(p_{jm}^*\) is 1 (fully induced) for the first gene if signal 1 is present and the expression is 0 (not induced) otherwise, and similarly for the second gene; β jm can be used to vary the relative weight of different errors (e.g. 
of a gene being uninduced when it should be induced and vice versa, see Supplementary Note 3), and s is the selection intensity. Importantly, selection does not directly depend on the TFs, but only on the expression state of the genes they regulate; genes, however, can only be expressed when TFs bind to proper binding sites, implicitly selecting on TFs. For this reason it is also very easy to generalize our model to regulation by repressor TFs, a case we explore in Supplementary Note 2. We consider mutation rates to be low enough that a beneficial mutation fixes before another beneficial mutation arises51, allowing us to assume that the population is almost always fixed. The probability that the population occupies a particular genotypic state, \(P({\cal D},t)\), evolves according to a continuous-time discrete-space Markov chain that specifies the rate of transition between any two genotypes. The transition rates are a product between the mutation rates between different states and the fixation probability that depends on the fitness advantage a mutant has over the ancestral genotypes43, 52. The size of genotype space is high-dimensional but still tractable, because our model only requires us to keep track of mismatches and not full sequences, i.e., to write out the dynamical equations for the reduced-genotypes, \({\cal G} = \left\{ {M,{k_{ij}},{\sigma _i}} \right\}\). Standard Markov chain techniques can then be used to compute the evolutionary steady state, first hitting times to reach specific evolutionary outcomes, or to perform stochastic simulations (Supplementary Methods). Figure 2 shows the interplay of biophysical constraints that give rise to a realistic fitness landscape for our problem. Given a match, M, between two TF consensus sequences, only certain combinations of mismatches, (k 1j ,k 2j ), of the TFs with each of the two binding sites are possible. A particular allowed combination can be realized by different numbers of genotypes, as shown in Fig. 2a, providing a detailed account of the entropy of the neutral distribution. For each of the four environments, Eq. (1) predicts gene expression at every pair of mismatch values (Fig. 2b); together with the probabilities of different environments occurring, the gene expression pattern determines the genotypes' fitness, F. TF specialization then unfolds on this landscape by different types of mutations (e.g., Fig. 2c). Although the landscape is complex and high-dimensional, it is highly structured and ultimately fully specified by only a handful of biophysical parameters. Furthermore, because of the sigmoidal shape of binding probability as a function of mismatch k (Eq. (1)), it is possible to assign phenotypes of 'strong' and 'weak' binding to every TF–BS interaction, allowing us to depict network interactions graphically, as shown in Fig. 2d, and to classify the possible macroscopic evolutionary outcomes, as we will show next. Biophysical and evolutionary constraints shape the genotype–phenotype-fitness map after TF duplication. a Match, M, between transcription factor consensus sequences (here, of length L = 5), constrains the possible mismatch values, k 1j , k 2j , between the gene's binding site and either TF. For example, when the two TFs are identical (M = L = 5, bottom left), they must have equal mismatches with all genes (k 1j = k 2j ). Some combinations of mismatches are impossible given M (white), while others are realized by different numbers of genotypes (grayscale). 
b Expression level (color) for a regulated gene given all mismatch combinations, k 1j , k 2j , at M = 3. Impossible mismatch combinations are colored white. Each of the four panels shows expression levels in four possible environments, m = 00, 10, 01, 11. Fitness F depends on the structure of mismatches a, the biophysics of binding b, and the frequencies of different environments, α m . Here we choose α so that the marginal probability of each input signal f 1,2 is always f 1 = f 2 = \(\frac{1}{2}\) but the correlation can be varied, and assign weight β jm = 1 whenever the gene should be induced but is not, and β jm = \(\frac{1}{2}\) when it is induced when it should not. The general case when f 1 ≠ f 2 ≠ 0.5 is analyzed in Supplementary Note 2. c A single point mutation, e.g., a change in one TF's binding specificity from 'T' to 'G', can simultaneously affect the match, M, and either increase, decrease, or leave intact the mismatches, k 11 and k 12, that determine fitness. d TF–BS interactions with mismatch k that is low enough to ensure a high binding probability (p > 1/2) are assigned to a "strong binding" phenotype (solid link); conversely, p < 1/2 is a 'weak binding' phenotype (dotted link) Evolutionary outcomes in steady state Evolutionary outcomes in steady state are determined by a balance between selection and drift. The steady state distribution over reduced-genotypes is 53 $${P_{{\rm{SS}}}}({\cal G}) = P({\cal G},t \to \infty ) = {P_0}({\cal G}){\rm{exp}}(2NF({\cal G})),$$ where P 0 is the neutral distribution of genotypes and N is the population size. Eq. (3) is similar to the energy/entropy balance of statistical physics42, 54, with fitness F playing the role of negative of energy and log P 0 the role of entropy; in our model, both of these quantities are explicitly computable, as is the resulting steady state distribution. Understanding the high dimensional distribution over genotypes is difficult, but classification of individual TF–BS interactions into "strong" and "weak" ones, as described above, allows us to systematically and uniquely assign every genotype to one of a few possible macroscopic outcomes, or "macrostates," graphically depicted in Fig. 3a and defined precisely in Supplementary Note 1. Thus, in the 'No Regulation' state, input signals are not transduced to the target genes, either because TF–BS mismatches are high and there is no binding or because TFs themselves lose responsiveness to the input signals; in the 'One TF Lost' state, a single TF regulates both genes (as before duplication), while the other TF is lost, i.e., its specificity has diverged so far that it does not bind any of the sites; the 'Specialize Binding' state corresponds to each TF regulating its own gene without cross-regulating the other but the signal sensing domains are not yet signal specific, as they are in the 'Specialize Both', the state which we have defined to have the highest fitness. Finally, the 'Partial' macrostate predominantly features configurations where each of the TFs binds at least one binding site, but one of the TFs still binds both sites or retains responsiveness for both input signals; functionally, these configurations lead to large "crosstalk," where input signals are non-selectively transmitted to both target genes. Steady state evolutionary outcomes of TF duplication. a Left: evolutionary macrostates (see text) depicted graphically as network phenotypes with solid (dashed) lines indicating strong (weak) TF–BS interactions. 
Red and green squares in the TFs represent the corresponding signal sensing domains. Right: input–output table, where columns represent the presence of either (red or green) external signal and rows represent the resulting gene activation for each phenotype. b (Top) Distribution of fitness values across genotypes in each macrostate (color-coded as in a), shown as violin plots, for two values of signal correlation, ρ. Black dots = median fitness in the macrostate. (Bottom) The number of genotypes in each macrostate (logarithmic scale). c Most probable outcome of gene duplication in steady state (color-coded as in a), as a function of selection strength, Ns, and the correlation between two external signals, ρ. d Free fitness \(\hat F\) (at Ns = 25) for different macrostates as a function of correlation between signals, ρ: for most macrostates, free fitness increases with signal correlation, except for 'No regulation', which is naturally unaffected by it, and 'Specialize Both', which dominates for low correlation values. e The dominant macrostate (as in c), as a function of the signal frequencies, f 1, f 2, and the signal correlation, ρ, at fixed Ns = 25. For simplicity we plot only cases where f 1 = f 2. Signals in the hashed region are mathematically impossible. f Steady state distributions for mismatches (P SS(k ij | σ 1 = 10, σ 2 = 01), upper row) and the match between the two TF consensus sequences (P SS(M | σ 1 = 10, σ 2 = 01), lower left), under strong selection (red; at baseline parameters denoted by the red cross in c) and neutrality (blue; Bernoulli distributions). Comparison between analytical calculation and 400 replicates of the stochastic simulation (lower right). Here and in subsequent figures, baseline parameter values are L = 5, \(\epsilon \) = 3, r S = r TF = 1 Ultimately, these macrostates are the functional network phenotypes that we care about. The number of genotypes in each macrostate, however, can vary by orders of magnitude; for example, the 'No Regulation' state is larger by ∼104 relative to the high-fitness 'Specialize Both' state, for our baseline choice of parameters (L = 5, \(\epsilon \) = 3). Selection can act against this strong entropic bias, and the distribution of fitness values across genotypes within each macrostate is shown in Fig. 3b. Clearly, the mean or median fitness within each macrostate is a poor substitute for the detailed structure of fitness levels that depend nonlinearly on TF–BS mismatches and the degeneracy of the sequence space. Unlike the entropic term in Fig. 3b, fitness also depends on the statistics of the environment, α m , and in particular, the correlation ρ between the two signals. For example, when the signals are strongly correlated, the 'Initial' state right after duplication or the 'One TF Lost' state can achieve quite high fitnesses, since responding to the wrong signal or having a high degree of crosstalk will still ensure largely appropriate gene expression pattern in all likely environments. In contrast, at strong negative correlation, many genotypes in 'Specialize Binding' and 'Initial' states will suffer a large fitness penalty because their sensing domains are not specialized for the correct signals, while the 'Specialize Both' state will have high fitness regardless of the environmental signal correlation. How do fitness and entropy combine to determine macroscopic evolutionary outcomes? Fig. 3c shows the most probable macrostate as a function of selection strength and signal correlation (Supplementary Note 2). 
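The balance expressed by Eq. (3) can be illustrated with a toy calculation (made-up genotype counts and fitness values, not the paper's enumeration): each macrostate's steady-state probability is the sum over its genotypes of the neutral weight times exp(2NF), so a rare but fit macrostate can dominate vastly more numerous but unfit ones once Ns is large enough.

```python
# Toy illustration of the fitness/entropy balance in Eq. (3) at Ns = 25.
import numpy as np

N, s = 1000, 0.025   # assumed population size and selection coefficient (Ns = 25)
# toy macrostates: (number of genotypes, typical fitness F)
macrostates = {
    "No Regulation":   (1e6, -1.0 * s),
    "One TF Lost":     (1e4, -0.5 * s),
    "Specialize Both": (1e2,  0.0 * s),
}
total = sum(n for n, _ in macrostates.values())
weights = {name: (n / total) * np.exp(2 * N * F)
           for name, (n, F) in macrostates.items()}
Z = sum(weights.values())
for name, w in weights.items():
    print(f"{name:15s} P_SS ~ {w / Z:.3f}")   # 'Specialize Both' dominates despite its rarity
```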
At weak selection, specific TF–BS interactions cannot be maintained against mutational entropy and the system settles into the most numerous, 'No Regulation' state. Higher selection strengths can maintain a limited number of TF–BS interactions in 'Partial' states. Beyond a threshold value for Ns, the evolutionary outcome depends on the signal correlation: when signals are anti-correlated or weakly correlated, the TFs reach the fully specialized state, whereas high positive correlation favors losing one TF and having the remaining TF regulate both genes and respond to both signals. As signal correlation increases, so does the selection strength required to support full specialization. Detailed insight at a fixed value of Ns is provided by plotting the free fitness \(\hat F\), as in Fig. 3d, which combines the fitness and the entropy of the neutral distribution from Fig. 3b into a single quantity that determines the likelihood of each macrostate given ρ; the macrostate with highest free fitness is shown as the most probable outcome in Fig. 3c for Ns = 25, but free fitness also allows us to see, quantitatively, how much more likely the dominant macrostate is relative to other outcomes. Figure 3e examines the case where not only the correlation, ρ, but also the frequencies, f 1, f 2, of encountering both signals are varied: for low frequencies, even selection strength of Ns = 25 is insufficient to maintain TF specificity against drift, while for high frequencies and positive correlation one TF is lost while the remaining TF regulates both genes (Supplementary Note 2). The map of evolutionary outcomes is very robust to parameter variations. The energy scale of TF–DNA interactions is that of hydrogen bonds: \(\epsilon \sim 3\) (in k B T units), consistent with direct measurements. The scale of C 0 is set to ensure that consensus sites are occupied at saturation while fully mismatching sites are essentially empty. The only remaining important biophysical parameter is L, the length of the binding sites. As expected, increasing L expands the regions of 'No Regulation' and 'Partial' at low Ns, due to entropic effects. Surprisingly, however, one can demonstrate that the important boundary between the 'Specialize' and 'One TF Lost' states is independent of L; furthermore, the map in Fig. 3c is exactly robust to the overall rescaling of the mutation rate, μ, and even to separate rescaling of individual rates r S, r TF. TFs can also act as repressors, whereby a gene is active unless a repressor binds its binding site and inhibits its expression. The analysis in that case is very similar to the activator case. The evolutionary outcomes differ only if the penalties β jm are non-uniform. Specifically, we consider that unnecessary gene activation incurs a lower penalty β jm than does failure to activate a gene when needed. Due to the scarcity of genotypes allowing for TF–BS binding compared to the abundance of genotypes for which no binding occurs, this effectively scales the selection pressure, such that higher selection pressure values Ns are required to obtain the more specialized macrostates 'Partial' followed by 'Specialize Both' (Supplementary Note 2). We compare the steady-state marginal distributions of TF–BS mismatches and the match, M, between the two TFs, under strong selection to specialize (Ns = 25) vs neutral evolution (Ns = 0). Mismatch distributions for k 11 and k 21 in Fig. 
3f display a clear difference in the two regimes: strong selection favors a small mismatch of the BS with the cognate TF, sufficient to ensure strong binding but nonzero due to entropy, and a large mismatch with the noncognate TF, to reduce crosstalk. Surprisingly, however, the distribution of matches M between two TF consensus sequences shows only a tiny signature of selection, with both distributions peaking around one match. As a consequence, inferring selection to specialize from measured binding preferences of real TFs might not be feasible with realistic amounts of data. Evolutionary dynamics and fast pathways to specialization Next, we focus on evolutionary trajectories and the timescales to reach the fully specialized state after gene duplication. An example trajectory is shown in Fig. 4a: the two TFs start off identical (with maximal match, M = L = 5) until, as a result of the loss of specificity for both signals, TF1 starts to drift, diverging from TF2 (sharply decreasing M in 'One TF Lost' state) and losing interactions with both binding sites. Subsequently TF1 reacquires preference to the red signal, which drives the reestablishment of TF1 specificity for one binding site during a short 'Specialize Binding' epoch, followed quickly by the specialization of TF2 for the green signal at the start of 'Specialize Both' epoch of maximal fitness. Slow and fast pathways to TF specialization. a Temporal traces of TF–TF match M (top), and TF–BS mismatches k ij (middle: TF1, bottom: TF2) with the corresponding signal specificity mutations denoted on dashed lines, for one example evolutionary trajectory at baseline parameters. Macrostates are color-coded as in the top legend and Fig. 3. b Average dynamics of fitness NF (blue, left scale) and TF–TF match M (red, right scale). For every timepoint, the dominant macrostate is denoted in color. c Snapshots of dominant macrostates (at increasing time post-duplication as indicated in the panels), shown for different combinations of selection strength Ns and signal correlation ρ as in Fig. 3. Contours mark dwell times in the dominant macrostates (in units of μ−1). Red cross=baseline parameters. d Schematic of the two alternative pathways to specialization. τ slow and τ fast are the total times to specialization for the 'slow' and the 'fast' pathway, respectively. e Relative duration of the two pathways, as a function of binding site length L (gray line, top axis), TF consensus sequence mutation rate r TF (red), and signal domain mutation rate r S (blue, bottom axis). Pie charts indicate the fraction of slow (pink) and fast (green) pathways at each parameter value Dynamics of the TF–TF match, M, and the scaled fitness, NF, become smooth and gradual when discrete transitions and the consequent large jumps in fitness are averaged over individual realizations, as in Fig. 4b. Importantly, we learn that the sequence of dominant macrostates leading towards the final (and steady) state, 'Specialize Both', involves a long intermediate epoch when the system is in the 'One TF Lost' state. We examine this sequence of most likely macrostates in detail in Fig. 4c, and visualize it analogously to the map of evolutionary outcomes in steady state shown in Fig. 3c. High Ns and correlation (ρ) values favor trajectories passing through the 'One TF Lost' state, while intermediate Ns (5 ≲ Ns ≲ 20) and low correlation values enable transitions through 'Partial' macrostate; along the latter trajectory, the binding of neither TF is completely abolished. 
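As a toy illustration of how dwell and first-hitting times can be extracted from a coarse-grained macrostate description (the paper works with the full genotype-level Markov chain; the rates below are invented for illustration only), mean times to reach the specialized state satisfy a small linear system over the non-absorbing states.

```python
# Toy continuous-time Markov chain over macrostates; mean first-hitting times
# to an absorbing 'Specialize Both' state solve Q_sub h = -1 (times in 1/mu).
import numpy as np

states = ["One TF Lost", "Partial", "Specialize Both"]
Q = np.array([
    [-0.02,  0.01,  0.01],   # slow exits from 'One TF Lost' (slow pathway)
    [ 0.05, -0.25,  0.20],   # 'Partial' resolves quickly, mostly forward (fast pathway)
    [ 0.00,  0.00,  0.00],   # specialized state treated as absorbing here
])
h = np.linalg.solve(Q[:2, :2], -np.ones(2))
for name, t in zip(states[:2], h):
    print(f"mean time to specialization from {name!r}: {t:.1f} (1/mu)")
```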
Typical dwell times in dominant states, indicated as contours in Fig. 4c, suggest that specialization via the 'One TF Lost' state should be slower than through the 'Partial' state, which is best seen at t = 1/μ, where specialization has already occurred at intermediate Ns and low, but not high, ρ values. It is easy to understand why pathways towards specialization via the 'One TF Lost' state are slow. As the example in Fig. 4a illustrates, so long as one TF maintains binding to both sites and thus network function (especially when signals are strongly correlated), the other TF's specificity will be unconstrained to neutrally drift and lose binding to both sites, an outcome which is entropically highly favored. After the TF's sensory domain specializes, however, the binding has to re-evolve essentially from scratch in a process that is known to be slow45 unless selection strength is very high. In contrast to this 'Slow' pathway, the 'Fast' pathway via the 'Partial' state relies on sequential loss of "crosstalk" TF–BS interactions, with the divergence of TF consensus sequences followed in lock-step by mutations in cognate binding sites. Specifically, the likely intermediary of the fast pathway is a 'Partial' configuration in which the first TF responds to both signals but only regulates one gene, whereas the second TF is already specialized for one signal, but still regulates both genes. The fast and the slow pathways are summarized in Fig. 4d. A detailed analysis (Supplementary Note 4) reveals how different biophysical and evolutionary parameters change the relative probability and the average duration (Fig. 4e) of both pathways. For example, increasing the length, L, of the binding sites favors the slow pathway as well as drastically increases its duration, leading to very slow evolutionary dynamics. In contrast, time to specialize via the fast pathway is unaffected by an increase in L. Increasing the rate of TF-specificity-affecting mutations, r TF, has a qualitatively similar effect, while increasing the mutation rate affecting the sensory domain, r S, favors the fast pathway. Indeed, in the limit when r S is much larger than the other two mutation rates, the sensing domain specializes almost instantaneously, making the complete loss of binding by either TF very deleterious and thus avoiding the 'One TF Lost' state; the adaptation dynamics is initially rapid, with binding sites responding to diverging TF consensus sequences, and subsequently slow, when TF consensus sequences further minimize their match, M, in a nearly neutral process. Promiscuity-promoting mutations Typically, each TF must regulate more than one target gene. As the number of regulated genes per TF (n G /n TF) increases, intuition suggests that the evolution of the TF's consensus sequence should become more and more constrained: while a mutation in an individual binding site can lower the total fitness by increasing mismatch and thereby impeding TF–BS binding, a single mutation in the TF's consensus has the ability to simultaneously weaken the interaction with many binding sites, leading to a high fitness penalty. Our analysis of the biophysical fitness landscape confirmed that the landscape gets progressively more frustrated as the number of regulated genes per TF increases, due to the explosion of constraints that TFs have to satisfy to ensure the maintenance of functional regulation (Supplementary Note 5). Consequently, one can expect extremely long times to specialization. 
How can it nevertheless proceed at observable rates? Energy matrices for many real TFs display 'promiscuous' specificity where, at a particular position within the binding site, binding to multiple nucleotides is equally preferable. We wondered how our findings would be affected if consensus sequence specificity of the TFs could pass through such intermediate promiscuous states. Figure 5a shows how TF consensus sequence and the corresponding binding site can co-evolve using point mutations, or using the new "promiscuity-promoting" mutation type for the TF: promiscuity-promoting mutation renders one position in the recognition sequence of the TF insensitive to the corresponding DNA base in the binding site (Supplementary Note 6). Evolutionary pressure on the binding sites is therefore temporarily relieved, until the specificity of the TF is re-established by a back mutation. Without promiscuity-promoting mutations, TF–BS co-evolution must proceed in a tight sequence of compensatory mutations; with promiscuity-promoting mutations, such a precise sequence is no longer required, although one extra mutation is needed to reestablish high TF–BS specificity. With promiscuity, the fraction of deleterious mutations along the evolutionary path towards specialization is reduced, an effect that grows stronger with increasing L. As shown in Fig. 5b, this has drastic effects on the time to specialization. Without promiscuity, increasing the selection strength, Ns, decreases the required time when each TF regulates one gene, as expected for a landscape with large neutral plateaus but with no fitness barriers. For n G > 2, however, the landscape develops barriers that need to be crossed, and evolutionary time starts increasing with Ns. In contrast, promiscuity enables fast emergence of TF specialization even with multiple regulated genes in a broad range of evolutionary parameters (although there are also costs due to high promiscuity). Promiscuity-promoting mutations speed up specialization with multiple regulated genes per TF. a In the absence of promiscuity-promoting mutations, a compensatory series of point mutations in the TF's consensus (upper sequence) and its binding site (lower sequence) is needed to maintain TF–BS specificity (top; light red). Alternatively, in the presence of promiscuity-promoting mutations in the TF consensus, a position in the TF's recognition sequence (marked by a star) can lose and later regain sequence specificity (middle; light yellow). Promiscuity decreases the fraction of deleterious mutations along typical pathways to specialization (bottom, computed using baseline parameters). b Time to specialization as a function of selection strength, Ns, without (left) and with (right) promiscuity promoting mutations in the TF, for different numbers of regulated genes per TF, n G (color). Numbers in gray (right) denote the speed-up ratio The role that the shape of a fitness landscape plays for the dynamics and the final outcomes of evolution has been appreciated in population genetics for a long time. This has stimulated a large body of theoretical research into evolution on toy model landscapes55, 56, as well as motivated efforts to map out real, small-scale landscapes experimentally. For limited classes of problems, mostly those involving molecular recognition, biophysical constraints are informative enough to permit computational exploration of complex landscapes. 
Such is the case for the secondary structure of RNA57, antibody–antigen interactions58, protein–protein interactions59, and TF–DNA binding60, explored here. We exploit this prior knowledge to construct a fitness landscape for a more complicated evolutionary event, the specialization of two TFs after duplication, a key evolutionary step by which gene regulatory networks expand. The biophysical model naturally captures a number of essential features, without having to introduce them 'by hand': the fact that specialization is driven by avoidance of regulatory crosstalk; the importance of the mutational entropy; the dependence on number of downstream genes; the existence of transient network configurations preceding specialization, which crucially impact dynamics; and the importance for evolutionary outcomes of the statistical properties of the signals that TFs respond to. Importantly, the expressive power of our framework does not come at increased modeling cost: while complex, the fitness landscape is still determined only by a few, mostly known, parameters, and an exponentially large space of genotypes can be systematically coarse grained to a small set of functional network phenotypes. This combination of biophysical and co-evolutionary approaches is applicable generally to the evolution of molecular interactions, e.g., in protein interaction networks. In steady state, our results robustly identify correlation between the environmental signals that drive TFs as a key determinant for specialization, as shown in Fig. 3c–e. Unless the new signal, for which a post-duplication TF can specialize, is sufficiently independent (uncorrelated) from the existing signals that the regulatory network processes, one TF copy will be lost due to drift. As a consequence, the effective dimensionality of environmental signals dictates the complexity of genetic regulatory networks61, reminiscent of information-theoretic tradeoffs in sensory neuroscience62; in evolutionary terms, selection to maintain complex regulation needs to withstand the mutational flux into vastly more numerous but less functional network phenotypes. Recently, it has been shown that finite biochemical specificity also limits the complexity of genetic regulatory networks63; an interesting direction for future research is to understand how the balance between regulatory crosstalk, environmental signal statistics, and evolutionary constraints ultimately determines the number of TFs that can be stably maintained. A related question concerns the expected match between pairs of TFs in a large network as a signature of selection for specialized function; for an isolated pair of TFs, our results in Fig. 3f predict only a tiny deviation from neutrality. Timescales and pathways to specialization are completely shaped by the properties of the biophysical fitness landscape, and thus cannot be captured by simple allelic models that ignore the topology of the sequence space (Supplementary Note 7). We show that the fast pathway to specialization transitions through 'Partial' states where neither of the two TFs completely loses binding. Interestingly, it is exactly the existence of crosstalk interactions that permits fast adaptation via these transient states, by maintaining the network function through one TF, while the other is free to diverge in a series of mutations to the TF and its future binding site64. 
Crosstalk thus enables some amount of network plasticity during early adaptation, yet is ultimately selected against, when TFs become fully specialized65, 66. In the protein–protein-interaction literature, 'Partial' states are sometimes referred to as promiscuous states, and they have been suggested as evolutionarily accessible intermediaries that relieve the two interacting molecules of the need to evolve in a tight (and likely very slow) series of compensatory mutations67. In contrast to the fast pathway, the slow pathway involves a complete loss of TF–BS binding interactions; the long timescale emerges from long dwell times while the TF and the binding sites evolve in a nearly neutral landscape before TF–BS specificity is reacquired. Long-binding sites and (perhaps counter-intuitively) fast TF mutation rates favor the slow pathway, while fast sensing domain mutation rates favor the fast pathway. The situation changes qualitatively when each TF regulates more genes68. On the one hand, entropy makes pathways that pass through the 'One TF Lost' state dynamically uncompetitive, as multiple binding sites would have to emerge de novo to reestablish interactions with a diverged TF. This would favor fast pathways through 'Partial' states. On the other hand, the biophysical fitness landscape develops frustration (or sign epistasis) as n G > 2 and the timescales to specialization lengthen with increasing selection strength when passing through 'Partial' states. We demonstrate that frustration is relieved by promiscuity-promoting mutations in the TF, enabling fast emergence of specialization even with multiple-regulated genes. Recent experimental works have demonstrated how a combination of cis and trans mutations can rewire gene regulatory networks allowing for the emergence of new functions via transient and promiscuous configurations, in accordance with our model15. While we focused on a specific evolutionary scenario involving TF duplication, gene regulatory networks can rewire in numerous other ways. For example, Sayou et al. studied the evolution of TF–DNA binding specificity while the TF remains present in a single copy14. Duplicated TFs can also be re-used in ways that are different from what we considered26. Our results do, however, make predictions for expected timescales to reach different network configurations after gene duplication, which can be compared to bioinformatic data; alternatively, genomic data on TF duplication events could be used to infer selection pressures favoring regulatory divergence. Taken together, our results paint a picture of TF specialization that most likely proceeds through intermediate states with high crosstalk, in which one TF has already specialized for its input signals but not yet for the target genes, while the other TF is not yet specialized for the input signals but only regulates one gene. In addition, these intermediate states are likely to be more promiscuous, binding different sites with the same affinity, with the promiscuity reverting to specific binding towards the end of specialization. This picture is qualitatively different from the paradigmatic idea of a simple and sequential progression of compensatory mutations in the TF and its binding sites46, 69. It depends fundamentally on the biophysical model of TF–BS interactions, predicts significantly faster specialization times, as well as the existence of promiscuous TF variants that are starting to be observed in genomic analyses of duplication-specialization events14, 15. 
We consider mutation rates to be low enough that a beneficial mutation fixes before another beneficial mutation arises51, allowing us to assume that the population is almost always captured by a single genotype. The probability that the population occupies a particular genotypic state, \(P({\cal D},t)\), evolves according to a continuous-time, discrete-space Markov chain. Transition rates between states are a product of the mutation rates between different genotypes and the fixation probabilities, which depend on the fitness advantage a mutant has over the ancestral genotype43, 52: \(r_{xy} = 2N\mu_{xy}\Phi_{y \to x}\), where N is the population size, \(\mu_{xy}\) is the mutation rate from genotype y to x, and \(\Phi_{y \to x}\) is the probability of fixation of a single copy of x in a population of y. Our model only requires us to keep track of mismatches and not full sequences (i.e., the reduced genotypes \({\cal G} = \{M, k_{ij}, \sigma_i\}\)), which significantly reduces the dimensionality of the genotype space. This framework allows for calculation of the steady-state distribution of genotypes or reduced genotypes, Eq. (3), and for classification of genotypes into relevant macrostates. To calculate the neutral distribution P0 of the reduced genotypes (the distribution in the absence of selection), we enumerate the number of possible BS sequences j that have mismatch values (k_1j, k_2j) with respect to two TFs that match each other at M out of L consensus positions:
$$N_{\rm seq}\left(k_1, k_2 \,|\, M\right) = \sum_{j_0 = j_0^{\rm min}}^{j_0^{\rm max}} \binom{M}{j_0} 3^{M - j_0} \binom{L - M}{L - j_0 - k_1} \binom{j_0 + k_1 - M}{L - j_0 - k_2} 2^{k_1 + k_2 + 2 j_0 - L - M},$$
$$j_0^{\rm min} = \max\!\left(\max\left(0,\, M - \min(k_1, k_2)\right),\, \left\lceil \frac{L + M - k_1 - k_2}{2} \right\rceil\right), \qquad j_0^{\rm max} = \min\!\left(M,\, L - \max(k_1, k_2)\right).$$
The neutral distribution (up to a proportionality constant) equals
$$P_0(x) \sim N_{\rm seq}\left(k_{11}, k_{21} \,|\, M\right) N_{\rm seq}\left(k_{12}, k_{22} \,|\, M\right) \binom{L}{M} 3^{L - M}.$$
We iterate this calculation for various parameter combinations (Ns, ρ, f_{1,2}). For each, we determine the most probable macrostate at steady state (Supplementary Note 2), as illustrated in Fig. 3. To determine the evolutionary dynamics, we numerically integrate \(P({\cal G},t)\) in time steps corresponding to one generation, t_g:
$$P\left({\cal G}, t + t_g\right) = P({\cal G},t) + {\bf{R}}\, t_g\, P({\cal G},t),$$
where R is the Markov chain transition-rate matrix. Again, at every time point we determine the most probable macrostate (Supplementary Note 2), as illustrated in Fig. 4. To follow different pathways to specialization and the timescale to reach each, we calculate the mean first hitting time \(T_{S \leftarrow x}\) from any reduced genotype x to a subset of reduced genotypes S, using the recursive equation
$$T_{S \leftarrow x} = t_g + \sum_y a_{yx} T_{S \leftarrow y},$$
where a_yx are the elements of the transition probability matrix A = I + R t_g.
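To make the propagation and hitting-time calculations above concrete, the following minimal Python sketch uses a hypothetical four-state toy model (not the paper's actual reduced-genotype space or rate matrix): it propagates P(G, t) one generation at a time via P(t + t_g) = P(t) + R t_g P(t) and solves the first-hitting-time recursion as a linear system.

import numpy as np

# Toy illustration of the propagation and mean-first-hitting-time recursion
# described above. The 4-state rate matrix below is hypothetical; in the paper
# the states are coarse-grained (reduced) genotypes and R is built from
# mutation rates and fixation probabilities, r_xy = 2*N*mu_xy*Phi_{y->x}.

t_g = 1.0                       # one generation per time step
n_states = 4
target = [3]                    # hypothetical fully specialized macrostate

# Column x holds the rates out of state x into each state y (off-diagonal);
# the diagonal is chosen so that each column sums to zero.
R = np.array([
    [-0.020,  0.005,  0.000,  0.000],
    [ 0.015, -0.015,  0.004,  0.000],
    [ 0.005,  0.008, -0.010,  0.000],
    [ 0.000,  0.002,  0.006,  0.000],
])

# Propagate P(G, t + t_g) = P(G, t) + R * t_g * P(G, t).
P = np.array([1.0, 0.0, 0.0, 0.0])      # population starts in state 0
for _ in range(5000):
    P = P + R @ P * t_g
print("occupancies after 5000 generations:", np.round(P, 3))

# Mean first hitting time to the target set S:
# T_x = t_g + sum_y a_yx * T_y, with T_y = 0 for y in S and A = I + R*t_g.
A = np.eye(n_states) + R * t_g           # one-generation transition matrix
others = [x for x in range(n_states) if x not in target]
# M[i, j] = probability of going from state others[i] to state others[j]
M = np.array([[A[y, x] for y in others] for x in others])
T_others = np.linalg.solve(np.eye(len(others)) - M, t_g * np.ones(len(others)))
for x, T in zip(others, T_others):
    print(f"mean first hitting time from state {x}: {T:.1f} generations")

The same rate matrix can equally drive a Gillespie-type stochastic simulation, which is how conditioned quantities such as the slow- and fast-pathway times are obtained below.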
In particular, we consider subsets S z of genotypes that belong to a particular macrostate z, and compute the mean first hitting times, \({T_{{S_z} \leftarrow x}}\), to this macrostate. Time to specialization, τ, is the time to reach 'Specialize Both' macrostate. We also calculate "dwell times", t dwell(z) by using a similar procedure. Dwell time in a particular macrostate z, is the mean (taken over all the genotypes in z, S z ) first hitting time to any other macrostate, starting from S z . We supplement these analytical solutions by stochastic simulations. Using Gillespie algorithm70, we draw random times in which substitutions between distinct (reduced-)genotypes occurred. At each simulation run a we generate a specific evolutionary trajectory. By repeating this procedure numerous times, we obtain statistics over the distributions and evolutionary pathways. We use stochastic simulations to either validate the analytical calculations or substitute them when they are hard. That is the case, for example, for calculation of mean hitting time to a particular macrostate conditioned on not hitting another macrostate before, as in {τ fast} and {τ slow} (Fig. 4). More details about the methods are given in Supplementary Methods. The authors declare that all data supporting the findings of this study are available within the article and its Supplementary Information file. King, M. C. & Wilson, A. C. Evolution at two levels in humans and chimpanzees. Science 188, 107–116 (1975). Gilad, Y., Oshlack, A., Smyth, G. K., Speed, T. P. & White, K. P. Expression profiling in primates reveals a rapid evolution of human transcription factors. Nature 440, 242–245 (2006). Wray, G. A. The evolutionary significance of cis-regulatory mutations. Nat. Rev. Genet. 8, 206–216 (2007). Carroll, S. B. Evolution at two levels: on genes and form. PLoS Biol. 3, e245 (2005). Yona, A. H., Frumkin, I. & Pilpel, Y. A relay race on the evolutionary adaptation spectrum. Cell 163, 549–559 (2015). Babu, M. M., Teichmann, S. A. & Aravind, L. Evolutionary dynamics of prokaryotic transcriptional regulatory networks. J. Mol. Biol. 358, 614–633 (2006). Kacser, H. & Beeby, R. Evolution of catalytic proteins: on the origin of enzyme species by means of natural selection. J. Mol. Evol. 20, 38–51 (1984). Simionato, E. et al. Origin and diversification of the basic helix-loop-helix gene family in metazoans: insights from comparative genomics. BMC Evol. Biol. 7, 33 (2007). Larroux, C. et al. Genesis and expansion of metazoan transcription factor gene classes. Mol. Biol. Evol. 25, 980–996 (2008). Hobert, O., Carrera, I. & Stefanakis, N. The molecular and gene regulatory signature of a neuron. Trends Neurosci. 33, 435–445 (2010). Achim, K. & Arendt, D. Structural evolution of cell types by step-wise assembly of cellular modules. Curr. Opin. Genet. Dev. 27, 102–108 (2014). McKeown, A. N. et al. Evolution of DNA specificity in a transcription factor family produced a new gene regulatory module. Cell 159, 58–68 (2014). Baker, C. R., Tuch, B. B. & Johnson, A. D. Extensive DNA-binding specificity divergence of a conserved transcription regulator. Proc. Natl Acad. Sci. USA 108, 7493–7498 (2011). Sayou, C. et al. A promiscuous intermediate underlies the evolution of LEAFY DNA binding specificity. Science 343, 645–648 (2014). Pougach, K. et al. Duplication of a promiscuous transcription factor drives the emergence of a new regulatory network. Nat. Commun. 5, 4868 (2014). Nadimpalli, S., Persikov, A. V. & Singh, M. 
Pervasive variation of transcription factor orthologs contributes to regulatory network evolution. PLoS Genet. 11, e1005011 (2015). Arendt, D. The evolution of cell types in animals: emerging principles from molecular studies. Nat. Rev. Genet. 9, 868–882 (2008). Ohno, S. Evolution by Gene Duplication (Springer-Verlag, 1970). Magadum, S., Banerjee, U., Murugan, P., Gangapur, D. & Ravikesavan, R. Gene duplication as a major force in evolution. J. Genet. 92, 155–161 (2013). Yona, A. H. et al. Chromosomal duplication is a transient evolutionary solution to stress. Proc. Natl Acad. Sci. USA 109, 21010–21015 (2012). Lan, X. & Pritchard, J. K. Coregulation of tandem duplicate genes slows evolution of subfunctionalization in mammals. Science 352, 1009–1013 (2016). Conant, G. C., Birchler, J. A. & Pires, J. C. Dosage, duplication, and diploidization: clarifying the interplay of multiple models for duplicate gene evolution over time. Curr. Opin. Plant Biol. 19, 91–98 (2014). Loehlin, D. W. & Carroll, S. B. Expression of tandem gene duplicates is often greater than twofold. Proc. Natl Acad Sci. USA 113, 5988–5992 (2016). Wittkopp, P. J. & Kalay, G. Cis-regulatory elements: molecular mechanisms and evolutionary processes underlying divergence. Nat. Rev. Genet. 13, 59–69 (2012). Nguyen, C. C. & Saier, M. H. Phylogenetic, structural and functional analyses of the LacI-GalR family of bacterial transcription factors. FEBS Lett. 377, 98–102 (1995). Pérez, J. C. et al. How duplicated transcription regulators can diversify to govern the expression of nonoverlapping sets of genes. Genes Dev. 28, 1272–1277 (2014). Hobert, O. & Westphal, H. Functions of LIM-homeobox genes. Trends Genet. 16, 75–83 (2000). Parkinson, J. S. Signal transduction schemes of bacteria. Cell 73, 857–871 (1993). Bowler, C. & Chua, N. H. Emerging themes of plant signal transduction. Plant Cell 6, 1529–1541 (1994). Innan, H. & Kondrashov, F. The evolution of gene duplications: classifying and distinguishing between models. Nat. Rev. Genet. 11, 97–108 (2010). Force, A. et al. Preservation of duplicate genes by complementary, degenerative mutations. Genetics 151, 1531–1545 (1999). Lynch, M. & Force, A. The probability of duplicate gene preservation by subfunctionalization. Genetics 154, 459–473 (2000). Force, A. et al. The origin of subfunctions and modular gene regulation. Genetics 170, 433–446 (2005). Proulx, S. R. Multiple routes to subfunctionalization and gene duplicate specialization. Genetics 190, 737–751 (2012). Maerkl, S. J. & Quake, S. R. A systems approach to measuring the binding energy landscapes of transcription factors. Science 315, 233–237 (2007). Wunderlich, Z. & Mirny, L. A. Different gene regulation strategies revealed by analysis of binding motifs. Trends Genet. 25, 434–440 (2009). Payne, J. L. & Wagner, A. The robustness and evolvability of transcription factor binding sites. Science 343, 875–877 (2014). Shea, M. A. & Ackers, G. K. The OR control system of bacteriophage lambda: a physical-chemical model for gene regulation. J. Mol. Biol. 181, 211–230 (1985). Kinney, J. B., Murugan, A., Callan, C. G. & Cox, E. C. Using deep sequencing to characterize the biophysical mechanism of a transcriptional regulatory sequence. Proc. Natl Acad. Sci. USA 107, 9158–9163 (2010). Sherman, M. S. & Cohen, B. A. Thermodynamic state ensemble models of cis-regulation. PLoS Comput. Biol. 8, e1002407 (2012). Article ADS MathSciNet CAS PubMed PubMed Central Google Scholar He, X., Samee, M. A. H., Blatti, C. & Sinha, S. 
Thermodynamics-based models of transcriptional regulation by enhancers: the roles of synergistic activation, cooperative binding and short-range repression. PLoS Comput. Biol. 6, e1000935 (2010). Article ADS PubMed Central Google Scholar Berg, J., Willmann, S. & Lässig, M. Adaptive evolution of transcription factor binding sites. BMC Evol. Biol. 4, 42 (2004). Lässig, M. From biophysics to evolutionary genetics: statistical aspects of gene regulation. BMC Bioinformatics 8, 1–21 (2007). Lynch, M. & Hagner, K. Evolutionary meandering of intermolecular interactions along the drift barrier. Proc. Natl Acad. Sci. USA 112, E30–E38 (2015). Tuğrul, M., Paixão, T., Barton, N. H. & Tkačik, G. Dynamics of transcription factor binding site evolution. PLoS Genet. 11, e1005639 (2015). Poelwijk, F. J., Kiviet, D. J. & Tans, S. J. Evolutionary potential of a duplicated repressor-operator pair: simulating pathways using mutation data. PLoS Comput. Biol. 2, e58 (2006). Article ADS PubMed PubMed Central Google Scholar Burda, Z., Krzywicki, A., Martin, O. C. & Zagorski, M. Distribution of essential interactions in model gene regulatory networks under mutation-selection balance. Phys. Rev. E 82, 011908 (2010). Von Hippel, P. H. & Berg, O. G. On the specificity of DNA-protein interactions. Proc. Natl Acad. Sci. USA 83, 1608 (1986). Gerland, U., Moroz, J. D. & Hwa, T. Physical constraints and functional characteristics of transcription factor-dna interaction. Proc. Natl Acad. Sci. USA 99, 12015–12020 (2002). Bintu, L. et al. Transcriptional regulation by the numbers: models. Curr. Opin. Genet. Dev. 15, 116–124 (2005). Desai, M. M. & Fisher, D. S. Beneficial mutation-selection balance and the effect of linkage on positive selection. Genetics 176, 1759–1798 (2007). Kimura, M. On the probability of fixation of mutant genes in a population. Genetics 47, 713–719 (1962). Gillespie, J. H. Population Genetics: A Concise Guide, 2nd edn (The Johns Hopkins University Press, 2004). Sella, G. & Hirsh, A. E. The application of statistical physics to evolutionary biology. Proc. Natl Acad. Sci. USA 102, 9541–9546 (2005). Kauffman, S. & Levin, S. Towards a general theory of adaptive walks on rugged landscapes. J. Theor. Biol. 128, 11–45 (1987). Article MathSciNet CAS PubMed Google Scholar Kryazhimskiy, S., Tkačik, G. & Plotkin, J. B. The dynamics of adaptation on correlated fitness landscapes. Proc. Natl Acad. Sci. USA 106, 18638–18643 (2009). Schuster, P., Fontana, W., Stadler, P. F. & Hofacker, I. L. From sequences to shapes and back: a case study in RNA secondary structures. Proc. R. Soc. Lond. B: Biol. Sci. 255, 279–284 (1994). Adams, R. M., Mora, T., Walczak, A. M. & Kinney, J. B. Measuring the sequence-affinity landscape of antibodies with massively parallel titration curves. eLife 5, e23156 (2016). Podgornaia, A. I. & Laub, M. T. Pervasive degeneracy and epistasis in a protein-protein interface. Science 347, 673–677 (2015). Aguilar-Rodrguez, J., Payne, J. L. & Wagner, A. A thousand empirical adaptive landscapes and their navigability. Nat. Ecol. Evol. 1, 0045 (2017). Friedlander, T., Mayo, A. E., Tlusty, T. & Alon, U. Evolution of bow-tie architectures in biology. PLoS Comput. Biol. 11, e1004055 (2015). Tkačik, G., Prentice, J. S., Balasubramanian, V. & Schneidman, E. Optimal population coding by noisy spiking neurons. Proc. Natl Acad. Sci. USA 107, 14419–14424 (2010). Friedlander, T., Prizak, R., Guet, C. C., Barton, N. H. & Tkačik, G. Intrinsic limits to gene regulation by global crosstalk. Nat. Commun. 7, 12307 (2016). 
Shultzaberger, R. K., Maerkl, S. J., Kirsch, J. F. & Eisen, M. B. Probing the informational and regulatory plasticity of a transcription factor DNA-binding domain. PLoS Genet. 8, e1002614 (2012). Rowland, M. A. & Deeds, E. J. Crosstalk and the evolution of specificity in two-component signaling. Proc. Natl Acad. Sci. USA 111, 5550–5555 (2014). Eldar, A. Social conflict drives the evolutionary divergence of quorum sensing. Proc. Natl Acad. Sci. USA 108, 13635–13640 (2011). Aakre, C. D. et al. Evolving new protein-protein interaction specificity through promiscuous intermediates. Cell 163, 594–606 (2015). Sengupta, A. M., Djordjevic, M. & Shraiman, B. I. Specificity and robustness in transcription control networks. Proc. Natl Acad. Sci. USA 99, 2072–2077 (2002). de Vos, M. G. J., Dawid, A., Sunderlikova, V. & Tans, S. J. Breaking evolutionary constraint with a tradeoff ratchet. Proc. Natl Acad. Sci. USA 112, 14906–14911 (2015). Gillespie, D. T. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J. Comput. Phys. 22, 403–434 (1976).
We thank the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007–2013) under REA grant agreement Nr. 291734 (T.F.), ERC grant Nr. 250152 (N.B.), and Austrian Science Fund grant FWF P28844 (G.T.).
Tamar Friedlander. Present address: The Robert H. Smith Institute of Plant Sciences and Genetics in Agriculture, Faculty of Agriculture, Hebrew University of Jerusalem, P.O. Box 12, Rehovot 7610001, Israel. Tamar Friedlander and Roshan Prizak contributed equally to this work.
Institute of Science and Technology Austria, Am Campus 1, A-3400 Klosterneuburg, Austria: Tamar Friedlander, Roshan Prizak, Nicholas H. Barton & Gašper Tkačik.
T.F., R.P., N.H.B., and G.T. designed the study. T.F. and R.P. carried out the calculations and analysis. T.F. and G.T. wrote the paper. Correspondence to Gašper Tkačik.
Friedlander, T., Prizak, R., Barton, N.H. et al. Evolution of new regulatory functions on biophysically realistic fitness landscapes. Nat Commun 8, 216 (2017). https://doi.org/10.1038/s41467-017-00238-8. Received: 15 January 2017.
Influence of confinement on free radical chemistry in layered nanostructures
Khashayar Ghandi1, Cody Landry1, Tait Du2, Maxime Lainé3, Andres Saul4 & Sophie Le Caër3
Scientific Reports volume 9, Article number: 17165 (2019)
Subjects: Mathematics and computing; Nanoscience and technology; Solid Earth sciences
The purpose of the present work was to study how chemical reactions and the electronic structure of atoms are affected by confinement at the sub-nanometer scale. To reach this goal, we studied the H atom in talc, a layered clay mineral. Talc is a highly 2D-confining material, with the width of its interlayer space close to one angstrom. We investigated talc with a particle accelerator-based spectroscopic method that uses elementary particles. This technique generates an exotic atom, muonium (Mu), which can be considered an isotope of the H atom. Moreover, the technique allows us to probe a single atom (H atom) at any time and to explore the effects of the layered clay on a single ion (proton) or atom. The cation/electron recombination happens in two time windows: one faster than a nanosecond and the other longer than a microsecond. This result suggests that two types of electron transfer processes take place in these clay minerals. Calculations demonstrated that the interlayer space acts as a catalytic surface and is the primary location of cation/electron recombination in talc. Moreover, studies of the temperature dependence of the Mu decay rates, which reflect the formation of the H2 surrogate, suggest an "H2" formation reaction that is thermally activated above 25 K but governed by quantum diffusion below 25 K. The experimental and computational studies of the hyperfine coupling constant of Mu suggest that it is formed in the interlayer space of talc and that its electronic structure is changed dramatically by confinement. All these results imply that chemistry could be strongly affected by confinement in the interlayer space of clays.
Confinement within nanostructures can affect the chemistry of atoms and molecules1,2,3,4,5,6,7,8,9,10,11,12,13. The more confined the nanostructure, the larger the expected effect of the confinement on electronic structure and chemical dynamics6,14,15,16,17. The kinetics of elementary reactions is the best probe of the effects of confinement on chemical dynamics; the most sensitive probe of confinement effects on electronic structure is one that measures the electron density. The isotropic hyperfine coupling constant (HFCC) is very efficient in this regard15,16,17,18,19,20. It measures the strength of the coupling of unpaired electrons with the nuclear magnetic moment. The HFCC is proportional to the electron spin density at the nuclei14,15,16,17,18,19,20,21,22,23,24,25,26. In this work, we use the HFCC to characterize free radicals. After this characterization, the electron transfer and reaction dynamics that lead to free radical formation or decay are reported in a confined environment. We also determine how the electronic structure changes under sub-nanometer confinement, which is a particularly interesting scale. In particular, we address some unanswered questions related to confinement at its smallest level: How are chemical reactions affected by the combination of surface and confinement effects at the angstrom scale? How does the electronic structure change under angstrom-scale confinement? What is the effect of extreme confinement on the reactivity induced by ionizing radiation?
For this purpose, a good starting point is to study the effects of confinement on the H atom, which is the most fundamental entity in chemistry. Indeed, investigation of H atom and its isotopes has played a fundamental role in the evolution of modern science14,15,16,17,18,19,20,27,28,29,30,31,32; as such, determining the behavior of H atom in various environments is crucial in the development of dimensionally constrained systems such as heterogeneous catalysts31, nanometer-scale semiconductors32, and hydrogen storage devices29. Clay is an excellent medium to study the effect of confinement on chemical reaction channels and on the electronic properties of the H atom. Indeed, it provides natural abundant two-dimensional layered nano- and sub-nano- structures33. They are able to confine molecules and have the potential to provide catalytic surfaces at the same time33. The structure of clays also leads to a large surface area, as well as swelling, and ion exchangeability properties34. Finally, the compositional and structural features of layered clay minerals enable them to be modified by a large variety of polymers, organic and biological molecules35. We will focus here on talc (Si4O10Mg3(OH)2). It is a phyllosilicate with a layered structure (Fig. 1a)36,37. One octahedral sheet (O) with magnesium atoms is sandwiched between two tetrahedral (T) sheets of silica. Each TOT layer is separated by an interlayer space around 3 Å wide (although considering the van der Waals radius of the atoms on the surface of the interlayer this space is close to 1 Å). This empty space36 provides a confined environment to do chemistry. Moreover, as commercial materials, talc and clays play important roles in various applications such as catalysts in chemical industry38 and waste management in the nuclear industry36,39. (a) Layered structure of talc (figure obtained with the VESTA software)37. An octahedral sheet (O) with magnesium atoms is sandwiched between two tetrahedral sheets (T) containing silicon atoms. Each sheet has a thickness of about 2 Å89. The interlayer space between TOT layers is about 3 Å. Mg: orange; Si: blue, O: red; H atoms: light pink. (b) Schematic diagram of μSR. See the methods section for description. There are two types of surfaces in talc (basal and edges). Basal ones are the surfaces limiting the interlayer space and the most abundant ones, while the edges, created by the breakage of the Si–O or Mg–O bonds, are the other surfaces not in the interlayer space40. These two types of surfaces exhibit different behaviors towards molecules. Basal surfaces are hydrophobic whereas edge surfaces are hydrophilic40. The surface chemistry of talc and other phyllosilicate minerals have been investigated by titration, adsorption and electrophoretic measurements41. Based on the surface chemistry of edges in talc it was suggested that they induced certain catalytic activities42 such as peptide bond formation by activating reactants43. We report here catalytic behaviors of the basal surfaces (towards electron transfer reactions and hydrogen formation as described later). Electron transfer following irradiation can be used to generate H atoms or their isotopes in materials18,19,36,44. This has indeed been shown for talc36. The reaction e− + MgOH → MgO− + H is an electron transfer reaction that was reported in talc36. This reaction followed by dimerization of H atoms accounts for the H2 production in talc under irradiation36. 
Studying electron transfer reaction dynamics and H atom chemical dynamics, and determining the electronic structure of the H atom in clays over a wide temperature range, is a demanding task. It requires a technique that: i) can be used at any temperature; ii) does not rely on optical detection (which is impossible for clays); iii) is time-resolved, to detect short-lived species; and iv) can provide the electronic structure of the free radicals. The only way to fulfill all of the above criteria is to use the hydrogen surrogate, muonium, and positive-muon-based spectroscopic techniques, as the positive muon (µ+, hereafter called the muon) is an exotic particle behaving as a light proton (H+)19,20,44. Muonium (Mu ≡ μ+e−), obtained after capture of an electron by a muon, has one-ninth of the mass of H with almost the same Bohr radius (0.53 Å) and ionization energy (13.539 eV for Mu and 13.598 eV for H)20. Because Mu is electronically equivalent to an H atom, it can be used as a surrogate of the latter when it cannot be studied directly45. Examples are the H atom in ionic liquids19, in ionic solids20,28 and in semiconductors46,47. This latter case is particularly important for talc, since the band gap of talc is calculated to be 5.3 eV, showing that it is a semiconductor48. Since Mu can serve as a surrogate of the H atom49,50,51,52, it can also be added to molecules to form free radicals18,25. Among the many works using Mu as a surrogate of the H atom, those that measured the isotropic HFCC are most relevant to this paper. For Mu in most solids, such as diamond50, C60 fullerenes (and K4C60 and K6C60)51,53 and sulfur52, the lowest-energy site is in the center of "cages" formed by the surrounding nuclei in the lattice. In all of the above-mentioned studies of Mu in solids and fluids, the Mu HFCC is smaller than the Mu HFCC in vacuum, with the smallest values in Si and Ge (other than shallow-donor semiconductors). When Mu is in a large enough cage (e.g., in C60, whose cage is ~0.71 nm in diameter), the decrease in HFCC is rather small, owing to weak van der Waals interactions between Mu and the cage51,53. However, in smaller three-dimensional cages in semiconductors (such as Si and Ge), the HFCCs are much smaller (close to half the vacuum value)44,46,47,49,50. In these studies, the HFCC also mostly decreases with temperature. The calculated HFCC in sulfur is particularly interesting52. While the Mu HFCC in the center of the cage (the energetically preferred site) is 4013 MHz, slightly smaller than the value in vacuum, changing the distance from the center by 50 pm strongly affected the HFCC value (4013 MHz versus 1654 MHz at +50 pm and 2339 MHz at −50 pm) but did not seem to significantly affect the energetics. This suggests there should be a very large negative temperature dependence of the HFCC. The distance between the sulfur atoms is 400 pm in their model, which is larger than the interstitial distances in Si and Ge but smaller than the space in C60.
Results and Discussion
Talc synthesis and characterization are detailed in the Methods section and in the Supplementary Information (Figs 1–3). The unit cell parameter along the c axis, d001 (measured by X-ray diffraction), is 9.43 ± 0.02 Å for talc (see Fig. 1a), showing that there is no water layer in the interlayer space36. The thickness of the TOT layer is 6.4 Å (Fig. 1a). The principle of the muon-based technique used in this work is described in the Methods section and illustrated in Fig. 1b.
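As background for the measurements described next, the sketch below (our illustration, not the authors' analysis code) shows how an isotropic HFCC translates into the muonium precession frequencies observed in a transverse field: the 4x4 muon-electron spin Hamiltonian is diagonalized numerically, assuming standard values for the gyromagnetic ratios and the vacuum HFCC.

import numpy as np

# Minimal sketch: isotropic Mu spin Hamiltonian in a magnetic field B,
# in frequency units (MHz),
#   H/h = A * S_e . I_mu  +  nu_e * S_ez  -  nu_mu * I_muz,
# diagonalized numerically. In a low transverse field the two strong
# muonium precession lines lie near gamma_Mu*B, and their mean and splitting
# constrain the hyperfine constant A (the HFCC discussed in the text).

A_MHz = 4463.0          # vacuum Mu HFCC (~4463 MHz); replace by the value under study
B_G = 100.0             # applied field in gauss
gamma_e = 2.802495      # electron gyromagnetic ratio, MHz/G
gamma_mu = 0.01355342   # muon gyromagnetic ratio, MHz/G

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
kron = np.kron

# electron spin acts on the first factor, muon spin on the second
H = A_MHz * (kron(sx, sx) + kron(sy, sy) + kron(sz, sz)) \
    + gamma_e * B_G * kron(sz, np.eye(2)) \
    - gamma_mu * B_G * kron(np.eye(2), sz)

E = np.linalg.eigvalsh(H)                        # energies in MHz
diffs = sorted({round(abs(E[i] - E[j]), 3)
                for i in range(4) for j in range(i)})
print("all transition frequencies (MHz):", diffs)
# The two lowest frequencies are the muonium lines seen in transverse-field
# muSR; transitions near A itself are far too fast to resolve here.

At ~100 G this gives two lines near 140 MHz; fitting their positions (and amplitudes) in the time-domain asymmetry is what yields the HFCC values discussed in the following sections.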
The reactions we probe are electron transfer and H atom reactions, followed through Mu formation, muon spin relaxation54,55, and Mu decay56,57. Investigating Mu remains one of the most effective ways of obtaining information, in solid samples, on the dynamics and electronic structure of hydrogen atoms19,20,28,56,58,59,60,61. Knowledge of the electronic structure can be obtained via determination of the HFCC of the trapped Mu (H) atom. The HFCCs and the yields of the H atom or free radicals were obtained by following the muon spin precession in a transverse magnetic field (Figs 1b and 2). This is similar to free induction decays in NMR and pulsed ESR. If the positive muon does not couple to an unpaired electron, the spin precession is at the Larmor frequency (Fig. 2B). This is the case for free muons (like H+)62 and also if the muon binds to a lone pair (like the lone pairs on oxygen in diamagnetic molecules, e.g. H2Oµ+ ~ H3O+). The fraction of these muons represents the diamagnetic fraction. Muons coupled to an unpaired electron show a faster precession of their spin polarization18,25, at a frequency depending on the HFCC (Fig. 2A). In talc, this oscillation decays in roughly 0.05 µs, showing that the species (the confined "H" atom in the interlayer, as explained later) is extremely reactive. Time-domain asymmetry, expressed in percentage, in synthetic talc at (nominally) 100 G. (A) Decay rate of Mu. Data recorded at 3 K within a 0.05 μs time range. The black dots and bars represent experimental data points and uncertainty, respectively, whereas the red line indicates a theoretical fit to the data using Eq. 1 reported in the Methods section. (B) Decay rates of the positive muon (µ+). The data were recorded at 3 and 320 K within a 6 μs time range. The black dots (resp. red) and line represent the experimental data and theoretical fit (to Eq. 1 in the Methods section) at 320 K (resp. 3 K). The information on chemical dynamics is obtained in two ways. One is by following the sub-ns thermalization process that leads to the formation of different species, such as Mu and diamagnetic species19,20,54,63; this is obtained by analyzing the amplitudes of the time-dependent spectra of the different fractions (Fig. 2). The other is by studying the decay rates (e.g., Mu in Fig. 2A and diamagnetic species in Fig. 2B). To measure the decay rates we fitted the time-domain spectra, as in Fig. 2, to exponential decays (exponential functions gave the best fits).
Decay rate of the diamagnetic fraction
The decay rates of the diamagnetic fraction as a function of temperature in talc, obtained from time-domain fits of the spectra over roughly 6 µs, are displayed in Fig. 3. The temperature dependence of the decay rate of the diamagnetic fraction is fitted well by the following expression: 1/τ = (0.148 ± 0.001) + (−0.127 ± 0.004) exp(−T0/T), with T0 = 122 ± 8 K and τ in µs. As can be seen from the small error bars, the uncertainty on the relaxation rates is very small. Figures 2B and 3 clearly indicate that the decay rate decreases when temperature increases. For temperatures below 25 K, the decay rate is constant. The decay rates of the diamagnetic fraction as a function of 1/T. T is the temperature. The error bars are statistical uncertainties. The line corresponds to an exponential fit (see text). The observed decay rates are due to the rate of electron transfer to muons (like electron transfer to H+) at the microsecond time scale.
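As an illustration of how such a temperature dependence can be extracted, the short sketch below fits the same functional form, 1/τ(T) = a + b exp(−T0/T), to rate-versus-temperature data with SciPy. The data points here are synthetic (generated from the published fit parameters plus noise) and only stand in for the measured relaxation rates of the diamagnetic fraction.

import numpy as np
from scipy.optimize import curve_fit

# Hedged illustration: fit 1/tau(T) = a + b*exp(-T0/T) to decay-rate data.
# The "data" below are synthetic points built from the published fit
# (a = 0.148 us^-1, b = -0.127 us^-1, T0 = 122 K) with added noise.

def rate(T, a, b, T0):
    return a + b * np.exp(-T0 / T)

rng = np.random.default_rng(0)
T_data = np.array([3, 10, 25, 50, 80, 120, 160, 200, 250, 300], dtype=float)
y_data = rate(T_data, 0.148, -0.127, 122.0) + rng.normal(0, 0.002, T_data.size)

popt, pcov = curve_fit(rate, T_data, y_data, p0=[0.15, -0.1, 100.0])
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["a (us^-1)", "b (us^-1)", "T0 (K)"], popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")

With the real data, the same routine returns the parameter values and uncertainties quoted above.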
The decay rates suggest that the diamagnetic fraction is mostly due to positive muons, either free or bound to oxygen lone pairs. Indeed, a molecular species like MuOH would have a slower reaction rate23. The spin relaxation of positive muons in a diamagnetic solid is typically due to dipolar interactions between the positive muons and nearby dipoles. This cannot be the mechanism behind the observed decay rates, because such relaxation rates are usually an order of magnitude smaller than what is observed here64,65,66,67. Furthermore, the only spin-active nuclei with significant abundance in synthetic talc are protons, which are in MgOH sites in the layers (Fig. 1a); they are less abundant than other nuclei in talc. The fact that we observe this electron/cation (muon) recombination (or interaction due to polaron generation) at the microsecond time scale suggests the existence of electron transfer processes much slower than the ultrafast electron transfer leading to Mu formation, which is more than four orders of magnitude faster (faster than ~10^10 s−1)19,20,23,54. Electron transfer processes in clay minerals are important from an applied point of view, since clays can serve as host materials for photochemical reactions68. They can also be used for the electron-transfer-based polymerization of some organic compounds69 and are known to catalyze redox reactions70. In addition, the radiation damage to a material is in part due to electrons that are formed from ionization of the material. The study of the interaction of ionizing radiation with clays has important applications, as clays are used as a natural barrier in the deep geological disposal of high-activity and medium-activity long-lived nuclear waste71,72. Only a few studies have so far dealt with this important topic36,72, and, because the samples are solids, no kinetic data have been reported72. Our observations imply that there are two types of radiation damage and electron transfer reactions in clays. The first is fast and occurs within less than 0.1 ns; the other is much slower and happens within microseconds (and longer). This implies that there are two types of electrons in the layered nanostructure of talc. Although these electrons are produced here by radiation, the conclusion can be extended more generally to all cases in chemistry or materials technologies where free electrons are formed. The ultrafast electron transfer observed to cause Mu formation (see the signal in Fig. 2) is most probably due to electrons behaving as a free electron gas in the interlayer space. The interlayer space provides a free two-dimensional space for hot electrons (electrons with energy larger than thermal energy). This is consistent with the results at all temperatures, even at very low temperatures (close to 1 K), where the electron mobility is more than four orders of magnitude larger than in the slow electron transfer reported in Fig. 3. If the electrons participating in the ultrafast process were not in a free-electron-gas state, we would have expected a very significant drop of the Mu amplitude at the lowest temperatures we studied. This also suggests that both cations (positive muons) and "free electrons" are confined in the interlayer space (see Fig. 1a).
Cation implantation
To test the above hypothesis and to understand the inverse Arrhenius temperature dependence of the cation (muon)/electron recombination rate, the location of the cation (muon) must be understood.
Calculations were performed in order to follow μ+ implantation as a probe for cation sites within talc. The details of calculations are explained in the methods section. The results are displayed in Fig. 4a. The cation is implanted preferentially in the regions with a large negative electrostatic potential (which corresponds to large positive values for electrons). Moreover, electrons are more likely to be extracted from the system in regions where the electron density is close to the electrochemical potential for electrons (Fermi level). (a) Attractive electrostatic potential exerted on the muon calculated at −0.83 Ry (left figure). The most favorable space for the cation is the interlayer space (yellow color). (b) Total electron density near the Fermi level (right figure). In our calculations, the sum of the ionic and Hartree potentials for the electrons ranges from large negative values close to the nuclei, to a maximum of 0.87 Ry. From Fig. 4a displaying the attractive potential for the point positive charge (positive muons in this case) calculated at −0.83 Ry, it is clear that the middle of the interlayer space is the most favorable location for muons (protons, or in general cations) as shown by the yellow region. This is an electrostatic confinement to almost a two-dimensional plane, which confirms our hypothesis that the interlayer space is the favorite location for positive charges in clays. It can also have significant implications for both the catalytic properties of talc for electron transfer reactions and for its semiconducting behavior (in particular the location of positive charge carriers in semiconductor clays) suggesting that exchangeable catalytic cations could be located in this space. We believe that these features enable clay minerals such as talc to serve as a class of excellent supports for immobilizing catalytic cations. Figure 4b presents the total electron density near the electrochemical potential of electrons. This shows the sites that are the most prone to ionization. It is clear that these sites are located close to all oxygen atoms. Among these oxygen atoms, there are those in contact with the interlayer space, and hence, near the cations. Muons are preferentially thermalized in the center of the interlayer space. Electrons released from ionization sites near this space begin to move towards these muons. During this process, when the electrons are within the range of the muons/cations they create neutral species (Mu). Note that the released electrons have a greater energy than the trapped electrons, so their wavefunction will be delocalized, making the electron capture more favorable. This should however be at energies close to thermal energy73. The electrons that are not close enough and do not form Mu can cause the relaxation of the diamagnetic signal described above (Fig. 3). They are farther and therefore react with positive muons at the microsecond time scale. Hence, the capture rate of these "slow" electrons will be the rate of the diamagnetic muon decay. Clearly, this capture rate has a non-Arrhenius and indeed an inverse Arrhenius behavior. This could be due to following reasons: (i) there would be more slow electrons in the interlayer at lower temperatures, due to more radiation-induced ionization at lower temperatures; (ii) two different electron transfer reactions could compete, i.e. 
electron capture by the muon and by other species (within the layers), such as MgOH groups (the decay rate being equal to k[e−], with k the rate constant and [e−] the electron concentration); (iii) the muon/electron capture rate increases when their relative kinetic energy is lowered, as shown by recent theoretical quantum electrodynamics predictions73; (iv) electron capture is more favorable when the lattice is closer to its equilibrium structure. At higher temperatures, many lattice degrees of freedom are excited, which could create unfavorable conditions for electron capture. We will show below (see Fig. 6 and the related discussion) that only (i), (iii) and (iv) can be the main mechanisms accounting for our observations. We can rule mechanism (ii) out, since such a mechanism would cause a non-exponential muon decay, which we did not observe (see e.g. Fig. 2b).
Mu decay rate
Figure 5 displays the Arrhenius plot of the Mu decay rates. In contrast with the diamagnetic fraction (Fig. 3), this decay rate increases when temperature increases. The following expression fits the experimental data best: 1/τ = 17.18 + 5737 exp(−T1/T) + 12.4 exp(−T2/T), with T1 = 1984 K and T2 = 33 K; 1/τ is expressed in µs−1 and the temperature T in K. Arrhenius plot of the Mu decay rate. The line corresponds to the best exponential fit of the data. The temperature-dependent decay is, in principle, due to one of the following three mechanisms. (i) The Mu atom encounters electrons in the clay. The spin exchange19 between these electrons and the electron in Mu leads to spin relaxation. The addition reaction (of electrons to H (Mu)) could, in principle, lead to hydride ion (H−) formation as well58. (ii) The reaction of Mu with holes in talc. Studies of ionizing radiation of talc have proven that the radiation-induced holes are located on oxygen atoms (linked to magnesium or silicon atoms), as shown by ESR experiments36. (iii) The Mu + H reaction forming MuH, which is equivalent to H2 formation under ionizing irradiation36. This latter mechanism is the most likely one for the following reasons: i) holes are not mobile, yet they have the same concentration as electrons, and electrons and H usually outcompete holes in reactions with Mu; ii) a previous work on the interaction of ionizing radiation with talc has shown a large concentration of H2 that was not accounted for by the stable H atom concentration36; this has been associated with an unobservable H + H reaction in talc36. Furthermore, pulse radiolysis experiments performed in nanoporous silica have attributed most of the decay of the solvated electron in the ns–µs timescale range to the e− + SiOH → SiO− + H reaction74. This suggests that the reaction with Mu, which happens on the same timescale, occurs with H atoms and not with electrons, as electrons will react preferentially with MgOH groups in the present case. It is known that H atoms are generated by radiolysis of MgOH groups in the layers36. Therefore, the decay rates we report here are due to the diffusion of H and Mu towards each other to form MuH. Our results show that there is a diffusional barrier of around 18 kJ mol−1, but at low temperatures the transport takes place via quantum diffusion with an activation barrier of 0.3 kJ mol−1. It is expected that the temperature at which quantum diffusion sets in would be lower for H diffusion, owing to its larger mass compared to Mu.
This quantum tunneling occurs over a distance of close to 1 to 3 Å (between Mu close to the center of the interlayer space and H close to MgO). Our results therefore suggest that the relaxation mechanism of the muon in Mu is due to Mu and H diffusion, which is thermally activated above 25 K and proceeds by quantum diffusion below 25 K. This implies that the clay material could be considered a quantum material at low temperatures, with a much higher temperature expected for electron quantum transport and a lower temperature expected for hydrogen atom quantum transport. This is, to our knowledge, the first report of chemical dynamics in the interlayer space of clays, and the first report of quantum transport and quantum effects on chemical reactions in talc.
Temperature dependence of the different fractions
The temperature dependence of the different fractions of muoniated species (diamagnetic fraction, PD; Mu fraction, PMu; and lost fraction19,20,23, PL = 1 − PMu − PD) is displayed in Fig. 6. Evolution of the different population fractions (diamagnetic fraction, Mu fraction and lost fraction) as a function of temperature. The lines are shown to help the reader locate the data points for each fraction. The Mu fraction is less sensitive to temperature. Indeed, Mu is formed via ultra-fast electron transfer to positive muons19,20:
$$\mu^{+} + e^{-} \to \mathrm{Mu}$$
The diamagnetic fraction increases in the 0–50 K temperature range and then remains constant. The lost fraction is due to the reactions of Mu, within 0.1 to 10 ns, with electrons and H atoms in the radiation track within the clay. The lost fraction decreases with temperature. Considering that the trend depicted in Fig. 3 is similar to the trend shown for the diamagnetic fraction (but in opposite directions versus temperature), it is possible that the increase in the diamagnetic fraction with temperature is linked to the fact that this population decays more slowly when temperature increases, with an almost constant rate for temperatures below 50 K. This similar temperature dependence could be a coincidence, considering that the two mechanisms involved differ by more than three orders of magnitude in time scale. Considering, however, that one potential mechanism behind the temperature dependence shown in Fig. 3 is the increase of radiation-induced damage (an increase in electron concentration and hence in Mu (H) atoms) as temperature decreases, this link seems to be non-coincidental. In this case, we expect larger lost (or Mu) fractions, as they are mostly due to electrons formed by radiation damage, and a smaller diamagnetic fraction at lower temperatures. Indeed, the diamagnetic and lost fractions show this trend. Therefore, we conclude that radiation damage increases when temperature decreases.
Hyperfine coupling constant
The measured HFCC is used in this section, together with our computational data, to further understand the nature of the observed Mu (H) atom and of its precursor in talc. The following trends were observed (Methods, Fig. 2, and Supplementary Information Figs 4–9). We detect one type of Mu at all temperatures, suggesting that there is only one site for H atoms. The details are explained in the Methods section and in the Supplementary Information. Although one way to interpret the experimental data is as the observation of a 2D H atom, given the large HFCC and its comparison with calculations75,76,77,78, the thickness of the interlayer space may still be too large to support this interpretation.
Therefore we will compare in the following our results with those obtained assuming a confined but three-dimensional space. Note that the decay rate of Mu is very large at 320 K due to its reaction with H, making the measurement of the HFCC at this high temperature less reliable. Therefore, the HFCC value above 250 K is not presented. The HFCC value of Mu globally decreases with temperature (Fig. 7). It ranges from 5500 MHz at 3 K to 4200 MHz at 250 K. The value at 3 K is ~23% larger than the value in vacuum (4459 MHz), while it is 6% lower at 250 K. The large value measured at 3 K is to our knowledge the largest HFCC value reported in any material. It is even larger than the one reported in stishovite (5170 MHz)79. Moreover, the large temperature decrease (almost 30% per 250 K) is to our knowledge the largest temperature dependence reported for Mu (H). Indeed, in water, the HFCC value increases with temperature by about 1% per 250 K21. The decrease we measure here is equivalent to a 30% decrease in electron density at the nucleus per 250 K. This corresponds to an electron density transfer to the atoms which are close in the clay mineral. Both the temperature dependence and the large value at 3 K are due to the highly confined environment of the Mu (H) atom in the interlayer space. This is also consistent with the observation of the kinetics data discussed above. Therefore, the highly confined environment affects both the electronic structure and the chemical dynamics, which is of crucial importance in the field of the catalytic applications of clays. Evolution of the HFCC value of Mu with temperature. The line indicates the value measured in the vacuum. To understand the nature of the Mu (H) atom site and the negative temperature dependence of its HFCC value, calculations were undertaken at the same level as the electrostatic calculations described above. The corresponding results are displayed in Fig. 8. Left: One of the two potential final sites for Mu (H) atom (site A) in the center of the interlayer space. The energy and HFCC value are given as a function of displacement from the minimum energy position for site A. Right: One of the two potential final sites for the H atom (site B) close to the interlayer boundary. The energy and HFCC value are given as a function of displacement from the minimum energy position for site B. Top: Position A is in the middle of the interlayer space. Position B is in a hexagonal site close to the interlayer boundary. Position B has a lower energy than A by 0.7 eV. In all cases, oxygen, silicon and magnesium atoms are represented by red, blue and brown colors, respectively. The isosurfaces of the spin density at 0.007 eV/a0 are shown in yellow. Bottom: in all cases, the energies are plotted with squares and are visualized on the right y-axis. The HFCC values are plotted with circles and are visualized on the left y-axis. As references we include the HFCC value at the energy minimum (blue line) and the vacuum value calculated with the same program (brown line). The calculations suggest that two sites are possible for Mu (H). Position A (Fig. 8) is in the middle of the interlayer space while position B (Fig. 8) is close to the interlayer boundary. The lowest energy position is B and the energy difference between the sites is about 0.7 eV. The HFCC values of Mu at 0 K in sites A, B, and in vacuum are calculated to be 3639, 4543, and 4525 MHz, respectively. 
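Because the isotropic HFCC is proportional to the unpaired-electron spin density at the muon, the measured and calculated values just quoted can be read directly as relative contact spin densities. A small worked example (our arithmetic, using the vacuum values as references) makes the comparison explicit:

# Relative contact spin density rho(0)/rho_vac(0) = A/A_vac, since the
# isotropic HFCC is proportional to the spin density at the (muon) nucleus.
# Values are those quoted in the text (MHz); A_vac_exp is the experimental
# vacuum muonium HFCC, A_vac_calc the vacuum value computed with the same
# code used for sites A and B.

A_vac_exp, A_vac_calc = 4459.0, 4525.0
measured = {"talc, 3 K": 5500.0, "talc, 250 K": 4200.0}
computed = {"site A (0 K)": 3639.0, "site B (0 K)": 4543.0}

for label, A in measured.items():
    print(f"{label}: A/A_vac = {A / A_vac_exp:.2f} "
          f"({(A / A_vac_exp - 1) * 100:+.0f}% vs vacuum)")
for label, A in computed.items():
    print(f"{label}: A/A_vac = {A / A_vac_calc:.2f} "
          f"({(A / A_vac_calc - 1) * 100:+.0f}% vs calculated vacuum)")

This reproduces the ~+23% (3 K) and ~−6% (250 K) deviations quoted above, and shows site A about 20% below and site B essentially at the calculated vacuum value.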
Therefore, the HFCC value in site A is smaller than that obtained in vacuum, whereas it is slightly higher in site B. The reason for the smaller value in site A is the antibonding interaction between the muon and four oxygen atoms (one above and three below), which results in an effective decrease of the total spin density around the muon atom. A further effect that can explain the decrease of the HFCC value, through the decrease of the spin density at the muon site, is the deformation of the spin density due to the electrostatic attraction of the positive Si atom below. This is similar to the Mu HFCC observed in Si and Ge46,47,80, but differs from the cases where Mu was in a less confined space51,52,53. Concerning the larger HFCC value in site B than in vacuum, it is difficult to give a definitive explanation. One probable explanation is that in this site the muon is on top of the –OH group, where there is almost no bonding with the neighboring atoms (see the isosurfaces of the spin density in Fig. 8). The first neighbors of this almost free Mu atom are six negative oxygen atoms, which repel the spin density, increasing the spin density at the muon site. This should be the case as well for Mu in stishovite67. To check the energies of Mu at the two sites and the temperature dependence of the HFCC, the position of Mu was shifted along the three directions. The evolution of the energy and of the HFCC values as a function of the reduced coordinates (0 < x, y, z < 1) is displayed in Fig. 8 (bottom; left y-axis for HFCC values and right y-axis for energy). Clearly, any displacement increases the energy (Fig. 8), as shown by the parabolas with positive concavity. Site A has an HFCC value that is smaller than that of vacuum. This value can increase when Mu is moved in any of the three directions. As the energy increases very quickly (especially if Mu is moved along the z axis), the HFCC values are expected to increase with temperature. In contrast, site B (corresponding to the lowest energy) has an HFCC value higher than the vacuum value. Furthermore, in this case, the maximum HFCC value is at the lowest energy point. One can then expect that the HFCC value should decrease with increasing temperature, as nuclear wave functions with higher energies become populated. Based on the comparison of the experimental HFCC values with the calculated values at 0 K and the predicted temperature dependencies of the two sites, we then conclude that site B corresponds to the location we observe experimentally. To understand the nature of the precursor of Mu, we analyzed the ratios of the amplitudes of the two frequencies associated with Mu at the magnetic field values used for each experiment. We conclude that the precursor of the species present in the interlayer space exhibits a low HFCC around 190 MHz (we call it species C hereafter). The transformation from C to Mu at site B occurs with a rate around 10^8 s−1. With such a precursor and rate of Mu formation, the ratios of the amplitude of the higher frequency to the amplitude of the lower frequency (see Supplementary Information) decrease drastically when going from ~100 G to ~200 G, to the degree that the higher frequency signal would not be observed, and the amplitudes would be small in general81,82. Results from calculations reveal some clues about this precursor (Fig. 9). It can only correspond to Mu bound to a hole, in a triplet state.
The HFCC value of this species was calculated to be 185.3 MHz, in good agreement with the value determined experimentally. The spin density of C is localized at the muon (located in the interlayer space) and on oxygen atoms. This species then forms Mu in the interlayer space via transfer of an electron to the hole and the subsequent release of Mu to site B. Calculated precursor having an HFCC of 185.3 MHz and being in a triplet state. The spin density is mainly located on the muon (in the interlayer space) and on oxygen atoms. We have shown that a strong confinement (sub-nm), easily available in 2D-confining layered clay minerals such as talc, can have extreme effects on chemical reactions and on the electronic structure of free radicals. We have also shown that at least two types of electrons, with characteristic time constants differing by at least three orders of magnitude, are generated in talc. This induces different types of radiation damage in clays exposed to ionizing radiation, which is important knowledge for deep nuclear waste storage, where clays constitute the geological barrier. Moreover, we have shown that the H2 formation mechanism is thermally activated above 25 K, while below 25 K it is due to quantum diffusion. This suggests that clay could be considered as a quantum material at low temperatures. We have also shown that cations are preferentially located in regions with a large negative electrostatic potential, i.e. in the interlayer space. The reaction of cations with free electrons first leads to neutral chemical species located close to the middle of the interlayer space. This is however a local minimum on the potential energy surface. The atom (within less than 0.1 ns) then moves to the global minimum, which is close to the O atom at the border of the interlayer space (via electron transfer to the hole). This leads to the formation of a radical with a high HFCC value, consistent with the existence of hydrogen in the interlayer space of talc. Due to extreme confinement, the HFCC value has a large temperature dependence. The interlayer space thus provides electrostatic confinement, which in turn induces large changes to the electronic structure of the atom (molecule) present. Comparing our data in this work with previous studies, we discovered that the effects of confinement are multifaceted. One effect is on the electronic structure of an atom or molecule in confined media. The effect of the confinement on the electronic structure of Mu at sites A and B is reflected in the iso-surfaces of the spin densities (Fig. 8) and the HFCC15,16,17,18,19,20. Site A is in the middle of the interlayer space, and the electron transfer to the neighboring O atoms decreases its HFCC value (3639 MHz) significantly with respect to the vacuum value. Site B is close to the boundary of the interlayer space, and the total spin density is almost entirely localized around the muon. The six neighboring O atoms repel the electronic density, further localizing the spin density and increasing the HFCC value. This can be the reason for the large HFCC observed experimentally at the lowest temperatures. Another effect of confinement is on the temperature dependence of the HFCC. Like the increase of the HFCC, its large temperature dependence is an effect of confinement. This is similar to the predictions for Mu inside a sulfur cage discussed in the introduction52.
It is expected that the temperature dependence would also increase if the potential energy surface and the variation of the HFCC with distance (Fig. 8) became sharper in a more confined space. The other effect of confinement is on the richness of chemistry, such as the existence of different Mu sites (like having different catalytic sites in a catalyst nanostructure). Most probably, by decreasing the interlayer distance these sites would collapse into one site (one site would disappear). This would decrease the richness of chemistry inside the confined environment. At first glance we would also expect the reactions of electrons and protons, and the reactions of H and Mu (equivalent to H2 formation), to become faster as the interlayer space decreases (i.e. as confinement increases). However, a drastic change in confinement would also change the atomic structure of the lattice. This means we would expect different effects on the electronic structure and reaction rates that may cancel each other. Moreover, to understand kinetically controlled processes under confinement, the barriers of the different transformations should be calculated as a function of confinement. This certainly requires a long-term theoretical and experimental investigation to explore the multifaceted effects of confinement. Our results also give more insights into the properties of clay minerals as catalysts, and demonstrate the very particular role played by the interlayer space, which can act as a very interesting angstrom-scale catalytic reactor for electron transfer reactions. This has implications in the petroleum refining industry (catalytic cracking, hydrogenation and other processes), in organic synthesis and in environmental applications. Confined layered systems such as clay minerals thus have the potential to facilitate a rich chemistry, including the transformation of CO2 to value-added products. In addition, we showed that muon methods provide a powerful tool for the study of confinement effects and of reactivity under confinement. Their power lies in their ability to provide efficient and relevant probes of confinement in chemistry. These probes are parameters that are sensitive to the electronic structure of free radicals at different nuclei and to the rates of elementary chemical reactions over a wide range of temperatures, from ultra-low temperatures where quantum effects are manifested, to temperatures relevant for industrial applications. Studies such as the present one play a fundamental role in the evolution of dimensionally constrained interfaces, modern catalysis, semiconductors, electron transfer processes, quantum materials chemistry and energy technologies. Talc synthesis Synthetic talc was prepared by hydrothermal synthesis from gels of appropriate compositions prepared according to the conventional gelling method described previously83. Characterization includes infrared spectroscopy (IR), thermogravimetric analysis (TGA), and x-ray diffraction (XRD), presented in Supplementary Information Figs 1–3. Characterization of talc samples In order to record infrared (IR) spectra, 1% of the sample was pelletized in KBr and then analyzed with a Bruker Tensor 27 FT-IR spectrophotometer. All the spectra were collected in transmission mode in the 4000–370 cm−1 energy range with a 4 cm−1 resolution from 100 scans. A pure KBr pellet was used as the reference. Thermogravimetric analysis (TGA) experiments were performed with a Mettler-Toledo TGA/DSC 1 analyzer.
An alumina crucible of 70 µL containing approximately 20 mg of sample was heated at a heat flow of 10 °C.min−1 under a dinitrogen flux of 50 mL min−1 in order to reach a final temperature of 900 °C and then brought back to room temperature. Data were analyzed using the STARe software. Powder X-ray patterns were recorded on a Bruker D8 Advance diffractometer equipped with a grazing parabolic Göbel mirror and a Cu emitter (λCuKα = 1.541 Å, 40 kV/40 mA). The diffracted beam was collected by a position sensitive Vantek detector. Muon spin spectroscopy The spin polarization of the positive muons (called simply "muons" in this manuscript) is almost 100%. This and the preferential asymmetric emission of high energy positrons (that can be detected) along the direction of the muon spin at the time of decay (Fig. 1b) allowed us to probe the evolution of the muon spin, coupled (or not) to unpaired electrons19,20. The sample, talc, was implanted with positive muons (μ+) at the M20 muon channel of TRIUMF laboratory in Vancouver, Canada, and the muoniated products were studied in real time. Samples of talc were subjected to several degassing cycles to remove oxygen (pumping/flushing with nitrogen) and stored in an air-tight container for transportation to the facility. The samples were subsequently stored in an in-house made mobile glove box until the day of the measurements. They were then placed in an aluminized Mylar cell that was sealed with silver-foil tape, where it was subsequently transferred to a Horizontal Gas Flow (HGF) Cryostat (suitable for temperatures from 2.8 to 330 K) compatible with the LAMPF spectrometer. This spectrometer uses a Helmholtz coil which provides a maximum field of 4000 G and is suitable for transverse and longitudinal field studies. The muon beam arises from pion decay at rest and possesses ~100% spin-polarization. Thermalization begins as the muon interacts with the material and eventually reaches thermal equilibrium towards the end of the radiation track. The spin of the μ+ is initially polarized opposite to the direction of its momentum (anti-parallel polarization). Before a muon interacts with the sample, it passes through a counter that sends an electronic signal to a time-digital-converter (TDC) and a counter is incremented, starting a "clock" (Fig. 1b). The positive muon after thermalization eventually decays to a positron and neutrino–antineutrino pair. The positron is emitted asymmetrically preferentially along the muon spin direction at the time of decay. When the positron is detected, the "clock" is stopped and the time intervals are collected in a histogram. Two positron detectors are positioned in the plane of the muon spin precession. The asymmetries are fitted in time domain (and double checked for consistency with the frequencies from the Fast Fourier Transform (FFT) signals) using the following equation: $$A(t)={\sum }_{i}{A}_{i}\exp (-{\lambda }_{i}t)\cos ({w}_{i}t+{\psi }_{i})$$ where, in a given environment, A represents the asymmetry of the muon fraction i, λi represents the muon relaxation rate, t is the time, wi is the precession frequency, and \({\psi }_{i}\) is the initial phase of the given fraction. The smallest relaxation rate accessible by μSR is limited by the muon lifetime. The lowest relaxation rate that can be measured in a transverse magnetic field in a continuous muon beam source like the one at TRIUMF is close to 10−2 µs−1 44,64. 
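The fitting function quoted above translates directly into a short model function. The following is a minimal Python sketch, not the analysis code used for the TRIUMF data; the component parameters (amplitudes, relaxation rates, phases), the 100 G field and the 135 MHz muonium-like frequency are placeholder values chosen only for illustration, while the bin width and histogram length match the values given further below.

```python
import numpy as np

def asymmetry(t, components):
    """Transverse-field muSR asymmetry, A(t) = sum_i A_i exp(-lambda_i t) cos(w_i t + psi_i).

    `components` is a list of (A_i, lambda_i, w_i, psi_i) tuples with time in
    microseconds, relaxation rates in 1/microsecond and angular frequencies
    in rad/microsecond."""
    t = np.asarray(t, dtype=float)
    A = np.zeros_like(t)
    for A_i, lambda_i, w_i, psi_i in components:
        A += A_i * np.exp(-lambda_i * t) * np.cos(w_i * t + psi_i)
    return A

# Example with placeholder parameters: a slowly relaxing diamagnetic fraction
# precessing at the muon Larmor frequency (13.554 kHz/G) in a 100 G transverse
# field, plus a faster-relaxing muonium-like component near 135 MHz.
gamma_mu = 2.0 * np.pi * 13.554e-3          # rad/us per gauss
B = 100.0                                    # gauss
t = np.arange(0.0, 6.0, 195.3125e-6)         # 6 us histogram, 195.3125 ps bins
signal = asymmetry(t, [(0.15, 0.05, gamma_mu * B, 0.0),
                       (0.05, 2.0, 2.0 * np.pi * 135.0, 0.0)])
print(signal[:3])
```

Fitting such a model to the measured asymmetry in the time domain, and cross-checking the resulting frequencies against the FFT of the data, is the procedure described in the text.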
Muons or muons incorporated in a diamagnetic molecule (e.g., MuH) precess at the muon Larmor frequency (13.554 kHz G−1). A fraction of muons capture electrons to form an atom considered as an isotope of the H atom, which is called muonium (Mu ≡ μ+e−)18,57,84. Although muons in the form of Mu have four precession frequencies in transverse field, only two of these are low enough to be resolved by conventional detection apparatus: $${\nu }_{12}=\frac{1}{2}[({\nu }_{e}-{\nu }_{\mu })-{({({\nu }_{e}+{\nu }_{\mu })}^{2}+{A}_{\mu }^{2})}^{\frac{1}{2}}+{A}_{\mu }]$$ $${\nu }_{23}=\frac{1}{2}[({\nu }_{e}-{\nu }_{\mu })+{({({\nu }_{e}+{\nu }_{\mu })}^{2}+{A}_{\mu }^{2})}^{\frac{1}{2}}-{A}_{\mu }]$$ where v12 and v23 are the resulting detected precession frequencies, ve and vμ are the electron (2.80247 MHz G−1) and muon (13.554 kHz G−1) Larmor frequencies respectively, and Aμ is the hyperfine coupling constant for muonium in its particular environment. Using this, we can determine the hyperfine coupling constant of muonium (Mu) in synthetic talc by finding the precession frequencies. The spectra at TRIUMF, taken via the LAMPF spectrometer, were acquired by the use of a transverse magnetic field with respect to the muon spin. The bin size was set to 195.3125 ps and a total histogram length of 6 μs was used. For the calculations, we used the Quantum Espresso (QE)85 and WIEN2K86 codes based on density functional theory. For the Quantum Espresso code we used ultra-soft pseudopotentials and the Perdew, Burke and Ernzerhof (PBE) functional87 with a plane-wave and charge-density cutoff of 80 Ry and 320 Ry, respectively. We used a 9 × 5 × 5 Monkhorst-Pack88 grid for the first Brillouin zone sampling of the 42-atom triclinic unit cell. The code has been used to calculate the electronic density close to the Fermi level and the electrostatic potential exerted on the muon, which is the sum of the bare ionic potential and of the self-consistent Hartree potential generated by the valence electrons (Fig. 4). The systematic search of the adsorption site for the Mu atom has been performed by adding an additional H atom to the monoclinic cell. The calculations for the additional hydrogen assumed that the Mu atoms were spin-polarized. The hyperfine coupling constants reported in Fig. 8 and also for the charged cell in Fig. 9 have been calculated using the full potential linearized augmented plane wave plus local orbitals method as implemented in the WIEN2K code. The calculations were performed using the generalized gradient approximation of PBE for exchange and correlation and a cutoff parameter RKmax = 3. The radii of the muffin-tin spheres were set to 0.56 a.u. for H, 1.90 a.u. for Mg, 1.03 a.u. for O and 1.4 a.u. for Si. The energy difference between the A and B sites calculated with the QE code after full relaxation is 0.707 eV, and 0.716 eV with the WIEN2K code. The hyperfine field calculated by the program (in kGauss) has been multiplied by ge gμμμ/2πℏ to obtain the HFCC value in MHz at the muon sites reported in the manuscript. The value calculated for a free Mu atom is 4524 MHz, to be compared with the experimental hyperfine value of 4463 MHz. One can obtain better agreement with the experimental value by correcting the spin density at the nucleus to take into account the difference in Bohr radius between the H and Mu atoms. With this correction one gets 4458 MHz for the free Mu atom. Handbook of nanostructured materials and nanotechnology; Nalwa, H. S., Ed. (Academic Press, 1999).
Ferry, D., Stephen, M. G. Transport in nanostructures; Cambridge University Press, Vol. 6 (1999). Fernandez-Garcia, M., Martinez-Arias, A., Hanson, J. C. & Rodriguez, J. A. Nanostructured oxides in chemistry: characterization and properties. Chem. Rev. 104, 4063–4104 (2004). Introduction to nanoscale science and technology; Ventra, M., Evoy, S., Heflin, J. R., Eds (Springer Science & Business Media, 2006). Nanostructured catalysts; Scott, S. L., Crudden, C. M. & Jones, C. W., Eds (Springer Science & Business Media, 2008). Aquino, N. The hydrogen and helium atoms confined in spherical boxes. Adv. Quantum Chem. 57, 123–171 (2009). Multifunctional polymer nanocomposites; Leng, J. & Lau, A. K., Eds (CRC Press, 2010). One-dimensional nanostructures: principle and applications; Zhai, T. & Yao, J., Eds (John Wiley & Sons, 2012). Fu, Q., Yang, F. & Bao, X. Interface-confined oxide nanostructures for catalytic oxidation reactions. Acc. Chem. Res. 8, 1692–1701 (2013). Suresh, S. Semiconductor nanomaterials, methods and applications: a review Nanosci. Nanotechnol. 3, 62–74 (2013). Singh, A. N., Thakre, R. D., More, J. C., Sharma, P. K. & Agrawal, Y. K. Block copolymer nanostructures and their applications: A review Polymer-Plastics Technol. Engin. 10, 1077–1095 (2015). Miners, S. A., Rance, G. A. & Khlobystov, A. N. Chemical reactions confined within carbon nanotubes. Chem. Soc. Rev. 45, 4727–4746 (2016). De Martino, M. T., Abdelmohsen, L. K., Rutjes, F. P. & van Hest, J. C. Nanoreactors for green catalysis Beil. J. Organic Chem. 29, 716–733 (2018). Mazo, R. M. Partition Function of an Atom in a Spherical Box. Am. J. Phys. 28, 332–335 (1960). Suryanarayana, D. & Weil, J. A. On the hyperfine splitting of the hydrogen atom in a spherical box. J. Chem. Phys. 64, 510–513 (1976). Ludena, E. V. SCF calculations for hydrogen in a spherical box. J. Chem. Phys. 66, 468–470 (1977). Ley-Koo, E. & Rubinstein, S. The hydrogen atom with spherical boxes with penetrable walls. J. Chem. Phys. 71, 351–357 (1979). Reid, I. D., Azuma, T. & Roduner, E. Surface-adsorbed free radicals observed by positive-muon avoided-level crossing resonance. Nature 345, 328–330 (1990). Ghandi, K., Miyake, Y. In Charged Particle and Photon Interactions with Matter, Advances, Applications and Interfaces; Mozumder, A. & Hatano, Y., Eds (Taylor & Francis: 2011). Ghandi, K. & MacLean, A. Muons as hyperfine interaction probes in chemistry. Hyperfine Interactions 230, 17–34 (2015). Ghandi, K., Brodovitch, J.-C., Addison-Jones, B. & Percival, P. W. Hyperfine coupling constant of muonium in sub- and supercritical water. Physica B 289, 476–481 (2000). Ghandi, K., Brodovitch, J.-C., McCollum, B. & Percival, P. W. Enolization of Acetone in Superheated Water Detected via Radical Formation. J. Am. Chem. Soc. 125, 9594–9596. (2003). Ghandi, K., Arseneau, D. J., Bridges, M. D. & Fleming, D. Muonium formation as a probe of radiation chemistry in supercritical CO2. J. Phys. Chem. A 52, 11613–11625 (2004). Ghandi, K., Zahariev, F. & Wang, Y. Alkyl radicals in zeolites. J. Phys. Chem. A 109, 7242–7251 (2005). Bridges, M. D., Arseneau, D. J., Fleming, D. G. & Ghandi, K. Hyperfine interactions and molecular motion of the Mu-ethyl radical in faujasites: NaY, HY and USY. J. Phys. Chem. C 111, 9779–9793 (2007). Lauzon, M. et al. Generation and detection of the cyclohexadienyl radical in phosphonium ionic liquids. Phys. Chem. Chem. Phys. 39, 5957–5962 (2008). Pryor, W. A., Stanley, J. P. & Griffith, M. G. The Hydrogen Atom and Its Reactions in Solution. 
Science 169, 181–183 (1970). Kiefl, R. F. et al. Quantum Diffusion of Muonium in KCl. Phys. Rev. Lett. 62, 792–795 (1989). Li, Z., Zhu, G., Lu, G., Qiu, S. & Yao, X. Ammonia Borane Confined by a Metal-Organic Framework for Chemical Hydrogen Storage: Enhancing Kinetics and Eliminating Ammonia. J. Am. Chem. Soc. 132, 1490–1491 (2010). Cormier, P., Clarke, R., McFadden, R. & Ghandi, K. Selective Free Radical Reactions using Supercritical Carbon Dioxide. J. Am. Chem. Soc. 136, 2200–2203 (2014). Bünermann, O. et al. Electron-hole pair excitation determines the mechanism of hydrogen atom adsorption. Science 350, 1346–1349 (2015). Stajic, J. Hydrogen atom makes graphen magnetic. Science 352, 424–424 (2016). Porion, P., Michot, L. J., Fauguère, A. M. & Delville, A. Structural and Dynamical Properties of the Water Molecules Confined in Dense Clay Sediments: a Study Combining 2H NMR Spectroscopy and Multiscale Numerical Modeling. J. Phys. Chem. C 111, 5441–5453 (2007). Zhou, C. H. & Keeling, J. Fundamental and applied research on clay minerals: From climate and environment to nanotechnology App. Clay Sci. 74, 3–9 (2013). Chang, M.-Y. & Juang, R.-S. Use of chitosan–clay composite as immobilization support for improved activity and stability of β-glucosidase. Biochem. Eng. J. 35, 93–98 (2007). Lainé, M. et al. Reaction mechanisms in talc under ionizing radiation: evidence of a high stability of H• atoms. J. Phys. Chem. C 120, 2087–2095 (2016). Momma, K. & Izumi, F. VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data. J. Appl. Cryst. 44, 1272–1276 (2011). Pinnavaia, T. J. Intercalated Clay Catalysts. Science 220, 365–371 (1983). Murray, H. H. Overview - clay mineral applications. Appl. Clay. Sci. 5, 379–395 (1991). Du, H. & Miller, J. D. A molecular dynamics simulation study of water structure and adsorption states at talc surfaces Int. J. Min. Process. 84, 172–184 (2007). Mälhammar, G. Determination of some surface properties of talc. Colloids Surf. 44, 61–69 (1990). Ramos-Bernal, S. & Negron-Mendoza, A. Radiation heterogeneous processes of 14C-acetic acid adsorbed in Na-Montmorillonite. J. Radioanal. Nucl. Chem. 160, 487–492 (1992). Bujdak, J. & Rode, B. M. The effect of clay structure on peptide bond formation catalysis. J. Mol. Cat. A: Chem. 144, 129–136 (1999). Yaouanc, A. & Dalmas de Réotier, P. Muon spin rotation, relaxation and resonance. Applications to condensed matter. (Oxford University Press: Oxford, 2011). Sherren, C. N. et al. Merging the chemistry of electron-rich olefins with imidazolium ionic liquids: radicals and hydrogen-atom adducts. Chem. Sci. 331, 448–450 (2011). Cox, S. F. J. Muonium as a model for interstitial hydrogen in the semiconducting and semimetallic elements. Rep. Prog. Phys. 72, 116501 (2009). Möller, J. S. et al. Playing quantum hide-and-seek with the muon: localizing muon stopping sites. Physica Scripta 88, 068510 (2013). Alencar, A. B., Barboza, A. P. M., Archanjo, B. S., Chacham, H. & Neves, B. R. A. Experimental and theoretical investigations of monolayer and few-layer talc. 2D Mater. 2, 015004 (2015). Holzschuh, E. Direct measurement of muonium hyperfine frequencies in Si and Ge. Phys. Rev. B 27, 102–111 (1983). Sahoo, N. et al. Electronic structure and hyperfine interaction of muonium in semi-conductors. Hyperfine Interactions 18, 525–541 (1984). Kiefl, R. F. et al. Evidence for endohedral muonium in KxC60 and consequences for electronic structure. Phys. Rev. Lett. 69, 2005–2008 (1992). Webster, B., McCormack, K. L. 
& Macrae, R. M. Paramagnetic muonium states in elemental sulfur. J. Chem. Soc., Faraday Trans. 93, 3423–3427 (1997). Percival, P. W., Addison-Jones, B., Brodovitch, J.-C. & Sun-Mack, S. Radio-frequency muon spin resonance studies of endohedral and exohedral muonium adducts of fullerenes. Appl. Magn. Reson. 11, 315–323 (1996). Cormier, P., Alcorn, C., Legate, G. & Ghandi, K. Muon Radiolysis Affected by Density Inhomogeneity in Near-Critical Fluids Rad. Res. 181, 396–406 (2014). Ghandi, K. et al. Ultra-fast electron capture by electrosterically-stabilized gold nanoparticles. Nanoscale 7, 11545–11551 (2015). Mills, J. et al. Generation of Thermal Muonium in Vacuum. Phys. Rev. Lett. 56, 1463–1466 (1986). Fleming, D. G. et al. Kinetic Isotope Effects for the Reactions of Muonic Helium and Muonium with H2. Science 331, 448–450 (2011). Macfarlane, W. A. et al. Low temperature quantum diffusion of muonium in KCl. Hyperfine Interactions 85, 23–29 (1994). King, P. J. C. & Yonenaga, I. Low temperature muonium behaviour in Cz-Si and Cz-Si0.91Ge0.09. Physica B 308–310, 546–549 (2001). Storchak, V. G., Eshchenko, D. G. & Brewer, J. H. Quantum diffusion of muonium atoms in solids: Localization vs. band-like propagation. Physica B 374–375, 347–350 (2006). Shimomura, K., Kadono, R., Koda, A., Nishiyama, K. & Mihara, M. Electronic structure of Mu complex donor state in rutile TiO2. Phys. Rev. B 92(075203), 1–6 (2015). Flory, A. T., Murnick, D. E., Leventhal, M. & Kossler, W. J. Probing the Superconducting Vortex Structure by Polarized-µ+ Spin Precession. Phys. Rev. Lett. 33, 969–972 (1974). Ghandi, K., Bridges, M. D., Arseneau, D. J. & Fleming, D. G. Muonium formation as a probe of radiation chemistry in sub- and supercritical carbon dioxide. J. Phys. Chem. A 108, 11613–11625 (2004). Storchak, V. G. & Prokof'ev, N. V. Quantum diffusion of muons and muonium atoms in solids. Rev. Mod. Phys. 70, 929–978 (1998). Belousov, Y. M. Depolarization rate calculation of the muon spin polarization in diamagnetic diatomic media. Physica B 289-290, 431–434 (2000). Silva, E. L. et al. Hydrogen impurity in yttria: Ab initio and μSR perspectives. Phys. Rev. B 85(165211), 14 (2012). Vieira, R. B. L. et al. Muon-Spin-Rotation study of yttria-stabilized zirconia (ZrO2:Y): Evidence for muon and electron separate traps. J. Phys. Conf. Ser. 551(012050), 6 (2014). Shichi, T. & Takagi, K. Clay minerals as photochemical reaction fields. J. Photochem. Photobiol. C: Photochem. Rev. 1, 113–130 (2000). Solomon, D. H. Clay minerals as electron acceptors and/or electron donors in organic reactions. Clays Clay Miner. 16, 31–39 (1968). Laszlo, P. Chemical reactions on clays. Science 235, 1473–1477 (1987). Geological disposal of radioactive wastes and natural analogues; Miller, W., Alexander, R., Chapman, N., McKinley, J.; Smellie, J. A. T., Eds; Pergamon, Vol. 2 (2000). Lainé, M. et al. Reaction mechanisms in swelling clays under ionizing radiation: influence of the water amount and of the nature of the clay mineral. RSC Adv. 7, 526–534 (2017). Morrison, A. H. E., Liu, G. & Ghandi, K. Presenting Muon Thermalization with Feynman QED. JPS Conf. Proc. 21(011065), 9 (2018). Musat, R. M., Cook, A. R., Renault, J.-P. & Crowell, R. A. Nanosecond Pulse Radiolysis of Nanoconfined Water. J. Phys. Chem. C 116, 13104–13110 (2012). Yang, X. L., Guo, S. H., Chan, F. T., Wong, K. W. & Ching, W. Y. Analytic solution of a two-dimensional hydrogen atom. I. Nonrelativistic theory. Phys. Rev. A 43, 1186–1196 (1991). Guo, S. H., Yang, X. L., Chan, F. 
T., Wong, K. W. & Ching, W. Y. Analytic solution of a two-dimensional hydrogen atom. II. Relativistic theory Phys. Rev. A 43, 1197–1205 (1991). Aquino, N., Campoy, G. & Flores-Riveros, A. Accurate energy eigenvalues and eigenfunctions for the two-dimensional confined hydrogen atom. Int. J. Quantum Chem. 103, 267–277 (2005). Soylu, A., Bayrak, O. & Boztosun, I. The energy eigenvalues of the two dimensional hydrogen atom in a magnetic field. Int. J. Mod. Phys. E 15, 1263–1271 (2006). Funamori, N. et al. Muonium in Stishovite: Implications for the Possible Existence of Neutral Atomic Hydrogen in the Earth's Deep Mantle. Sci. Rep. 5, 8437 (2015). Porter, A. R., Towler, M. D. & Needs, R. J. Muonium as a hydrogen analogue in silicon and germanium; quantum effects and hyperfine parameters. Phys. Rev. B 60, 13534–13546 (1999). Percival, P. W., Roduner, E. & Fischer, H. Radiolysis effects in muonium chemistry. Chem. Phys. 32, 353–367 (1978). West, R. & Percival, P. W. Organosilicon compounds meet subatomic physics: Muon spin resonance. Dalton Trans. 39, 9209–9216 (2010). Hamilton, D. L. & Henderson, C. M. The preparation of silicate compositions by a gelling method. Mineral. Mag. 36, 832–838 (1968). Yamazaki, T. Evolution of Meson Science in Japan. Science 233, 334–338 (1986). Giannozzi, P. B. et al. Quantum Espresso: a modular and open-source software project for quantum simulations of materials. J. Phys.: Condens. Matter 21, 395502 (2009). Blaha, P., Schwarz, K., Madsen, G. K. H., Kvasnicka, D. & Luitz, J. WIEN2k, An Augmented Plane Wave Plus Local Orbitals Program for Calculating Crystal Properties Vienna University of Technology, 2th Edition, (Vienna 2001). Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 77, 3865–3868 (1996). Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B 13, 5188–5192 (1976). Karmous, M. S., Ben Rhaiem, H., Robert, J.-L., Lanson, B. & Ben Haj Amara, A. Charge location effect on the hydration properties of synthetic saponite and hectorite saturated by Na+, Ca2+ cations: XRD investigation. Appl. Clay. Sci. 46, 43–50 (2009). This research was financially supported by the Natural Sciences and Engineering Research Council of Canada and by the National Research Council of Canada through TRIUMF. This work was also supported by a grant from Région Ile-de-France in the framework of DIM Oxymore and by Jean d'Alembert chair award to K Ghandi. We are grateful to Dr Jean-Louis Robert for providing synthetic talc. C. Landry also acknowledges Ontario Graduate Scholarship for their support. We are also grateful to Paul Shaver for his help, and thank the staff of the Centre for Molecular and Materials Science at TRIUMF for their technical support. University of Guelph, Department of chemistry, Guelph, ON, N1G 2W1, Canada Khashayar Ghandi & Cody Landry Université de Sherbrooke, Faculté de médecine, Sherbrooke, QC, J1H 5N4, Canada Tait Du LIONS, NIMBE, CEA, CNRS, Université Paris Saclay, CEA Saclay, F-91191, Gif-sur-Yvette, Cedex, France Maxime Lainé & Sophie Le Caër Aix-Marseille University, CINaM-CNRS UMR 7325 Campus de Luminy, F-13288, Marseille, Cedex 9, France Andres Saul Search for Khashayar Ghandi in: Search for Cody Landry in: Search for Tait Du in: Search for Maxime Lainé in: Search for Andres Saul in: Search for Sophie Le Caër in: S.L.C. and K.G. brought the idea and supervised all stages of experimentation and paper writing. T.D. and C.L. prepared all the samples. 
M.L., S.L.C., T.D., C.L. and K.G. performed the experiments. T.D. performed the data analysis under supervision of K.G. and with the help of C.L. and A.S. performed the first principle calculations. S.L.C., K.G., A.S., C.L. and T.D. discussed the results and contributed to writing the paper although the paper was mostly written by S.L.C., K.G. and A.S. Correspondence to Khashayar Ghandi. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Ghandi, K., Landry, C., Du, T. et al. Influence of confinement on free radical chemistry in layered nanostructures. Sci Rep 9, 17165 (2019) doi:10.1038/s41598-019-52662-z DOI: https://doi.org/10.1038/s41598-019-52662-z
The matter of division of one number by another resulting in a quotient and a remainder is perhaps known to all. It is a piece of knowledge that is taken for granted. But it is not generally known that this division concept is one of the favorites in creating interesting math puzzles, some of which even reach the status of classics. In this puzzle session, we will present one such classic puzzle and its variations. As Euclid was reportedly the first to formally record the long known result of division of one number by another, and named the result the division lemma (or theorem), this puzzle based on the division of one number by another may be called Euclid's division lemma puzzle. Euclid expressed the result of division in the equation, $\text{Dividend}=\text{Divisor}\times{\text{Quotient}}+\text{Remainder}$, Or, $a=bq+r$, where $a$ is the dividend, $b$ is the divisor, $q$ is the quotient and $r$ is the remainder. The strict relationship between the divisor and the remainder is: the remainder must be less than the divisor and greater than or equal to 0. Let us state the puzzle for you to solve. Euclid's division lemma puzzle on Counting of eggs This is a rich folk puzzle based on Euclid's division lemma. We will use a concise version of the original puzzle without compromising its challenge. Reportedly this puzzle appeared first in a book by A. Rampal and others. The puzzle in brief A trader selling eggs in a village had a quarrel with a man who pulled down the egg basket, breaking all the eggs. The trader appealed to the Panchayat for compensation from the offender. When the Panchayat asked the trader how many eggs were broken, the trader replied, If counted in pairs, one will be left; If counted in threes, two will be left; If counted in fours, three will be left; If counted in fives, four will be left; If counted in sixes, five will be left; If counted in sevens, nothing will be left; And my basket can contain not more than 150 eggs. Question: How many eggs did the trader have? We recommend a time of 20 minutes for solving the puzzle. In our experience, if you have taken more than 30 minutes on the puzzle and still don't know how to solve it, chances are you are moving around randomly. The fact is, knowing the answer is not enough—you must be satisfied with the method by which you found the answer. The method of solution is the more important component—it might be random or systematic, step by step. Again, even when the problem solving method is systematic, it might take a few quick steps to the solution or might be a lengthier series of steps. That's why we devoted time to more than one way to solve this puzzle. If you are curious, go on and enjoy the different types of approaches to reach one and the same destination. But before going ahead, let's tell you the answer right now—it is 119. After the two solution approaches, we didn't stop, but went on to extend the puzzle, even created new puzzles for you and finally formed a framework or problem model for this type of puzzle. First approach to solve Euclid's division lemma puzzle: by overall pattern identification, LCM, and mathematical reasoning We note early on that the result of the division by 7 is not like the other five divisions. In the other five divisions by 2, 3, 4, 5 and 6 we identify the crucial common pattern: in each case, if we add 1 to the number being divided, the remainder becomes equal to the divisor.
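Before walking through the reasoning-based approaches, here is a quick, self-contained check of the quoted answer. This minimal Python sketch (an aside, not part of the original puzzle discussion) simply scans every count up to the basket capacity and tests the seven conditions directly.

```python
def broken_eggs(limit=150):
    """All egg counts up to `limit` that leave remainder d - 1 when counted in
    d's for d = 2..6 and leave nothing when counted in sevens."""
    return [n for n in range(1, limit + 1)
            if all(n % d == d - 1 for d in (2, 3, 4, 5, 6)) and n % 7 == 0]

print(broken_eggs())  # [119] -- the only possible count within 150 eggs
```

Brute force confirms that 119 is the only possibility; the approaches below show why.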
In other words, adding 1 to the desired number makes each of 2, 3, 4, 5 and 6 a factor of the number. Based on this pattern we form the binding condition that must be satisfied by the solution, or rather by any solution to the puzzle if we remove the limiting condition of a maximum capacity of 150 eggs: A number 1 less than a multiple of 60, the LCM of 2, 3, 4, 5 and 6, and also divisible by 7 will satisfy all six division conditions. Trials: For the first multiple of 60 less 1, the target number 59 is found to be not divisible by 7, but for the second multiple of 60, that is 120 less 1, the target number 119 is divisible by 7 and thus satisfies all given conditions. So the answer is—119 was the number of eggs in the basket of the trader. In this approach we have not written any equation, but just used the underlying division and factor concepts in our mathematical reasoning. In the second approach to the solution, we will now use a mathematical approach with equations. Second approach to solve Euclid's division lemma puzzle: by division lemma equations, identification of pattern, LCM and trial In this approach, we write down the equations for the five divisions by 2, 3, 4, 5 and 6, assuming suitable symbols for the quotients and the desired number as $n$: $n=2q_2+1$; $n=3q_3+2$; $n=4q_4+3$; $n=5q_5+4$, and $n=6q_6+5$. If we add $1$ to $n$, in all five cases the remainder becomes equal to the divisor, so that the divisor can be considered a factor of $(n+1)$. Thus $(n+1)$ must be a multiple of the LCM of 2, 3, 4, 5 and 6, that is, 60. Trying and failing with the first multiple 60, success is achieved with the second multiple of 60, that is 120. Reducing 120 by 1 we find 119 to be divisible by 7 in addition. Go through each of the solutions and compare. We will now extend the puzzle by modifying one single condition in the original puzzle. We will modify the limit of 150 eggs to "more than 200 and less than 900". If you are still curious you will see how further exploration reveals new patterns. Euclid's division lemma puzzle extended The extended puzzle statement in brief is as follows, And my basket can contain more than 200 but less than 900 eggs. In the extended puzzle the limit of 150 eggs in the basket is changed to a new limiting range of more than 200 but less than 900. Effectively, we want to find the nature of the second and subsequent numbers satisfying the same given conditions, the first being 119. Give this second puzzle some time before going ahead. You will surely enjoy finding the way to the solution yourself. Solution to Euclid's division lemma puzzle extended: by LCM multiple analysis and remainder analysis for 7, identifying a second pattern We choose the first approach that we used for solving the original puzzle. The condition that is independent of any limiting capacity of the basket, but still captures all six division conditions, is: The desired number must be a multiple of 60, the LCM of 2, 3, 4, 5 and 6, less 1 and also divisible by 7. Let us take the second multiple of 60, that is 120, as the starting point. $120-1=119$, which when divided by 7 gives remainder 0. For the 3rd multiple of 60 less 1, $180-1=179$, the remainder when divided by 7 is 4. This is expected, as every additional 60 adds 4 to the remainder when divided by 7, since $60-7\times{8}=60-56=4$. For the 4th multiple of 60 minus 1, the remainder will thus be $4+4-7=1$ (every 60 adds 4 to the remainder, and when the sum exceeds 7, the excess becomes the remainder; this is a modulo 7 operation).
For the 5th multiple of 60 minus 1, remainder will be, $1+4=5$, for 6th multiple of 60 minus 1, remainder is $5+4-7=2$, for 7th multiple of 60 minus 1, remainder is $2+4=6$, for 8th multiple of 60 minus 1, remainder is $6+4-7=3$, and for 9th multiple of 60 minus 1, remainder will be, $3+4-7=0$. At last following the pattern of remainders, 0, 4, 1, 5, 2, 6, 3, the remainder again returns to 0 for 9th multiple of 60 minus 1, that is, 539. 539 is then the second number after 119 satisfying the given six division conditions. Also, 0, 4, 1, 5, 2, 6, 3 and then 0 is the repeating pattern of remainders, when consecutive multiples of 60 less 1 are divided by 7. Now we can easily answer the question, what is the third such number? It will be distant from the second by 7 numbers of multiple of 60, that is, it will be, $9+7=16$th multiple of 60 minus 1, that is, 959 which is greater than 900. Desired answer in this case is then 539. Seems difficult? It might feel so, but isn't it interesting to discover a general pattern and predict all possible such numbers satisfying the given six division conditions? If you feel still more curious and explorative, you may even form what we call a framework for any new puzzle of this type. Well, exploration has its own attractions! New puzzles based on Euclid's division lemma We are now in a position to form new puzzles of this type based on Euclid's division lemma. First new puzzle on Euclid's division lemma What is the number larger than 40 and smaller than 100, which when divided by 2, 3, and 4, remainder will be one, two and three respectively, and will be divisible by 5? If you have followed the reasoning till now, you should be able to solve this smaller problem easily within 3 minutes. Give it a try. Solution to first new puzzle on Euclid's division lemma: Pattern identification and rule formation By the experience we have gained till now we can identify the crucial binding condition easily. The desired number will be a multiple of LCM 12 of 2, 3, and 4 less 1 and divisible by 5. This is the rule that determines every such number without any constraining limits. This rule helps us to create the method to find the solution to the given problem. Solution to the first new puzzle on Euclid's division lemma: Method creation, solution and remainder pattern identification First multiple of 12 less 1: 11 not divisible by 5: remainder 1: every 12 when divided by 5 will add 2 to this remainder. Second multiple of 12 less 1: 23 not divisible by 5: remainder 3 (1+2=3). Third multiple of 12 less 1: 35 divisible by 5 but less than 40 (remainder 3+2=5 minus 5=0. modulo 5). Fourth multiple of 12 less 1: 47 not divisible by 5: remainder 2 (0+2=2). Fifth multiple of 12 less 1: 59 not divisible by 5: remainder 4 (2+2=4). Sixth multiple of 12 less 1: 71 not divisible by 5: remainder 1 (4+2=6 minus 5=1, modulo 5). Seventh multiple of 12 less 1: 83 not divisible by 5: remainder 3 (1+2=3, the next one will be the answer). Eighth multiple of 12 less 1: 95 divisible by 5: remainder 0 (3+2=5 minus 5=0, modulo 5). 95 is the desired number satisfying all conditions. The repeating remainder pattern is, 0, 2, 4, 1, 3 and then again 0, and two such numbers will be separated by 5 multiples of 12, that is 60. The third such number will then be simply, 95 + 60 = 155. Second new puzzle on Euclid's division lemma What is the first number which when divided by 2, 3 and 4 remainder will be 1, 2 and 3 respectively and will be divisible by 6? 
Try to solve this puzzle to gain a new insight into the nature of this types of puzzles. Solution to the second new puzzle based on Euclid's division lemma As before we form the binding condition for the solution, The number must be a multiple of LCM 12 of 2, 3, and 4, less 1 and also divisible by 6. It is easy to see that a multiple of 12 less 1 will never be divisible by 6. So the answer is, no such number exists. The answer lies in the relationship between the last divisor which is a factor with remainder 0 and the other divisors each with remainder 1 less than itself, The first set of divisors 2, 3, 4 has factors common to the last divisor 6, and so the multiple of LCM 12 of 2, 3, and 4 less 1 cannot have 6 as a factor. This insight helps us to form the binding condition for a valid division lemma puzzle of similar nature, The divisor with 0 remainder cannot have any common factor with the set of other divisors. We are now in a position to form a framework or problem model for similar puzzles based on division lemma. Specification or problem model for similar puzzles based on Euclid's division lemma A valid similar puzzle based on Euclid's division lemma must have, A set of integers as divisors to a number with each division resulting in remainder 1 less than the divisor, and another lone integer as a factor to the same number but having no common factor with rest of the divisors. With this specification in place let us state the third new puzzle for you to solve. We assure you that we will close this session with this last puzzle. Third new puzzle on Euclid's division lemma What is the first number which when divided by 2, 3, 4, 5, and 6 remainder will be 1, 2, 3, 4, and 5 respectively and the number will be divisible by 11? Do give it a try. It should not be difficult for you now. Solution to the third new puzzle based on Euclid's division lemma As before the desired number will be a multiple of 60 less 1 and divisible by 11. Every additional 60 when divided by 11 will add to the remainder 5 or excess to 11 after adding 5 to the remainder. The repeating remainder pattern will then be, 0, 5, 10, 4, 9, 3, 8, 2, 7, 1, 6 and then 0. Two consecutive solutions will be separated by 11 multiples of 60, that is 660. The 1st multiple of 60 less 1 results in remainder 4 which is 8 multiples of 60 behind from next remainder 0 in the repeating remainder sequence. So, $1+8=9$th multiple of 60 less 1, that is 539 will be the first number satisfying all given puzzle conditions. The second such number will be the 20th multiple of 60 less 1, that is 1199, and so on. There will be infinite number of such integers satisfying the given six division conditions. The specification by no means is a complete one as we have included the phrase "similar puzzle" in every case. There can be varieties of other puzzles using the basic concepts of division, remainder and factors. 
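The problem model above translates into a short, general solver. In the Python sketch below (the function name and interface are our own), the first set of divisors fixes the step as their LCM, a gcd test implements the validity condition for the lone divisor, and the loop collects the first few numbers satisfying all conditions; the printed lists reproduce the answers worked out above.

```python
from math import gcd, lcm

def division_lemma_numbers(divisors, lone_divisor, how_many=2):
    """First `how_many` numbers that leave remainder d - 1 for every d in
    `divisors` and are divisible by `lone_divisor`.  Returns [] when the lone
    divisor shares a factor with lcm(divisors), in which case no solution exists."""
    step = lcm(*divisors)
    if gcd(step, lone_divisor) != 1:
        return []                       # e.g. lone divisor 6 with divisors 2, 3, 4
    found, n = [], step - 1
    while len(found) < how_many:
        if n % lone_divisor == 0:
            found.append(n)
        n += step
    return found

print(division_lemma_numbers((2, 3, 4, 5, 6), 7))    # [119, 539]  (original and extended puzzle)
print(division_lemma_numbers((2, 3, 4), 5))          # [35, 95]    (95 is the first one above 40)
print(division_lemma_numbers((2, 3, 4), 6))          # []          (second new puzzle: impossible)
print(division_lemma_numbers((2, 3, 4, 5, 6), 11))   # [539, 1199] (third new puzzle)
```

Consecutive solutions are always separated by the lone divisor times the LCM of the other divisors, which is the repeating-remainder pattern observed above.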
Distribution of Gaps and Adhesive Interaction Between Contacting Rough Surfaces Part of a collection: Mark Robbins, in memoriam Joseph M. Monti ORCID: orcid.org/0000-0002-5971-97791 nAff2, Antoine Sanner ORCID: orcid.org/0000-0002-7019-21033,4 & Lars Pastewka ORCID: orcid.org/0000-0001-8351-73363,4 Tribology Letters volume 69, Article number: 80 (2021) Cite this article Understanding the distribution of interfacial separations between contacting rough surfaces is integral for providing quantitative estimates for adhesive forces between them. Assuming non-adhesive, frictionless contact of self-affine surfaces, we derive the distribution of separations between surfaces near the contact edge. The distribution exhibits a power-law divergence for small gaps, and we use numerical simulations with fine resolution to confirm the scaling. The characteristic length scale over which the power-law regime persists is given by the product of the rms surface slope and the mean diameter of contacting regions. We show that these results remain valid for weakly adhesive contacts and connect these observations to recent theories for adhesion between rough surfaces. Contact between nominally flat, rough surfaces has been the subject of study for countless experimental, analytical, and numerical investigations over the past century [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]. A universal theme found in most cases is that contact is limited to the peaks or asperities of the rough topography and the real area of contact \(A_\text {rep}\) is much less than the apparent projected area \(A_0\). Substantial progress has been made in determining the relationship between \(A_\text {rep}\) and the applied normal force F in non-adhesive, frictionless systems assuming elastic response. In such cases, the proportionality \(A_\text {rep} = \kappa _\text {rep} F/h_0^\prime E^*\) is found when \(A_\text {rep} \lesssim 0.1A_0\). Here, \(h_0^\prime\) is the root mean square (rms) slope of the rough topography, \(E^*\) is the elastic contact modulus, and the dimensionless constant \(\kappa _\text {rep}\approx 2\) [2, 6, 7]. This result implies a load-independent mean compressive stress, \(\sigma _\text {rep}=h_0^\prime E^*/\kappa _\text {rep}\) in contacting regions. Since flattening a region with slope \(h_0^\prime\) introduces a strain of order \(h_0^\prime\) into the surface, \(\sigma _\text {rep}\) reflects the stress required to flatten the rough topography. Recently, interest has turned to systems including attractive interactions that lead to macroscopic adhesion. Two opposite limits exist in the classical literature on the adhesion of smooth spheres: in the Derjaguin–Muller–Toporov (DMT) [32] limit, weak attractive forces between solids do not alter the geometry of contact, but do reduce the global mean pressure; in the opposite limit, known as the Johnson–Kendall–Roberts (JKR) [33] limit, strong attractive forces significantly change the structure of the contact edge and can lead to contact hysteresis. For the contact of spheres, the Tabor and Maugis parameters [34] describe the continuous transition between the two limits in terms of the relative strengths of adhesive and elastic parameters. Working in a DMT-like limit, Pastewka and Robbins developed a theory to predict the onset of stickiness in contacts of self-affine rough surfaces [23] that was later independently confirmed by Müser [35]. 
They split the total normal force as \(F=F_\text {rep}+F_\text {att}\) into the sum of a repulsive contribution \(F_\text {rep}>0\) and an attractive contribution \(F_\text {att}<0\). Repulsive and attractive contributions originate from repulsive surface patches of total area \(A_\text {rep}\) and attractive surface patches of area \(A_\text {att}\) (see Fig. 1). The total force is then given by $$\begin{aligned} F = \sigma _\text {rep} A_\text {rep} - \sigma _\text {att} A_\text {att}, \end{aligned}$$ where \(\sigma _\text {att}\) is the mean stress in the attractive patches that is roughly constant and of order \(\sigma _\text {att}=w/\Delta r\), where w is the work of adhesion and \(\Delta r\) the range of the attractive interaction. Like in the purely non-adhesive limit, the geometry of contact is fractal in the DMT-like limit with proportionality between \(A_\text {rep}\) and the contact perimeter \(P_\text {rep}\) given by $$\begin{aligned} P_\text {rep}=\pi A_\text {rep}/d_\text {rep}, \end{aligned}$$ where the mean contact diameter \(d_\text {rep}\) is approximately constant [7, 23, 36]. Note that Eq. (2) holds generally for any geometric object, but \(d_\text {rep}\) varies with \(A_\text {rep}\) in most cases. Short-ranged attractive interactions generate narrow bands of approximately constant width \(d_\text {att}\) located around contact regions (see Fig. 1). If \(d_\text {att}\) is small, then the total area contributing to attractive forces \(A_\text {att}=P_\text {rep} d_\text {att}\), which can be related to \(A_\text {rep}\) via mutual proportionality with \(P_\text {rep}\) given by Eq. (2). This means, that for non-sticky interfaces, the (repulsive) contact area is given by the expression $$\begin{aligned} A_\text {rep}=\kappa \frac{F}{h_0^\prime E^*} \end{aligned}$$ with an effective \(1/\kappa =1/\kappa _\text {rep}-1/\kappa _\text {att}\). The adhesive interaction hence increases the effective value of the dimensionless constant \(\kappa\). A macroscopic force is required to separate the two surface when \(|F_\text {att}|\approx F_\text {rep}\) or equivalently \(\kappa _\text {att}\approx \kappa _\text {rep}\). Interfaces that require a macroscopic force for separation are called "sticky". a Contact map for a self-affine surface with \(H = 0.8\) and \(\lambda _\text {min} = 16 a_0\). \(A_\text {rep}\) is the sum over all pixels shown in black, and the contact perimeter \(P_\text {rep}\) is marked in red. b Schematic of a contact region created by a contacting asperity, showing the mean contact diameter \(d_\text {rep}\). The gap \(\Delta (x)\) between surfaces grows as the lateral distance \(x^{3/2}\) (Color figure online) This theory depends sensitively upon the distribution of interfacial separations or gaps, that is assumed to be unaltered from the non-adhesive scenario. Previous work has primarily focused on the behavior of the mean gap \({\bar{g}}\), which is commonly found to be exponentially related to the normal load in non-adhesive contact as \(F \propto \exp \left( -{\bar{g}}/\gamma h_0\right)\), where \(h_0\) is the rms surface height and \(\gamma\) is a dimensionless constant of order unity [8, 9, 12,13,14,15,16,17, 21, 24, 29]. Almqvist et al. 
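A minimal numerical sketch of this bookkeeping may help: it combines \(\sigma _\text {att}\approx w/\Delta r\) and \(A_\text {att}=\pi A_\text {rep} d_\text {att}/d_\text {rep}\) from Eqs. (1) and (2) into the effective \(\kappa \) of Eq. (3). All input values below are placeholders chosen only to illustrate the arithmetic; they are not parameters of the calculations described in this paper.

```python
import math

def effective_kappa(E_star, rms_slope, work_of_adhesion, interaction_range,
                    d_rep, d_att, kappa_rep=2.0):
    """Effective kappa in A_rep = kappa * F / (h' E*), from the force balance
    F = sigma_rep * A_rep - sigma_att * A_att with A_att = pi * A_rep * d_att / d_rep.
    Returns (kappa, kappa_att); a diverging kappa signals a sticky interface."""
    sigma_att = work_of_adhesion / interaction_range
    inv_kappa_att = math.pi * sigma_att * d_att / (rms_slope * E_star * d_rep)
    inv_kappa = 1.0 / kappa_rep - inv_kappa_att
    kappa = 1.0 / inv_kappa if inv_kappa > 0.0 else math.inf
    return kappa, 1.0 / inv_kappa_att

# Placeholder numbers (SI units), chosen only to illustrate the bookkeeping.
kappa, kappa_att = effective_kappa(E_star=1e9, rms_slope=0.1,
                                   work_of_adhesion=0.05, interaction_range=1e-9,
                                   d_rep=100e-9, d_att=4e-9)
print(f"kappa_att = {kappa_att:.1f}, effective kappa = {kappa:.2f}")
```

With these placeholder numbers the interface is not sticky; reducing the rms slope or the elastic modulus drives \(\kappa _\text {att}\) toward \(\kappa _\text {rep}\), at which point the effective \(\kappa \) diverges and a macroscopic pull-off force appears.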
[16], the contact mechanics challenge [29] and Wang and Müser [37] have reported distributions of interfacial separations, but these works have either not focused on the behavior at small gaps that is important for understanding short-ranged adhesion or indirectly reported it through analysis of percolation in Reynolds flow. In this paper, we derive the distribution of interfacial separations in the vicinity of contacting regions and show numerically that our expression holds even in the weakly adhesive limit. This distribution can be used to compute the total attractive contribution to the force and hence the force–area relationship for weakly adhesive interfaces. Simulation Methods In our simulations, we invoke the standard mapping that allows the contact of two rough, elastic solids to be treated as contact between an initially flat, elastic solid and a rough, rigid surface [38]. If the elastic properties of the two original surfaces are encoded by the Young's moduli \(E_1\) and \(E_2\) and Poisson's ratios \(\nu _1\) and \(\nu _2\), the combined elastic response is given by the elastic contact modulus $$\begin{aligned} \frac{1}{E^*} = \frac{1-\nu _1^2}{E_1} + \frac{1-\nu _2^2}{E_2}. \end{aligned}$$ The roughness profile of each periodic, \(L \times L\) surface with nominal area \(A_0=L^2\) is described by a self-affine fractal between an upper cutoff length scale \(\lambda _\text {max}\) and a lower cutoff \(\lambda _\text {min}\), where length scales are given in terms of the pixel size \(a_0\). This means that the power spectral density (PSD) \(C(\mathbf {q})\) of the isotropic, self-affine roughness depends only on wavevector magnitude \(q = |\mathbf {q}|\) and satisfies $$\begin{aligned} C(q) = {\left\{ \begin{array}{ll} C_0 &{} \text {if}\;q\le q_1 \\ C_0\left( \frac{q}{q_1}\right) ^{-2(1+H)} &{} \text {if}\;q_1 < q \le q_2 \\ 0 &{} \text {if}\;q > q_2 \end{array}\right. }, \end{aligned}$$ where \(q_1 = 2\pi /\lambda _\text {max}\) and \(q_2 = 2\pi /\lambda _\text {min}\) are the wavevector magnitudes corresponding to the roughness cutoffs and H is the Hurst exponent (\(0< H < 1\)) that determines correlations in the roughness. The resolution of the calculations is given by \(\lambda _\text {min}/a_0\), while the ratio \(L/\lambda _\text {max}\) controls the surface "representativity" [18]. The distribution of heights P(h) for a Gaussian self-affine rough surface with mean height \({\bar{h}}\) and \(L/\lambda _\text {max} \gg 1\) has (by construction) Gaussian form. We use \(\lambda _\text {min} \ge 32 a_0\) for the results presented here to ensure that the contact edge is sufficiently resolved. We use a Fourier-filtering algorithm to create such self-affine surfaces for our calculations [36, 39]. Performing the mapping described above results in a rough surface that is the incoherent sum of the rough profiles of the original surfaces. For the combination of two profiles that have identical statistical properties, the commonly utilized statistical measures of roughness—the rms height \(h_0 = \sqrt{\langle h^2\rangle }\), rms slope \(h_0^\prime = \sqrt{\langle |\nabla h|^2\rangle }\), and rms curvature \(h_0^{\prime \prime } = \sqrt{\langle |\nabla ^2 h|^2\rangle }/2\)—of the combined surface each increase by a factor of \(\sqrt{2}\). We therefore here work in the limit of a rigid rough surface contacting a flat deformable elastic half-space and all statistical properties are to be interpreted for a combined surface. 
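A generic Fourier-filtering construction of such a topography can be sketched in a few lines of numpy. This is an illustration of the idea rather than the algorithm of Refs. [36, 39]: random phases are combined with an amplitude proportional to \(\sqrt{C(q)}\) following the piecewise PSD of Eq. (5), and the field is rescaled at the end to a prescribed rms slope; the grid size, cutoffs, Hurst exponent and seed are arbitrary example values.

```python
import numpy as np

def self_affine_surface(n=512, a0=1.0, lambda_min=32.0, lambda_max=256.0,
                        hurst=0.8, rms_slope=0.1, seed=0):
    """Periodic n x n self-affine topography from Fourier filtering:
    amplitude ~ sqrt(C(q)) with the piecewise PSD of Eq. (5), random phases,
    rescaled to the requested rms slope (computed spectrally)."""
    rng = np.random.default_rng(seed)
    qx = 2.0 * np.pi * np.fft.fftfreq(n, d=a0)
    q = np.sqrt(qx[:, None]**2 + qx[None, :]**2)
    q1, q2 = 2.0 * np.pi / lambda_max, 2.0 * np.pi / lambda_min

    psd = np.zeros_like(q)
    psd[(q > 0) & (q <= q1)] = 1.0                        # plateau below q1
    mask = (q > q1) & (q <= q2)
    psd[mask] = (q[mask] / q1) ** (-2.0 * (1.0 + hurst))  # power-law roll-off
    # C(q) = 0 above q2 is already enforced by the zero initialisation.

    phases = np.exp(2j * np.pi * rng.random((n, n)))
    h = np.fft.ifft2(np.sqrt(psd) * phases).real

    h_q = np.fft.fft2(h)
    current_slope = np.sqrt(np.sum(q**2 * np.abs(h_q)**2)) / n**2
    return h * (rms_slope / current_slope)

h = self_affine_surface()
print(h.shape, float(np.std(h)))
```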
Note that \(\langle \cdot \rangle\) denotes the spatial average over the domain where the topography function h(x, y) is defined. We use a static boundary element method to compute the linear elastic deformation induced by normal contact on an isotropic half-space [40,41,42]. In order for linear elasticity to be a good approximation, the surface slope must be small. We use \(h_0^\prime = 0.1\) to ensure that this approximation is justified, but note that real surfaces may have values \(h_0^\prime \approx 1\) or larger [31, 43, 44]. Our simulations assume frictionless contact with a non-interpenetration constraint. Pixels on the surface of the elastic solid are considered to be in contact when they bear a compressive pressure; each contacting pixel contributes an area \(a_0^2\) to the total contact area. For non-adhesive calculations, we enforce the non-interpenetration constraint using a constrained conjugate-gradient optimizer [45]. For adhesive calculations, we assume an interaction energy of $$\begin{aligned} v(g) = - w \exp\left(-g/\rho\right), \end{aligned}$$ which depends on the gap g, where \(\rho\) is the interaction range. (Note that the range defined in Ref. [23] is \(\Delta r\approx 1.36 \rho\) for this potential.) The overall adhesive energy is then given by $$\begin{aligned} E_\text{att} = \int dxdy\, v(g(x,y)), \end{aligned}$$ where g(x, y) is the local gap at position x, y. The total energy, including this attractive interaction, is minimized subject to the non-interpenetration constraint using the constrained non-linear conjugate-gradient algorithm of Ref. [46]. Similar boundary element methods have been used extensively to study rough contacts [11, 18,19,20,21,22,23, 25,26,27,28, 30]. The attractive interaction is identical to the one employed in the contact mechanics challenge [29].

The distribution of interfacial separations g between a rough surface and an undeformed elastic solid with surface at \(h = 0\) is equal to the (Gaussian) distribution of heights, $$\begin{aligned} p(g) = \frac{1}{\sqrt{2\pi}h_0}\exp\left[-\frac{(g-{\bar{g}})^2}{2h_0^2} \right] \quad \text{for}\quad g \ge 0, \end{aligned}$$ where \({\bar{g}}\) is the initial mean surface height. The width of the distribution is the same as for the rough surface itself and scales with the long wavelength cutoff, \(h_0 \sim \lambda_\text{max}^H\). When the solids are pushed together under load, the elastic solid deforms and the mean interfacial separation \({\bar{g}}\) shrinks. We now write the gap distribution as the additive decomposition $$\begin{aligned} p(g) = p_c(g) + p_n(g) + p_f(g), \end{aligned}$$ where \(p_c(g)\) contains the distribution of the gaps g within the contacting area, \(p_n(g)\) the contribution from near the contact edge, and \(p_f(g)\) the contribution from farther distances from the contact edge. Since p(g) is normalized, the probability of contact for a given contact fraction \(c=A_\text{rep}/A_0\) is $$\begin{aligned} p_c(g) = c\delta(g), \end{aligned}$$ where \(\delta\) is the Dirac \(\delta\)-function. The next contribution \(p_n(g)\) comes from small separations near the contact edge. As shown in numerical calculations in Refs. [7, 23], the contact edge has a total perimeter \(P_\text{rep} = \pi A_\text{rep}/d_\text{rep}\) with a constant \(d_\text{rep}\). (This means that both area and contact edge are fractal objects, see Fig. 1a, and suggests that their fractal dimensions are identical.)
We can write this contribution as $$\begin{aligned} p_n(g) = \frac{P_\text {rep}}{A_0}\int _0^\xi dx\ \delta (g - \Delta (x)) \end{aligned}$$ where \(\Delta (x)\) describes how the mean gap between the two contacting surfaces varies as a function of distance x from the contact edge (see Fig. 1b). Note that we take the integral in Eq. (11) out to a characteristic length \(\xi\) that is close enough to the contact edge such that the number of points contributing to the gap distribution is still proportional to \(P_\text {rep}\). The arguments that lead to Eq. (11) rely on the geometry of the contact patches formed between contacting self-affine surfaces. Refs. [7, 23] showed that in this case \(P_\text {rep}\propto A_\text {rep}\), compared to \(P_\text {rep}\propto A_\text {rep}^{1/2}\) as expected for simple contact shapes like circles. The larger perimeter scaling exponent arises because fractal contact regions are not compact, and a significant perimeter contribution comes from regions inside the convex hull enclosing individual patches. As patches become larger, they simply contain more non-contacting regions. The presence of non-contacting regions makes the contact geometry appear locally rectangular. The deformation cross-section for each line segment drawn through a single continuous contact diameter looks like that of a cylindrical indenter, rather than a spherical one. Within Euclidean geometry, a rectangle with constant thickness along its minor axis has the property \(P_\text {rep}\propto A_\text {rep}\), if the rectangle is thin enough such that the contributions to \(P_\text {rep}\) from the shorter sides can be neglected. The characteristic width of the contacting rectangle, and hence the contact diameter for the cylinder, is \(d_\text {rep}\). Working with this analogy, we now derive an analytic prediction for the distribution of gaps produced by non-adhesive contact of smooth surfaces. Assuming a non-adhesive cylindrical contact [47, 48], we have $$\begin{aligned} \Delta (x) = \frac{4h_0^\prime d_\text {rep}}{3}\left( \frac{x}{d_\text {rep}}\right) ^{3/2} = g_0 \left( \frac{x}{d_\text {rep}}\right) ^{3/2}, \end{aligned}$$ where \(h_0^\prime\) is the local slope of the cylinder at the contact edge. It can be shown generally for non-adhesive contact that the local separation \(\Delta (x) \propto x^{3/2}\) for small lateral distances x from the contact edge [38]. The characteristic scale for the interfacial separation at the contact edge is \(g_0 = 4h_0^\prime d_\text {rep}/3\), the prefactor in Eq. (12). Inserting Eqs. (12) into (11) yields $$\begin{aligned} p_n(g) = \frac{2 P_\text {rep} d_\text {rep}}{3 A_0 g_0}\left( \frac{g_0}{g}\right) ^{1/3} = \frac{2\pi c}{3g_0} \left( \frac{g_0}{g}\right) ^{1/3}, \end{aligned}$$ where Eq. (2) was used for the right-hand equality. This is our prediction for the gap distribution near the contact edge. For small g, the distribution diverges as \(g^{-1/3}\). We note that \(\int _0^\infty dg\,p(g) = 1\), but taking the integral out to infinity for the contribution to p(g) given by Eq. (13) diverges. The contribution \(p_n(g)\) to the gap distribution can therefore only be valid up to \(g\sim g_0\), the characteristic gap that is reached a distance \(d_\text {rep}\) from the contact edge. The "far" contribution \(p_f(g)\) looks like the undeformed distribution of heights given by Eq. (8) (see also Refs. [16, 29]). Almqvist et al. [16] have observed a divergence of the gap distribution for small g but have not quantified the exponent. 
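The change of variables behind Eq. (13) is easy to verify numerically. The following sketch is ours, not from the paper: it samples points uniformly in the distance x from a contact edge, maps them through \(\Delta(x)\) of Eq. (12), and compares the resulting histogram with the predicted \(g^{-1/3}\) form.

```python
import numpy as np

# Illustrative check of Eq. (13): if points are distributed uniformly in the
# distance x from the contact edge and the gap opens as
# Delta(x) = g0 * (x/d_rep)**1.5, the induced gap density on (0, g0] is
# p(g) = (2 / (3*g0)) * (g0/g)**(1/3).
rng = np.random.default_rng(1)
g0, d_rep = 1.0, 1.0                     # arbitrary units
x = rng.uniform(0.0, d_rep, size=1_000_000)
g = g0 * (x / d_rep) ** 1.5

bins = np.logspace(-4, 0, 30) * g0
hist, edges = np.histogram(g, bins=bins, density=True)
centers = np.sqrt(edges[1:] * edges[:-1])
predicted = 2.0 / (3.0 * g0) * (g0 / centers) ** (1.0 / 3.0)

for gc, h, p in zip(centers[::6], hist[::6], predicted[::6]):
    print(f"g/g0={gc:8.2e}  sampled p(g)={h:8.3f}  predicted={p:8.3f}")
```

The sampled histogram follows the predicted \(g^{-1/3}\) divergence down to gaps limited only by the sample size, which is the behavior the simulations below test for actual rough-surface contacts.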
Pastewka and Robbins [23] have used this divergence for their theory of "stickiness" but have not provided extensive numerical evidence for its validity. We now supplement these observations with additional high-resolution numerical data. We performed simulations with sufficiently fine resolution (\(\lambda_\text{min} = 32 a_0\) and \(128 a_0\)) to test Eq. (13). Figure 2 shows distributions of interfacial separations for non-contacting grid points. The distributions were normalized to the respective non-contacting fractional area, \(1-c\), and divided by the prefactor of Eq. (13) to collapse all data points onto a single \((g/g_0)^{-1/3}\) power-law. We used the numerically measured value of \(d_\text{rep}\) (see Ref. [23] for details of how to compute \(d_\text{rep}\)) for the data collapse. The power-law regime emerges for both \(H = 0.3\) (Fig. 2a) and \(H = 0.8\) (Fig. 2b).

Fig. 2 (caption): The probability distribution of interfacial separations for \(H = 0.3\) (a) and \(H = 0.8\) (b) normalized to \(1-c\) and divided by the prefactor in Eq. (13) for the ratios \(\lambda_\text{max}/\lambda_\text{min}\) indicated in the legend, with \(\lambda_\text{min} = 32 a_0\) (solid symbols) and \(\lambda_\text{min} = 128 a_0\) (open symbols, color matches \(\lambda_\text{max}/\lambda_\text{min}\) in the legend). Here, \(A/A_0 \approx 0.03\) for \(\lambda_\text{min} = 32 a_0\) and \(A/A_0 \approx 0.04\) for \(\lambda_\text{min} = 128 a_0\). The power-law \(p_n(g)\) from Eq. (13) is shown as a dashed black line

The data only collapse over a limited range of gaps. The divergence at small gap predicted by Eq. (13) is always cut off at a minimum length scale, below which p(g) is uniformly distributed. In our simulations, the cutoff scale is \(\sim 10^{-3}-10^{-2}a_0\); the threshold for saturation of p(g) at small \(g/g_0\) is proportional to \(a_0/\lambda_\text{min}\), i.e., inversely proportional to the resolution. For \(H = 0.3\) (Fig. 2a), the contributions to p(g) from near-contact and far-from-contact regions overlap (i.e., \(p_n(g)\) and \(p_f(g)\) have similar magnitude) for \(\lambda_\text{min} = 32 a_0\) (solid symbols). The power-law regime only clearly emerges over 1–2 decades for \(H = 0.3\) if the resolution of the calculation is improved by increasing \(\lambda_\text{min}\), as is shown for \(\lambda_\text{min} = 128 a_0\) (open symbols). For \(H=0.8\), on the other hand, the power-law regime extends over a much larger range of gaps even for our "coarse" calculations with \(\lambda_\text{min} = 32 a_0\). As for \(H=0.3\), increasing the short-wavelength cutoff to \(\lambda_\text{min}=128 a_0\) extends the power-law to smaller gaps.

One can integrate \(p_n(g)\) to calculate the cumulative fractional area \(c_n(g_c)\) in the near-contacting region closer than a cutoff gap \(g_c\). We find $$\begin{aligned} c_n(g_c) = \int_0^{g_c} dg\,p_n(g) = \pi c \left(\frac{g_c}{g_0}\right)^{2/3}, \end{aligned}$$ which for \(g_c=g_0\) yields \(c_n/c=\pi\). This means the area in the near-contact region is \(\sim 3\) times the area in the contacting regions. The area in the near-contact region is equal to the contact area when \(g_c\approx 0.18g_0\). The breakdown of the power-law region at small g occurs at \(g/g_0\approx 10^{-3}\) or smaller (for \(H = 0.8\)), meaning that the area within the roll-off region at small g is at most about \(3\%\) of the contact area.
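The numbers quoted above follow directly from Eq. (14); a short check (ours) reproduces them.

```python
import math

# c_n(g_c)/c = pi * (g_c/g0)**(2/3) from Eq. (14).
ratio = lambda gc_over_g0: math.pi * gc_over_g0 ** (2.0 / 3.0)

print(ratio(1.0))              # ~3.14: near-contact area is ~3x the contact area at g_c = g0
print((1.0 / math.pi) ** 1.5)  # ~0.18: g_c/g0 at which near-contact area equals contact area
print(ratio(1e-3))             # ~0.03: roll-off region holds at most ~3% of the contact area
```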
For large gaps the power-law regime is cut off by a Gaussian gap distribution that reflects the distribution of undeformed or weakly deformed parts of the surface (see also Eq. (8)). For both \(H=0.3\) and \(H=0.8\), the uptick in p(g) in the range \(g/g_0 \sim 1-10\) is the peak of this Gaussian height distribution. Since \(h_0\propto (\lambda_\text{max}/\lambda_\text{min})^H\), the power-law regime extends much further for \(H = 0.8\) (Fig. 2b) than for \(H=0.3\) (Fig. 2a). Increasing \(\lambda_\text{max}/\lambda_\text{min}\) shifts the Gaussian peak out to larger \(g/g_0\) (most prominently for \(H > 0.5\)), and extends the range over which \(p(g) \approx p_n(g)\). However, for the largest \(\lambda_\text{max}/\lambda_\text{min}\) we studied, \(p(g) < p_n(g)\) for \(0.1 \lesssim g/g_0 \lesssim 1\). This suggests that the cutoff of the integral in Eq. (11) may be up to an order of magnitude smaller than \(g_0\), about equal to the point where the near-contact area matches the contact area. Still larger calculations may be required to verify this result.

The crucial question for adhesive theories is how applicable our results are to gap distributions in the presence of attractive interactions. We quantify the strength of the attractive interaction by the value of \(1/\kappa_\text{att}\) (see Ref. [23] and the discussion below), since interfaces become sticky for \(1/\kappa_\text{att} \gtrsim 1/2\) [23]. For the data collapse, we use the values of \(d_\text{rep}\) measured in the non-adhesive calculations. Figure 3 shows the gap distributions for adhesive calculations as w increases (with constant \(\rho\)), using surfaces with \(H=0.8\) and \(\lambda_\text{min}=128 a_0\). To facilitate the comparison between non-adhesive and adhesive calculations, the adhesive simulations are conducted with the constraint that the mean gap is identical to that of the non-adhesive simulation (\(1/\kappa_\text{att} = 0\)). This choice of constraint means that the contact areas are not equal, particularly for sticky surfaces. For non-sticky surfaces (up to \(1/\kappa_\text{att}\approx 0.2\)), the gap distribution follows the predicted power-law over the same range as the non-adhesive result. At \(1/\kappa_\text{att}\approx 0.2\) (\(A_\text{rep}/A_0 \approx 0.06\)) there is a slight deviation toward smaller gaps. The sticky cases with \(1/\kappa_\text{att}\approx 1\) and \(1/\kappa_\text{att}\approx 2\) (\(A_\text{rep}/A_0 \approx 0.12\) and 0.16, respectively) clearly deviate from our prediction for the divergence. These sticky interfaces appear to still exhibit a regime where \(p(g)\propto g^{-1/3}\), but the range of the power-law regime becomes narrower as \(1/\kappa_\text{att}\) increases. This means that the prefactor from Eq. (13) no longer captures the intensity of the divergence and that much of the non-contacting area is pushed out toward larger gaps. This is also reflected by the increase of the peak at larger gaps. The characteristic gap below which the distribution rolls off and becomes constant appears to increase slightly in the sticky limit.

Fig. 3 (caption): Comparison of the interfacial probability distributions for adhesive contact with increasing w, for \(H = 0.8\), \(\lambda_\text{min} = 128 a_0\), \(\lambda_\text{max}/\lambda_\text{min} = 256\), and \(L/\lambda_\text{max}=2\). The corresponding non-adhesive contact distribution from Fig. 2b is replotted (\(1/\kappa_\text{att} = 0\), \(A_\text{rep}/A_0 \approx 0.04\)), and the adhesive distributions are normalized by \(g_0\) and c obtained from the non-adhesive result. The power-law \(p_n(g)\) from Eq. (13) is shown as a dashed black line. The dotted gray line corresponds to \(g/g_0 = \rho/g_0\) with \(\rho = 4a_0\).

The behavior of the near-contact interfacial separation is an important consideration in the context of adhesive contact because it determines the attractive contribution to the force, $$\begin{aligned} \frac{F_\text{att}}{A_0} = \int_0^\infty dg\, p(g) \frac{dv}{dg}. \end{aligned}$$ Assuming that the contribution from \(p_f(g)\) is negligible, i.e., that our potential is sufficiently short-ranged, the attractive pressure per unit contact area is then given by $$\begin{aligned} \frac{F_\text{att}}{A_\text{rep}} = \frac{2\pi}{3} \int_0^\infty \frac{dg}{g_0}\, \left(\frac{g_0}{g}\right)^{1/3} \frac{dv}{dg} = -\frac{2\pi}{3} \frac{w}{(g_0^2\rho)^{1/3}}, \end{aligned}$$ where we have used the interaction law v(g) from Eq. (6). Note that Eq. (16) defines the value of the dimensionless constant \(1/\kappa_\text{att}=F_\text{att}/h_0^\prime E^* A_\text{rep}\), which we used to quantify the strength of adhesion in Fig. 3 and which can be used to determine the effective range of adhesion \(\Delta r\) (see Ref. [23]). This expression can also be used to quantify what "short-ranged" adhesion means for rough surfaces and, therefore, to determine the limits of the theories of Pastewka and Robbins [23] and Müser [35]. Figure 4 shows the normalized integrand of Eq. (16) as a function of the normalized gap \(g/g_0\). The integrand depends on the range of the interaction potential \(\rho\). For \(\rho \approx 0.1 g_0\) and below, the main contribution to the integrand, and thereby to the attractive force, comes from gaps with \(g/g_0<0.1\) where the power-law holds. The roll-off region at small gap contributes negligibly to this integral. This means that the adhesion range is short enough if \(\rho \lesssim 0.1 g_0\). The calculations presented here were carried out with \(\rho =4a_0\) (as is typical for attractive interactions, e.g., Refs. [49, 50]) and our adhesive calculations have \(g_0>25 a_0\), which means the range is sufficiently small. Interactions with a larger range (e.g., electrostatic interactions) interact with the full topography of the surface and require corrections to the expression for \(F_\text{att}\). The same is true for extremely smooth surfaces with small values of \(g_0\). This defines the bounds of validity for the theory outlined in Refs. [23, 35].

Fig. 4 (caption): Integrand of Eq. (16), i.e., the non-dimensionalized contribution to the overall attractive force as a function of the gap \(g/g_0\) for different ranges \(\rho\) of the adhesive interaction

Besides leaving the gap distribution unmodified, the DMT-like limit also implies that the repulsive area \(A_\text{rep}\) equals the typical definition of the contact area, namely, vanishing gaps \(g=0\). The latter definition makes sense in continuum theories (like the present work), but not for models that consider the full intermolecular interaction, which exhibits soft repulsion (e.g., Ref. [23]), or for those that include thermal fluctuations (e.g., Refs. [51, 52]). As in the JKR model [33], these definitions of contact no longer agree for sticky interfaces.
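Before turning to the comparison with the JKR limit, the short-range criterion attached to Eq. (16) can be made concrete with a small quadrature sketch. This is our own illustration, not the authors' analysis script: it evaluates the fraction of the attractive-force integral contributed by gaps \(g < 0.1 g_0\) for several interaction ranges \(\rho\), using the power-law gap distribution and the exponential potential of Eq. (6); parameter choices are ours.

```python
import numpy as np

def near_gap_fraction(rho_over_g0, u_max=0.1, t_upper=50.0, n=200_001):
    """Fraction of the Eq. (16) integral contributed by gaps g < u_max * g0 (sketch).

    Substituting t = (g/g0)**(2/3) removes the integrable g**(-1/3) singularity;
    for v(g) = -w exp(-g/rho), dv/dg is proportional to exp(-g/rho), and the
    common prefactors cancel in the ratio.
    """
    t = np.linspace(0.0, t_upper, n)
    dt = t[1] - t[0]
    integrand = np.exp(-t ** 1.5 / rho_over_g0)
    total = integrand.sum() * dt
    near = integrand[t <= u_max ** (2.0 / 3.0)].sum() * dt
    return near / total

for r in (0.01, 0.1, 1.0, 10.0):
    print(f"rho/g0 = {r:5.2f}: fraction of F_att from g < 0.1 g0 = {near_gap_fraction(r):.2f}")
```

Consistent with the discussion of Fig. 4, the near-gap region dominates the attractive force only when \(\rho\) is a small fraction of \(g_0\).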
A further point of contrast with the JKR limit is that JKR-like contacts separate as \(x^{1/2}\) and not \(x^{3/2}\) near the contact edge [53], leading to a gap distribution \(p_n(g)\propto g\) that is clearly distinct from what we observe in our calculations. Furthermore, in the sticky limit, the contact geometry changes substantially and is not expected to give rise to the simple proportional contact area–perimeter relationship used in our calculations. This is especially true for soft solids, which can deform to fill in interior non-contacting regions without large elastic energy penalties, thereby increasing the contact area at the expense of the overall contact perimeter. Nevertheless, the present calculations are a powerful demonstration of the universal emergence of the \(g^{-1/3}\) divergence in the distribution of gaps between elastically stiff rough surfaces. Since this behavior is a direct consequence of the fractal character of the contacting interfaces, as manifested in the proportionality between perimeter and contact area, our results are another indirect demonstration of this aspect of the contact geometry. The distribution is unaltered by weak adhesive interactions, giving additional support for the DMT-like approximation that underlies the adhesive theories of Pastewka and Robbins [23] and Müser [35].

Code Availability
We used the contact.engineering ecosystem for all calculations. The code is available at https://github.com/ContactEngineering under the terms of the MIT license.

Data Availability
Data are available from the authors upon request.

References
Greenwood, J.A., Williamson, J.B.P.: Contact of nominally flat surfaces. Proc. R. Soc. Lond. A 295(1442), 300–319 (1966) Bush, A.W., Gibson, R.D., Thomas, T.R.: The elastic contact of a rough surface. Wear 35(1), 87–111 (1975) Dieterich, J.H., Kilgore, B.D.: Direct observation of frictional contacts: new insights for state-dependent properties. Pure Appl. Geophys. 143(1–3), 283–302 (1994) Dieterich, J.H., Kilgore, B.D.: Imaging surface contacts: power law contact distributions and contact stresses in quartz, calcite, glass and acrylic plastic. Tectonophysics 256(1–4), 219–239 (1996) Persson, B.N.J.: Theory of rubber friction and contact mechanics. J. Chem. Phys. 115(8), 3840–3861 (2001) Persson, B.N.J.: Elastoplastic contact between randomly rough surfaces. Phys. Rev. Lett. 87(11), 116101-1–116101-4 (2001) Hyun, S., Pei, L., Molinari, J.-F., Robbins, M.O.: Finite-element analysis of contact between elastic self-affine surfaces. Phys. Rev. E 70(2), 1–12 (2004) Pei, L., Hyun, S., Molinari, J.-F., Robbins, M.O.: Finite element modeling of elasto-plastic contact between rough surfaces. J. Mech. Phys. Solids 53(11), 2385–2409 (2005) Benz, M., Rosenberg, K.J., Kramer, E.J., Israelachvili, J.N.: The deformation and adhesion of randomly rough and patterned surfaces. J. Phys. Chem. B 110(24), 11884–11893 (2006) Hyun, S., Robbins, M.O.: Elastic contact between rough surfaces: effect of roughness at large and small wavelengths. Tribol. Int. 40(10–12), 1413–1422 (2007) Campañá, C., Müser, M.H.: Contact mechanics of real vs. randomly rough surfaces: a Green's function molecular dynamics study. Europhys. Lett. 77(3), 38005 (2007) Persson, B.N.J.: Relation between interfacial separation and load: a general theory of contact mechanics. Phys. Rev. Lett. 99(12), 125502 (2007) Yang, C., Persson, B.N.J.: Molecular dynamics study of contact mechanics: contact area and interfacial separation from small to full contact. Phys. Rev. Lett.
100(2), 024303 (2008) Yang, C., Persson, B.N.J.: Contact mechanics: contact area and interfacial separation from small contact to full contact. J. Phys. 20(21), 215214 (2008) Lorenz, B., Persson, B.N.J.: Interfacial separation between elastic solids with randomly rough surfaces: comparison of experiment with theory. J. Phys. 21(1), 015003 (2009) Almqvist, A., Campañá, C., Prodanov, N., Persson, B.N.J.: Interfacial separation between elastic solids with randomly rough surfaces: comparison between theory and numerical techniques. J. Mech. Phys. Solids 59(11), 2355–2369 (2011) Akarapu, S., Sharp, T., Robbins, M.O.: Stiffness of contacts between rough surfaces. Phys. Rev. Lett. 106(20), 204301 (2011) Yastrebov, V.A., Anciaux, G., Molinari, J.-F.: Contact between representative rough surfaces. Phys. Rev. E 86(3), 035601 (2012) Pohrt, R., Popov, V.L.: Normal contact stiffness of elastic solids with fractal rough surfaces. Phys. Rev. Lett. 108(10), 104301 (2012) Prodanov, N., Dapp, W.B., Müser, M.H.: On the contact area and mean gap of rough, elastic contacts: dimensional analysis, numerical corrections, and reference data. Tribol. Lett. 53(2), 433–448 (2013) Pastewka, L., Prodanov, N., Lorenz, B., Müser, M.H., Robbins, M.O., Persson, B.N.J.: Finite-size scaling in the interfacial stiffness of rough elastic contacts. Phys. Rev. E 87(6), 062809 (2013) Yastrebov, V.A., Anciaux, G., Molinari, J.-F.: The contact of elastic regular wavy surfaces revisited. Tribol. Lett. 56(1), 171–183 (2014) Pastewka, L., Robbins, M.O.: Contact between rough surfaces and a criterion for macroscopic adhesion. Proc. Natl. Acad. Sci. USA 111(9), 3298–3303 (2014) Yastrebov, V.A., Anciaux, G., Molinari, J.-F.: From infinitesimal to full contact between rough surfaces: evolution of the contact area. Int. J. Solids Struct. 52, 83–102 (2015) Pastewka, L., Robbins, M.O.: Contact area of rough spheres: large scale simulations and simple scaling laws. Appl. Phys. Lett. 108(22), 221601 (2016) Yastrebov, V.A., Anciaux, G., Molinari, J.-F.: On the accurate computation of the true contact-area in mechanical contact of random rough surfaces. Tribol. Int. 114, 161–171 (2017) Yastrebov, V.A., Anciaux, G., Molinari, J.-F.: The role of the roughness spectral breadth in elastic contact of rough surfaces. J. Mech. Phys. Solids 107, 469–493 (2017) Müser, M.H., Dapp, W.B., Bugnicourt, R., Sainsot, P., Lesaffre, N., Lubrecht, T.A., Persson, B.N.J., Harris, K., Bennett, A., Schulze, K., Rohde, S., Ifju, P., Sawyer, W.G., Angelini, T., Ashtari Esfahani, H., Kadkhodaei, M., Akbarzadeh, S., Wu, J.J., Vorlaufer, G., Vernes, A., Solhjoo, S., Vakis, A.I., Jackson, R.L., Xu, Y., Streator, J., Rostami, A., Dini, D., Medina, S., Carbone, G., Bottiglione, F., Afferrante, L., Monti, J., Pastewka, L., Robbins, M.O., Greenwood, J.A.: Meeting the contact-mechanics challenge. Tribol. Lett. 65(4), 1–18 (2017) Weber, B., Suhina, T., Junge, T., Pastewka, L., Brouwer, A.M., Bonn, D.: Molecular probes reveal deviations from Amontons' law in multi-asperity frictional contacts. Nat. Commun. 9(1), 888 (2018) Dalvi, S., Gujrati, A., Khanal, S.R., Pastewka, L., Dhinojwala, A., Jacobs, T.D.B.: Linking energy loss in soft adhesion to surface roughness. Proc. Natl. Acad. Sci. USA 116(51), 25484–25490 (2019) Derjaguin, B.V., Muller, V.M., Toporov, Y.U.P.: Effect of contact deformation on the adhesion of particles. J. Colloid Interface Sci. 52(3), 105–108 (1975) Johnson, K.L., Kendall, K., Roberts, A.D.: Surface energy and the contact of elastic solids. Proc. R. Soc. Lond. 
A 324, 301–313 (1971) Maugis, D.: Adhesion of spheres: the JKR-DMT transition using a Dugdale model. J. Colloid Interface Sci. 150(1), 243–269 (1992) Müser, M.H.: A dimensionless measure for adhesion and effects of the range of adhesion in contacts of nominally flat surfaces. Tribol. Int. 100, 41–47 (2016) Ramisetti, S.B., Campañá, C., Anciaux, G., Molinari, J.-F., Müser, M.H., Robbins, M.O.: The autocorrelation function for island areas on self-affine surfaces. J. Phys. 23(21), 215004 (2011) Wang, A., Müser, M.H.: Percolation and Reynolds flow in elastic contacts of isotropic and anisotropic, randomly rough surfaces. Tribol. Lett. 69(1), 1 (2020) Johnson, K.L.: Contact Mechanics. Cambridge University Press, Cambridge (1985) Jacobs, T.D.B., Junge, T., Pastewka, L.: Quantitative characterization of surface topography using spectral analysis. Surf. Topogr. 5(1), 013001 (2017) Stanley, H.M., Kato, T.: An FFT-based method for rough surface contact. J. Tribol. 1(July), 2–6 (1997) Campañá, C., Müser, M.H.: Practical Green's function approach to the simulation of elastic semi-infinite solids. Phys. Rev. B 74(7), 75420 (2006) Pastewka, L., Sharp, T.A., Robbins, M.O.: Seamless elastic boundaries for atomistic calculations. Phys. Rev. B 86, 075459 (2012) Gujrati, A., Khanal, S.R., Pastewka, L., Jacobs, T.D.B.: Combining TEM, AFM, and profilometry for quantitative topography characterization across all scales. ACS Appl. Mater. Interf. 10(34), 29169–29178 (2018) Gujrati, A., Sanner, A., Khanal, S., Moldovan, N., Zeng, H., Pastewka, L., Jacobs, T.D.B.: Comprehensive topography characterization of polycrystalline diamond coatings. Surf. Topogr. 9(1), 014003 (2021) Polonsky, I.A., Keer, L.M.: A numerical method for solving rough contact problems based on the multi-level multi-summation and conjugate gradient techniques. Wear 231(2), 206–219 (1999) Bugnicourt, R., Sainsot, P., Dureisseix, D., Gauthier, C., Lubrecht, A.A.: FFT-based methods for solving a rough adhesive contact: description and convergence study. Tribol. Lett. 66(1), 29 (2018) Baney, J.M., Hui, C.-Y.: A cohesive zone model for the adhesion of cylinders. J. Adhes. Sci. Technol. 11(3), 393–406 (1997) Yang, F., Cheng, Y.-T.: Revisit of the two-dimensional indentation deformation of an elastic half-space. J. Mater. Res. 24(06), 1976–1982 (2009) Grierson, D.S., Liu, J., Carpick, R.W., Turner, K.T.: Adhesion of nanoscale asperities with power-law profiles. J. Mech. Phys. Solids 61(2), 597–610 (2013) Thimons, L.A., Gujrati, A., Sanner, A., Pastewka, L., Jacobs, T.D.B.: Hard material adhesion: which scales of roughness matter? Exp. Mech. (2021). https://doi.org/10.1007/s11340-021-00733-6 Cheng, S., Robbins, M.O.: Defining contact at the atomic scale. Tribol. Lett. 39(3), 329–348 (2010) Zhou, Y., Wang, A., Müser, M.H.: How thermal fluctuations affect hard-wall repulsion and thereby hertzian contact mechanics. Front. Mech. Eng. 5, 67 (2019) Tada, H., Paris, P.C., Irwin, G.R.: The Stress Analysis Of Cracks Handbook, 3rd edn. ASME Press, New York (2000) This work has emerged from numerous enlightening interactions with Mark Robbins over the last decade. All of us were inspired by him and JMM and LP will forever be thankful for having been given the opportunity to closely work with Mark. We also thank Martin Müser for useful discussions and Sindhu Singh for implementing the optimization algorithm of Ref. [46]. Open Access funding enabled and organized by Projekt DEAL. 
We thank the DAAD for support for a short visit of JMM to Freiburg and the US National Science Foundation (Grant DMR-1411144 and DMR-1929467), the European Commission (Marie-Curie IOF-272619), and the Deutsche Forschungsgemeinschaft (Grants PA 2023/2 and EXC 2193/1—390951807) for funding. Computations were carried out on the Johns Hopkins University Homewood High Performance Cluster, the BlueCrab cluster at the Maryland Advanced Research Computing Center, and NEMO at the University of Freiburg (DFG Grant INST 39/963-1 FUGG). Joseph M. Monti Present address: Sandia National Laboratories, Albuquerque, USA Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD, 21218, USA Department of Microsystems Engineering, University of Freiburg, Georges-Köhler-Allee 103, 79110, Freiburg, Germany Antoine Sanner & Lars Pastewka Cluster of Excellence livMatS, Freiburg Center for Interactive Materials and Bioinspired Technologies, University of Freiburg, Georges-Köhler-Allee 105, 79110, Freiburg, Germany Antoine Sanner Lars Pastewka JMM and LP devised the study. JMM and AS carried out and analyzed contact mechanics calculations. JMM wrote the first manuscript draft. All authors contributed to editing and finalizing the manuscript. Correspondence to Lars Pastewka. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Monti, J.M., Sanner, A. & Pastewka, L. Distribution of Gaps and Adhesive Interaction Between Contacting Rough Surfaces. Tribol Lett 69, 80 (2021). https://doi.org/10.1007/s11249-021-01454-6 Surface roughness Contact mechanics
Astrophysics > High Energy Astrophysical Phenomena
Title: Ultra-Luminous X-ray Sources in Haro 11 and the Role of X-ray Binaries in Feedback in Lyman-alpha Emitting Galaxies
Authors: A. H. Prestwich, F. Jackson, P. Kaaret, M. Brorby, T. P. Roberts, S. H. Saar, M. Yukita
(Submitted on 28 Jul 2015)
Abstract: Lyman Break Analogs (LBA) are local proxies of high-redshift Lyman Break Galaxies (LBG). Studies of nearby starbursts have shown that Lyman continuum and line emission are absorbed by dust and that the Lyman-alpha is resonantly scattered by neutral hydrogen. A source of feedback is required to prevent scattering and allow the Lyman-alpha emission to escape. There are two X-ray point sources embedded in the Lyman Break Analog (LBA) galaxy Haro 11. Haro 11 X-1 is an extremely luminous (L$_{X} \sim 10^{41}$ ergs s$^{-1}$), spatially compact source with a hard X-ray spectrum. Haro 11 X-1 is similar to the extreme Black Hole Binary (BHB) M82 X-1. The hard X-ray spectrum indicates Haro 11 X-1 may be a BHB in a low accretion state. The very high X-ray luminosity suggests an intermediate mass black hole that could be the seed for formation of a supermassive black hole. Source Haro 11 X-2 has an X-ray luminosity L$_{X} \sim 5\times10^{40}$ ergs s$^{-1}$ and a soft X-ray spectrum. This strongly suggests that Haro 11 X-2 is an X-ray binary in the ultra-luminous state. Haro 11 X-2 is coincident with the star-forming knot that is the source of the Lyman-alpha emission, raising the possibility that strong winds from X-ray binaries play an important part in injecting mechanical power into the Interstellar Medium (ISM), thus blowing away neutral material from the starburst region and allowing the Lyman-alpha to escape. We suggest that feedback from X-ray binaries may play a significant role in allowing Lyman-alpha emission to escape from galaxies in the early universe.
Comments: Accepted for publication in the Astrophysical Journal
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics of Galaxies (astro-ph.GA)
DOI: 10.1088/0004-637X/812/2/166
Cite as: arXiv:1507.07900 [astro-ph.HE] (or arXiv:1507.07900v1 [astro-ph.HE] for this version)
From: Andrea Prestwich
[v1] Tue, 28 Jul 2015 18:57:12 GMT (1230kb,D)
Discrete & Continuous Dynamical Systems - A, April 2000, 6(2): 329-350. doi: 10.3934/dcds.2000.6.329
On the shift differentiability of the flow generated by a hyperbolic system of conservation laws
Stefano Bianchini, S.I.S.S.A. (I.S.A.S.), Via Beirut 2/4, 34013 Trieste, Italy
Received July 1999; Revised September 1999; Published January 2000
We consider the notion of shift tangent vector introduced in [7] for real valued BV functions and in [9] for vector valued BV functions. These tangent vectors act on a function $u\in L^1$ by shifting horizontally the points of its graph at different rates, generating in such a way a continuous path in $L^1$. The main result of [7] is that the semigroup $\mathcal S$ generated by a scalar strictly convex conservation law is shift differentiable, i.e. paths generated by shift tangent vectors at $u_0$ are mapped into paths generated by shift tangent vectors at $\mathcal S_t u_0$ for almost every $t\geq 0$. This leads to the introduction of a sort of differential, the "shift differential", of the map $u_0 \to \mathcal S_t u_0$. In this paper, using a simple decomposition of $u\in$ BV in terms of its derivative, we extend the results of [9] and give a unified definition of shift tangent vector, valid both in the scalar and vector case. This extension allows us to study the shift differentiability of the flow generated by a hyperbolic system of conservation laws.
Keywords: Shift differential, flow generated by a hyperbolic system, hyperbolic conservation laws.
Mathematics Subject Classification: 35L6.
Citation: Stefano Bianchini. On the shift differentiability of the flow generated by a hyperbolic system of conservation laws. Discrete & Continuous Dynamical Systems - A, 2000, 6 (2) : 329-350. doi: 10.3934/dcds.2000.6.329
Deserving poor and the desirability of a minimum wage
Tomer Blumkin (ORCID: orcid.org/0000-0003-3092-8822) & Leif Danziger
IZA Journal of Labor Economics, volume 7, Article number: 6 (2018)
This paper provides a normative justification for the use of a minimum wage as a redistributive tool in a competitive labor market. We show that a government interested in improving the wellbeing of the deserving poor, while being less concerned with their undeserving counterparts, can use a minimum wage to enhance the efficiency of the tax-and-transfer system in attaining this goal.
A minimum wage is used in many countries as a redistributive tool for the benefit of unskilled workers.Footnote 1 However, its normative justification is highly controversial due to its adverse effect on employment and the possibility of redistribution through the tax-and-transfer system. Beginning with the seminal contribution of Mirrlees (1971), the major concern of the optimal income-tax literature has been with the government's role in pursuing distributional goals in the presence of asymmetric information about workers' characteristics. The canonical framework stipulates a competitive labor-market setting, which would lead to a Pareto-efficient allocation in the absence of government intervention. However, due to redistributive concerns and informational constraints, the government faces a non-trivial tradeoff between equity and efficiency. Hence, the government will generally choose an allocation that is not Pareto optimal, and the optimal income-tax literature primarily focuses on how expanding the set of policy tools available to the government beyond the tax-and-transfer system may improve this tradeoff. One strand of this literature investigates whether a minimum wage could be such an additional policy tool in a competitive labor-market environment. The focus is mainly on the intensive-margin setting where the choice is confined to working hours, and the key insight is that a minimum wage is in general not a desirable supplement to the tax-and-transfer system (Allen 1987; Guesnerie and Roberts 1987). A notable exception is Boadway and Cuff (2001), who demonstrate that if unemployment benefits are denied to workers who turn down wage offers exceeding the minimum wage, then a minimum wage can serve as a warranted supplement to an optimal tax-and-transfer system. The reason is that a minimum wage can then serve to distinguish between voluntarily (skilled) and involuntarily (unskilled) unemployed workers and thereby effectively target unemployment benefits to the latter. More recently, Danziger and Danziger (2015, 2018) show that a graduated (rather than a constant) minimum wage combined with an optimal tax-and-transfer system can be instrumental in achieving a Pareto improvement, and study its welfare properties.Footnote 2 Lee and Saez (2012) instead focus on the extensive margin and examine the desirability of a minimum wage in an occupational-choice model with fixed working hours. They show that if rationing is efficient, namely, that the involuntary unemployment triggered by a minimum wage primarily hits the workers with the highest disutility of work, a minimum wage can be desirable. The normative justification for a minimum wage in the occupational-choice extensive-margin model of Lee and Saez (2012) hinges on a restrictive assumption about the tax system.
In particular, Lee and Saez consider an occupation tax which imposes a fixed levy on each occupation (high-skilled and low-skilled) independently of the earned income. If the production function exhibits perfect substitutability between skilled and unskilled labor inputs (i.e., a linear production function as in Saez 2002), this assumption would not be restrictive as the income level in each occupation would be given exogenously. An occupation tax would then be equivalent to an income tax. However, with complementarity between the various skill inputs, this assumption becomes restrictive as it rules out the tax being conditional on the endogenously determined income earned in each occupation. In particular, a more general income tax would allow, for each occupation, to set the tax liability for any income other than that realized in equilibrium. With an extensive-margin adjustment of labor, if the tax could be conditioned on income, a minimum wage could be fully replicated by a confiscatory 100% income tax on any income level below a minimal threshold coinciding with the realized income level in the low-skill sector in equilibrium. A minimum wage would then be redundant, as the allocation attained under the augmented income tax system would be equivalent to the one attained under the restrictive income tax system supplemented by a minimum wage. The above literature assumes that the skill distribution is given. However, the tax-and-transfer system and the minimum wage may of course affect the human-capital formation and thereby the skill distribution. In a recent paper, Gerritsen and Jacobs (2016) consider an occupational choice model with competitive labor markets and endogenous human capital formation and address the question of whether income redistribution is more efficiently achieved through an increase in the minimum wage or through changes in the income tax. The threat of involuntary unemployment associated with an increase in the minimum wage may induce some low-skilled individuals to upgrade their skills to avoid unemployment. Provided that taxes rise with income, this results in revenue gains from increased high-skill employment offsetting the revenue losses from increased low-skill unemployment. Gerritsen and Jacobs show that for a minimum wage increase to dominate a distributional-equivalent tax change, revenue gains from higher skill formation should outweigh revenue losses from inefficient low-skilled labor rationing.Footnote 3 In this paper, we offer an alternative normative justification for the use of a minimum wage to supplement an optimal tax-and-transfer system. We consider the intensive-margin setting that captures the difference between wage and income and therefore provides a natural framework for examining the social desirability of a minimum wage as a supplement to the tax-and-transfer system.Footnote 4 Central to our argument is the distinction between the deserving and the undeserving poor, where deservedness refers to the society's willingness to provide public support as reflected in the relative weights in the social welfare function. We capture this distinction by assuming that unskilled workers differ in their disutility from work, where those incurring a low disutility from work are referred to as deserving, while those incurring a high disutility from work are referred to as undeserving. The association between incurring a high disutility from work and being perceived as undeserving can be interpreted in several manners. 
A high disutility from work may reflect laziness, so that society has some bias in favor of the poor workers who are more willing to work hard (long hours) relative to those poor workers who are less willing to do so. Incurring a high disutility from work may alternatively reflect family circumstances, such as being a teenager or a secondary earner of a household. In both cases, the worker is likely to have a higher reservation wage and typically opts for a part-time job. For instance, teenage workers are less constrained by long-term financial obligations (e.g., mortgages), have more attractive outside opportunities (e.g., schooling), and may receive their parents' support; likewise, secondary earners may rely on their spouses' income and therefore already enjoy a high level of consumption. Of course, incurring a high disutility from work could well be associated with social circumstances such as disability and single parenthood that warrant public support. Individuals in these categories may to some extent be identified and targeted by specialized welfare programs that address their particular needs (e.g., Temporary Assistance to Needy Families and Social Security Disability Insurance). However, in many cases, distinguishing between more and less deserving within the pool of low-skilled workers exhibiting a high disutility from work may be a daunting challenge for the government. For instance, according to the US Census Bureau Data (see Weisbach 2009 and the references therein), among the top ten disabilities, back/spinal problems (excluding spinal cord injuries and paralysis) are by far the most prevalent. Another common source of claimed disability is mental problems (excluding retardation). However, both back pains and mental problems are difficult to verify compared with disabilities such as blindness, heart/artery problems, and diabetes which are readily verifiable. Due to the problem of verification faced by the government, incurring a high-disutility from work is imperfectly correlated with welfare deservedness. That being said, our model assumes that the correlation is sufficiently large to warrant assigning a lower welfare weight to low-skilled workers with a high disutility from work. In our model, the government maximizes a social welfare function that favors the deserving poor. However, as the disutility from work is unobservable, the government cannot directly identify the deserving poor and is faced with a screening problem. We demonstrate that a minimum wage can help the government overcome this challenge. When working hours are rationed in a manner which is sufficiently close to being constrained efficient (precisely defined below in the formal model) in the sense that most of the involuntary underemployment triggered by the imposition of a minimum wage falls on the undeserving poor, extra transfers offered by the government to unskilled workers can be targeted toward the deserving poor rather than being accorded to all poor across the board.Footnote 5 We demonstrate that by relying on the screening of workers through the rationing of working hours, the government may overcome its inability to identify the deserving poor directly. Consequently, a minimum wage may become a desirable supplement to an optimal tax-and-transfer system.Footnote 6 The notion of welfare deservedness has attracted much attention in recent years and has become a key issue in the public discourse about the role of the welfare system. 
Abundant evidence shows that society is generally sympathetic toward supporting the poor but that generosity is often conditioned on the recipients either working hard or being disabled. For instance, Gilens (1999) reports that people are more concerned about the conditions determining which recipients should benefit from social security programs than about the cost of the programs, the main question for taxpayers being not so much "who gets what?" but rather "who deserves what?". In other words, it is not the government support for the truly needy that sparks considerable public resentment, but rather the perception that many individuals receiving welfare are undeserving.Footnote 7 These trends are reflected in the 1996 welfare reform in the USA and the shift from the Aid to Families with Dependent Children program to the Temporary Assistance for Needy Families program with its emphasis on the work requirement, as well as the significant expansion in recent years of the Earned Income Tax Credit program that conditions welfare on labor market participation. Several previous studies have distinguished between the deserving and undeserving poor in order to provide a normative foundation for commonly used policy tools such as the Earned Income Tax Credit program and Workfare to target benefits to the deserving poor. For instance, Besley and Coate (1992, 1995) assume that the government objective is to alleviate poverty rather than to maximize social welfare. Effectively, this eliminates disutility from work from the government objective and may be interpreted to reflect the view that high disutility from work indicates a socially unacceptable trait. They show that workfare can then be an effective screening tool that supplements means testing. Relatedly, Kanbur et al. (1994) establish the case for levying a negative marginal tax rate on the working poor when the government aims to minimize an income-based poverty index. Cuff (2000) employs a framework where individuals differ along the skill dimension and in their disutility from work. She demonstrates that work requirements can be desirable if the government objective is to maximize the well-being of the unskilled workers incurring a low disutility from work (the deserving poor). Saez (2002) discusses the possibility of assigning a relatively low marginal social weight to unemployed unskilled workers and shows that this would reinforce the case for an Earned Income Tax Credit. Finally, Blumkin et al. (2015) demonstrate that statistical stigma can be an effective welfare ordeal mechanism to sort out those claimants considered undeserving.

We consider a simple setup with just the key ingredients necessary to demonstrate our point. The economy is composed of skilled and unskilled workers who produce a single consumption good, the price of which is unity. The mass of each skill group is unity. The output X is given by $$ X=F\left(N^u,N^s\right), $$ where N^u and N^s denote the total working hours of the unskilled and skilled workers, respectively. The function F is increasing, has constant returns to scale, and exhibits diminishing marginal productivity in the input of each skill level.Footnote 8 Let c denote consumption, n working hours, and g(n) the disutility of work, where g(0) = 0, g′ > 0, g′′ > 0 and \( \underset{n\to 0}{\lim }{g}^{\prime }(n)=0 \). The utility of the skilled workers (indexed by superscript s) is given by u^s ≡ c^s − g(n^s). Unskilled workers differ in their disutility from work.
For a fraction α ∈ (0, 1) of the unskilled workers (indexed by superscript d) the utility is given by u^d ≡ c^d − g(n^d). For the remaining fraction 1 − α of the unskilled workers (indexed by superscript l), the utility is given by u^l ≡ c^l − kg(n^l), where k > 1. That is, type-l unskilled workers incur a higher disutility (both total and marginal) from work relative to their type-d unskilled counterparts for the same working hours supplied. We will henceforth refer to type-d workers as deserving poor and to type-l workers as undeserving poor. That is, we interpret the choice to work fewer hours in the labor market (stemming from the higher disutility from work) as reflecting a socially undesirable trait. The total labor supply of the skilled workers is given by N^s = n^s, and the total labor supply of the unskilled workers by N^u = αn^d + (1 − α)n^l. Assuming a competitive labor market, each worker is paid the value of his marginal product. Therefore, w^s ≡ ∂F(N^u, N^s)/∂N^s is the wage of skilled workers and w^u ≡ ∂F(N^u, N^s)/∂N^u is the wage of unskilled workers. We assume that w^s > w^u. The government's social welfare is given by a weighted average of the utilities, $$ W\equiv \sum_i \beta^i u^i, $$ where ∑_i β^i = 1 and i = s, d, l. We assume that the social welfare weight assigned to type-s workers is less than their fraction in the population, i.e., 0 ≤ β^s < 1/2. This represents society's egalitarian preferences and is fairly standard.Footnote 9 We also assume that the social welfare weight assigned to type-l workers is lower than their fraction in the population, i.e., 0 ≤ β^l < (1 − α)/2. This reflects the public's perception that individuals whose preferences induce them to work relatively fewer hours are less deserving of government support (see our elaborate discussion in the introduction).Footnote 10

The benchmark regime: no minimum wage

We start by analyzing the benchmark case with no minimum wage so that a non-linear tax-and-transfer system is the only available redistributive policy tool. The government maximizes the social welfare given in (2) by choosing a triplet of consumption-work bundles (c^i, n^i), i = s, d, l, satisfying the revenue constraint $$ F\left(N^u,N^s\right)\ge c^s+\alpha c^d+\left(1-\alpha \right)c^l $$ and the six incentive-compatibility constraints, denoted by IC_ij for i, j = s, d, l with i ≠ j, which express that a worker of type i has no incentive to mimic a worker of type j, i.e., that $$ c^i-k^ig\left(n^i\right)\ge c^j-k^ig\left(\frac{n^j w^j}{w^i}\right), $$ where k^s = k^d = 1, k^l = k and w^d = w^l = w^u. The following lemma summarizes important properties of the optimal solution.Footnote 11

Lemma: In a social welfare optimum without a minimum wage: (i) IC_sd and IC_ld are the only binding incentive-compatibility constraints; (ii) n^d > n^l.

Proof: See Appendix A.

Part (i) of the lemma states that the downward incentive-compatibility constraint IC_sd is binding. This accords with the standard optimal-tax model, where the direction of redistribution goes from the high to the low earners. The fact that the downward incentive-compatibility constraint IC_sd binds implies that the skilled workers are indifferent between choosing their intended bundle and mimicking the deserving unskilled workers. Part (i) of the lemma also states that the upward incentive-compatibility constraint IC_ld is binding even though, as shown by part (ii) of the lemma, the undeserving poor work less and hence earn less than the deserving poor.
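The benchmark program just described can also be explored numerically. The sketch below is ours, not the authors': it fixes the wages exogenously (in the paper they are the marginal products of F), assumes a quadratic disutility g(n) = n^2/2, and uses purely illustrative welfare weights; it simply maximizes W subject to the revenue and incentive-compatibility constraints and reports the optimal hours and the binding constraints.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch (ours): fixed wages, quadratic disutility, arbitrary weights.
types = ["s", "d", "l"]
alpha, k = 0.5, 2.0
w = {"s": 2.0, "d": 1.0, "l": 1.0}               # w^d = w^l = w^u < w^s (assumed values)
kk = {"s": 1.0, "d": 1.0, "l": k}                # disutility multipliers k^i
beta = {"s": 0.30, "d": 0.50, "l": 0.20}         # beta^s < 1/2, beta^l < (1 - alpha)/2
mass = {"s": 1.0, "d": alpha, "l": 1.0 - alpha}  # population masses
g = lambda n: 0.5 * n ** 2

def unpack(z):
    return dict(zip(types, z[:3])), dict(zip(types, z[3:]))   # consumption, hours

def neg_welfare(z):
    c, n = unpack(z)
    return -sum(beta[i] * (c[i] - kk[i] * g(n[i])) for i in types)

def revenue(z):   # output minus aggregate consumption must be non-negative
    c, n = unpack(z)
    return sum(mass[i] * (w[i] * n[i] - c[i]) for i in types)

def ic(i, j):     # type i must not prefer earning type j's income at its own wage
    def cons(z):
        c, n = unpack(z)
        return (c[i] - kk[i] * g(n[i])) - (c[j] - kk[i] * g(n[j] * w[j] / w[i]))
    return cons

constraints = [{"type": "ineq", "fun": revenue}]
constraints += [{"type": "ineq", "fun": ic(i, j)} for i in types for j in types if i != j]

z0 = np.array([1.0, 0.5, 0.3, 1.0, 0.7, 0.4])    # starting guess (c_s, c_d, c_l, n_s, n_d, n_l)
res = minimize(neg_welfare, z0, method="SLSQP", constraints=constraints,
               bounds=[(0, None)] * 6)

c_opt, n_opt = unpack(res.x)
print("hours:", {i: round(n_opt[i], 3) for i in types})
print("binding ICs:", [(i, j) for i in types for j in types
                       if i != j and abs(ic(i, j)(res.x)) < 1e-4])
```

With weights of this kind one expects the pattern established in the lemma, with IC_sd and IC_ld binding and n^d > n^l; the general argument is given in the paper's Appendix A.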
This feature derives from the government's lower valuation of the undeserving poor and implies that the latter are indifferent between choosing their intended bundle and working more in order to mimic their deserving counterparts.

Figure 1 illustrates the government's optimal solution under the benchmark regime in the income-consumption space. The three upward-sloping curves, denoted by u^s, u^d, and u^l, represent the indifference curves associated with the skilled, deserving unskilled, and undeserving unskilled workers, respectively. Notice that the single-crossing property holds, namely, the indifference curve of the undeserving unskilled is steeper than that of the deserving unskilled, which in turn is steeper than that of the skilled. The equilibrium income-consumption bundles associated with the skilled, deserving unskilled, and undeserving unskilled workers are given by E, F, and G, respectively. As stated in part (i) of the lemma, the incentive-compatibility constraint IC_sd, associated with the skilled workers, and the incentive-compatibility constraint IC_ld, associated with the undeserving unskilled workers, are binding. That is, the bundles E and F lie on the same indifference curve, u^s, associated with the skilled workers, whereas the bundles F and G lie on the same indifference curve, u^l, associated with the undeserving unskilled workers. As both types of unskilled workers earn the same wage, the fact that the income level associated with bundle F exceeds that associated with bundle G indicates that the deserving unskilled workers work more hours than their undeserving counterparts, as stated in part (ii) of the lemma.

Fig. 1 (caption): Benchmark equilibrium

The welfare-enhancing role of a minimum wage

A binding minimum wage sets a lower bound for the wage that can be paid to the unskilled workers and thus effectively determines a binding upper bound for their working hours. The ensuing excess supply of unskilled workers necessitates some form of rationing. Rationing rules may be characterized by the extent to which they are efficient. Rationing is defined as efficient when the total hours of work are shared in a manner that maximizes the aggregate surplus of the workers whose workload is being allocated (put differently, the allocation of a given number of working hours minimizes the total disutility of labor). We focus on a rationing rule which is constrained efficient in the sense that the working hours of the unskilled labor are allocated so as to maximize the aggregate surplus of the unskilled workers given the tax-and-transfer system in place. In our context, as will be formally shown in Appendix B (Part I), constrained efficient rationing entails that the entire incidence of the involuntary underemployment falls on the undeserving poor (as they derive the least surplus from working).Footnote 12 With random rationing, all the unskilled workers would be equally likely to be employed less than they desire.
Realistically, rationing would lie somewhere in between these two extremes and we will focus on the case where the rationing is sufficiently close to being constrained efficient, henceforth called nearly constrained efficient, in that a sufficiently large share of the incidence of involuntary underemployment falls on the undeserving poor.Footnote 13 As we will show, in such a case, supplementing an optimal tax-and-transfer system with a binding minimum wage can enhance welfare: Proposition: If rationing of employment hours is nearly constrained efficient , supplementing an optimal tax-and-transfer system with a binding minimum wage enhances welfare. Proof: See Appendix B. The rationale for the desirability of the minimum wage is as follows. Part (i) of the lemma shows that in the absence of a minimum wage, the incentive-compatibility constraint ICld associated with the undeserving poor would be binding. This limits the government's redistributive capacity as increasing the transfer to the deserving poor would induce the undeserving poor to mimic their deserving counterparts, thereby violating ICld. However, with constrained efficient rationing (and the case of nearly constrained efficient rationing follows by continuity considerations) a minimum wage would block this undesirable supply-side response, causing the entire incidence of the induced involuntary underemployment to fall on the undeserving poor. Namely, the undeserving poor will be forced to work less than they would prefer given the tax-and-transfer schedule. With the mimicking possibilities of the undeserving poor being blocked, the government is able to offer more generous transfers to the deserving poor. Effectively, the minimum wage plays a screening role that ensures that the extra transfers are targeted to those deserving, rather than being accorded to all unskilled workers. Figure 2 illustrates the proposition in the income-consumption space. The solid indifference curves represent the benchmark equilibrium in the absence of a minimum wage, as depicted in Fig. 1, whereas the dashed indifference curves represent the equilibrium in the presence of a minimum wage. The bundles E', F', and G' reflect a revenue-neutral perturbation to the benchmark allocation given by the bundles E, F, and G. Under the perturbed regime, working hours and hence income levels remain unchanged. The consumption levels associated with the skilled and the deserving unskilled workers are increased in a manner that maintains the skilled workers' incentive-compatibility constraint, ICsd, binding. Namely, the bundles E' and F' lie on the same dashed indifference curve associated with the skilled workers. The consumption level associated with the undeserving unskilled workers is decreased to maintain the government's budget balanced. The latter violates the undeserving unskilled workers' incentive-compatibility constraint, ICld. These now prefer to mimic their deserving unskilled counterparts. Namely, the bundle F' lies above the dashed indifference curve going through the bundle G'. Mimicking, however, is rendered infeasible by the binding minimum wage due to the constrained efficient rationing that prevents the undeserving unskilled workers from working longer hours to replicate the deserving unskilled workers' income. 
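The logic of the revenue-neutral perturbation described above can be checked numerically. The sketch below is not taken from the paper: the functional form g(n) = n²/2, the fixed wages, and all parameter values are illustrative assumptions (in particular, wages are held fixed rather than derived from F). It verifies that the perturbation balances the budget, leaves ICsd binding, violates ICld, and raises welfare by ε[1 − 2βl/(1 − α)].

```python
# Numerical check of the revenue-neutral perturbation used in the Proposition.
# All functional forms and numbers are illustrative assumptions, not from the
# paper: g(n) = n**2/2, fixed wages ws > wu, and arbitrary parameter values.

alpha, k = 0.6, 1.5          # fraction of deserving poor; extra disutility of type-l
beta_s, beta_l = 0.3, 0.1    # welfare weights (beta_s < 1/2, beta_l < (1-alpha)/2)
beta_d = 1 - beta_s - beta_l
ws, wu = 2.0, 1.0            # skilled and unskilled wages, held fixed for the check

g = lambda n: n**2 / 2

# A benchmark-style allocation in which ICsd and ICld bind by construction
# (Lemma, part (i)); the particular numbers are hypothetical.
nd, nl, ns = 1.0, 0.6, 1.2
cd = 1.0
cl = cd - k * (g(nd) - g(nl))            # ICld binding: type-l indifferent to mimicking type-d
cs = cd - g(nd * wu / ws) + g(ns)        # ICsd binding: skilled indifferent to mimicking type-d

def welfare(cs_, cd_, cl_):
    u_s = cs_ - g(ns)
    u_d = cd_ - g(nd)
    u_l = cl_ - k * g(nl)
    return beta_s * u_s + beta_d * u_d + beta_l * u_l

# Perturbation: raise c^s and c^d by eps, cut c^l by delta, keeping the budget balanced.
# ICsd is mechanically unchanged because both of its sides shift by eps.
eps = 0.01
delta = (1 + alpha) * eps / (1 - alpha)  # from 1*eps + alpha*eps = (1 - alpha)*delta

budget_change = 1 * eps + alpha * eps - (1 - alpha) * delta
dW = welfare(cs + eps, cd + eps, cl - delta) - welfare(cs, cd, cl)

# ICld after the perturbation: type-l utility at its own bundle minus utility from mimicking type-d
icld_gap = (cl - delta - k * g(nl)) - (cd + eps - k * g(nd))

print(f"budget change           : {budget_change:+.2e}  (should be ~0)")
print(f"welfare change dW       : {dW:+.5f}  (formula: {eps * (1 - 2 * beta_l / (1 - alpha)):+.5f})")
print(f"ICld slack after perturbation: {icld_gap:+.5f}  (< 0: type-l now wants to mimic type-d,")
print("which the minimum wage blocks under (nearly) constrained efficient rationing)")
```

Any parameter choice with βl < (1 − α)/2 yields a positive welfare change here, which is exactly the condition invoked in the proposition.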
Fig. 2 Equilibrium with minimum wage
It is important to clarify the rationale underlying the difference between our finding that a minimum wage is desirable and the earlier negative result in Allen (1987) and Guesnerie and Roberts (1987) that a minimum wage cannot be a useful supplement to an optimal tax-and-transfer system. In these two studies, the government's redistributive policy is constrained by the skilled workers' binding downward incentive-compatibility constraint, which makes them indifferent between whether or not to mimic the unskilled workers. In such a case, imposing a minimum wage is useless since it does not make mimicking harder for the skilled workers. In contrast, in our setting, the government's redistributive policy is constrained by the undeserving poor's binding upward incentive-compatibility constraint, which makes them indifferent between whether or not to mimic the deserving poor. As the undeserving poor would have to increase their working hours in order to mimic the deserving poor, an effective upper bound on the undeserving poor's working hours is desirable. This is achieved by the minimum wage, which sets an upper bound on the working hours of all unskilled workers. With constrained efficient rationing of employment hours, the latter is translated into an upper bound on only the working hours of the undeserving poor.Footnote 14 A minimum wage is clearly not the only policy tool that could serve to screen between the deserving and undeserving poor. Two notable policy tools that could serve the same purpose are wage subsidies (e.g., the Earned Income Tax Credit in the USA) and work/training requirements for welfare eligibility (Workfare). Both of these widely used policy tools would induce an increase in the labor supply of the deserving poor, rendering it less attractive for the undeserving poor to mimic their deserving counterparts, thereby enhancing the screening efficiency of the tax-and-transfer system. Indeed, in a working paper version (Blumkin and Danziger 2014), we have examined the desirability of levying a negative marginal tax rate on the deserving poor. We first show that in the absence of a minimum wage, a negative marginal tax rate may be optimal. We then demonstrate that by supplementing the tax-and-transfer system with a binding minimum wage, assuming that employment hours are efficiently rationed, the undeserving poor cannot mimic the deserving poor. This would, therefore, obviate the need to distort the labor supply of the deserving poor upward in order to mitigate the mimicking incentive of the undeserving poor. Thus, the minimum wage is more efficient than a negative marginal tax rate in targeting benefits to the deserving poor. While a formal analysis of workfare is beyond the scope of this paper, a minimum wage appears to have similar efficiency advantages over a workfare program, which entails labor supply distortions resembling those associated with a negative marginal tax rate.
In this paper, we offer a normative justification for the use of a minimum wage to promote redistributive goals in a competitive labor market. Motivated by the ample empirical evidence showing that the general public is reluctant to support those poor perceived to be undeserving, we assume that unskilled workers differ in their disutility of work and further postulate that the unskilled workers with high disutility from work are those considered to be less deserving of receiving government support.
This is reflected in their relatively low weight in the social welfare function. We demonstrate that a minimum wage is a desirable supplement to an optimal tax-and-transfer system when the rationing of employment hours is nearly constrained efficient. The reason is that a minimum wage can be used as a screening device to distinguish between deserving and undeserving poor and thereby to enhance the government capacity to direct transfers toward those considered more deserving of support. See Neumark and Wascher (2007) for a survey of the minimum wage. The federal minimum wage in the USA has been $7.25 per hour since July 2009 (reflecting an increase of 40% over the years 2007–2009). Some states and cities have set minimum wages exceeding the federal level, for instance, $11.5 per hour in the state of Washington and $11.00 per hour in California and Massachusetts; in San Francisco, the minimum wage is $15.00 per hour, and in New York City, the minimum wage for large employers is expected to be $15.00 per hour by the end of 2018. A different strand of the literature considers the efficiency-enhancing role of a minimum wage in the presence of labor market imperfections; see Lee and Saez (2012) for a short review of this literature. The implications of inefficient rationing for optimal tax systems are discussed in Gerritsen (2016). The voluminous empirical literature on the labor-market impact of a minimum wage has primarily focused on the extensive margin. However, a few papers have also studied the impact of a minimum wage on the intensive margin. Among them, Zavodny (2000) and Doppelt (2017) find a positive relationship between the minimum wage and working hours; Connolly and Gregory (2002) and Allegretto et al. (2011) find no significant relationship, whereas Couch and Wittenburg (2001) and Stewart and Swaffield (2008) find a negative relationship (which is hard to reconcile with a competitive labor market). Possible reasons for the mixed empirical results include the presence of imperfect competition in the labor market for low-skill workers and compliance issues. The empirical evidence on efficient rationing is scarce. Some supporting evidence, however, may be found in Luttmer (2007) and Neumark and Wascher (2007). Thus, Luttmer (2007) shows that an increase in the minimum wage does not cause workers with higher reservation wages to displace equally skilled workers with lower reservation wages. The workers who value their job the least are those who tend to lose their jobs due to a minimum wage increase. Neumark and Wascher (2007) further show that the employment effect of a minimum wage is strongest among those who are likely to have the highest reservation wage and typically opt for part-time jobs (such as teenagers and secondary earners). Testing the efficiency hypothesis is beyond the scope of the current paper and calls for future research. The traditional assumption in the optimal-taxation literature is that the government is unable to observe wages and therefore conditions transfers and taxes on observable income levels. This informational assumption may appear inconsistent with the common practice of simultaneously imposing an income tax and a minimum wage. However, we follow the reasoning in Lee and Saez (2012) who argue that this simultaneous use can be enforced by a combination of whistle blowing by underpaid workers and ex-post costly verification of wages by the government. 
See also Heclo (1986), Farkas and Robinson (1996), Gallop Organization (1998), Miller (1999), and Fong (2001). According to one poll cited in Gilens (1999), 74% of the public agrees that the criteria for welfare are not strong enough but only 3% reports that they would oppose a 1% sales tax increase aimed at funding help to the poor. The combination of constant returns to scale and diminishing marginal productivity in the input of each skill level implies that the two types of labor are complementary in production. If, instead, the production function were linear in the two inputs, then a binding minimum wage would crowd out all unskilled workers from the labor market. Consequently, a binding minimum wage would not be a desirable supplement to an optimal tax-and-transfer system as such crowding-out could be achieved by the tax-and-transfer system alone. For empirical evidence supporting our assumption of complementarity between the unskilled and skilled workers in the production function, see Hamermesh (1996). The maximization of a weighted sum of the individuals' utilities characterizes the second-best Pareto-efficient frontier. To set focus on the interesting case in which the direction of redistribution goes from the skilled workers toward their unskilled counterparts, we assume that the Pareto weight assigned to the skilled workers is strictly lower than their share in the population. Our approach is in line with that in Cuff (2000) who invokes a Rawlsian welfare function and considers the case in which a high disutility from work reflects a form of laziness. Hence, the individuals whose utility is being maximized are those with low ability and low disutility from work. In our framework this would correspond to setting βs = βl = 0. We assume that the optimal solution is separating so that each type of unskilled worker receives a distinct consumption-work bundle. This will be the case when the welfare weight assigned to type-l workers is sufficiently low. In the laissez-faire equilibrium with a minimum wage but in the absence of a tax-and-transfer system, efficient rationing would entail that both types of unskilled workers (deserving and undeserving) would share the burden of underemployment, so as to equalize their marginal disutility of labor. Constrained by the tax-and-transfer system, working hours cannot be allocated in a manner that equalizes the marginal disutility of labor. Instead, in our context, we obtain a corner solution where the entire burden of underemployment is borne by the undeserving unskilled workers. Formally, we consider an extension of the constrained efficient rationing rule by introducing a noise component which implies that the probability of becoming involuntarily underemployed becomes positive for both the deserving and undeserving poor. We then consider the limiting case in which the magnitude of the noise component goes to zero. Allen (1987) already mentions that a minimum wage can be a useful policy tool when the incentive- compatibility constraint is upward binding. This would be irrelevant in Allen's two-type setting with a standard welfare function, since it would require that the redistribution goes from the poor toward the rich. In our case with a three-type framework, redistribution goes from the undeserving poor (who earn less) toward their deserving counterparts (who earn more). Allegretto S, Dube A, Reich M (2011) Do minimum wages really reduce teen employment? Accounting for heterogeneity and selectivity in state panel data. 
Ind Relat 50:205–240 Allen S (1987) Taxes, redistribution, and the minimum wage: a theoretical analysis. Q J Econ 102:477–490 Besley T, Coate S (1992) Workfare versus welfare incentive-compatibility arguments for work requirements in poverty-alleviation programs. Am Econ Rev 82:249–261 Besley, T. and Coate, S. (1995) "The Design of Income Maintenance Programs," Review of Economic Studies, 62:187–221. Blumkin, T., and Danziger, L. (2014) "Deserving poor and the desirability of a minimum wage," IZA Discussion Paper No 8418 Blumkin T, Margalioth Y, Sadka E (2015) Welfare Stigma Re-examined. J Public Econ Theory 17:874–886 Boadway R, Cuff K (2001) A minimum wage can be welfare-improving and employment-enhancing. Eur Econ Rev 45:553–576 Connolly S, Gregory M (2002) The national minimum wage and hours of work: implications for low paid women. Oxf Bull Econ Stat 64:607–631 Couch, K. and Wittenburg, D. (2001) "The response of hours of work to increases in the minimum wage," South Econ J 68:171–177 Cuff K (2000) Optimality of workfare with heterogeneous preferences. Can J Econ 33:149–174 Danziger E, Danziger L (2015) A Pareto-improving minimum wage. Economica 82:236–252 Danziger, E. and Danziger, L. (2018) "The Optimal Graduated Minimum Wage and Social Welfare," Res Labor Econ 46:55–72 Doppelt, R. (2017) "Minimum wages and hours of work," Mimeo, Penn State University, PA, USA Farkas S, Robinson J (1996) The values we live by: what Americans want from welfare reform. Public Agenda, New York Fong C (2001) Social preferences, self-interest, and the demand for redistribution. J Public Econ 82:225–246 Gallop Organization (1998) "Haves and have-nots: perceptions of fairness and opportunity" Gerritsen, A. (2016) "Equity and efficiency in rationed labor markets", Working Paper No. 2016–4, Munich: Max Planck Institute for Tax Law and Public Finance Gerritsen, A. and Jacobs, B. (2016) "Is a minimum wage an appropriate instrument for redistribution?" Gilens, M. (1999) "Why Americans hate welfare," University of Chicago Press, Chicago Guesnerie R, Roberts K (1987) Minimum wage legislation as a second best policy. Eur Econ Rev 31:490–498 Hamermesh DS (1996) Labor demand. Princeton University Press, Princeton Heclo, H. (1986) "The Political Foundations of Antipoverty Policy," in S. Danziger and D. Weinberg (eds.) Fighting Poverty: What Works and What Doesn't, Harvard University Press, Cambridge Kanbur R, Keen M, Tuomala M (1994) Optimal non-linear income taxation for the alleviation of poverty. Eur Econ Rev 38:1613–1632 Lee D, Saez E (2012) Optimal minimum wage policy in competitive labor markets. J Public Econ 96:739–749 Luttmer E (2007) Does the minimum wage cause inefficient rationing? The B.E Journal of Economic Analysis and Policy 7 (Contributions), Article 49, 1–40 Miller D (1999) Principles of social justice. Harvard University Press, Cambridge Mirrlees J (1971) An exploration in the theory of optimum income taxation. Rev Econ Stud 38:175–208 Neumark D, Wascher W (2007) Minimum wages and employment. Found Trends Microeconomics 3:1–182 Saez E (2002) Optimal income transfer programs: intensive versus extensive labor supply responses. Q J Econ 117:1039–1073 Stewart M, Swaffield J (2008) The other margin: do minimum wages cause working hours adjustments for low-wage workers? Economica 75:148–167 Weisbach D (2009) Toward a new approach to disability law. University of Chicago Legal Forum 1:47–102 Zavodny M (2000) The effect of the minimum wage on employment and hours. 
Labour Econ 7:729–750 We are grateful to the referees and the editor for their insightful comments and constructive suggestions. We also thank Spencer Bastani, Sören Blomquist, Luca Micheletto, Casey Rothschild, Laurent Simula, and participants in the UCFS Public Economics Seminar in Uppsala University, the CESifo Employment and Social Protection Area Conference in Munich, the European Economic Association Conference in Mannheim, and the Taxation Theory Conference in Cologne for helpful comments and suggestions. Responsible editor: Pierre Cahuc Department of Economics, Ben-Gurion University, 84105, Beer-Sheba, Israel Tomer Blumkin & Leif Danziger CESifo, Munich, Germany IZA, Bonn, Germany Department of Economics and Business Economics, Aarhus University, Aarhus, Denmark Leif Danziger Correspondence to Tomer Blumkin.
Proof of Lemma
Part (i) The only binding incentive constraints are ICsd and ICld. Proof: We prove this part by a series of claims. Claim 1: ICsl is slack. Proof: By virtue of ICsd it follows that (A1) \( {c}^s-g\left({n}^s\right)\ge {c}^d-g\left(\frac{n^d{w}^u}{w^s}\right). \) Suppose, by way of contradiction, that ICsl is binding, hence: (A2) \( {c}^s-g\left({n}^s\right)={c}^l-g\left(\frac{n^l{w}^u}{w^s}\right) \). Subtracting (A2) from (A1) implies that (A3) \( {c}^l-g\left(\frac{n^l{w}^u}{w^s}\right)\ge {c}^d-g\left(\frac{n^d{w}^u}{w^s}\right) \). By virtue of ICdl it follows that (A4) cd − g(nd) ≥ cl − g(nl). Subtracting (A4) from (A3) yields: (A5) \( g\left({n}^l\right)-g\left(\frac{n^l{w}^u}{w^s}\right)\ge g\left({n}^d\right)-g\left(\frac{n^d{w}^u}{w^s}\right) \). $$ \Longleftrightarrow H\left({n}^l\right)\ge H\left({n}^d\right), $$ where \( H(n)\equiv g(n)-g\left(\frac{n{w}^u}{w^s}\right) \). Differentiation of H(n) with respect to n yields (A6) \( {H}^{\prime }(n)={g}^{\prime }(n)-{g}^{\prime}\left(\frac{n{w}^u}{w^s}\right)\frac{w^u}{w^s}>0 \), where the inequality follows from the strict convexity of g and the fact that ws > wu. It follows from (A5) that nl ≥ nd. By our presumption of a separating equilibrium it follows that nl > nd. By virtue of ICld it follows that (A7) cl − kg(nl) ≥ cd − kg(nd). Subtracting (A7) from (A4) implies: (A8) (k − 1)[g(nd) − g(nl)] ≥ 0. As k > 1 and g is increasing, it follows that nd ≥ nl. We therefore obtain a contradiction. Thus, ICsl is slack. Claim 2: ICsd is binding. Proof: Suppose, by way of contradiction, that ICsd is slack and consider the following small perturbation to the presumed optimal solution: \( {\overset{\sim }{\mathrm{c}}}^s={\mathrm{c}}^s-\upvarepsilon \), \( {\overset{\sim }{\mathrm{c}}}^d={\mathrm{c}}^d+\upvarepsilon \) and \( {\overset{\sim }{\mathrm{c}}}^l={\mathrm{c}}^l+\upvarepsilon \), where ε > 0. By continuity considerations, ICsd and ICsl are maintained (the former is slack by presumption and the latter is slack by claim 1). Moreover, by construction of the perturbation, neither the revenue constraint nor any of the other incentive-compatibility constraints is violated. The suggested perturbation yields an increase in social welfare, as by presumption βs < 1/2; hence, the total change in welfare is given by ΔW = ε(1 − 2βs) > 0. We thus obtain the desired contradiction. Claim 3: ICls is slack. Proof: Suppose, by way of contradiction, that ICls is binding. Thus, (A9) \( {c}^l- kg\left({n}^l\right)={c}^s- kg\left(\frac{n^s{w}^s}{w^u}\right). \) By virtue of ICld it follows that (A10) cl − kg(nl) ≥ cd − kg(nd).
Substituting (A9) into (A10) yields (A11) \( {c}^s- kg\left(\frac{n^s{w}^s}{w^u}\right)\ge {c}^d- kg\left({n}^d\right). \) By virtue of ICds it follows that (A12) \( {c}^d-g\left({n}^d\right)\ge {c}^s-g\left(\frac{n^s{w}^s}{w^u}\right) \). After rearrangement, (A11) and (A12) yield (A13) \( \left(k-1\right)\left[g\left(\frac{n^s{w}^s}{w^u}\right)-g\left({n}^d\right)\right]\le 0 \). As k > 1 and g is increasing, (A13) implies that nsws/wu ≤ nd. By the assumption that the equilibrium is separating it follows that (A14) \( \frac{n^s{w}^s}{w^u}<{n}^d \). By virtue of claim 2 ICsd is binding, hence, it follows that (A15) \( {c}^s-g\left({n}^s\right)={c}^d-g\left(\frac{n^d{w}^u}{w^s}\right) \). Substituting (A15) into (A12) and rearranging yields (A16) \( g\left(\frac{n^s{w}^s}{w^u}\right)-g\left({n}^s\right)\ge g\left({\mathrm{n}}^d\right)-g\left(\frac{n^d{w}^u}{w^s}\right) \) $$ \Longleftrightarrow H\left(\frac{n^s{w}^s}{w^u}\right)\ge H\left({n}^d\right), $$ (A17) \( {H}^{\prime }(n)={g}^{\prime }(n)-{g}^{\prime}\left(\frac{n{w}^u}{w^s}\right)\frac{w^u}{w^s}>0 \), where the inequality follows from the strict convexity of g and the fact that ws > wu. As H is increasing by (A17), it follows from (A16) that nsws/wu ≥ nd, which violates (A14). Thus, ICls is slack. Claim 4: ICld is binding. Proof: Suppose, by way of contradiction, that ICld holds as a strict inequality. Consider the following small perturbation to the presumed optimal solution: \( {\overset{\sim }{\mathrm{c}}}^s={\mathrm{c}}^s+\upvarepsilon \), \( {\overset{\sim }{\mathrm{c}}}^d={\mathrm{c}}^d+\upvarepsilon \) and \( {\overset{\sim }{\mathrm{c}}}^l={\mathrm{c}}^l-\updelta \), where ε, δ > 0 and (1 − α)δ = (1 + α)ε. By continuity considerations, ICld and ICls are maintained (the former is slack by our presumption and the latter by virtue of claim 3). Moreover, by construction of the perturbation neither the revenue constraint nor any of the other incentive-compatibility constraints is violated. The suggested perturbation yields an increase in social welfare, as by presumption βl < (1 − α)/2; hence, the total change in welfare is given by \( \Delta W=\varepsilon \left(1-{\beta}^l\right)-\updelta {\beta}^l=\varepsilon \left[1-\frac{2{\beta}^l}{\left(1-\alpha \right)}\right]>0 \). We thus obtain the desired contradiction. Claim 5: ICdl is slack. Proof: Suppose by negation that ICdl is binding. Thus, (A18) cd − g(nd) = cl − g(nl). By virtue of claim 4 ICld is binding; hence (A19) cl − kg(nl) = cd − kg(nd). Subtracting (A19) from (A18) yields upon rearrangement (A20) (k − 1)[g(nd) − g(nl)] = 0. As g(n) is increasing and k > 1, it follows that nd = nl. By the assumption of a separating equilibrium, we obtain the desired contradiction. Claim 6: ICds is slack. Proof: Suppose by negation that ICds is binding. Hence, (A21) \( {c}^d-g\left({n}^d\right)={c}^s-g\left(\frac{n^s{w}^s}{w^u}\right) \). By virtue of claim 2 ICsd is binding. Hence, (A22) \( {c}^s-g\left({n}^s\right)={c}^d-g\left(\frac{n^d{w}^u}{w^s}\right) \). Combining (A21) and (A22) yields: (A23) \( g\left(\frac{n^s{w}^s}{w^u}\right)-g\left({n}^s\right)=g\left({n}^d\right)-g\left(\frac{n^d{w}^u}{w^s}\right) \) $$ \Longleftrightarrow H\left(\frac{n^s{w}^s}{w^u}\right)=H\left({n}^d\right). $$ As H is strictly increasing by (A17), it follows from (A23) that \( \frac{n^s{w}^s}{w^u}={n}^d \). We thus obtain a contradiction by our presumption of a separating equilibrium. Part (ii): nd > nl. Proof: By virtue of claim 5, ICdl is slack, hence: (A25) cd − g(nd) > cl − g(nl).
By virtue of claim 4 the constraint ICld is binding; hence (A26) cl − kg(nl) = cd − kg(nd). Subtracting (A26) from (A25) yields upon rearrangement (A27) (k − 1)[g(nd) − g(nl)] > 0. As g(n) is increasing and k > 1, it follows that nd > nl. This completes the proof.
Proof of Proposition
Suppose that there is no minimum wage in place and let the triplet \( \left({c}_{\ast}^i,{n}_{\ast}^i\right) \), where i = s, d, l, denote the optimal tax-and-transfer schedule that maximizes the welfare expression in (2) subject to the revenue constraint (3) and the incentive-compatibility constraints (4). The construction of the proof will be as follows. We will consider a small revenue-neutral perturbation to the optimal tax-and-transfer system. We will show that by imposing a binding minimum wage and further assuming that rationing is constrained efficient (as formally defined below) the suggested perturbation will violate none of the incentive-compatibility constraints (Part I). We will then demonstrate that the suggested perturbation results in a welfare gain (Part II). Finally, we will consider an extension to a more general class of rationing rules and demonstrate that the key result continues to hold when rationing is nearly constrained efficient (Part III).
I. A small revenue-neutral perturbation
Consider the following small perturbation to the optimal solution (characterized in Appendix A): \( {\overset{\sim }{\mathrm{c}}}^s={\mathrm{c}}_{\ast}^s+\upvarepsilon \), \( {\overset{\sim }{\mathrm{c}}}^d={\mathrm{c}}_{\ast}^d+\upvarepsilon \) and \( {\overset{\sim }{\mathrm{c}}}^l={\mathrm{c}}_{\ast}^l-\updelta \), where ε, δ > 0 and (1 − α)δ = (1 + α)ε. Notice that, by construction, provided that the resulting allocation is incentive compatible (as will be verified below), the suggested perturbation is revenue neutral. In addition, suppose that the government sets a minimum wage at the level of the equilibrium unskilled wage under an optimal income tax-and-transfer schedule in the absence of a minimum wage. Formally, let \( \overline{w}=\partial F\left({\alpha n}_{\ast}^d+\left(1-\alpha \right){n}_{\ast}^l,{n}_{\ast}^s\right)/\partial {N}^u \) denote the minimum wage. We turn next to verify that none of the incentive-compatibility constraints are violated. By construction of the suggested perturbation and by virtue of the quasi-linear utility specification, the incentive-compatibility constraints ICsd and ICds remain unchanged, whereas the incentive-compatibility constraints ICsl and ICdl are mitigated. Furthermore, by virtue of part (i) of the lemma, ICls is slack and hence remains satisfied under the suggested perturbation by continuity considerations. On the other hand, ICld, which by virtue of part (i) of the lemma is binding under an optimal tax-and-transfer regime, is violated by the suggested perturbation, since the undeserving poor would prefer the bundle associated with their deserving counterparts to their own bundle. However, we will now demonstrate that the binding minimum wage blocks such mimicking. By virtue of the (now slack) incentive-compatibility constraint ICdl and the violated constraint ICld, the introduction of a binding minimum wage results in involuntary underemployment/unemployment. To see this, notice that the deserving and undeserving poor are willing to work \( {n}_{\ast}^d \) hours since both types strictly prefer the bundle \( \left({\overset{\sim }{\mathrm{c}}}^d,{n}_{\ast}^d\right) \) to any other available bundle. This implies that the total labor supply of the unskilled workers is given by \( {n}_{\ast}^d \).
However, the total labor demand for the low-skilled workers is given by \( {\alpha n}_{\ast}^d+\left(1-\alpha \right){n}_{\ast}^l<{n}_{\ast}^d \), where the inequality sign follows from part (ii) of the lemma. Some form of rationing is required due to the gap between the demand and supply of labor. We will henceforth assume that rationing is constrained efficient, namely, the working hours demanded by the firms, given the tax-and-transfer system in place, are allocated so as to maximize the aggregate surplus of the unskilled workers. As will be shown below, constrained efficient rationing implies that the entire incidence of underemployment will fall on the undeserving poor. That is, the undeserving poor will become underemployed and only work \( {n}_{\ast}^l \) hours, whereas, the deserving poor will continue to work \( {n}_{\ast}^d \) hours. Formally, constrained efficient rationing implies that the allocation of working hours maximizes the aggregate surplus of the unskilled workers: (B1) \( S\equiv \left({x}^d+{x}^l\right){\overset{\sim }{\mathrm{c}}}^l+\left({z}^d+{z}^l-{x}^d-{x}^l\right){\overset{\sim }{\mathrm{c}}}^d-\left({z}^d-{x}^d\right)g\left({n}_{\ast}^d\right)-{x}^dg\left({n}_{\ast}^l\right)-\left({z}^l-{x}^l\right) kg\left({n}_{\ast}^d\right)-{x}^l kg\left({n}_{\ast}^l\right) \)subject to the constraint (B2) \( \left({z}^d+{z}^l-{x}^d-{x}^l\right){n}_{\ast}^d+\left({x}^d+{x}^l\right){n}_{\ast}^l={\alpha n}_{\ast}^d+\left(1-\alpha \right){n}_{\ast}^l \), where 0 ≤ xd ≤ zd ≤ α, 0 ≤ xl ≤ zl ≤ 1 − α, zj; j = l, d, denotes the measure of type-j workers that remain employed and xj; j = l, d, denotes the measure of type-j workers that are involuntarily underemployed. Several remarks are in order. First, we consider the most general rationing rule that allows each type of unskilled worker to be underemployed (xi ≤ zi; i = d, l) and/or unemployed (zd ≤ α, zl ≤ 1 − α). Second, the formulation of the surplus in (B1) accounts for the fact that the reservation utility of unemployed workers of both types is zero. Finally, we assume that the utility levels under the optimal tax-and-transfer regime (hence, by continuity considerations, also under the perturbed tax-and-transfer regime) are bounded away from zero for both types of unskilled workers; hence, both types of unskilled workers will have positive working hours. Rearranging (B2) yields (B2') xd + xl = ψ, where \( \uppsi \equiv 1-\upalpha -\frac{\left(1-{z}^d-{z}^l\right){n}_{\ast}^d}{n_{\ast}^d-{n}_{\ast}^l} \). Substituting for xl from (B2') into (B1) and rearranging yields (B3) \( S=\left({z}^d+{z}^l\right){\overset{\sim }{\mathrm{c}}}^d+\uppsi \left({\overset{\sim }{\mathrm{c}}}^l-{\overset{\sim }{\mathrm{c}}}^d\right)-\left({z}^d-{x}^d\right)g\left({n}_{\ast}^d\right)-{x}^dg\left({n}_{\ast}^l\right)-\left({x}^d+{z}^l-\uppsi \right) kg\left({n}_{\ast}^d\right)-\left(\uppsi -{x}^d\right) kg\left({n}_{\ast}^l\right) \). Differentiating (B3) with respect to xd and rearranging yields (B4) \( \frac{\partial S}{\partial {x}^d}=-\left(k-1\right)\left[g\left({n}_{\ast}^d\right)-g\left({n}_{\ast}^l\right)\right]<0 \), where the inequality follows since \( {n}_{\ast}^d>{n}_{\ast}^l \), g is increasing, and k > 1. We conclude that xd = 0 and, by virtue of (B2'), that xl = ψ. 
Differentiating (B3) with respect to zd upon rearrangement yields (B5) \( \frac{\partial S}{\partial {z}^d}=\frac{n_{\ast}^d\left(k-1\right)g\left({n}_{\ast}^d\right)+{n}_{\ast}^d\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]-{n}_{\ast}^l\left[{\overset{\sim }{\mathrm{c}}}^d-g\left({n}_{\ast}^d\right)\right]}{n_{\ast}^d-{n}_{\ast}^l} \). As \( {n}_{\ast}^d>{n}_{\ast}^l \) and k > 1, it follows by substituting \( {n}_{\ast}^l \) for \( {n}_{\ast}^d \) in the first term of the numerator of the right-hand-side expression of (B5) that (B6) \( \frac{\partial S}{\partial {z}^d}>\frac{n_{\ast}^l\left(k-1\right)g\left({n}_{\ast}^d\right)+{n}_{\ast}^d\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]-{n}_{\ast}^l\left[{\overset{\sim }{\mathrm{c}}}^d-g\left({n}_{\ast}^d\right)\right]}{n_{\ast}^d-{n}_{\ast}^l} \), which, after rearrangement, yields (B6') \( \frac{\partial S}{\partial {z}^d}>\frac{n_{\ast}^d\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]-{n}_{\ast}^l\left[{\overset{\sim }{\mathrm{c}}}^d- kg\left({n}_{\ast}^d\right)\right]}{n_{\ast}^d-{n}_{\ast}^l} \). As \( {n}_{\ast}^d>{n}_{\ast}^l \), for \( \frac{\partial S}{\partial {z}^d}>0 \) it suffices to show that the numerator of the right-hand-side of (B6') is positive; that is (B7) \( {n}_{\ast}^d\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]-{n}_{\ast}^l\left[{\overset{\sim }{\mathrm{c}}}^d- kg\left({n}_{\ast}^d\right)\right] \)> 0. By virtue of the binding incentive-compatibility constraint ICld, under the optimal unperturbed tax-and-transfer regime (B8) \( \underset{\varepsilon \to 0}{\lim}\left[{\overset{\sim }{\mathrm{c}}}^d- kg\left({n}_{\ast}^d\right)\right]=\underset{\delta \to 0}{\lim}\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]\equiv B>0 \), where the inequality sign follows from our assumption that the utilities derived under the optimal tax-and-transfer regime are positive. Therefore, (B9) \( \underset{\varepsilon \to 0,\delta \to 0}{\lim }{n}_{\ast}^d\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]-{n}_{\ast}^l\left[{\overset{\sim }{\mathrm{c}}}^d- kg\left({n}_{\ast}^d\right)\right]=B\left({n}_{\ast}^d-{n}_{\ast}^l\right)>0, \)where the inequality sign follows from (B8) and \( {n}_{\ast}^d>{n}_{\ast}^l. \) Thus, by continuity considerations, for sufficiently small ε and δ, the inequality (B7) holds. We thus conclude that \( \frac{\partial S}{\partial {z}^d}>0. \) Hence, zd = α. Differentiating (B3) with respect to zl upon rearrangement yields (B10) \( \frac{\partial S}{\partial {z}^l}=\frac{n_{\ast}^d\left[{\overset{\sim }{\mathrm{c}}}^l- kg\left({n}_{\ast}^l\right)\right]-{n}_{\ast}^l\left[{\overset{\sim }{\mathrm{c}}}^d- kg\left({n}_{\ast}^d\right)\right]}{n_{\ast}^d-{n}_{\ast}^l} \). Note that the expression on the right-hand side of (B10) is identical to the expression on the right-hand side of (B6'). By repeating the arguments used to establish the positive sign of \( \frac{\partial S}{\partial {z}^d} \), it therefore follows that \( \frac{\partial S}{\partial {z}^l} \)> 0. Hence, zl = 1 − α. We conclude that under constrained efficient rationing, none of the unskilled workers are forced into unemployment. Moreover, the entire incidence of underemployment falls on the undeserving poor who are unable to mimic the deserving poor. We therefore conclude that the suggested perturbation supplemented by the binding minimum wage violates none of the incentive-compatibility constraints.
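Because the surplus (B1) is linear in (xd, xl, zd, zl) once the bundles are fixed, the constrained efficient rationing problem is a small linear program, and the corner solution derived above (xd = 0, zd = α, zl = 1 − α, xl = ψ) can be confirmed numerically. The sketch below reuses the illustrative parameters from the earlier snippet; none of the numbers come from the paper.

```python
# Linear-programming check of constrained efficient rationing (Appendix B, Part I):
# with the perturbed bundles fixed, the surplus (B1) is linear in (x^d, x^l, z^d, z^l),
# so maximizing it subject to the hours constraint (B2) is an LP.
# The parameters below are the same illustrative assumptions used earlier.
import numpy as np
from scipy.optimize import linprog

alpha, k = 0.6, 1.5
g = lambda n: n**2 / 2
nd, nl = 1.0, 0.6                          # n*_d > n*_l (Lemma, part (ii))
cd, cl = 1.0, 1.0 - k * (g(nd) - g(nl))    # benchmark bundles with ICld binding
eps = 0.01
delta = (1 + alpha) * eps / (1 - alpha)
cd_t, cl_t = cd + eps, cl - delta          # perturbed consumption levels

# Decision variables v = (x^d, x^l, z^d, z^l); linprog minimizes, so negate S.
obj = -np.array([cl_t - cd_t + g(nd) - g(nl),         # coefficient on x^d (d-underemployed)
                 cl_t - cd_t + k * (g(nd) - g(nl)),   # coefficient on x^l (l-underemployed)
                 cd_t - g(nd),                        # coefficient on z^d (employed type-d)
                 cd_t - k * g(nd)])                   # coefficient on z^l (employed type-l)
# Hours constraint (B2): total hours worked equal labor demand at the minimum wage.
A_eq = [[nl - nd, nl - nd, nd, nd]]
b_eq = [alpha * nd + (1 - alpha) * nl]
# Underemployed workers must be among the employed: x^j <= z^j.
A_ub = [[1, 0, -1, 0], [0, 1, 0, -1]]
b_ub = [0, 0]
bounds = [(0, None), (0, None), (0, alpha), (0, 1 - alpha)]

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
xd, xl, zd, zl = res.x
print(f"x^d = {xd:.3f}  (no deserving poor are underemployed)")
print(f"x^l = {xl:.3f}  (the entire incidence falls on the undeserving poor)")
print(f"z^d = {zd:.3f}, z^l = {zl:.3f}  (no unskilled worker is forced into unemployment)")
```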
II. Welfare gain
The total change in welfare is given by \( \Delta W=\varepsilon \left(1-{\beta}^l\right)-\updelta {\beta}^l=\varepsilon \left[1-\frac{2{\beta}^l}{\left(1-\alpha \right)}\right]>0 \), where the last equality follows as (1 − α)δ = (1 + α)ε (by construction of the perturbation) and the strict inequality follows from the presumption that βl < (1 − α)/2.
III. Nearly constrained efficient rationing
By virtue of the suggested perturbation, both types of unskilled workers strictly prefer the bundle \( \left({\overset{\sim }{c}}^d,{n}_{\ast}^d\right) \), referred to as a d-job, to the bundle \( \left({\overset{\sim }{c}}^l,{n}_{\ast}^l\right) \), referred to as an l-job, where the respective measures of available d-jobs and l-jobs are given by α and 1 − α. We consider an extension of the constrained efficient rationing rule: (i) a fraction 0≤q≤1 of the d-jobs is assigned to the deserving poor; (ii) an identical fraction of the l-jobs is assigned to the undeserving poor; and (iii) the remaining jobs are assigned randomly. Notice that when q = 1 rationing is constrained efficient, whereas rationing is random when q = 0. Let the utility of a deserving poor assigned to a d-job (l-job) be denoted by udd (udl), and the utility of an undeserving poor assigned to a d-job (l-job) be denoted by uld (ull). In light of the extended rationing rule, the expected utility derived by type-d and type-l workers is given, respectively, by: (B11) EUd = [q + (1 − q)α]udd + [(1 − q)(1 − α)]udl, (B12) EUl = [q + (1 − q)(1 − α)]ull + [(1 − q)α]uld, and social welfare is given by: (B13) W = βlEUl + βdEUd + (1 − βl − βd)us. Taking the limit when q → 1 implies that EUd ⟶ udd and EUl ⟶ ull. Recall that udd and ull are the utilities derived, respectively, by the deserving and the undeserving poor under the suggested perturbation with constrained efficient rationing. It follows by continuity considerations that the suggested perturbation yields an increase in social welfare if q is sufficiently close to 1. That is, there exists a \( \widehat{q} \) such that for any \( q\in \left(\widehat{q},1\right] \) the associated rationing rule attains a welfare improvement. Denoting such rationing rules as nearly constrained efficient establishes our argument. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Blumkin, T., Danziger, L. Deserving poor and the desirability of a minimum wage. IZA J Labor Econ 7, 6 (2018). https://doi.org/10.1186/s40172-018-0066-7
In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it'd be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less, I get them from a large Far Eastern pharmaceuticals wholesaler. I think they're probably the supplier for a number of the online pharmacies. 100mg seems likely to be too low, so I treated this shipment as 5 doses: Iluminal is an example of an over-the-counter serotonergic drug used by people looking for performance enhancement, memory improvements, and mood-brightening. Also noteworthy, a wide class of prescription anti-depression drugs are based on serotonin reuptake inhibitors that slow the absorption of serotonin by the presynaptic cell, increasing the effect of the neurotransmitter on the receptor neuron – essentially facilitating the free flow of serotonin throughout the brain. While these two compounds may not be as exciting as a super pill that instantly unlocks the full potential of your brain, they currently have the most science to back them up. And, as Patel explains, they're both relatively safe for healthy individuals of most ages. Patel explains that a combination of caffeine and L-theanine is the most basic supplement stack (or combined dose) because the L-theanine can help blunt the anxiety and "shakiness" that can come with ingesting too much caffeine. There is no clear answer to this question. Many of the smart drugs have decades of medical research and widespread use behind them, as well as only minor, manageable, or nonexistent side effects, but are still used primarily as a crutch for people already experiencing cognitive decline, rather than as a booster-rocket for people with healthy brains. Unfortunately, there is a bias in Western medicine in favor of prescribing drugs once something bad has already begun, rather than for up-front prevention. There's also the principle of "leave well enough alone" – in this case, extended to mean, don't add unnecessary or unnatural drugs to the human body in place of a normal diet. [Smart Drug Smarts would argue that the average human diet has strayed so far from what is physiologically "normal" that leaving well enough alone is already a failed proposition.] The principal metric would be mood, however defined.
Zeo's web interface & data export includes a field for Day Feel, which is a rating 1-5 of general mood & quality of day. I can record a similar metric at the end of each day. 1-5 might be a little crude even with a year of data, so a more sophisticated measure might be in order. The first mood study is paywalled so I'm not sure what they used, but Shiotsuki 2008 used State-Trait of Anxiety Inventory (STAI) and Profiles of Mood States Test (POMS). The full POMS sounds too long to use daily, but the Brief POMS might work. In the original 1987 paper A brief POMS measure of distress for cancer patients, patients answering this questionnaire had a mean total mean of 10.43 (standard deviation 8.87). Is this the best way to measure mood? I've asked Seth Roberts; he suggested using a 0-100 scale, but personally, there's no way I can assess my mood on 0-100. My mood is sufficiently stable (to me) that 0-5 is asking a bit much, even. How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology. Probably most significantly, use of the term "drug" has a significant negative connotation in our culture. "Drugs" are bad: So proclaimed Richard Nixon in the War on Drugs, and Nancy "No to Drugs" Reagan decades later, and other leaders continuing to present day. The legitimate demonization of the worst forms of recreational drugs has resulted in a general bias against the elective use of any chemical to alter the body's processes. Drug enhancement of athletes is considered cheating – despite the fact that many of these physiological shortcuts obviously work. University students and professionals seeking mental enhancements by taking smart drugs are now facing similar scrutiny.
Smart drugs could lead to enhanced cognitive abilities in the military. Also known as nootropics, smart drugs can be viewed similarly to medical enhancements. What's important to remember, though, is that smart drugs do not increase your intelligence as such; rather, they may improve cognitive and executive functions, which can translate into more effective performance. As professionals and aging baby boomers alike become more interested in enhancing their own brain power (either to achieve more in a workday or to stave off cognitive decline), a huge market has sprung up for nonprescription nootropic supplements. These products don't convince Sahakian: "As a clinician scientist, I am interested in evidence-based cognitive enhancement," she says. "Many companies produce supplements, but few, if any, have double-blind, placebo-controlled studies to show that these supplements are cognitive enhancers." Plus, supplements aren't regulated by the U.S. Food and Drug Administration (FDA), so consumers don't have that assurance as to exactly what they are getting. Dosage is apparently 5-10mg a day. (Prices can be better elsewhere; selegiline is popular for treating dogs with senile dementia, where those 60x5mg will cost $2 rather than $35. One needs a veterinarian's prescription to purchase from pet-oriented online pharmacies, though.) I ordered it & modafinil from Nubrain.com at $35 for 60x5mg; Nubrain delayed and eventually canceled my order - and my enthusiasm. Between that and realizing how much of a premium I was paying for Nubrain's deprenyl, I'm tabling deprenyl along with nicotine & modafinil for now. Which is too bad, because I had even ordered 20g of PEA from Smart Powders to try out with the deprenyl. (My later attempt to order some off the Silk Road also failed when the seller canceled the order.) We'd want 53 pairs, but Fitzgerald 2012's experimental design called for 32 weeks of supplementation for a single pair of before-after tests - so that'd be 1664 weeks or ~32 years! We can try to adjust it downwards with shorter blocks allowing more frequent testing; but problematically, iodine is stored in the thyroid and can apparently linger elsewhere - many of the cited studies used intramuscular injections of iodized oil (as opposed to iodized salt or kelp supplements) because this ensured an adequate supply for months or years with no further compliance by the subjects. If the effects are that long-lasting, it may be worthless to try shorter blocks than ~32 weeks. Not that everyone likes to talk about using the drugs. People don't necessarily want to reveal how they get their edge and there is stigma around people trying to become smarter than their biology dictates, says Lawler. Another factor is undoubtedly the risks associated with ingesting substances bought on the internet and the confusing legal statuses of some. Phenylpiracetam, for example, is a prescription drug in Russia. It isn't illegal to buy in the US, but the man-made chemical exists in a no man's land where it is neither approved nor outlawed for human consumption, notes Lawler. …Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.
The flanker task is designed to tax cognitive control by requiring subjects to respond based on the identity of a target stimulus (H or S) and not the more numerous and visually salient stimuli that flank the target (as in a display such as HHHSHHH). Servan-Schreiber, Carter, Bruno, and Cohen (1998) administered the flanker task to subjects on placebo and d-AMP. They found an overall speeding of responses but, more importantly, an increase in accuracy that was disproportionate for the incongruent conditions, that is, the conditions in which the target and flankers did not match and cognitive control was needed. The chemicals he takes, dubbed nootropics from the Greek "noos" for "mind", are intended to safely improve cognitive functioning. They must not be harmful, have significant side-effects or be addictive. That means well-known "smart drugs" such as the prescription-only stimulants Adderall and Ritalin, popular with swotting university students, are out. What's left under the nootropic umbrella is a dizzying array of over-the-counter supplements, prescription drugs and unclassified research chemicals, some of which are being trialled in older people with fading cognition. "Where can you draw the line between Red Bull, six cups of coffee and a prescription drug that keeps you more alert," says Michael Schrage of the MIT Center for Digital Business, who has studied the phenomenon. "You can't draw the line meaningfully - some organizations have cultures where it is expected that employees go the extra mile to finish an all-nighter. " Talk to your doctor, too, before diving in "to ensure that they do not conflict with current meds or cause a detrimental effect," Hohler says. You also want to consider what you already know about your health and body – if you have anxiety or are already sensitive to caffeine, for example, you may find that some of the supplements work a little too well and just enhance anxiety or make it difficult to sleep, Barbour says. Finances matter, too, of course: The retail price for Qualia Mind is $139 for 22 seven-capsule "servings"; the suggestion is to take one serving a day, five days a week. The retail price for Alpha Brain is $79.95 for 90 capsules; adults are advised to take two a day. What if you could simply take a pill that would instantly make you more intelligent? One that would enhance your cognitive capabilities including attention, memory, focus, motivation and other higher executive functions? If you have ever seen the movie Limitless, you have an idea of what this would look like—albeit the exaggerated Hollywood version. The movie may be fictional but the reality may not be too far behind. According to clinical psychiatrist and Harvard Medical School Professor, Emily Deans, "there's probably nothing dangerous about the occasional course of nootropics...beyond that, it's possible to build up a tolerance if you use them often enough." Her recommendation is to seek pharmaceutical-grade products which she says are more accurate regarding dosage and less likely to be contaminated. There is an ancient precedent to humans using natural compounds to elevate cognitive performance. Incan warriors in the 15th century would ingest coca leaves (the basis for cocaine) before battle. Ethiopian hunters in the 10th century developed coffee bean paste to improve hunting stamina. Modern athletes ubiquitously consume protein powders and hormones to enhance their training, recovery, and performance. The most widely consumed psychoactive compound today is caffeine. 
Millions of people use coffee and tea to be more alert and focused. Despite decades of study, a full picture has yet to emerge of the cognitive effects of the classic psychostimulants and modafinil. Part of the problem is that getting rats, or indeed students, to do puzzles in laboratories may not be a reliable guide to the drugs' effects in the wider world. Drugs have complicated effects on individuals living complicated lives. Determining that methylphenidate enhances cognition in rats by acting on their prefrontal cortex doesn't tell you the potential impact that its effects on mood or motivation may have on human cognition. Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive. In August 2011, after winning the spaced repetition contest and finishing up the Adderall double-blind testing, I decided the time was right to try nicotine again. I had since learned that e-cigarettes use nicotine dissolved in water, and that nicotine-water was a vastly cheaper source of nicotine than either gum or patches. So I ordered 250ml of water at 12mg/ml (total cost: $18.20). A cigarette apparently delivers around 1mg of nicotine, so half a ml would be a solid dose of nicotine, making that ~500 doses. Plenty to experiment with. The question is, besides the stimulant effect, nicotine also causes habit formation; what habits should I reinforce with nicotine? Exercise, and spaced repetition seem like 2 good targets. Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds on the other hand, do not go bad within months, and cost dollars per pound. Various resources I found online estimated that the ALA component of human-edible flaxseed to be around 20% So Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil. Cognitive control is a broad concept that refers to guidance of cognitive processes in situations where the most natural, automatic, or available action is not necessarily the correct one. Such situations typically evoke a strong inclination to respond but require people to resist responding, or they evoke a strong inclination to carry out one type of action but require a different type of action. 
The sources of these inclinations that must be overridden are various and include overlearning (e.g., the overlearned tendency to read printed words in the Stroop task), priming by recent practice (e.g., the tendency to respond in the go/no-go task when the majority of the trials are go trials, or the tendency to continue sorting cards according to the previously correct dimension in the Wisconsin Card Sorting Test [WCST]; Grant & Berg, 1948) and perceptual salience (e.g., the tendency to respond to the numerous flanker stimuli as opposed to the single target stimulus in the flanker task). For the sake of inclusiveness, we also consider the results of studies of reward processing in this section, in which the response tendency to be overridden comes from the desire to have the reward immediately. Studies show that B vitamin supplements can protect the brain from cognitive decline. These natural nootropics can also reduce the likelihood of developing neurodegenerative diseases. The prevention of Alzheimer's and even dementia are among the many benefits. Due to their effects on mental health, B vitamins make an excellent addition to any smart drug stack. Most people would describe school as a place where they go to learn, so learning is an especially relevant cognitive process for students to enhance. Even outside of school, however, learning plays a role in most activities, and the ability to enhance the retention of information would be of value in many different occupational and recreational contexts. My worry about the MP variable is that, plausible or not, it does seem relatively weak against manipulation; other variables I could look at, like arbtt window-tracking of how I spend my computer time, # or size of edits to my files, or spaced repetition performance, would be harder to manipulate. If it's all due to MP, then if I remove the MP and LLLT variables, and summarize all the other variables with factor analysis into 2 or 3 variables, then I should see no increases in them when I put LLLT back in and look for a correlation between the factors & LLLT with a multivariate regression. Modafinil is a eugeroic, or 'wakefulness promoting agent', intended to help people with narcolepsy. It was invented in the 1970s, but was first approved by the American FDA in 1998 for medical use. Recent years have seen its off-label use as a 'smart drug' grow. It's not known exactly how Modafinil works, but scientists believe it may increase levels of histamines in the brain, which can keep you awake. It might also inhibit the dissipation of dopamine, again helping wakefulness, and it may help alertness by boosting norepinephrine levels, contributing to its reputation as a drug to help focus and concentration. I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here. As demonstrated in the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and High-quality Interface Systems such as their Next-Generational Nonsurgical Neurotechnology (N3). Compared with those reporting no use, subjects drinking >4 cups/day of decaffeinated coffee were at increased risk of RA [rheumatoid arthritis] (RR 2.58, 95% CI 1.63-4.06). 
In contrast, women consuming >3 cups/day of tea displayed a decreased risk of RA (RR 0.39, 95% CI 0.16-0.97) compared with women who never drank tea. Caffeinated coffee and daily caffeine intake were not associated with the development of RA. The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and unclear how successful, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks and an temporal exogenous factor in the last quarter which was responsible for the improvement. Adaptogens are plant-derived chemicals whose activity helps the body maintain or regain homeostasis (equilibrium between the body's metabolic processes). Almost without exception, adaptogens are available over-the-counter as dietary supplements, not controlled drugs. Well-known adaptogens include Ginseng, Kava Kava, Passion Flower, St. Johns Wort, and Gotu Kola. Many of these traditional remedies border on being "folk wisdom," and have been in use for hundreds or thousands of years, and are used to treat everything from anxiety and mild depression to low libido. While these smart drugs work in a many different ways (their commonality is their resultant function within the body, not their chemical makeup), it can generally be said that the cognitive boost users receive is mostly a result of fixing an imbalance in people with poor diets, body toxicity, or other metabolic problems, rather than directly promoting the growth of new brain cells or neural connections. These pills don't work. The reality is that MOST of these products don't work effectively. Maybe we're cynical, but if you simply review the published studies on memory pills, you can quickly eliminate many of the products that don't have "the right stuff." The active ingredients in brain and memory health pills are expensive and most companies sell a watered down version that is not effective for memory and focus. The more brands we reviewed, the more we realized that many of these marketers are slapping slick labels on low-grade ingredients. I largely ignored this since the discussions were of sub-RDA doses, and my experience has usually been that RDAs are a poor benchmark and frequently far too low (consider the RDA for vitamin D). This time, I checked the actual RDA - and was immediately shocked and sure I was looking at a bad reference: there was no way the RDA for potassium was seriously 3700-4700mg or 4-5 grams daily, was there? Just as an American, that implied that I was getting less than half my RDA. (How would I get 4g of potassium in the first place? Eat a dozen bananas a day⸮) I am not a vegetarian, nor is my diet that fantastic: I figured I was getting some potassium from the ~2 fresh tomatoes I was eating daily, but otherwise my diet was not rich in potassium sources. 
I have no blood tests demonstrating deficiency, but given the figures, I cannot see how I could not be deficient. By the end of 2009, at least 25 studies reported surveys of college students' rates of nonmedical stimulant use. Of the studies using relatively smaller samples, prevalence was, in chronological order, 16.6% (lifetime; Babcock & Byrne, 2000), 35.3% (past year; Low & Gendaszek, 2002), 13.7% (lifetime; Hall, Irwin, Bowman, Frankenberger, & Jewett, 2005), 9.2% (lifetime; Carroll, McLaughlin, & Blake, 2006), and 55% (lifetime, fraternity students only; DeSantis, Noar, & Web, 2009). Of the studies using samples of more than a thousand students, somewhat lower rates of nonmedical stimulant use were found, although the range extends into the same high rates as the small studies: 2.5% (past year, Ritalin only; Teter, McCabe, Boyd, & Guthrie, 2003), 5.4% (past year; McCabe & Boyd, 2005), 4.1% (past year; McCabe, Knight, Teter, & Wechsler, 2005), 11.2% (past year; Shillington, Reed, Lange, Clapp, & Henry, 2006), 5.9% (past year; Teter, McCabe, LaGrange, Cranford, & Boyd, 2006), 16.2% (lifetime; White, Becker-Blease, & Grace-Bishop, 2006), 1.7% (past month; Kaloyanides, McCabe, Cranford, & Teter, 2007), 10.8% (past year; Arria, O'Grady, Caldeira, Vincent, & Wish, 2008); 5.3% (MPH only, lifetime; Du-Pont, Coleman, Bucher, & Wilford, 2008); 34% (lifetime; DeSantis, Webb, & Noar, 2008), 8.9% (lifetime; Rabiner et al., 2009), and 7.5% (past month; Weyandt et al., 2009). Stimulants are drugs that accelerate the central nervous system (CNS) activity. They have the power to make us feel more awake, alert and focused, providing us with a needed energy boost. Unfortunately, this class encompasses a wide range of drugs, some which are known solely for their side-effects and addictive properties. This is the reason why many steer away from any stimulants, when in fact some greatly benefit our cognitive functioning and can help treat some brain-related impairments and health issues. 70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of: It looks like the overall picture is that nicotine is absorbed well in the intestines and the colon, but not so well in the stomach; this might be the explanation for the lack of effect, except on the other hand, the specific estimates I see are that 10-20% of the nicotine will be bioavailable in the stomach (as compared to 50%+ for mouth or lungs)… so any of my doses of >5ml should have overcome the poorer bioavailability! But on the gripping hand, these papers are mentioning something about the liver metabolizing nicotine when absorbed through the stomach, so… But how, exactly, does he do it? Sure, Cruz typically eats well, exercises regularly and tries to get sufficient sleep, and he's no stranger to coffee. But he has another tool in his toolkit that he finds makes a noticeable difference in his ability to efficiently and effectively conquer all manner of tasks: Alpha Brain, a supplement marketed to improve memory, focus and mental quickness. 
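A few paragraphs above, the block-design arithmetic (pill counts and statistical power) is quoted without showing the computation. The following is a minimal sketch of how that arithmetic might be reproduced in Python with statsmodels; the choice of a paired/one-sample t-test on the per-pair differences, a two-sided test, and alpha = 0.05 are my assumptions rather than details stated in the text, so the printed power values are illustrative and need not match the quoted figures exactly.

```python
# Sketch of the blinded-block arithmetic quoted above (assumptions noted in comments).
from statsmodels.stats.power import TTestPower

# Pill count for the full design: 70 pairs of 2-week blocks, 2 pills a day.
pairs = 70
pills = (pairs * 2) * (2 * 7) * 2   # blocks * days-per-block * pills-per-day = 3920
print(f"pills needed for the full design: {pills}")

# Power for a given effect size and number of paired blocks.
# Assumption: the paired blocks are analysed as a one-sample t-test on the
# per-pair differences, two-sided, alpha = 0.05.  The original text does not
# spell out the test or sidedness, so exact agreement with its 50%-power
# figure depends on those choices (a one-sided test gives a higher value).
analysis = TTestPower()
for d, n_pairs in [(0.5, 12), (0.5, 36), (0.5, 70)]:
    power = analysis.power(effect_size=d, nobs=n_pairs, alpha=0.05,
                           alternative="two-sided")
    print(f"d={d}, pairs={n_pairs}: power ≈ {power:.2f}")
```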
If you could take a pill that would help you study and get better grades, would you? Off-label use of "smart drugs" – pharmaceuticals meant to treat disorders like ADHD, narcolepsy, and Alzheimer's – are becoming increasingly popular among college students hoping to get ahead, by helping them to stay focused and alert for longer periods of time. But is this cheating? Should their use as cognitive enhancers be approved by the FDA, the medical community, and society at large? Do the benefits outweigh the risks? Neuro Optimizer is Jarrow Formula's offering on the nootropic industry, taking a more creative approach by differentiating themselves as not only a nootropic that enhances cognitive abilities, but also by making sure the world knows that they have created a brain metabolizer. It stands out from all the other nootropics out there in this respect, as well as the fact that they've created an all-encompassing brain capsule. What do they really mean by brain metabolizer, though? It means that their capsule is able to supply nutrition… Learn More... Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage. There is no official data on their usage, but nootropics as well as other smart drugs appear popular in the Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, a LA based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says. Scientists found that the drug can disrupt the way memories are stored. This ability could be invaluable in treating trauma victims to prevent associated stress disorders. The research has also triggered suggestions that licensing these memory-blocking drugs may lead to healthy people using them to erase memories of awkward conversations, embarrassing blunders and any feelings for that devious ex-girlfriend. Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item,buyer-beware!!! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away. In general, I feel a little bit less alert, but still close to normal. 
By 6PM, I have a mild headache, but I try out 30 rounds of gbrainy (haven't played it in months) and am surprised to find that I reach an all-time high; no idea whether this is due to DNB or not, since Gbrainy is very heavily crystallized (half the challenge disappears as you learn how the problems work), but it does indicate I'm not deluding myself about mental ability. (To give a figure: my last score well before I did any DNB was 64, and I was doing well that day; on modafinil, I had a 77.) I figure the headache might be food related, eat, and by 7:30 the headache is pretty much gone and I'm fine up to midnight. A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea. Speaking of addictive substances, some people might have considered cocaine a nootropic (think: the finance industry in Wall Street in the 1980s). The incredible damage this drug can do is clear, but the plant from which it comes has been used to make people feel more energetic and less hungry, and to counteract altitude sickness in Andean South American cultures for 5,000 years, according to an opinion piece that Bolivia's president, Evo Morales Ayma, wrote for the New York Times. It is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration and motivation-boosting properties. Even if you eat foods that contain these nutrients, Hogan says their beneficial effects are in many ways cumulative—meaning the brain perks don't emerge unless you've been eating them for long periods of time. Swallowing more of these brain-enhancing compounds at or after middle-age "may be beyond the critical period" when they're able to confer cognitive enhancements, he says. One might suggest just going to the gym or doing other activities which may increase endogenous testosterone secretion. This would be unsatisfying to me as it introduces confounds: the exercise may be doing all the work in any observed effect, and certainly can't be blinded. And blinding is especially important because the 2011 review discusses how some studies report that the famed influence of testosterone on aggression (eg. Wedrifid's anecdote above) is a placebo effect caused by the folk wisdom that testosterone causes aggression & rage! Up to 20% of Ivy League college students have already tried "smart drugs," so we can expect these pills to feature prominently in organizations (if they don't already). After all, the pressure to perform is unlikely to disappear the moment students graduate. And senior employees with demanding jobs might find these drugs even more useful than a 19-year-old college kid does. Indeed, a 2012 Royal Society report emphasized that these "enhancements," along with other technologies for self-enhancement, are likely to have far-reaching implications for the business world. So what's the catch? Well, it's potentially addictive for one. 
Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in. Many people quickly become overwhelmed by the volume of information and number of products on the market. Because each website claims its product is the best and most effective, it is easy to feel confused and unable to decide. Smart Pill Guide is a resource for reliable information and independent reviews of various supplements for brain enhancement. Take at 11 AM; distractions ensue and the Christmas tree-cutting also takes up much of the day. By 7 PM, I am exhausted and in a bad mood. While I don't expect day-time modafinil to buoy me up, I do expect it to at least buffer me against being tired, and so I conclude placebo this time, and with more confidence than yesterday (65%). I check before bed, and it was placebo. Accordingly, we searched the literature for studies in which MPH or d-AMP was administered orally to nonelderly adults in a placebo-controlled design. Some of the studies compared the effects of multiple drugs, in which case we report only the results of stimulant–placebo comparisons; some of the studies compared the effects of stimulants on a patient group and on normal control subjects, in which case we report only the results for control subjects. The studies varied in many other ways, including the types of tasks used, the specific drug used, the way in which dosage was determined (fixed dose or weight-dependent dose), sample size, and subject characteristics (e.g., age, college sample or not, gender). Our approach to the classic splitting versus lumping dilemma has been to take a moderate lumping approach. We group studies according to the general type of cognitive process studied and, within that grouping, the type of task. The drug and dose are reported, as well as sample characteristics, but in the absence of pronounced effects of these factors, we do not attempt to make generalizations about them. In sum, the evidence concerning stimulant effects of working memory is mixed, with some findings of enhancement and some null results, although no findings of overall performance impairment. A few studies showed greater enhancement for less able participants, including two studies reporting overall null results. When significant effects have been found, their sizes vary from small to large, as shown in Table 4. Taken together, these results suggest that stimulants probably do enhance working memory, at least for some individuals in some task contexts, although the effects are not so large or reliable as to be observable in all or even most working memory studies. One possibility is that when an individual takes a drug like noopept, they experience greater alertness and mental clarity. So, while the objective ability to see may not actually improve, the ability to process visual stimuli increases, resulting in the perception of improved vision. This allows individuals to process visual cues more quickly, take in scenes more easily, and allows for the increased perception of smaller details.
I'm a university student of theoretical physics and mathematics, whilst my own studies focus on gravitational physics (e.g. string theory), differential geometry, and gauge field theories.
Recommended Resources:
Quantum Field Theory: http://www.damtp.cam.ac.uk/user/tong/qft.html; Introduction to Quantum Field Theory by Peskin and Schroeder
String Theory: http://www.damtp.cam.ac.uk/user/tong/string.html; String Theory and M-Theory by Becker, Becker and Schwarz; Superstring Theory by Witten; String Theory, course by Prof. Freddy Cachazo, available at: http://perimeterscholars.org/
Solitons, Topology: Classical Solutions in Quantum Field Theory by Weinberg; http://www.damtp.cam.ac.uk/user/tong/tasi.html
Advanced General Relativity: Gravitational Physics, course by Prof. Ruth Gregory, video lectures available at: http://perimeterscholars.org/
Advanced Differential Geometry: A Brief Introduction to Characteristic Classes from the Differential Viewpoint (free notes) by Yang Zhang, Cornell University
Calabi-Yau Manifolds: A Bestiary for Physicists by T. Hubsch; an introduction to Calabi-Yau manifolds meant to be accessible to physicists, but it requires a substantial background in algebraic and differential geometry.
Mannheim's Brane-Localized Gravity; discusses branes and fields in the context of general relativity, with explicit computations present throughout, in great detail.
All volumes of A Comprehensive Introduction to Differential Geometry by Spivak; thorough, excellently written, and full of insight.
Profile image: an artist's depiction of a two-dimensional slice of a Calabi-Yau manifold, which by definition has vanishing first Chern class or, equivalently, trivial canonical bundle.
Nullius in verba
How to destroy a star system?
The admiral of a space fleet that belongs to a Type 3 civilization has been given orders to obliterate all planets within the habitable zone of a Sun-like star. Some of the planets have been identified as nests for the larval stage of a swarm of the star-eating Space Locust. Death Star type destruction of the planets won't kill the larvae.
The weapon of choice: seed missile
The technology:
The "seed" of a seed missile is a microscopic black hole. The initial size is kept small for safety reasons. The black hole will evaporate via Hawking Radiation if the containment field is compromised; no secondary explosions possible. Because a black hole of the initial size is useless as a weapon, the missile has to charge the weapon while in flight. To charge the black hole, the missile taps into Dark Matter and grows the black hole like a crystal. The missile can cause the essence of the black hole to precipitate into electrons, protons, neutrons, and elements. Example: uranium or a small neutron star. This is done with handwavy physics.
What does the missile need to do to cause the star to obliterate the planets? Would this be a nova or a supernova? How far out could the star obliterate stellar objects? The asteroid belt? The Kuiper belt?
I'm editing this because I feel that I had described the seed missile poorly. Instead of starting with a description of how it works, let me first describe its effect.
The missile is a white hole weapon that can generate a maximum amount of mass - say 0.01 solar mass. This is a rewording of point 5 of the original description along with a new description of the upper limit of its capabilities.
This is done by causing a black hole to operate like a white hole. The black hole doesn't have all the mass, the Dark Matter does - point 4 of the original description.
The black hole is just there as an intermediate step for the Dark Matter -> Normal Matter conversion - clarification of points 2, 3, and 4 of the original description.
The size of the black hole determines the rate of the conversion. - new information
At launch, the size of the black hole is microscopic - ~27 micrograms. - point 1 of the original description
This allows the missile to maneuver normally at first and become a door stop if compromised. - point 2 of the original description
If this was a chemical reaction, it would be written similar to this: dark matter -> black hole -> white hole -> normal matter
Example usage: place a stealth variant of the seed missile in an orbit almost parallel to a dyson-ring star defense system and crank it up to 0.011 solar mass.
Other usage: throw the seed missile into a star, have it create ?????? type normal matter, and watch the star "go BOOM". This question is about filling in that unknown.
Additional information on the situation:
It is estimated that 0.000001% of the star systems in the galaxy are infected. Each of those star systems needs to be 99.44% cleansed within a 1 year time frame. That's 2500 star systems in 365 days. No, you can't split the fleet.
stars weapon-mass-destruction
Michael Kutz
If "Death Star type destruction" is not bad enough to kill the larvae, then there's no guarantee that collapsing the system's star, even in a supernova-type explosion, would do the job. Your "seed missiles" would have to strike every planet individually.
– Alexander Jan 10 '18 at 22:45
A type III civilization should be able to make Kugelblitz black holes, and it's hard to see anything not being effectively destroyed by using a black hole. And handwavy physics does not mean you should invent something like "the essence of a black hole" - there just isn't such a thing and it sounds ridiculous. – StephenG Jan 10 '18 at 22:55
"the missile taps into Dark Matter and grows the black hole like a crystal." — after a handwave this big, pretty much any outcome can go. – Mołot Jan 10 '18 at 23:06
@Mołot Very true. That step is literally black magic, and sidesteps all sorts of issues, like what happens when the black hole starts spaghettifying your missile around it! – Cort Ammon Jan 10 '18 at 23:38
The matter density within the sun is such that your black hole would probably just be absorbed and do almost nothing. Even if it forms an event horizon, the sun is still way more massive and will simply suck back in any ejected matter. I read somewhere that the bigger mass wins, and there is no practical way you will be able to form a relevantly massive black hole with your black hole missile. – cybernard Jan 11 '18 at 3:42
As others said, if the larvae can survive the destruction of a planet by a Death Star-type laser, blowing the star up won't help, and may even make things worse. Fortunately, you are from a Type III civilisation, so you have better tools than crude star-blowers at your disposal.
Star flamethrower
If your resources at hand are limited, a cheap option is to use some local stellar engineering: dismantle a few planets to build a magnetic field controller around the star, and use it to push the plasma away, make a big hole and unveil the core of the star. While the surface of the star is at a frigid few thousand K, the core of the star is at millions of K, and it is under gargantuan pressures. This is where the actual fusion happens, but energy moves outwards through the layers of the star ever so slowly, as it is, at this scale, incredibly opaque and insulating. Once unveiled, with the sudden absence of pressure, it will violently expand. The effect will be akin to a solar flare, except much, much bigger. Or, if you prefer, a star flamethrower.
This will roast any planet you are pointing it at, but it won't be enough to kill the larvae, or even blow the planet up, in fact. But you can use one device per planet and keep them firing for a long, long time. At some point, the star will start dimming as the now punctured core cannot sustain enough pressure to keep the same fusion rate. Decreasing the rate or pausing for some time should fix it, though. The star will also end up losing mass, which may also be mitigated by dropping interstellar hydrogen or even recycling escaped hydrogen back into it, but the loss rate will still probably be too big to fully balance.
The idea is that maybe the larvae can survive an instant, violent event but not a continuous burning for a long time. The planet will slowly evaporate (be careful not to let spores escape in its tail), though it would probably take too long to evaporate them completely that way. Even if it doesn't kill the spores, it should still work as a short-term, stopgap option, giving you a few thousand or million years to work on actual solutions. Be careful though, this de-orbits the planets by pushing them away.
Again, you can impact them with other celestial bodies, or more elegantly use those bodies as gravity tractors to pull them back in closer orbit. In any case, it is recommended to start building a Dyson Sphere around the non-holed parts of the star for powering local facilities. Nicoll-Dyson Beams As was mentioned already, Nicoll-Dyson Beams are useful tools in this case. As a Type III civ, you should have a few of those in range, or be able to build them if it happens to be an undeveloped sector, or they are all busy with more important projects. You can even use the aforementioned local Dyson sphere you are probably already building around the star. Now, you could use those to cook the target planets, similarly to the flamethrower option but with more power. You should be able to end up evaporating the planet given enough time, though again be careful with flying spores. Another option is to use them to move the target planets, either with direct localised evaporation and radiation pressure (as in a rocket or a laser sail), or by moving other celestial bodies and use them as gravity tractors. That way, you can drop the planets right in the star's core, and slow their fall down enough that they stay there. Again, make sure no spore escape into the star's atmosphere, but after long enough, it should be entirely sterilised. The advantage here is fast response time, and once it is in the star core, light surveillance should be enough to make sure nothing escaped, freeing your time and resources for other projects. Dead star billiard If you fear that dropping it into the star won't be effective enough, I recommend sending a star remnant to the system and hitting the planets with it. While it may be tempting to hit them as fast as possible, and this should normally be enough, those larvae are good at surviving brief, violent events and may escape with the debris. In additions, relativistic debris spraying around are messy to clean up, and local sector population may complain, with good reason. A better solution is to slow the remnant down as it arrives, and hit the planets with it in such a way that all will fall and stay into the star core. This can be seen as an upgrade of the previous option in that regard. The advantage is that the star mass will, to some extent, help keeping debris from flying around. Be especially careful, though, as even a slow, controlled approach will have the planets breaking apart due to tidal forces, and the larvae may use the occasion to try and escape with great velocity by using varied tricks with the debris and the immense gravity of the remnant. Note that if your star is not massive enough, you may want to feed it local interstellar matter, or merge it with another star. Keeping the planetary system in order can be a bit tricky, but nothing unfeasible. Note however that merging two stars, in particular here when one is a remnant, cause a violent, energetic burst that you will want to plan for. Again, stellar manipulators like those used in the flamethrower option should help. There are basically three types of remnants you can use, depending on what is lying around: White dwarfs are the most abundant. They should be similarly sized as the target planet, but with a thousand times the gravity. This makes it the easiest to use, and the default choice if you are certain the larvae won't survive it. Honestly, I don't think anything like that could, but just in case... Neutron stars are tiny and with an absurdly high magnetic field. 
Magnetars are a type of neutron star with an even more absurdly high magnetic field; be careful with those. The magnetic field can play both for and against you or the larvae, depending on the details. However, once the star and the planet are in contact, the planet should be crushed in short order at the neutron star's surface, releasing lots of energy in the process, including in exotically strong forms. (Again, be careful that the larvae don't use it to aid or conceal their escape.) Honestly, I can't even imagine any spore type being able to survive it, but if you really want to be sure...
Black holes are even tinier than neutron stars, and anything that enters will exit in a long, long time as scrambled Hawking radiation. Otherwise, a black hole should be used pretty much like a neutron star, with less of a magnetic field. Easier or harder to work with, depending on the details.
Big black holes, depending on local availability
Depending on where the star system is, there may be a supermassive black hole in the vicinity. Even an intermediate black hole should work, as long as the event horizon is wide enough. In this case, an efficient solution may be to simply drop the planets in it.
For this, build a Shkadov thruster around the star. The simplest design is a partial Dyson sphere letting light out in the opposite direction of the movement (similar to a photon rocket). Using the flamethrower system may have better thrust, if less range (similar to a nuclear fusion rocket), but you only need to go as far as the target black hole anyway. Once you are there, you simply have to move the target planet on a direct collision course - this time, the faster the better. The bigger the black hole, the smaller the tidal effects, so the planet shouldn't make too much debris when crossing the Roche limit and starting to break apart. As always, of course, be careful of tricks from the larvae at this point, but once it has crossed the event horizon, you should be fine.
Whether to let the star pass the black hole by, put it in orbit, or drop it with the entire system into the black hole just in case it would still be infected is up to you. If you do choose to drop the star and it is an intermediate black hole, though, be careful. The event horizon is probably too small for the entire star, so use your stellar manipulators to siphon it until there is only the core left. The core is more or less a white dwarf, and should be small enough that you can simply drop it.
Eth
From the Death Star link: "It fragments in gobs of varied sizes and shapes, from dust grains to, possibly, some as big as the Moon or Mars." - the size of the chunks is exactly why I don't want to attack the planet directly with a Death Star or kinetic energy weapon. – Michael Kutz Jan 11 '18 at 22:55
I would use a low mass white dwarf. The higher-density bodies are much smaller than the targets, so there's always the risk that some pieces are thrown clear when the planet crashes down on them at relativistic velocity. It's the easiest to handle and the biggest object. (Yes--white dwarfs get smaller as they get heavier!) If the pests can survive that, then drop the white dwarf into orbit about a neutron star. Lower its periapsis until it touches and is gobbled up. – Loren Pechtel Jan 13 '18 at 5:22
Targeting the star is a mistake. It's too inefficient.
Consider Earth. Orbital distance: 149,600,000 km. Radius: 6371 km.
If the sun were to explode, sending energy in all directions evenly, by the time the energy got to the Earth's orbit, it would be spread across $2.8 \times 10^{17} km^2$. The Earth's silhouette is about $1.28\times10^8 km^2$, so the Earth will receive a $\frac{1}{(2.2\times10^9)}$ fraction of the entire explosion. 99.999999954% of the energy of the sun's explosion will go elsewhere, wasted.
The sun has $1.2 \times 10^{44}J$ of energy in it (total lifespan). With those numbers, about $10^{35}J$ of energy will hit the Earth. This is on par with the energy that strikes the Earth in one year. That's all. Compared to a Death Star that's not all that impressive. A minimum bound on the Death Star is $10^{32}J$, so this strike is a mere 1000-fold more powerful than the minimum bound for the Death Star.
Worse occurs if the infestation is on other planets. The power of this strike falls off with the square of the distance from the planet to the sun. Mars's orbit is 1.52 times the distance, so it receives about 43% of what Earth receives. Jupiter will only see about 3.7% of the intensity that Earth does, so now we're starting to talk about striking these planets with just a little more than that minimum Death Star bound. This just won't do.
So what can we do? Well, one option is to just keep hand waving. Invent some new physics which permits access to far more energy than mere nuclear fusion would permit, but which only functions in the heart of a star (probably because it needs the gravitational pull to do its mumbo-jumbo). Make this star-killer weapon 1000x more deadly than the star itself.
Another option would be to handwave the ability to focus the direction of the energy. If we don't waste the majority of the energy on empty space, the sun is a lot scarier. If your weapon could do some handwave trick which permits ejecting energy like a firehose directly at the infested planet, all those annoying knockdown factors can go away, and you can truly focus the intensity of the star onto the job.
The final option is to change the story. It's almost always better to strike the thing that needs striking, rather than striking some nearby innocent star. I'm assuming that's not an easy option for you, but it's always a good idea to remember that these worldbuilding ideas are not set in stone. There's always room to make changes.
Lio Elbammalf
I'm having trouble reconciling the statement that the Sun will provide 1.2e44 J over its lifetime (about 1e10 years) and that about 1e35 J reaches the Earth in one year, with the Earth covering about 1.3e-9 of a sphere centered at the Sun at radius 1AU. It seems to me that to a first order approximation, the Sun's energy output would be spread roughly evenly over the Sun's lifetime, so 1.2e44 J total over its lifetime becomes 1.2e34 J (1e-10) total output per year. Of this, about 1.6e26 J (1.3e-9) would reach the Earth. Your result seems off by nine orders of magnitude. Am I missing something? – user Feb 17 '18 at 21:30
For whatever that's worth, the claims made in the linked Wikipedia article have varying degrees of citations, but it does have a cited claim that "Total energy from the Sun that strikes the face of the Earth each year" is 5.5e24 J, which is pretty close to my crude estimate of 1.6e26 J and quite far from your estimate of around 1e35 J. – user Feb 17 '18 at 21:32
@MichaelKjörling Ahh, I think I see it.
I mixed up "energy that hits the earth in a year" with "energy the sun emits in a year". The latter number is 1.2e34, which is close to the number I was looking at. – Cort Ammon Feb 17 '18 at 21:51
I think the rest of the numbers hold up, just not my claim that it's relatable to the amount of energy that hits the Earth per year? – Cort Ammon Feb 17 '18 at 21:53
I haven't double-checked your other numbers; it was the relationship between the claimed 1.2e44 J total over the Sun's lifetime and 1e35 J received by Earth in one year that seemed to not match. – user Feb 17 '18 at 23:00
There are several reasons why firing your missile into the sun (instead of the individual planets) is a very bad idea.
For one, not all stars will form black holes after they die, but some will. If you consume all the mass of even a small star with a black hole, you now have a black hole that won't dissipate via Hawking radiation for some time; it's a navigational hazard if nothing else. Second, you lose the ability to harness the energy of that star for other purposes and build your own strategic installations in that star system. Third, you may not even get a nova or supernova effect. What we know about such processes is that they are a final release of energy by the star after it has expended its nuclear fuel and the balance between its mass and its energy release is broken in favour of mass. Add a small singularity to the centre of it and the core (which is where the bulk of the nuclear reaction is taking place) may get consumed first, meaning that the energy can't escape the gravity the way it does when the collapse is natural. (This is admittedly scientific speculation.) Finally, if your larvae can survive the cold, the planets are now likely still orbiting a singularity with the same mass as the original star, so apart from having no sun, they are probably unaffected in their orbits, etc., meaning that they'll just keep doing what they were doing.
Far better to aim your missile at the planet. It gets eaten by the small black hole, and that means that you now have a singularity with the mass of a planet orbiting the star, but Hawking radiation should take care of this reasonably quickly by comparison to a star mass (I don't have exact numbers), and in terms of navigation hazard, it only applies to in-system travel. This is (more or less) the principle behind the red-matter detonations used by the Romulan mining crew in the first of the new Star Trek movies, where Vulcan is destroyed. Create a small black hole at the centre of a planet, and the black hole eats the planet but does virtually no other damage. The star is safe because the black hole is already in orbit of the sun, so to speak, and the mass of the resulting black hole will almost perfectly match that of the planet it consumes, meaning that your system is still useable, just less one planet, which in time will evaporate via Hawking radiation.
Tim B II
A micro black hole does not work as a weapon
This is demonstrated by Joe Kissling. For a very small black hole, in the range of a billion kilograms, the radiation pressure from the evaporation of the black hole will actually prevent anything from falling into it. Thus, it will evaporate faster than it can accrete additional material. Now this is a reasonably useful weapon on its own, since it is putting out about a petawatt of power; about 10 times as much energy as a hurricane releases.
But it isn't really going to be sucking anything into it. If you stuck one in the sun, again nothing would happen. It would not accrete matter, and its petawatt power output would be something like 11 orders of magnitude less than what the sun is putting out anyway.
If you want to blow up a star for your story, just blow up a star. Don't justify it with some mumbo-jumbo about black holes.
kingledion
Starting from what you say it can be done... ...you already have a more efficient solution than a nova. If an Alderaan-style annihilation is not enough to get rid of the larvae, a nova will very probably also not be enough; rather it will scatter the critters all over the Universe, which is exactly the opposite of what we need.
Rather, you place a seed missile next to the planet to be killed and "crank it up to 0.011 solar masses". Then you send it in a close orbit around the planet. Or rather the other way round, since 0.011 solar masses is about 3600 times the mass of the Earth. This is also massive enough that Hawking radiation is no longer a concern (black holes above the size of the Moon are stable). Roche's limit does not apply to a black hole, but it does apply to the planet, which will be shattered into an accretion ring around the black hole. The radiation intensity alone should be enough to kill off the larvae. If it isn't, they'll be taken care of by spaghettification. And if even that is not enough, once they're past the event horizon they'll no longer matter. Once locked by a Schwarzschild black hole, neither the planet nor the larvae have realistic possibilities of ever escaping. You get 100% clearance.
"It's not the size that matters..."
Throwing more energy at a problem sometimes simply might not help. What matters are the means to do it, which would go into sci-fi and require some (or a lot) of handwaving to make it palatable to present-day readers. The TV series "Stargate SG-1" explained it away with removing enough mass from a star to offset the gravitational/fusion balance so that the star blows up. Question is, wouldn't you have to remove so much mass that the fusion reaction stops? The TV series "Star Trek: The Next Generation" features a device to "halt a star's fusion reaction". Now imagine it the other way round, to make the star burn up fuel for thousands or millions of years in a fraction of a second.
"But..."
Blowing up the star would not help. Rather, it would make matters even worse. If your 'larvae' are resilient enough to survive their planet being blown into its constituent parts, follow me through the following scenario:
By whatever means, the central star of the system goes boom.
The shockwave of the explosion expands at fractional-c, reaching the planet in the habitable zone any time between ten minutes and an hour.
The neutrino emissions may alert the beings if they're sentient and developed enough. Since they don't seem to be a localized threat, they may very well be - so some could leave in time.
The shockwave would first strip away the atmosphere, then ablate the outer crust, the mantle, the core, and then blow the remains (the far side of the planet) to pieces.
And that could mean that parts of the crust which had been the shadowed side of the planet may still be in pieces large enough for the larvae to survive, if they can survive the rigors of space.
Now that gives me an idea for a spore-like lifeform that actually uses this as a means to spread out through the galaxy.
Develop on a planet, 'do something' to blow up the sun, then ride out the shockwaves to another solar system.
Anonymous
I'm trying to figure out Step 1 - how to make the star go BOOM? The other steps you have mentioned actually verify what I was thinking would happen when the star does go BOOM. – Michael Kutz Jan 11 '18 at 23:01
Two possible ways of 'how' I did list in the first section, sure. Though I was concerned that if 'blowing up the star in order to eradicate the pest' was an integral plot device - as in, they have to be all gone - the blowing-up part may fall a bit short of what is desired. – Anonymous Jan 12 '18 at 12:14
Part of the story is about the futility of trying to fight Mother Nature, even for an extremely advanced civilization. Although they can identify the star systems infected with the larvae, they can't identify the star systems infected with the eggs. – Michael Kutz Jan 13 '18 at 14:40
A K2 level civilization should be able to harness the power of their home star (or any star they control) and create a Nicoll-Dyson Beam. A very interesting video is here as well.
With enough energy, anything is possible
Essentially, you will be sending so much energy into the system that planets will ablate away (which should solve your pest control problem), and the extra energy impacting the star will upset the rate of the fusion reaction, triggering giant flares as a minimum, and possibly evaporating the outer surface of the star as well. Mad scientist Alexander Bolonkin even believes that adding extra energy to the star's outer layers could trigger a runaway fusion reaction, although the science is not... settled.
Outside of massive beam generators, the energy could be harnessed to deliver RKKVs. Even a very small mass moving at relativistic velocity can deliver massive amounts of energy. The Atomic Rockets "Boom Table" suggests you could generate 11 kt of energy on impact with a single gram moving at .75 c, and 29 kt if the same gram was accelerated to .9 c.
Man, that had to hurt
Sending a swarm of pellets moving at .9 c could cover the entire solar system and strike every planetary body, asteroid, comet and icy body that could conceivably hold an enemy object. Follow-up swarms can be sent for good measure. And all that energy suddenly being deposited into the star will have similar effects to a Nicoll-Dyson beam weapon as well.
Thucydides
Plain and simple: tap enough dark matter into your microscopic black hole to turn it into a regular-sized black hole with a typical lifetime comparable to the lifetime of the universe, and rely upon relativistic effects to confine the larvae in the singularity indefinitely.
ZuOverture
Here is an idea: use a micro black hole to trigger an orbital instability in the planetary system. Planetary systems tend to form on the edge of stability, such that relatively small perturbations can destabilize them. What happens is this: small kicks change the planets' orbits and make them cross. In systems with just rocky planets, this would lead to giant collisions between the planets. Bad for any life on those planets. In systems with gas giant planets, something like this animation can happen: https://youtu.be/gT2_3NcL8UM
It's from a real N-body simulation I published a few years back; the inner colored bodies that are in the process of forming terrestrial planets all end up falling onto the star. The same would happen to any already-formed planets.
The basic idea is to use the mass in a black hole to trip up a planetary system to make it gravitationally self-destruct. The upside: this is an efficient use of mass. The downside: it can take time (like, thousands to millions of years) for the planets to be destroyed. Although if you planned it really precisely I imagine this could be sped up.
Sean Raymond
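For readers who want to re-check the inverse-square estimate in the "Targeting the star is a mistake" answer above, here is a minimal sketch of the same arithmetic in Python. The $1.2 \times 10^{44}$ J lifetime-energy figure is taken from that answer as-is and is not verified here; the constants are rounded.

```python
# Re-check of the solid-angle arithmetic from the answer above (rounded constants).
import math

R_ORBIT_KM = 1.496e8   # Earth's orbital radius, ~149,600,000 km
R_EARTH_KM = 6371.0    # Earth's radius
E_SUN_J = 1.2e44       # lifetime energy figure quoted in the answer (unverified)

sphere_area = 4 * math.pi * R_ORBIT_KM ** 2     # ~2.8e17 km^2 at 1 AU
silhouette = math.pi * R_EARTH_KM ** 2          # ~1.28e8 km^2
fraction = silhouette / sphere_area             # ~4.5e-10, i.e. ~1/(2.2e9)
energy_on_earth = fraction * E_SUN_J            # ~5e34 J

print(f"fraction of explosion intercepted: 1/{1 / fraction:.2e}")
print(f"energy deposited on Earth: {energy_on_earth:.1e} J")

# Falloff with distance: Mars at 1.52 AU receives 1/1.52^2 ~ 43% of Earth's share.
print(f"Mars relative intensity: {1 / 1.52 ** 2:.2f}")
```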
Allele frequency deviation (AFD) as a new prognostic model to predict overall survival in lung adenocarcinoma (LUAD)
Aisha Al-Dherasi1,2, Yuwei Liao3 na1, Sultan Al-Mosaib4, Rulin Hua1, Yichen Wang1, Ying Yu5, Yu Zhang1, Xuehong Zhang1, Raeda Jalayta1, Haithm Mousa6, Abdullah Al-Danakh7, Fawze Alnadari8, Marwan Almoiliqy9, Salem Baldi6, Leming Shi5, Dekang Lv1, Zhiguang Li1 & Quentin Liu (ORCID: orcid.org/0000-0002-0999-9805)1
Lung adenocarcinoma (LUAD) remains one of the world's best-known aggressive malignancies, with a high mortality rate. Molecular and bioinformatic analyses have become central to studies that aim to identify biomarkers for predicting survival in LUAD patients. In our study, we attempted to identify a new prognostic model by developing a new algorithm to calculate the allele frequency deviation (AFD), which in turn may assist in the early diagnosis and prediction of clinical outcomes in LUAD.
First, a new algorithm was developed to calculate AFD using the whole-exome sequencing (WES) dataset. Then, AFD was measured for 102 patients, and the predictive power of AFD was assessed using Kaplan–Meier analysis, receiver operating characteristic (ROC) curves, and the area under the curve (AUC). Finally, multivariable Cox regression analyses were conducted to evaluate whether AFD is an independent prognostic tool.
The Kaplan–Meier analysis showed that AFD effectively segregated patients with LUAD into high-AFD-value and low-AFD-value risk groups (hazard ratio HR = 1.125, 95% confidence interval CI 1.001–1.26, p = 0.04) in the training group. Moreover, the overall survival (OS) of patients in the high-AFD-value group was significantly shorter than that of patients in the low-AFD-value group, with risks of death of 42.8% and 10% in the two groups, respectively (HR for death = 1.10; 95% CI 1.01–1.2, p = 0.03) in the training group. Similar results were obtained in the validation group (HR = 4.62, 95% CI 1.22–17.4, p = 0.02), with risks of death of 41.6% and 5.5% for patients in the high- and low-AFD-value groups, respectively. Univariate and multivariable Cox regression analyses demonstrated that AFD is an independent prognostic model for patients with LUAD. The AUCs for 5-year survival were 0.712 and 0.86 in the training and validation groups, respectively. AFD was identified as a new independent prognostic model that could provide a prognostic tool for physicians and contribute to treatment decisions.
Lung cancer is a leading cause of cancer incidence and death in China and worldwide [1, 2]. Non-small cell lung cancer (NSCLC) accounts for nearly 80% of lung cancer, and it is histopathologically classified into two main subtypes: lung squamous cell carcinoma (LUSC) and lung adenocarcinoma (LUAD) [3], where the latter is the most common type, with a survival rate of approximately 15% within 5 years [4, 5]. These histological subtypes play a major role in determining therapeutic options. Although patients with NSCLC receive different treatments, from early-stage surgical treatment to other potentially curative treatments at different stages, the prognosis of patients with NSCLC in the early stages remains poor, with a relapse rate of approximately 40% within 5 years [6] and a survival rate of 50–60% [7, 8].
This indicates that some individual patients in the early stages of the disease are nevertheless at high risk. Therefore, patients need to be diagnosed early, and reliable prognostic biomarkers or factors that identify high-risk individuals are urgently needed and considerably important for NSCLC. A range of recent studies, with varied results, have sought to identify prognostic factors and/or prognostic biomarkers for the diagnosis of patients with lung adenocarcinoma (LUAD). These biomarkers may include one of the following types: (1) biomarkers, such as single nucleotide polymorphism (SNP) haplotypes, associated with the risk of developing toxicity from certain medications; (2) biomarkers, such as certain proteins found on or secreted by the tumor, that indicate recurrence of the disease after surgical removal; (3) the presence of genetic mutations targeted by therapy, or the level of gene expression, both of which can act as biomarkers; (4) finally, the number of circulating cancer cells or the metabolic activity of the tumor, which may be other vital indicators. Many studies have demonstrated tumor mutation burden (TMB) as a biomarker for patients with LUAD [9]. For example, Rizvi et al. [10] demonstrated that high TMB levels were correlated with improved ORR and prolonged PFS in a retrospective analysis of patients with NSCLC. Talvitie et al. [11], in their study of lung adenocarcinoma patients, showed that TMB is an independent biomarker for predicting survival, as patients with TMB greater than or equal to 14 mutations/Mb had longer survival than patients with TMB less than 14 mutations/Mb. In another study, Jiao et al. [12] reported TMB as a negative biomarker for predicting survival in LUAD patients, with low TMB in the group of patients with EGFR mutations. In addition, the change in mean variant allele frequencies (dVAF) has been identified as a predictor of clinical outcomes in NSCLC and UC [13]. Allele frequency deviation (AFD) refers to the degree of deviation between the single nucleotide variant (SNV) allele frequencies of tumor samples and those of matched control samples; it can reflect the disease status of patients. A previous study of AFD in patients with cervical cancer showed that AFD was positively correlated with therapy response and helped in estimating progression-free survival [14]. Building on these studies of prognostic biomarkers, and particularly on the AFD-related study [14], the current study examined the relationship between AFD and overall survival in patients with LUAD by developing a new algorithm to measure AFD and then evaluating its predictive performance as an independent prognostic model for early-stage LUAD. To our knowledge, this is the first study to report a direct association between AFD and patient survival, which may help in the early detection of LUAD and in making effective clinical decisions about individualized treatment.
The raw whole-exome sequencing (WES) data, with clinical information related to patients with lung adenocarcinoma, were obtained from Fudan University. The total number of patients after excluding those with insufficient clinical information was 102.
They were randomly divided into two groups: a training group, which included 54 patients, and a validation group, which included 48 patients. The basic clinical characteristics included in the analysis are as follows: history of smoking, pT stage, age, sex, and tumor size. The details are provided in Table 1. The analysis was carried out on data collected by Fudan University that had previously been used in another study [15], which was conducted according to ethical standards (Fudan University Shanghai Cancer Center Institutional Review Board No. 090977-1). Informed consent was obtained from patients or their relatives when samples were donated to the tissue bank of Fudan University Shanghai Cancer Center [15]. The data analyzed in our study can be accessed and obtained from the European Genome-phenome Archive (EGA) using the following accession code: EGAS00001004006.
Table 1 Baseline Characteristics at Diagnosis
Alignment and quality control
In-house pipelines were used to process the 102 WES datasets. Tumor and normal sample quality data were evaluated using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), including sequence length distribution, GC content, per-base quality, sequence duplication levels, k-mer content, and over-represented sequences [14]. Sequencing reads were aligned to the human reference genome (hg38) using the Burrows-Wheeler Aligner (BWA) software package with default parameters [16]. Reads that mapped to multiple genomic positions were removed. Then, the mapping quality was assessed using the SAMtools flagstat utility [17]. All genomic sites for somatic variants were called using VarScan2 [18] software with a base quality higher than 30 and supporting reads ≥ 200 (Fig. 1).
Fig. 1: Whole-exome sequencing analysis flowchart
Calling of SNVs from WES
After all reads were mapped to the human reference genome (hg38) using BWA [16], Picard 1.67 was used to mark duplicate reads, and reads were realigned around known indels. Base quality recalibration was performed using GATK version 3.7 [19]. Somatic mutations were called using Mutect2 after ensuring that the following criteria had been met: first, the difference in mutant allele fraction (MAF) between the tumor and normal sample from the same patient was more than one percentage point; second, in both tumor and normal samples, the sequencing coverage was more than 200; third, the number of reads supporting the alternative allele in the tumor sample was more than 10; and fourth, the corrected p-value was less than 0.05. SNVs were annotated against multiple databases using ANNOVAR [20] and further filtered by population frequency in ExAC, 1000 Genomes, and dbSNP138.
Allele frequency deviation (AFD)
Variant allele frequencies (VAFs) of exome sites for the 102 samples were called using VarScan2 [18] software with a base quality higher than 30 and a read depth ≥ 200; the white blood cell (WBC) sample was used as a control to correct for possible sequencing errors and germline variants during calculation of the VAF (Fig. 1). Variant allele frequencies were then used to calculate AFD for each patient. As displayed in Fig. 2, a scatter plot was first created for all detected genomic sites of a patient, with the Y axis representing the VAF of the tumor sample and the X axis representing the VAF of the paired normal sample. Second, a diagonal line, on which the points have the same VAF in both samples, was created.
The distance from each point to this diagonal line was calculated and defined as $d_i$ for the $i$-th point. Third, the X and Y coordinates were rotated by −45°; thus, $d_i$ equals the absolute value of the Y coordinate of point $i$ and can be calculated using Eq. (1):

$$d_i = \left| y_i' \right| = \left| x_i \sin\left( -\frac{\pi}{4} \right) + y_i \cos\left( -\frac{\pi}{4} \right) \right|$$

where $y_i'$ is the rotated Y-axis value of point $i$, and $x_i$, $y_i$ are its original X and Y values. Finally, the AFD of a patient was calculated as in Eq. (2):

$$\mathrm{AFD} = \frac{\sum_{i=1}^{n} d_i}{n}$$

where $d_i$ is the distance of point $i$ from the diagonal line and $n$ is the total number of points.

Fig. 2 Calculation of allele frequency deviation. A The distribution of variant allele frequency (VAF) at each site in normal cells is expected to lie around wild type (0%), heterozygous (50%) and homozygous (100%). B Diagonal line on which points have the same VAF in both tumor and normal samples. C, D Rotation of the X and Y coordinates by −45°

Tumor mutation burden (TMB)

In short, the tumor mutation burden (TMB) is defined as the total number of somatic (nonsynonymous) mutations, including small insertions and deletions (INDELs) and single nucleotide variants (SNVs), per megabase [21, 22]. The gold-standard method of measuring TMB is WES, which can detect somatic mutations across the entire exome and thus give a comprehensive view of all mutations that may contribute to tumor progression, at a lower cost than WGS [23]. The quantile method based on TMB measurements was used to determine appropriate cutoff values [24].

The Spearman correlation test was used to assess the correlation between factors such as AFD and TMB. Kaplan–Meier (K-M) analysis was used to evaluate differences in survival time between the high- and low-AFD groups of patients with LUAD. P values and hazard ratios (HR, with 95% confidence intervals [CI]) were determined via the log-rank test and univariate Cox regression analysis to detect significant differences between groups. Multivariable Cox regression analysis was performed to evaluate the independence of AFD. ROC curves were used to estimate the performance of AFD by comparing AUC values. Statistical significance was defined as P ≤ 0.05. All statistical analyses were performed in R version 3.5.1.

Patient characteristics

The main histological subtype in this study was lung adenocarcinoma (LUAD). Patient age ranged from 37 to 84 years (median 61.5 years). Fifty-three patients (52%) were female and 49 (48%) were male, and their performance status was zero or one; 70% of the patients had never smoked, while 30% were former or current smokers. Forty patients (39.2%) had stage T1a, twenty-seven (26.4%) stage T1b, thirty-three (32.3%) stage T2a, one patient (0.98%) stage T2b, and one patient (0.98%) stage T4 (Table 1) (Additional file 1: Table S1). No patient had received neoadjuvant treatment.

Relationship between AFD and TMB

To determine whether AFD and TMB are related, we performed a Spearman correlation test. Figure 3A shows the correlation between AFD and TMB in patients with LUAD. The p-value of the test was greater than the significance level of 0.05.
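As a concrete illustration of the AFD measure defined in Eqs. (1)–(2) and of the correlation test reported here, a minimal Python sketch is given below. This is not the authors' code (their analyses were run in R); the paired tumor/normal VAF arrays and the per-patient AFD/TMB values are synthetic stand-ins for the quantities produced by the variant-calling pipeline described above, and all names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def afd(vaf_normal, vaf_tumor):
    """Allele frequency deviation, Eqs. (1)-(2): mean distance of the
    (normal VAF, tumor VAF) points from the diagonal y = x after a -45 degree rotation."""
    x = np.asarray(vaf_normal, dtype=float)
    y = np.asarray(vaf_tumor, dtype=float)
    d = np.abs(x * np.sin(-np.pi / 4) + y * np.cos(-np.pi / 4))  # Eq. (1)
    return d.mean()                                              # Eq. (2)

rng = np.random.default_rng(0)
# Synthetic stand-ins: in the study these values come from the per-patient VarScan2 VAF calls.
afd_values = np.array([afd(rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)) for _ in range(54)])
tmb_values = rng.uniform(2.5, 33.0, size=54)   # mutations/Mb, roughly the reported range

rho, p = spearmanr(afd_values, tmb_values)     # Spearman test between AFD and TMB
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```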
In the training group, therefore, AFD and TMB were not significantly associated (correlation coefficient 0.16, p = 0.26). In the validation group, the result likewise showed no correlation between AFD and TMB (correlation coefficient −0.077, p = 0.6; Fig. 3B).

Fig. 3 Spearman correlation between AFD and TMB. The association between AFD and TMB in patients with LUAD in the training group (A) and validation group (B)

Allele frequency deviation shows strong power to predict patient outcomes

A time-dependent ROC curve was used to evaluate the sensitivity and specificity of AFD and TMB for OS prediction in the training and validation groups. In the training group, AFD and TMB achieved almost the same AUC values, 0.713 and 0.721, respectively (Fig. 4C, D), while in the validation group AFD achieved an AUC of 0.86 and TMB achieved 0.65 (Fig. 5C, D). These results indicate that AFD has good and efficient prognostic performance for predicting the survival of patients with LUAD, as reflected by the AUC values.

Fig. 4 Performance of AFD and TMB in the training group. A, B Kaplan–Meier survival curve analysis of AFD and TMB, respectively. C, D Receiver operating characteristic (ROC) curves for the 5-year survival of patients with LUAD for AFD and TMB, respectively

Fig. 5 Performance of AFD and TMB in the validation group. A, B Kaplan–Meier survival curve analysis of AFD and TMB, respectively. C, D Receiver operating characteristic (ROC) curves for the 5-year survival of patients with LUAD for AFD and TMB, respectively

Overall survival

Because TMB and AFD are continuous variables for which no uniform cutoff points have been established, we assumed that the risk of death rises with increasing AFD values. To select a group of patients with high AFD values as a high-risk group and separate them from the low-AFD (low-risk) group, we used the quantile method to obtain a cutoff point based on the AFD values. In the training set, the mean AFD was 13.74 (0.15–33.18), while the mean TMB was 19.81 (2.5–32.97). The cutoff points at the 75% quantile were 17.93 for AFD and 22.028 mutations/Mb for TMB in the training set, and 16.7 for AFD and 23.2 mutations/Mb for TMB in the validation set, dividing the patients into high- and low-value groups. The Kaplan–Meier estimate of OS at 31 months was 89.7% (95% confidence interval [CI] 80.6–99.8) in the low-AFD group and 64.3% (95% CI 43.5–95) in the high-AFD group (Table 2). In the high-AFD group, survival gradually decreased from 78.6% at 12 months to 52.2% at 35 months. In the training group, the OS of patients in the low-AFD (low-risk) group was significantly longer than that of patients in the high-AFD (high-risk) group, corresponding to a 10% lower and a 42.8% higher risk of death in the two groups, respectively (HR for death = 1.10; 95% CI 1.01–1.2, p = 0.03) (Tables 2 and 4). According to the cutoff point, 14 and 40 patients were included in the survival analysis in the high- and low-AFD groups, respectively.
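The quantile-based dichotomisation, Kaplan–Meier/log-rank comparison and Cox regression used in this and the next section can be outlined as follows. This is only an illustrative Python sketch (the published analysis was performed in R); the `lifelines` package supplies the survival estimators, and the data frame below holds synthetic stand-in values with one row per patient.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def split_by_quantile(values, q=0.75):
    """Dichotomise a continuous marker (AFD or TMB) at the given quantile."""
    cutoff = np.quantile(values, q)
    return values > cutoff, cutoff            # boolean high-risk flag, cutoff value

# One row per patient: AFD value, follow-up time in months, and event flag (1 = death, 0 = censored).
df = pd.DataFrame({
    "afd": np.random.default_rng(1).uniform(0, 34, 54),
    "os_months": np.random.default_rng(2).uniform(1, 60, 54),
    "event": np.random.default_rng(3).integers(0, 2, 54),
})  # synthetic stand-in data

high, cutoff = split_by_quantile(df["afd"].to_numpy())
df["high_afd"] = high.astype(int)

# Kaplan-Meier estimate for one group, and log-rank test between the two risk groups.
kmf = KaplanMeierFitter()
kmf.fit(df.loc[high, "os_months"], df.loc[high, "event"], label="high AFD")
lr = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                  df.loc[high, "event"], df.loc[~high, "event"])
print(f"log-rank p = {lr.p_value:.3f}, cutoff = {cutoff:.2f}")

# Cox regression (univariate here; add age, sex, pT, tumor size, smoking for the multivariable model).
cph = CoxPHFitter()
cph.fit(df[["afd", "os_months", "event"]], duration_col="os_months", event_col="event")
```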
In the validation group, OS was significantly longer in the low-AFD (low-risk) group than in the high-AFD (high-risk) group, corresponding to a 5.5% lower and a 41.6% higher risk of death in the two groups, respectively (HR = 3.1, 95% CI 1.4–6.60, p = 0.003) (Tables 3 and 4). According to the cutoff point, 12 and 36 patients were included in the survival analysis in the high- and low-AFD groups, respectively.

Table 2 Overall survival for AFD and TMB, with Kaplan–Meier estimates, in the training group
Table 3 Overall survival for AFD and TMB, with Kaplan–Meier estimates, in the validation group
Table 4 Univariate and multivariate Cox regression analysis of AFD, TMB and overall survival in patients with LUAD

The one-sided stratified log-rank p-values were 0.0064 (Fig. 4A) and 0.0013 (Fig. 5A) for the training and validation groups, respectively, indicating a significant difference between the two groups regardless of the number of patients in each group. The results also showed that patients with high AFD values were at higher risk of death than patients with low AFD values. The Kaplan–Meier curve for TMB in the training group showed that patients with high TMB had significantly shorter OS than those with low TMB, with a 35.7% higher risk of death (HR = 1.08, 95% CI 0.96–1.2, p = 0.17); OS at 31 months was 62.5% (95% CI 41–95.3) in the high-TMB group and 89.9% (95% CI 80.9–99.8) in the low-TMB group (Tables 2 and 4). There were 40 patients in the high-TMB group and 14 in the low-TMB group. The one-sided stratified log-rank p-value was 0.03, indicating a difference in OS between the two groups (Fig. 4B). In the validation group, the Kaplan–Meier curves showed no significant difference between the two groups (Fig. 5B); there were 36 patients in the high-TMB group and 12 in the low-TMB group.

AFD as an independent prognostic factor

Univariate and multivariable Cox regression analyses were conducted in the training and validation groups to assess the contribution of AFD as an independent prognostic factor for patients with LUAD, using AFD and other clinicopathological factors (sex, smoking, age, pT, and tumor size) as covariates. Univariate regression analysis indicated that AFD (p = 0.03) was significantly associated with patient survival in the training group, whereas sex (p = 0.47), age (p = 0.31), tumor size (p = 0.28), smoking (p = 0.22), pT (p = 0.68) and TMB (p = 0.17) were not (Table 4). In the validation group, AFD (p = 0.003) was the only factor correlated with patient survival; none of the other clinical factors showed an association (Table 4). The corresponding multivariable Cox regression analysis confirmed that AFD was an independent prognostic factor in both the training (HR = 1.125, 95% CI 1.001–1.26, p = 0.04) and validation (HR = 4.62, 95% CI 1.22–17.4, p = 0.02) groups (Table 4). These results indicate that AFD is an independent risk factor that could be used as a prognostic tool to assist in the early diagnosis of patients with LUAD.

Survival time differs with the stage of LUAD, as this type of cancer is heterogeneous. Many clinical variables have been explored for predicting the diagnosis and treatment outcomes of patients with LUAD, but the results are mixed.
The most important patient-related factors are TNM stage, race, age, tumor size, and sex. Other, tumor-related factors also contribute to predicting patient outcomes and treatment, including blood vessel invasion and cell differentiation [25,26,27,28,29]. In the current study, patients with high AFD values were assumed to be at higher risk than those with low AFD values; AFD may therefore act as an indicator of disease progression and of patient survival. To test this, the patients were divided into two groups, one with high AFD values and one with low AFD values, using the quantile method to obtain an appropriate cutoff point in an objective and unbiased manner. With this cutoff value, a significant difference was observed between the high- and low-risk groups. AFD thus had a clear effect in predicting patient survival and identifying patients at high risk. Multivariable Cox regression analysis showed that AFD is an independent prognostic tool capable of predicting survival in patients with LUAD, and ROC analysis showed that AFD has the power to predict overall survival.

Previous studies have shown that TMB is significantly correlated with response to immune checkpoint inhibitors (ICIs), such as anti-PD-L1 and anti-PD-1 agents, and with other biomarkers, including EGFR and TP53 [30,31,32]. In the present study, the relationship between AFD and TMB was evaluated, and the results showed no correlation between the two. Furthermore, the AUCs for predicting patient survival with AFD and TMB were high and almost the same, suggesting that AFD predicts overall survival at least as efficiently as TMB. These results are consistent with the findings of the Kaplan–Meier analysis for patients with LUAD, in which AFD showed high statistical significance: when patients were divided by AFD into high- and low-value risk groups, those with high AFD values had shorter OS than those with low AFD values. In contrast, univariate and multivariable Cox regression analyses indicated that TMB was not an independent prognostic factor for predicting the survival of patients with LUAD, and no significant association was observed between TMB and patient survival. This finding is consistent with previous studies [33, 34], which showed that TMB was significantly related to predicting patients' response to the treatments used. Interestingly, AFD displayed efficiency and predictive ability in both analyses and emerged as an independent prognostic factor.

A number of studies have reported that tumor size is a prognostic factor for predicting patient progression and outcomes [35]. A previous study on AFD demonstrated its effectiveness in predicting the benefit and response of patients with cervical cancer to treatment, and AFD predicted metastases better than tumor size did [14]. In the present study, AFD was shown to be independent of tumor size, and patients with high AFD values had a worse prognosis than patients with low AFD values.
Therefore, AFD can be considered a prognostic factor for predicting the outcome of patients with LUAD, which suggests its clinical application for the early diagnosis of lung adenocarcinoma. AFD is a new model that has not previously been used to predict clinical outcomes in lung adenocarcinoma or any other type of cancer; this study is therefore the first to show that AFD is effective as an independent prognostic model with the predictive power to identify high-risk groups of patients with LUAD. These results may also point to a more fundamental role for AFD in early LUAD detection and accurate survival prediction. This study nevertheless has limitations. First, the number of samples was small; this limitation could be addressed by conducting a study with a larger number of patients. In addition, AFD could be applied to measure the effectiveness of medicines, by measuring the response to treatment of patients receiving particular therapies, and as a prognostic model it can be applied in further cancer research to verify its value in different types of cancer.

In conclusion, we developed a new prognostic analytical model by devising a new algorithm to calculate allele frequency deviation (AFD), which shows effective predictive performance for the survival of LUAD patients. Furthermore, AFD is an independent prognostic tool for predicting survival in patients with LUAD. The results provide evidence that AFD could be used in the early diagnosis of patients with LUAD; it may therefore be possible to use AFD in clinical practice as a new prognostic tool to predict patient outcomes, contribute to follow-up monitoring, and help clinicians make effective decisions regarding the individualized treatment of LUAD patients, thereby improving their survival. Despite these findings, the model needs further investigation and application in other types of cancer.

The raw data used and/or analysed during the current study can be obtained from the European Genome-phenome Archive (EGA) under accession code EGAS00001004006 (https://ega-archive.org/studies/EGAS00001004006). Source data underlying all figures are provided in Additional file 1: Table S1.

AUC: Area Under Curve; bTMB: Blood Tumor Mutation Burden; CI: Confidence Interval; HR: Hazard Ratio; K-M: Kaplan–Meier; LUAD: Lung Adenocarcinoma; LUSC: Lung Squamous Cell Carcinoma; NSCLC: Non-Small Cell Lung Cancer

Zhou C. Lung cancer molecular epidemiology in China: recent trends. Transl Lung Cancer Res. 2014;3(5):270–9. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424. Herbst RS, Morgensztern D, Boshoff C. The biology and management of non-small cell lung cancer. Nature. 2018;553(7689):446–54. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2018. CA Cancer J Clin. 2018;68(1):7–30. Chen W, Zheng R, Baade PD, Zhang S, Zeng H, Bray F, Jemal A, Yu XQ, He J. Cancer statistics in China, 2015. CA Cancer J Clin. 2016;66(2):115–32. Hoffman PC, Mauer AM, Vokes EE. Lung cancer. Lancet. 2000;355(9202):479–85. Chansky K, Sculier JP, Crowley JJ, Giroux D, van Meerbeeck J, Goldstraw P, International Staging Committee and Participating Institutions.
The International Association for the Study of Lung Cancer Staging Project: prognostic factors and pathologic TNM stage in surgically managed non-small cell lung cancer. J Thorac Oncol. 2009;4(7):792–801. Sawabata N, Asamura H, Goya T, Mori M, Nakanishi Y, Eguchi K, Koshiishi Y, Okumura M, Miyaoka E, et al. Japanese lung cancer registry study: first prospective enrollment of a large number of surgical and nonsurgical cases in 2002. J Thorac Oncol. 2010;5(9):1369–75. Wang C, Liang H, Lin C, Li F, Xie G, Qiao S, Shi X, Deng J, Zhao X, Wu K, Zhang X. Molecular subtyping and prognostic assessment based on tumor mutation burden in patients with lung adenocarcinomas. Int J Mol Sci. 2019;20(17):4251. Rizvi NA, Hellmann MD, Snyder A, Kvistborg P, Makarov V, Havel JJ, Lee W, Yuan J, Wong P, et al. Cancer immunology. Mutational landscape determines sensitivity to PD-1 blockade in non-small cell lung cancer. Science. 2015;348(6230):124–8. Talvitie EM, Vilhonen H, Kurki S, Karlsson A, Orte K, Almangush A, Mohamed H, Liljeroos L, Singh Y, et al. High tumor mutation burden predicts favorable outcome among patients with aggressive histological subtypes of lung adenocarcinoma: a population-based single-institution study. Neoplasia. 2020;22(9):333–42. Jiao XD, He X, Qin BD, Liu K, Wu Y, Liu J, Hou T, Zang YS. The prognostic value of tumor mutation burden in EGFR-mutant advanced lung adenocarcinoma, an analysis based on cBioPortal data base. J Thorac Dis. 2019;11(11):4507–15. Raja R, Kuziora M, Brohawn PZ, Higgs BW, Gupta A, Dennis PA, Ranade K. Early reduction in ctDNA predicts survival in patients with lung and bladder cancer treated with durvalumab. Clin Cancer Res. 2018;24(24):6212–22. Tian J, Geng Y, Lv D, Li P, Cordova M, Liao Y, Tian X, Zhang X, Zhang Q, et al. Using plasma cell-free DNA to monitor the chemoradiotherapy course of cervical cancer. Int J Cancer. 2019;145(9):2547–57. Chen H, Carrot-Zhang J, Zhao Y, Hu H, Freeman SS, Yu S, Ha G, Taylor AM, Berger AC, et al. Genomic and immune profiling of pre-invasive lung adenocarcinoma. Nat Commun. 2019;10(1):5472. Li H, Durbin R. Fast and accurate long-read alignment with Burrows-Wheeler transform. Bioinformatics. 2010;26(5):589–95. Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R, 1000 Genome Project Data Processing Subgroup. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009;25(16):2078–9. Koboldt DC, Zhang Q, Larson DE, Shen D, McLellan MD, Lin L, Miller CA, Mardis ER, Ding L, Wilson RK. VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res. 2012;22(3):568–76. McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA. The genome analysis toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20(9):1297–303. Wang K, Li M, Hakonarson H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 2010;38(16):e164. Yarchoan M, Hopkins A, Jaffee EM. Tumor mutational burden and response rate to PD-1 inhibition. N Engl J Med. 2017;377(25):2500–1. Chalmers ZR, Connelly CF, Fabrizio D, Gay L, Ali SM, Ennis R, Schrock A, Campbell B, Shlien A, et al. Analysis of 100,000 human cancer genomes reveals the landscape of tumor mutational burden. Genome Med. 2017;9(1):34. Berland L, Heeke S, Humbert O, Macocco A, Long-Mira E, Lassalle S, Lespinet-Fabre V, Lalvée S, Bordone O, et al. 
Current views on tumor mutational burden in patients with non-small cell lung cancer treated by immune checkpoint inhibitors. J Thorac Dis. 2019;11(Suppl 1):S71–80. Hendriks LE, Rouleau E, Besse B. Clinical utility of tumor mutational burden in patients with non-small cell lung cancer treated with immunotherapy. Transl Lung Cancer Res. 2018;7(6):647–60. Alatorre CI, Carter GC, Chen C, Villarivera C, Zarotsky V, Cantrell RA, Goetz I, Paczkowski R, Buesching D. A comprehensive review of predictive and prognostic composite factors implicated in the heterogeneity of treatment response and outcome across disease areas. Int J Clin Pract. 2011;65(8):831–47. Crinò L, Weder W, van Meerbeeck J, Felip E, ESMO Guidelines Working Group. Early stage and locally advanced (non-metastatic) non-small-cell lung cancer: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2010;21(Suppl 5):103–15. Rami-Porta R, Bolejack V, Crowley J, Ball D, Kim J, Lyons G, Rice T, Suzuki K, Thomas CF Jr, et al. The IASLC lung cancer staging project: proposals for the revisions of the T descriptors in the forthcoming eighth edition of the TNM classification for lung cancer. J Thorac Oncol. 2015;10(7):990–1003. Tas F, Ciftci R, Kilic L, Karabulut S. Age is a prognostic factor affecting survival in lung cancer patients. Oncol Lett. 2013;6(5):1507–13. Radkiewicz C, Dickman PW, Johansson ALV, Wagenius G, Edgren G, Lambe M. Sex and survival in non-small cell lung cancer: a nationwide cohort study. PLoS One. 2019;14(6):e0219206. Samstein RM, Lee CH, Shoushtari AN, Hellmann MD, Shen R, Janjigian YY, Barron DA, Zehir A, Jordan EJ, et al. Tumor mutational load predicts survival after immunotherapy across multiple cancer types. Nat Genet. 2019;51(2):202–6. Li WY, Zhao TT, Xu HM, Wang ZN, Xu YY, Han Y, Song YX, Wu JH, Xu H, Yin SC, Liu XY, Miao ZF. The role of EGFR mutation as a prognostic factor in survival after diagnosis of brain metastasis in non-small cell lung cancer: a systematic review and meta-analysis. BMC Cancer. 2019;19(1):145. Jiao XD, Qin BD, You P, Cai J, Zang YS. The prognostic value of TP53 and its correlation with EGFR mutation in advanced non-small cell lung cancer, an analysis based on cBioPortal data base. Lung Cancer. 2018;123:70–5. Wu HX, Wang ZX, Zhao Q, Chen DL, He MM, Yang LP, Wang YN, Jin Y, Ren C, Luo HY, Wang ZQ, Wang F. Tumor mutational and indel burden: a systematic pan-cancer evaluation as prognostic biomarkers. Ann Transl Med. 2019;7(22):640. Garassino M, Langer CJ. Tumor mutational burden disappoints as biomarker for treatment response in exploratory analyses of nonsquamous NSCLC. Presented at the International Association for the Study of Lung Cancer (IASLC) 2019 World Conference on Lung Cancer (WCLC), Barcelona; 2019. Zhang J, Gold KA, Lin HY, Swisher SG, Xing Y, Lee JJ, Kim ES, William WN Jr. Relationship between tumor size and survival in non-small-cell lung cancer (NSCLC): an analysis of the surveillance, epidemiology, and end results (SEER) registry. J Thorac Oncol. 2015;10(4):682–9.

The authors would like to thank Fudan University for providing the data. This work was supported by National Natural Science Foundation of China (No. 81630005, 81872655, 81602200, 81820108024, 31801100, 82003141, 82002960, 81672784 and 81472637), the Pandeng Scholar Program from the Department of Education of Liaoning Province (to Dr. Zhiguang Li), FONDECYT 1180241, CONICYT-FONDAP 15130011, IMII P09/016-F (GIO) and startup funds from Dalian Medical University (to Dr.
Zhiguang Li), the Natural Science Foundation of Liaoning (No. 2019-BS-081), and the "Seedling cultivation" program for young scientific and technological talents of Liaoning (No. LZ2020044 and No. LZ2019067).

Aisha AL-Dherasi and Yuwei Liao contributed equally to this work.

Center of Genome and Personalized Medicine, Institute of Cancer Stem Cell, Dalian Medical University, Dalian, 116044, Liaoning, People's Republic of China: Aisha Al-Dherasi, Rulin Hua, Yichen Wang, Yu Zhang, Xuehong Zhang, Raeda Jalayta, Dekang Lv, Zhiguang Li & Quentin Liu
Department of Biochemistry, Faculty of Science, Ibb University, Ibb, Yemen: Aisha Al-Dherasi
Yangjiang Key Laboratory of Respiratory Diseases, Yangjiang People's Hospital, Yangjiang, Guangdong, People's Republic of China: Yuwei Liao
Department of Computer Science and Technology, Sahyadri Science College, Kuvempu University, Shimoga district, Karnataka, India: Sultan Al-Mosaib
State Key Laboratory of Genetic Engineering, School of Life Sciences and Human Phenome Institute, Fudan University, 2005 Songhu Road, Shanghai, 200438, People's Republic of China: Ying Yu & Leming Shi
Department of Clinical Biochemistry, College of Laboratory Diagnostic Medicine, Dalian Medical University, Dalian, 116044, Liaoning, People's Republic of China: Haithm Mousa & Salem Baldi
Department of Urology, First Affiliated Hospital of Dalian Medical University, Dalian Medical University, Dalian, 116044, Liaoning, People's Republic of China: Abdullah Al-Danakh
Department of Food Science and Engineering, College of Food Science and Technology, Nanjing Agricultural University, Nanjing, 210095, Jiangsu, People's Republic of China: Fawze Alnadari
Key Lab of Aromatic Plant Resources Exploitation and Utilization in Sichuan Higher Education, Yibin University, Yibin, 644000, Sichuan, China: Marwan Almoiliqy

Rulin Hua, Yichen Wang, Ying Yu, Yu Zhang, Xuehong Zhang, Raeda Jalayta, Haithm Mousa, Salem Baldi, Leming Shi, Dekang Lv, Zhiguang Li, Quentin Liu

AA analyzed the data, interpreted the results and wrote the manuscript; LS and YY generated the data; AA, YWL and YW were responsible for developing the algorithm; YZ and XZ helped with data analysis; RH and SA wrote part of the code in the R language; FA, HM, RJ, ABA, MA and SB contributed to the final revision; DL, ZL and QL guided the research, revised the manuscript and gave final approval of the manuscript. All authors read and approved the final manuscript. Correspondence to Dekang Lv, Zhiguang Li or Quentin Liu.

Ethical approval and consent to participate
The data analysis was carried out on data collected by Fudan University that had been used in a previous study [15] conducted according to ethical standards (Fudan University Shanghai Cancer Center Institutional Review Board No. 090977-1). Informed consent was obtained from patients or their relatives when samples were donated to the tissue bank of Fudan University Shanghai Cancer Center [15]. Hence, only data analysis was carried out for this study, and no sample collection was performed.

Source data of clinical information for patients with lung adenocarcinoma (LUAD). Al-Dherasi, A., Liao, Y., Al-Mosaib, S. et al. Allele frequency deviation (AFD) as a new prognostic model to predict overall survival in lung adenocarcinoma (LUAD). Cancer Cell Int 21, 451 (2021). https://doi.org/10.1186/s12935-021-02127-z Lung Adenocarcinoma (LUAD)
How to implement a Fredkin gate using Toffoli and CNOTs?

Inspired by the question "Toffoli using Fredkin", I tried to do the "inverse" task, i.e. to implement a Fredkin gate (or controlled swap). In the end I implemented it with three Toffoli gates. First, I started with a swap gate without a control qubit, which is implemented with CNOTs as follows: Then I realized that I need a control qubit, or in other words that I have to control each CNOT gate. As a controlled CNOT is a Toffoli (CCNOT) gate, I came to this circuit. The matrix representation of a Toffoli gate controlled by qubits $|q_0\rangle$ and $|q_1\rangle$ is \begin{equation} CCNOT_{01} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{pmatrix} \end{equation} the matrix of a Toffoli gate controlled by qubits $|q_0\rangle$ and $|q_2\rangle$ is \begin{equation} CCNOT_{02} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{pmatrix} \end{equation} and finally, the matrix of the Fredkin gate is \begin{equation} F = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix} \end{equation} Since $F=CCNOT_{01} CCNOT_{02} CCNOT_{01}$, the circuit is designed correctly. Unfortunately, the implementation of a Toffoli gate requires many CNOT gates and single-qubit rotation gates. My question: Is this implementation of the Fredkin gate the most efficient one?

quantum-gate circuit-construction

Martin Vesely

"My question: Is this implementation of Fredkin gate the most efficient one?" -- Most efficient in terms of what? Toffoli gates? Two-qubit gates? Sth. else? Have you e.g. checked out Five two-bit quantum gates are sufficient to implement the quantum Fredkin gate? – Norbert Schuch

@NorbertSchuch: I meant whether it is possible to implement it with fewer gates (CNOTs and rotations) behind the Toffoli gates. – Martin Vesely

I still don't understand. What is your figure of merit? E.g., you can get Fredkin with one Toffoli + 2 CNOTs.

@NorbertSchuch: That was my answer to your question about "in terms of what". I will have a look at the paper you sent me. Thanks.

Yes, and I could not properly understand your question. Are you trying to minimize the number of Toffoli gates, or the number of CNOTs and rotations in addition to a given number of Toffoli gates?

Based on the paper Five Two-Bit Quantum Gates are Sufficient to Implement the Quantum Fredkin Gate, provided by Norbert Schuch, I realized that there is a more efficient implementation in terms of the number of gates.
Here is the result: the matrix of a CNOT acting on $|q_1\rangle$ and controlled by $|q_2\rangle$ is \begin{equation} CNOT_{2}= \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ \end{pmatrix} \end{equation} It can be verified that $(I \otimes CNOT_2)CCNOT(I \otimes CNOT_2)$ is the matrix describing the Fredkin gate.

What is noteworthy is that your question contains all the ingredients for this answer! (Basically, once you understand that two CNOTs are a SWAP + a CNOT, you're there. All the rest is just putting a control on that.) – Norbert Schuch

@NorbertSchuch: I see, thanks. It is enough to control only the middle CNOT. If $|q_0\rangle$ is $|0\rangle$, the left and right CNOTs cancel each other, as the Toffoli is just $I$ in this case. Only when $|q_0\rangle$ is $|1\rangle$ does the middle CNOT act, and all three gates together implement a swap gate. – Martin Vesely

Just note that the design of the Fredkin gate is exercise 4.25 on p. 182 of Nielsen and Chuang.
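For readers who want to check the two decompositions numerically, here is a small NumPy sketch of our own (not part of the original thread). It uses the same basis ordering as the matrices above, $|q_0 q_1 q_2\rangle$ with $q_0$ as the most significant bit, and simply verifies both the three-Toffoli identity from the question and the CNOT-Toffoli-CNOT identity from this answer.

```python
import numpy as np

def perm_gate(n, swaps):
    """Permutation (unitary) matrix on n basis states that swaps the given pairs of basis indices."""
    U = np.eye(n)
    for i, j in swaps:
        U[[i, j]] = U[[j, i]]
    return U

# Basis ordering |q0 q1 q2>, q0 most significant, matching the matrices written above.
CCNOT_01 = perm_gate(8, [(6, 7)])   # flips q2 when q0 = q1 = 1
CCNOT_02 = perm_gate(8, [(5, 7)])   # flips q1 when q0 = q2 = 1
FREDKIN  = perm_gate(8, [(5, 6)])   # swaps q1 and q2 when q0 = 1
CNOT_2   = perm_gate(4, [(1, 3)])   # flips q1 when q2 = 1 (two-qubit space |q1 q2>)

# Question's construction: three Toffoli gates.
assert np.allclose(CCNOT_01 @ CCNOT_02 @ CCNOT_01, FREDKIN)

# Answer's construction: CNOT - Toffoli - CNOT.
C = np.kron(np.eye(2), CNOT_2)      # CNOT_2 acting on q1, q2; identity on q0
assert np.allclose(C @ CCNOT_01 @ C, FREDKIN)

print("Both constructions reproduce the Fredkin gate.")
```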
How can we represent thermal energy and heat diffusion in the Lagrangian? My question has two parts, but let's first introduce the problem: In Lagrangian mechanics, a central part is the Lagrangian $$ \mathcal L\left(t, q,\dot{q}\right) = T\left(t, q,\dot{q}\right) - V\left(t, q,\dot{q}\right), $$ where $t$ is the time, $q$ are the generalized coordinates and $\dot q$ are the corresponding time derivatives, $T$ is the total kinetic energy of the system, and $V$ is the total potential energy of the system. Alternatively, in a field theory, the Lagrangian would be the density $$ \mathcal L\left(t, q(\vec{x}),\dot{q}(\vec{x}), \vec{\nabla}q(\vec{x})\right) = T\left(t, q(\vec{x}),\dot{q}(\vec{x}), \vec{\nabla}q(\vec{x})\right) - V\left(t, q(\vec{x}),\dot{q}(\vec{x}), \vec{\nabla}q(\vec{x})\right), $$ where $q$ are now the fields and $\dot{q}$ and $\vec{\nabla}q$ are the corresponding time and spatial derivatives, respectively, and $T$ and $V$ are now the kinetic and potential energy densities, respectively. The equations of motion of the system are then given by the Euler–Lagrange equations. However, if the system contains a gas, the gas may vary in temperature and will become hotter if it is compressed, which means that some kinetic energy is converted to thermal energy; this should increase the pressure of the gas more than if only the density of the gas were considered, hence changing the behavior of the system. Besides, sound waves traveling in a gas create not only pressure waves but also temperature waves, since an increase in pressure is accompanied by an increase in temperature, so the temperature will vary within the gas. This variation gives rise to thermal diffusion, causing thermal energy to diffuse from the pressure peaks to the pressure troughs, in turn causing the sound waves to weaken over time (the "life expectancy" is proportional to the wavelength squared).

1. How do we take thermal energy into account in the Lagrangian? Should this be counted as just a form of potential energy, since macroscopically it doesn't directly cause anything to move? Or should it be counted as partially potential and partially kinetic energy, since microscopically it both makes the gas particles move faster, which increases the total kinetic energy, and makes particles press against each other harder during collisions, temporarily raising the potential energy while two particles collide?

2. How do we model thermal diffusion in the Lagrangian of a field theory?

Note that I'm still only interested in macroscopic systems, as temperature is an inherently macroscopic property, not to mention that treating each individual fluid particle would become too complex.

thermodynamics lagrangian-formalism thermal-conductivity diffusion

HelloGoodbye

Something about your question reminds me of Nose-Hoover dynamics. If you are unfamiliar, I would suggest this as one route to temperature control within the context of Hamiltonian/Lagrangian systems. – Matt P.

If you want to look at individual atoms and molecules, then thermal energy is just particle motion. Thus, if you have represented your particle motion in your Lagrangian, you have represented thermal motion. In the ordinary case it would be seriously impractical to use such an approach. You are talking about something in the range of $10^{22}$ particles in a gram of air. (More or less depending on the type of atoms.) So you can't really do it that way.
If you wanted to treat the system as a whole and include thermal energy, that is very different. For example, suppose you wanted a Lagrangian representation of thermal energy in a fluid. Then you bring in the equation of state for the fluid. That will relate temperature and pressure, and possibly chemical identity if you wanted to include chemical processes. The equation of state will let you do things like relate pressure, temperature, and density. Depending on what you are doing, it can enter the system in a variety of ways. For example, the equation of state might enter as a constraint. You would add the equation of state to the Lagrangian with a Lagrange multiplier. Then you treat the Lagrange multiplier as a new system variable, the equation of motion of which is the equation of state. You might get some insight into that process by looking at the Dirac method of dealing with constrained systems. Just don't get bogged down, since he's doing it to head towards quantum systems. Diffusion is going to be a gnarly problem in a Lagrangian formulation. I like Lagrangians, but they would not be my first choice for it. Diffusion does not really have anything like canonical coordinates or momentum.

Thanks for the answer, Dan! Indeed, treating each individual particle would be infeasible, and there would no longer be any notion of temperature as the system would become microscopic and temperature is a macroscopic property. Do you think you could give an example of how to use the equation of state to obtain the equations of motion? It is not immediately clear to me how that would work. When it comes to diffusion, I'm mainly considering field theories. It's maybe possible to include heat diffusion in a discrete system as well, but that is not what I had in mind. – HelloGoodbye

You basically write the eqn of state in a form G(x,y,z) = 0. Then you add G to the Lagrangian, multiplied by a parameter L. Here, x, y, and z need to be parameters both in your Lagrangian and such that they can represent the equation of state. That may be the hardest step. Then you treat L as a new system variable and derive its equation of motion just as you would other system coords. It usually winds up having a canonical momentum of zero, so you get a new constraint, which is the G=0 you just entered. Check the Dirac citation.

Do you mean that I should introduce pressure and temperature as extra generalized coordinates into the Lagrangian, since they would appear in the equation of state $G(x,y,z) = 0$? Doing that would lead to invalid Euler–Lagrange equations for those parameters since the Lagrangian doesn't depend on the partial time derivatives of those parameters. – HelloGoodbye

For example, if I have a cylinder with a constant amount of gas (constant mass) in it with pressure $P$ and temperature $T$, and for which the volume $V$ depends on $q$, the ideal gas law looks like $PV(q)=nRT$ (where $n$ and $R$ are constants). So we could write the equation of state in the form $G(q,P,T) = PV(q) - nRT = 0$. Do you mean that I should treat $P$ and $T$ as extra generalized coordinates and modify the Lagrangian as follows: $L \mapsto L + G(q,P,T)$? Or how do you mean I should get equations of motion for $P$ and $T$? – HelloGoodbye

(The detailed approach in my last comment won't work since there is no "inertia" associated with $P$ or $T$, so the Euler–Lagrange equations for these parameters have no solutions.)
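To make the construction described in this answer and its comments concrete, here is a short sketch in our own notation (not from the original thread), using the ideal-gas example from the comments. It only shows the augmented Lagrangian with a multiplier; as the last comment points out, the dynamics of the extra variables still has to come from additional physics (e.g. via Dirac's constrained-dynamics treatment), so this is only the starting point of the procedure.

```latex
% Equation of state written as a constraint, with the ideal gas as the example from the comments:
%   G(q, P, T) = P\,V(q) - nRT = 0
% Augmented Lagrangian, with the multiplier \lambda treated as a new system variable:
\mathcal{L}'\left(t, q, \dot q, P, T, \lambda\right)
    = \mathcal{L}\left(t, q, \dot q\right) + \lambda\, G(q, P, T)
% Varying \lambda reproduces the equation of state as one of the equations of motion:
\frac{\partial \mathcal{L}'}{\partial \lambda} = G(q, P, T) = 0
```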
LOUHICHI Ameur
Position: ANR-ElastoBio Post-Doc. ANR Post-Doc (Supervisor: BANC A.)
Other type: Rheology
Other research theme(s) or affiliation(s): - Soft matter for agronomy and the environment
ameur.louhichi umontpellier.fr
Office: , Floor: 1, Building: 11 - Site: Campus Triolet
Research program / CV:
Project participation: Elastomeric biopolymer gels subjected to extreme deformations (ANR ELASTOBIO, 2018-2022)
Research areas: Physics/Physics/Fluid Dynamics; Physics/Condensed Matter/Soft Matter; Physics/Mechanics/Fluid Mechanics; Chemistry/Theoretical and/or physical chemistry; Life Sciences/Food and Nutrition

Latest scientific publications:

Competition between shear and biaxial extensional viscous dissipation in the expansion dynamics of Newtonian and rheo-thinning liquid sheets
Author(s): Louhichi A., Charles C.-A., Arora S., Bouteiller Laurent, Vlassopoulos Dimitris, Ramos L., Ligoure C.
(Article) Published: Physics Of Fluids, vol. 33 p.10.1063/5.0057316 (2021)
DOI: 10.1063/5.0057316
WoS: 000691864900008
When a drop of fluid hits a small solid target of comparable size, it expands radially until reaching a maximum diameter and subsequently recedes. In this work, we show that the expansion process of liquid sheets is controlled by a combination of shear (on the target) and biaxial extensional (in the air) deformations. We propose an approach toward a rational description of the phenomenon for Newtonian and viscoelastic fluids by evaluating the viscous dissipation due to shear and extensional deformations, yielding a prediction of the maximum expansion factor of the sheet as a function of the relevant viscosity. For Newtonian systems, biaxial extensional and shear viscous dissipation are of the same order of magnitude. On the contrary, for thinning solutions of supramolecular polymers, shear dissipation is negligible compared to biaxial extensional dissipation and the biaxial thinning extensional viscosity is the appropriate quantity to describe the maximum expansion of the sheets. Moreover, we show that the rate-dependent biaxial extensional viscosities deduced from drop impact experiments are in good quantitative agreement with previous experimental data and theoretical predictions for various viscoelastic liquids.

Viscoelasticity and elastocapillarity effects in the impact of drops on a repellent surface
Author(s): Charles C.-A., Louhichi A., Ramos L., Ligoure C.
(Article) Published: Soft Matter, vol. 17 p.5829 (2021)
We investigate freely expanding viscoelastic sheets. The sheets are produced by the impact of drops on a quartz plate covered with a thin layer of liquid nitrogen that suppresses shear viscous dissipation as a result of the cold Leidenfrost effect. The time evolution of the sheet is simultaneously recorded from top and side views using high-speed cameras. The investigated viscoelastic fluids are Maxwell fluids, which are characterized by low elastic moduli, and relaxation times that vary over almost two orders of magnitude, thus giving access to a large spectrum of viscoelastic and elastocapillary effects. For the purposes of comparison, Newtonian fluids, with viscosity varying over three orders of magnitude, are also investigated. In this study, $d_{\mathrm{max}}$, the maximal expansion of the sheets, and $t_{\mathrm{max}}$ the time to reach this maximal expansion from the time at impact, are measured as a function of the impact velocity.
By using a generalized damped harmonic oscillator model, we rationalize the role of capillarity, bulk elasticity and viscous dissipation in the expansion dynamics of all investigated samples. In the model, the spring constant is a combination of the surface tension and the bulk dynamic elastic modulus. The time-varying damping coefficient is associated with biaxial extensional viscous dissipation and is proportional to the dynamic loss modulus. For all samples, we find that the model reproduces accurately the experimental data for $d_{\mathrm{max}}$ and $t_{\mathrm{max}}$.

Impact of the protein composition on the structure and viscoelasticity of polymer-like gluten gels
Author(s): Ramos L., Banc A., Louhichi A., Pincemaille J., Jestin Jacques, Fu Zhendong, Appavou Marie-Sousai, Menut Paul, Morel Marie-Hélène
(Article) Published: Journal Of Physics: Condensed Matter, vol. 33 p.144001 (2021)
DOI: 10.1088/1361-648X/abdf91
We investigate the structure of gluten polymer-like gels in a binary mixture of water/ethanol, $50/50$ v/v, a good solvent for gluten proteins. Gluten comprises two main families of proteins, monomeric gliadins and polymer glutenins. In the semi-dilute regime, scattering experiments highlight two classes of behavior, akin to standard polymer solution and polymer gel, depending on the protein composition. We demonstrate that these two classes are encoded in the structural features of the proteins in very dilute solution, and are correlated with the presence of protein assemblies of typical size tens of nanometers. The assemblies only exist when the protein mixture is sufficiently enriched in glutenins. They are found directly associated with the presence in the gel of domains enriched in non-exchangeable H-bonds and of size comparable to that of the protein assemblies. The domains are probed in neutron scattering experiments thanks to their unique contrast. We show that the sample visco-elasticity is also directly correlated to the quantity of domains enriched in H-bonds, showing the key role of H-bonds in ruling the visco-elasticity of polymer gluten gels.

Tailoring the viscoelasticity of polymer gels of gluten proteins through solvent quality
Author(s): Costanzo S., Banc A., Louhichi A., Chauveau E., Wu Baohu, Morel Marie-Hélène, Ramos L.
(Article) Published: Macromolecules, vol. 53 p.9470-9479 (2020)
DOI: 10.1021/acs.macromol.0c01466
We investigate the linear viscoelasticity of polymer gels produced by the dispersion of gluten proteins in water:ethanol binary mixtures with various ethanol contents, from pure water to 60% v/v ethanol. We show that the complex viscoelasticity of the gels exhibits a time/solvent composition superposition principle, demonstrating the self-similarity of the gels produced in different binary solvents. All gels can be regarded as near critical gels with characteristic rheological parameters, elastic plateau and characteristic relaxation time, which are related one to another, as a consequence of self-similarity, and span several orders of magnitude when changing the solvent composition. Thanks to calorimetry and neutron scattering experiments, we evidence a co-solvency effect with a better solvation of the complex polymer-like chains of the gluten proteins as the amount of ethanol increases.
On a molecular level, these findings could be interpreted as a transition of the supramolecular interactions, mainly H-bonds, from intra- to interchains, which would be facilitated by the disruption of hydrophobic interactions by ethanol molecules. This work provides new insight for tailoring the gelation process of complex polymer gels.

Biaxial extensional viscous dissipation in sheets expansion formed by impact of drops of Newtonian and non-Newtonian fluids
Author(s): Louhichi A., Charles C.-A., Phou T., Vlassopoulos Dimitris, Ramos L., Ligoure C.
(Article) Published: Physical Review Fluids, vol. 5 p.053602 (2020)
DOI: 10.1103/PhysRevFluids.5.053602
We investigate freely expanding liquid sheets made of either simple Newtonian fluids or solutions of high molecular water-soluble polymer chains. A sheet is produced by the impact of a drop on a quartz plate covered with a thin layer of liquid nitrogen that suppresses shear viscous dissipation thanks to an inverse Leidenfrost effect. The sheet expands radially until reaching a maximum diameter and subsequently recedes. Experiments indicate the presence of two expansion regimes: the capillary regime, where the maximum expansion is controlled by surface tension forces and does not depend on the viscosity, and the viscous regime, where the expansion is reduced with increasing viscosity. In the viscous regime, the sheet expansion for polymeric samples is strongly enhanced as compared to that of Newtonian samples with comparable zero-shear viscosity. We show that data for Newtonian and non-Newtonian fluids collapse on a unique master curve where the maximum expansion factor is plotted against the relevant effective \textit{biaxial extensional} Ohnesorge number that depends on fluid density, surface tension and the biaxial extensional viscosity. For Newtonian fluids, this biaxial extensional viscosity is six times the shear viscosity. By contrast, for the non-Newtonian fluids, a characteristic \textit{Weissenberg number}-dependent biaxial extensional viscosity is identified, which is in quantitative agreement with experimental and theoretical results reported in the literature for biaxial extensional flows of polymeric liquids.
Is there any SRP-like key exchange only using "standard" cryptographic primitives? I am looking into PAKEs (password-authenticated key exchanges), and it seems like SRP (Secure Remote Password) is essentially the de-facto standard. However, implementing SRP actually requires doing modular arithmetic, and is similar to, say, implementing Diffie-Hellman. That is, you'd have to have constant-time exponentiation algorithms and a fast bignum library, and getting any one of these subtly wrong might open up a terrible side-channel attack. And unlike algorithms like AES or Curve25519, there aren't many crypto libraries that contain a primitive for SRP, so "rolling your own" is often unavoidable anyway. Are there any PAKEs that, instead of using "custom" mathematics like SRP, are simply implemented in terms of more standard primitives, such as "any secure cryptographic hash", "any secure Diffie-Hellman-like exchange", "any secure signature scheme"? It would be a lot more obvious that it's secure - SRP isn't obviously secure at first glance unless you actually follow the reduction to the discrete log problem - and a lot easier to implement securely, provided you have secure implementations of the primitives. For example, I can think of a weak PAKE, where you simply do a Diffie-Hellman key exchange, but both parties attach a MAC, derived from the password, to the ephemeral public keys. This is obviously secure if the password is strong, but unlike SRP an attacker gains enough information for an offline brute-force attack, and the server has to store the password in plaintext. I'm looking for whether there are PAKEs similar to SRP in strength that are as easy as the above weak scheme to intuitively understand and implement in terms of other primitives.

key-exchange password-based-encryption srp

ithisa

Hmm, asking for a list is kind-of off topic, but this question clearly requires objective reasons (using established primitives). Furthermore, I wonder if we would go over 1 or even 0 items on the list and the question is pretty interesting. Let's leave it open! – Maarten Bodewes♦ Mar 3 '17 at 9:42

It appears that both SESPAKE and J-PAKE are unsuitable for you :( – SEJPM♦ Mar 5 '17 at 10:51

The closest I've been able to find is the Salted Challenge Response Authentication Mechanism (SCRAM), but it is not a PAKE. It uses the "standard" SHA-1/SHA-256, PBKDF2, and HMAC cryptographic primitives. – Emile Cormier Mar 30 '18 at 2:23

The first protocol for password authenticated key exchange that appeared in the crypto community was the Bellovin-Merritt scheme (see also this survey, page 4). This protocol is very simple, and might actually suit your need: it is exactly a Diffie-Hellman key exchange, in which the flows are encrypted with a block cipher (using the common password as the key of the cipher), and where the secret key the players agree on is derived by hashing the Diffie-Hellman tuple. The security of this protocol was analyzed several times, in various models (ideal cipher model or random oracle model, in an indistinguishability-based framework or a simulation-based framework...). Although it does not enjoy a proof of security in the plain model, you might be satisfied with a protocol proven secure in the random oracle model.
In this case, this scheme seems to exactly fit your requirements: you need "any Diffie-Hellman key exchanged", together with "any (good) hash function" and "any block cipher" that allows you to encrypt the flow with the password. Variations of the Bellovin-Merritt scheme that might make it even simpler are presented in this article (they essentially replace the block cipher by a simple one-time pad, and have two variants, one non-concurrently secure and one concurrently secure). EDIT: so, after discussing with Ricky Demer, this does not quite work yet. A necessary condition for this to work is that the messages generated by the DHKE - which are group elements - should be indistinguishable from random bit-strings. For DHKE over $\mathbb{Z}^*_p$ for some large prime $p$, group elements can be naturally mapped to a distribution statistically indistinguishable from bit-strings, and existing implementations might already encode these messages as random-looking bit strings (but this would have to be checked). For more sparse groups such as elliptic curves, I believe such a mapping can be done, but it would be more cumbersome from an implementation point of view. I thank Ricky Demer for pointing this out. The variants presented in this article do not use a block cipher (which gives rise to dictionary attacks when the encoded element does not look random), but rather a kind of multiplicative one-time pad: Alice masks her flow $g^x$ by multiplying it with $M^{\mathsf{pw}}$, where $M$ is a group element and $\mathsf{pw}$ is the password (and Bob plays similarly). Here, you do not have to care about how group elements are represented; however, you must perform an exponentiation (with a small exponent) and a multiplication, hence it does not make black-box use of the DH key exchange. So, I discussed today with my PhD advisor, who happens to be the author of quite a number of papers in the PAKE area (in particular this paper which I had mentioned). He confirmed what I had started to think: it does not seem feasible to build a PAKE with black-box access to a DH key exchange and symmetric primitives. Somehow, you have to be at least able to multiply two group elements (hence you must know their structure). I cannot prove that it is infeasible, of course, but that is currently unknown in the scientific community, and not believed to be feasible. Geoffroy Couteau $\begingroup$ "any Diffie-Hellman key exchanged" is not enough - one also needs oblivious sampling of group elements, or something very close to that. $\endgroup$ – user991 Mar 5 '17 at 13:54 $\begingroup$ Well, it depends whether you see the protocol as a complete black box that only provides a common key, or as some pre-existing code that will generate the message of Alice for Alice, the message of Bob for Bob, and will then give the corresponding key. Here, you can simply take the message generated for the DH key exchange, and encrypt it to get your message of the PAKE. $\endgroup$ – Geoffroy Couteau Mar 5 '17 at 14:26 $\begingroup$ That does not necessarily work - suppose the group elements all start with security-parameter-many zeros. $\endgroup$ – user991 Mar 5 '17 at 14:30 $\begingroup$ I'm not sure I'm following you, can we discuss that in a chat, if you have a few minutes? $\endgroup$ – Geoffroy Couteau Mar 5 '17 at 14:37 $\begingroup$ I have time, but don't know how to create a chatroom here. $\endgroup$ – user991 Mar 5 '17 at 14:49
Building upon Geoffroy Couteau's answer, there are possible fixes to the issues addressed there. The Bellovin-Merritt scheme (from section 3: EKE using exponential key exchange) is roughly like this: - Alice and Bob agree on a safe prime modulus and a generator of the group (which has the problem of leaking its Legendre symbol) - Alice and Bob do a normal DH key exchange first; this is encrypted with a symmetric encryption scheme using the password as key - Afterwards a challenge-response is done to counter replay attacks, encrypted with the exchanged key. The issue is: if you use another KE protocol, the encoding of those elements might have a certain structure, and we can't assume they are indistinguishable from random values. And if you encrypt a non-random value with a low-entropy key, that could turn into an off-line or dictionary attack. With a generic key exchange protocol and ECB mode of operation for the symmetric encryption, I don't think this can be made secure: if you give the attacker, in addition to the original message, an encryption of all $0$s (which could be part of the representation of group elements), the attacker can try and check guesses for the password. If you use a password-based KDF, this increases the effort for brute-forcing the password, but does not solve the problem entirely. Using other modes of operation doesn't solve the problem either: with a known IV and a low-entropy password, this can also be exploited in an attack. The only way to solve this would be to make sure that the first $k$ bits of the binary representation of the group elements have at least $k-x$ bits of entropy (or are even hardcore bits) and then use some mode of operation with chaining. Another idea would be to use SPEKE, which was developed shortly after the other protocol: agreement on public parameters with a safe prime $2p+1$; the password is hashed, mapped to a group element and then squared, and this is the generator of the subgroup with prime order $p$; Alice and Bob just do a key exchange with this generator, which only they should know. Regarding this scheme, the upside is that there is no way an attacker can test a guess for the password, because the key exchange happens in the subgroup of quadratic residues and the group has prime order. It is noted that you can use the protocol with elliptic curves, but you do need a way to map a password to a group element. The wiki article notes the IOP or Integer-to-Point function in IEEE P1363.2 for that. However, in 2014 the paper The SPEKE Protocol Revisited (Hao, Shahandashti) showed attacks on SPEKE, and the paper also discusses necessary changes to the protocol. One possibility, though, is that once you fix the problems behind those attacks, you actually end up with something very similar to SRP. tylo $\begingroup$ See, all of these solutions require "manually" doing mathematics with big numbers. The point of my question was precisely whether there are protocols that don't require me to do so. I shouldn't need to care that Diffie-Hellman even uses numbers: it's just a black box with two functions: "generate keypair" and "derive shared secret from my private key and their public key". $\endgroup$ – ithisa Mar 6 '17 at 19:05 $\begingroup$ SPEKE alone is a hash function, squaring one element and a key exchange. That's as much out-of-the-box as you can get. However, if you assume a "gen keypair" algorithm, that's something entirely different from password-based key exchange. Just use authenticated Diffie-Hellman instead. $\endgroup$ – tylo Mar 7 '17 at 10:23 $\begingroup$ As I said, authenticated Diffie-Hellman, authenticated with a MAC derived from the password, is less secure than SRP because a passive attacker gains enough information to brute-force the password if it's weak. SRP 1. does not need the server to store a password equivalent 2. does not allow passive attackers to gain information for an offline brute-force. Authenticated DH satisfies none of these requirements. $\endgroup$ – ithisa Mar 7 '17 at 14:30 $\begingroup$ @user54609 Then I don't understand what you meant by your previous comment. The term "generate keypair" doesn't fit password-based key exchange at all. I did not mean authenticated DHKE with a password-derived key - I meant authenticated DHKE with a proper PKI - where you store the certificate locally, and possibly unlock it with a passphrase. $\endgroup$ – tylo Mar 7 '17 at 15:37 $\begingroup$ @tylo: However, that term does fit non-interactive key agreement, as well as setup-based versions of that, such as Diffie-Hellman. $\endgroup$ – user991 Mar 8 '17 at 7:33
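To make the flavour of these constructions concrete, here is a small, deliberately toy Python sketch of the SPEKE idea discussed above (hash the password, square it to land in the prime-order subgroup, then run ordinary Diffie-Hellman with that generator). The modulus is tiny, the hash-to-group step is naive, and there is no key confirmation, so this is only an illustration of why the password ends up inside the group arithmetic rather than being usable as a black-box symmetric key; it is not a vetted implementation.

```python
import hashlib
import secrets

# Toy parameters: a "safe prime" p = 2q + 1 with q prime (2903 = 2*1451 + 1).
# Real deployments would use a >= 2048-bit safe prime or an elliptic curve.
P = 2903
Q = (P - 1) // 2

def speke_generator(password: str) -> int:
    """Hash the password into Z_p and square it, so the result lies in the
    prime-order-q subgroup of quadratic residues (the SPEKE generator)."""
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h % P, 2, P)

def keypair(gen: int):
    """Ordinary DH key generation, but with the password-derived generator."""
    priv = secrets.randbelow(Q - 2) + 2
    return priv, pow(gen, priv, P)

def shared_key(their_pub: int, my_priv: int) -> bytes:
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

# Both sides know the password, so they derive the same generator and agree.
g = speke_generator("correct horse")
a_priv, a_pub = keypair(g)
b_priv, b_pub = keypair(g)
print("same password :", shared_key(b_pub, a_priv) == shared_key(a_pub, b_priv))

# An attacker guessing the wrong password works in a different generator and
# (up to a negligible coincidence in this toy group) derives a different key.
g_bad = speke_generator("wrong guess")
e_priv, e_pub = keypair(g_bad)
print("wrong password:", shared_key(e_pub, a_priv) == shared_key(a_pub, e_priv))
```

Running it prints True for the matching-password exchange and, except with negligible probability in this toy group, False for the wrong-password one.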
CommonCrawl
Meaning of Exponents Let's see how exponents show repeated multiplication. 12.1: Notice and Wonder: Dots and Lines What do you notice? What do you wonder? Description: A figure of a series of dot branches. In the center is a black dot. Three branches extend from the black dot with one red dot at the end of each branch. There are three branches that extend from each red dot with one green dot at the end of each branch. There are three branches that extend from each green dot with one yellow dot at the end of each branch. There are three branches that extend from each yellow dot with one blue dot at the end of each branch. 12.2: The Genie's Offer You find a brass bottle that looks really old. When you rub some dirt off of the bottle, a genie appears! The genie offers you a reward. You must choose one: $50,000, or a magical $1 coin. The coin will turn into two coins on the first day. The two coins will turn into four coins on the second day. The four coins will double to 8 coins on the third day. The genie explains the doubling will continue for 28 days. The number of coins on the third day will be \(2 \boldcdot 2 \boldcdot 2\). Write an equivalent expression using exponents. What do \(2^5\) and \(2^6\) represent in this situation? Evaluate \(2^5\) and \(2^6\) without a calculator. How many days would it take for the number of magical coins to exceed $50,000? Will the value of the magical coins exceed a million dollars within the 28 days? Explain or show your reasoning. Explore the applet. (Why do you think it stops?) A scientist is growing a colony of bacteria in a petri dish. She knows that the bacteria are growing and that the number of bacteria doubles every hour. When she leaves the lab at 5 p.m., there are 100 bacteria in the dish. When she comes back the next morning at 9 a.m., the dish is completely full of bacteria. At what time was the dish half full? 12.3: Make 81 Here are some expressions. All but one of them equals 16. Find the one that is not equal to 16 and explain how you know. \(2^3\boldcdot 2\) \(4^2\) \(\frac{2^5}{2}\) Write three expressions containing exponents so that each expression equals 81. When we write an expression like \(2^n\), we call \(n\) the exponent. If \(n\) is a positive whole number, it tells how many factors of 2 we should multiply to find the value of the expression. For example, \(2^1=2\), and \(2^5=2 \boldcdot 2 \boldcdot 2 \boldcdot 2 \boldcdot 2\). There are different ways to say \(2^5\). We can say "two raised to the power of five" or "two to the fifth power" or just "two to the fifth."
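As an editorial aside (not part of the lesson itself), the genie questions can be checked with a short loop, since the number of coins on day \(n\) is \(2^n\):

```python
# A quick check of the genie task: the coin count on day n is 2**n, so we can
# scan the 28 days and see when the value first passes each threshold.
coins = 1
first_over_50k = None
first_over_1m = None
for day in range(1, 29):
    coins = coins * 2                     # the pile doubles each day
    if first_over_50k is None and coins > 50_000:
        first_over_50k = day
    if first_over_1m is None and coins > 1_000_000:
        first_over_1m = day

print("coins on day 3:", 2 ** 3)                       # 2*2*2 = 8
print("2^5 =", 2 ** 5, " 2^6 =", 2 ** 6)
print("first day over $50,000:", first_over_50k)       # 2^16 = 65,536 -> day 16
print("first day over $1,000,000:", first_over_1m)     # 2^20 = 1,048,576 -> day 20
```

The pile first passes $50,000 on day 16 (since \(2^{16}=65{,}536\)) and passes one million dollars on day 20 (since \(2^{20}=1{,}048{,}576\)), well within the 28 days.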
CommonCrawl
Where else in physics does one encounter Reynolds averaging? The Reynolds-averaged Navier–Stokes (RANS) equations are one of the approaches to turbulence description. Physical quantities, for example the velocity $u_i$, are represented as a sum of a mean and a fluctuating part: $$ u_i = \overline{u_i} + u'_i $$ where the Reynolds averaging operator $\overline{\cdot}$ satisfies, among others, the relations: $$ \overline{\overline{u_i}} = \overline{u_i}, \qquad \overline{u'_i} = 0 $$ which distinguish it from other types of averaging. In fluid dynamics the Reynolds operator is usually interpreted as time averaging: $$ \overline{u_i} = \lim_{T \to \infty} \frac{1}{T}\int_t^{t+T} u_i \,dt $$ The above construction seems universal to me and is likely to be used in other areas of physics. Where else does one encounter Reynolds averaging? fluid-dynamics statistical-mechanics turbulence Yrogirg $\begingroup$ I don't have specific examples handy, but I would say anywhere that you only care about the mean of the temporal signal. The Reynolds averaging is just a low-pass filter on the time signal, so I would imagine any number of applications are possible from communications, electronics, control theory, etc. Anybody who uses a low-pass filter on a time-varying signal. $\endgroup$ – tpg2114♦ Jul 24 '13 at 16:12 $\begingroup$ Oliver Penrose wrote a useful (and very detailed) article in Rep. Prog. Phys. in 1979 entitled, "Foundations of statistical mechanics." In that work, he has some very useful discussions on the differences between time-averages and ensemble averages (e.g., the last paragraph before section 1.2). $\endgroup$ – honeste_vivere Oct 18 '14 at 14:12 $\begingroup$ I should also mention that time-averages are not always appropriate. In some cases, a time-average amounts to a bad low-pass filter. I say bad because unlike a Fourier-based (or some other basis) low-pass filter, a time-average mixes neighboring data points, potentially convolving two signals that are completely unrelated. In solar wind data analysis, for instance, using a time-average can be a bad idea because you start to mix things that can be hundreds of km apart and may be from completely separate structures. $\endgroup$ – honeste_vivere Oct 18 '14 at 14:15
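As a small illustration (not part of the original question), the decomposition and the two operator identities quoted above can be checked numerically by treating the finite-sample mean as the Reynolds operator; NumPy is assumed to be available.

```python
# A small numerical illustration of the Reynolds decomposition u = <u> + u'.
# The "turbulent" signal here is just a mean flow plus random fluctuations;
# the finite-time average plays the role of the Reynolds operator, and the
# averaged fluctuation should come out (numerically) as zero.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 20_001)
u = 5.0 + rng.normal(scale=0.8, size=t.size)   # u_i(t): mean part + fluctuations

u_mean = u.mean()                # (1/T) * integral of u dt, approximated by the sample mean
u_prime = u - u_mean             # fluctuating part u'

print("mean part <u>          ≈", round(u_mean, 3))
print("mean of u'             ≈", round(u_prime.mean(), 9))                     # ~0
print("mean of the mean <<u>> =", round(np.mean(np.full_like(u, u_mean)), 3))   # = <u>
```

The averaged fluctuation is zero up to floating-point error, and averaging the already-averaged field changes nothing, which is exactly the pair of properties that distinguishes the Reynolds operator from a general smoothing filter.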
CommonCrawl
Published by editor on February 6, 2021 Approaches to causality and multi-agent paradoxes in non-classical theories. (arXiv:2102.02393v1 [quant-ph]) 上午10:10 | V. Vilasini | quant-ph updates on arXiv.org This thesis reports progress in the analysis of causality and multi-agent logical paradoxes in quantum and post-quantum theories. These research areas are highly relevant for the foundations of physics as well as the development of quantum technologies. In the first part, focussing on causality, we develop techniques for using generalised entropies to analyse distinctions between classical and non-classical causal structures. We derive new properties of Tsallis entropies of systems that follow from the relevant causal structure, and apply these to obtain new necessary constraints for classicality in the Triangle causal structure. Supplementing the method with the post-selection technique, we provide evidence that Shannon and Tsallis entropic constraints are insufficient for detecting non-classicality in Bell scenarios with non-binary outcomes. This points to the need for better methods of characterising correlations in non-classical causal structures. Further, we investigate the relationships between causality and space-time by developing a framework for modelling cyclic and fine-tuned influences in non-classical theories. We derive necessary and sufficient conditions for such causal models to be compatible with a space-time structure and for ruling out operationally detectable causal loops. In particular, this provides an operational framework for analysing post-quantum theories admitting jamming non-local correlations. In the second part, we investigate multi-agent logical paradoxes such as the Frauchiger-Renner paradox and develop a framework for analysing such paradoxes in arbitrary physical theories. Applying this to box world, a post-quantum theory, we derive a stronger paradox that does not rely on post-selection. Our results reveal that reversible evolution of agents' memories is not necessary for deriving multi-agent paradoxes, and that certain forms of contextuality might be. The outcomes of measurements in the de Broglie-Bohm theory. (arXiv:2102.02519v1 [quant-ph]) 上午10:10 | G. Tastevin, F. Laloë | quant-ph updates on arXiv.org Within the de Broglie-Bohm (dBB) theory, the measurement process is usually discussed only in terms of the effect of the Bohmian positions of the measured system S, while the effects of the Bohmian positions associated with the measurements apparatus M are ignored. This article shows that the latter variables actually play an essential role in the determination of the result. Indeed, in many cases, the result of measurement is practically independent of the initial value of a Bohmian position associated with S, and determined only by those of M. The measurement then does not reveal the value of any pre-existing variable attached to S, but just the initial state of the measurement apparatus. Quantum contextuality then appears with particular clarity as a consequence of the dBB dynamics for entangled systems. Times of Arrival and Gauge Invariance. (arXiv:2102.02661v1 [quant-ph]) 上午10:10 | Siddhant Das, Markus Nöth | quant-ph updates on arXiv.org We revisit the arguments underlying two well-known arrival-time distributions in quantum mechanics, viz., the Aharonov-Bohm and Kijowski (ABK) distribution, applicable for freely moving particles, and the quantum flux (QF) distribution. 
An inconsistency in the original axiomatic derivation of Kijowski's result is pointed out, along with an inescapable consequence of the "negative arrival times" inherent to this proposal (and generalizations thereof). The ABK free-particle restriction is lifted in a discussion of an explicit arrival-time setup featuring a charged particle moving in a constant magnetic field. A natural generalization of the ABK distribution is in this case shown to be critically gauge-dependent. A direct comparison to the QF distribution, which does not exhibit this flaw, is drawn (its acknowledged drawback concerning the quantum backflow effect notwithstanding). The equivalence of local-realistic and no-signalling theories. (arXiv:1710.01380v2 [quant-ph] UPDATED) 上午10:10 | Paul Raymond-Robichaud | quant-ph updates on arXiv.org We provide a framework that describe all local-realistic theories and all no-signalling theories. We show that every local-realistic theory is a no-signalling theory. We also show that every no-signalling theory with invertible dynamics has a local-realistic model. This applies in particular to unitary quantum theory. Local description of the Aharonov-Bohm effect with a quantum electromagnetic field. (arXiv:1910.10650v4 [quant-ph] UPDATED) 上午10:10 | Pablo L. Saldanha | quant-ph updates on arXiv.org In the seminal works from Santos and Gozalo [Europhys. Lett. $\mathbf{45}$, 418 (1999)] and Marletto and Vedral [Phys. Rev. Lett. $\mathbf{125}$, 040401 (2020)], it is shown how the Aharonov-Bohm effect can be described as the result of an exchange of virtual photons between the solenoid and the quantum charged particle along its propagation through the interferometer, where both the particle and the solenoid interact locally with the quantum electromagnetic field. This interaction results in a local and gauge-independent phase generation for the particle propagation in each path of the interferometer. Here we improve the cited treatments by using the quantum electrodynamics formalism in the Lorentz gauge, with a manifestly gauge-independent Hamiltonian for the interaction and the presence of virtual longitudinal photons. Only with this more complete and gauge-independent treatment it is possible to justify the acquired phases for interferometers with arbitrary geometries, and this is an advantage of our treatment. We also extend the results to the electric version of the Aharonov-Bohm effect. Finally, we propose an experiment that could test the locality of the Aharonov-Bohm phase generation. Quantum Counterpart of Classical Equipartition of Energy. (arXiv:1911.06570v3 [quant-ph] UPDATED) 上午10:10 | Jerzy Łuczka | quant-ph updates on arXiv.org It is shown that the recently proposed quantum analogue of classical energy equipartition theorem for two paradigmatic, exactly solved models (i.e., a free Brownian particle and a dissipative harmonic oscillator) also holds true for all quantum systems which are composed of an arbitrary number of non-interacting or interacting particles, subjected to any confining potentials and coupled to thermostat with arbitrary coupling strength. A time-symmetric formulation of quantum entanglement. (arXiv:2003.07183v3 [quant-ph] UPDATED) 上午10:10 | Michael B. Heaney | quant-ph updates on arXiv.org I numerically simulate and compare the entanglement of two quanta using the conventional formulation of quantum mechanics and a time-symmetric formulation that has no collapse postulate. 
The experimental predictions of the two formulations are identical, but the entanglement predictions are significantly different. The time-symmetric formulation reveals an experimentally testable discrepancy in the original quantum analysis of the Hanbury Brown-Twiss experiment, suggests solutions to some parts of the nonlocality and measurement problems, fixes known time asymmetries in the conventional formulation, and answers Bell's question "How do you convert an 'and' into an 'or'?'" Time-travelling billiard ball clocks: a quantum model. (arXiv:2007.12677v2 [quant-ph] UPDATED) 上午10:10 | Lachlan G. Bishop, Fabio Costa, Timothy C. Ralph | quant-ph updates on arXiv.org General relativity predicts the existence of closed timelike curves (CTCs), along which an object could travel to its own past. A consequence of CTCs is the failure of determinism, even for classical systems: one initial condition can result in multiple evolutions. Here we introduce a new quantum formulation of a classic example, where a billiard ball can travel along two possible trajectories: one unperturbed and one, along a CTC, where it collides with its past self. Our model includes a vacuum state, allowing the ball to be present or absent on each trajectory, and a clock, which provides an operational way to distinguish the trajectories. We apply the two foremost quantum theories of CTCs to our model: Deutsch's model (D-CTCs) and post-selected teleportation (P-CTCs). We find that D-CTCs reproduce the classical solution multiplicity in the form of a mixed state, while P-CTCs predict an equal superposition of the two trajectories, supporting a conjecture by Friedman et al. [Phys. Rev. D 42, 1915 (1990)]. Schr\"odinger's Cat. (arXiv:2102.01808v2 [quant-ph] UPDATED) 上午10:10 | Matthew F. Brown | quant-ph updates on arXiv.org The basic idea here is that observation (or one's experience) is fundamental and the `atomic world' is postulated as the source of such observation. Once this source has been inferred to exist one may attempt to explicitly derive its structure in such a way that the observation itself can be reproduced. And so here is a purely quantum mechanical model of observation coupled to its supposed source, and the observation itself is realised as a projection of this quantum system. Lapsing Quickly into Fatalism: Bell on Backward Causation. (arXiv:2102.02392v1 [quant-ph]) 上午9:24 | physics.hist-ph updates on arXiv.org Authors: Travis Norsen, Huw Price This is a dialogue between Huw Price and Travis Norsen, loosely inspired by a letter that Price received from J. S. Bell in 1988. The main topic of discussion is Bell's views about retrocausal approaches to quantum theory, and their relevance to contemporary issues. Why Do You Think It is a Black Hole?. (arXiv:2102.02592v1 [physics.hist-ph]) Authors: Galina Weinstein This paper analyzes the experiment presented in 2019 by the Event Horizon Telescope (EHT) Collaboration that revealed the first image of the supermassive black hole at the center of galaxy M87. The very first question asked by the EHT Collaboration is: What is the compact object at the center of galaxy M87? Does it have a horizon? Is it a Kerr black hole? In order to answer these questions, the EHT Collaboration first endorses the working hypothesis that the central object is a black hole described by the Kerr metric, i.e. a spinning Kerr black hole as predicted by classical general relativity. They choose this hypothesis based on previous research and observations of the galaxy M87. 
After having adopted the Kerr black hole hypothesis, the EHT Collaboration proceeds to test it. They confront this hypothesis with the data collected in the 2017 EHT experiment. They then compare the Kerr rotating black hole hypothesis with alternative explanations and finally find that their hypothesis is consistent with the data. In this paper I describe the complex methods used to test the spinning Kerr black hole hypothesis. I conclude this paper with a discussion of the implications of the findings presented here with respect to Hawking radiation. Reconstructing the graviton. (arXiv:2102.02217v1 [hep-th]) 上午9:24 | gr-qc updates on arXiv.org Authors: Alfio Bonanno, Tobias Denz, Jan M. Pawlowski, Manuel Reichert We reconstruct the Lorentzian graviton propagator in asymptotically safe quantum gravity from Euclidean data. The reconstruction is applied to both the dynamical fluctuation graviton and the background graviton propagator. We prove that the spectral function of the latter necessarily has negative parts similar to, and for the same reasons, as the gluon spectral function. In turn, the spectral function of the dynamical graviton is positive. We argue that the latter enters cross sections and other observables in asymptotically safe quantum gravity. Hence, its positivity may hint at the unitarity of asymptotically safe quantum gravity. Testing No-Hair Theorem by Quasi-Periodic Oscillations: the quadrupole of GRO J1655$-$40. (arXiv:2102.02232v1 [gr-qc]) Authors: Alireza Allahyari, Lijing Shao We perform an observational test of no-hair theorem using quasi-periodic oscillations within the relativistic precession model. Two well motivated metrics we apply are Kerr-Q and Hartle-Thorne metrics in which the quadrupole is the parameter that possibly encodes deviations from the Kerr black hole. The expressions for the quasi-periodic frequencies are derived before comparing the models with the observation. We encounter a degeneracy in constraining spin and quadrupole parameters that makes it difficult to measure their values. In particular, we here propose a novel test of no-hair theorem by adapting the Hartle-Thorne metric. It turns out that a Kerr black hole is a good description of the central object in GRO J1655$-$40 given the present observational precisions. Probing Hawking radiation through capacity of entanglement. (arXiv:2102.02425v1 [hep-th]) Authors: Kohki Kawabata, Tatsuma Nishioka, Yoshitaka Okuyama, Kento Watanabe We consider the capacity of entanglement in models related with the gravitational phase transitions. The capacity is labeled by the replica parameter which plays a similar role to the inverse temperature in thermodynamics. In the end of the world brane model of a radiating black hole the capacity has a peak around the Page time indicating the phase transition between replica wormhole geometries of different types of topology. Similarly, in a moving mirror model describing Hawking radiation the capacity typically shows a discontinuity when the dominant saddle switches between two phases, which can be seen as a formation of island regions. In either case we find the capacity can be an invaluable diagnostic for a black hole evaporation process. Analogue Hawking temperature of a laser-driven plasma. (arXiv:2102.02556v1 [gr-qc]) Authors: C. Fiedler, D.A. Burton We present a method for exploring analogue Hawking radiation using a laser pulse propagating through an underdense plasma. 
The propagating fields in the Hawking effect are local perturbations of the plasma density and laser amplitude. We derive the dependence of the resulting Hawking temperature on the dimensionless amplitude of the laser and the behaviour of the spot area of the laser at the analogue event horizon. We demonstrate one possible way of obtaining the analogue Hawking temperature in terms of the plasma wavelength, and our analysis shows that for a high intensity near-IR laser the analogue Hawking temperature is less than approximately 25K for a reasonable choice of parameters. Diving inside a hairy black hole. (arXiv:2102.02707v1 [gr-qc]) Authors: Nicolás Grandi, Ignacio Salazar Landea We investigate the interior of the Einstein-Gauss-Bonnet charged black-hole with scalar hair. We find a variety of dynamical epochs, with the particular important feature that the Cauchy horizon is not present. This makes the violation of the no-hair theorem a possible tool to understand how might the strong cosmic censorship conjecture work. Possible alterations of local gravitational field inside a superconductor. (arXiv:2102.01489v2 [gr-qc] UPDATED) Authors: G. A. Ummarino, A. Gallerati We calculate the possible interaction between a superconductor and the static Earth's gravitational fields, making use of the gravito-Maxwell formalism combined with the time-dependent Ginzburg-Landau theory. We try to estimate which are the most favourable conditions to enhance the effect, optimizing the superconductor parameters characterizing the chosen sample. We also give a qualitative comparison of the behaviour of high-$T_\text{c}$ and classical low-$T_\text{c}$ superconductors with respect to the gravity/superfluid interplay. Knowledge is closed under analytic content 2021年2月5日 星期五 上午8:00 | Latest Results for Synthese I am concerned with epistemic closure—the phenomenon in which some knowledge requires other knowledge. In particular, I defend a version of the closure principle in terms of analyticity; if an agent S knows that p is true and that q is an analytic part of p, then S knows that q. After targeting the relevant notion of analyticity, I argue that this principle accommodates intuitive cases and possesses the theoretical resources to avoid the preface paradox. A Principle Explanation of Bell State Entanglement 2021年2月4日 星期四 上午9:23 | Philsci-Archive: No conditions. Results ordered -Date Deposited. Stuckey, W. M. and Silberstein, Michael and McDevitt, Timothy (2021) A Principle Explanation of Bell State Entanglement. In: UNSPECIFIED. No-go Theorems: What are they good for? Dardashti, Radin (2021) No-go Theorems: What are they good for? [Preprint] Spacetime singularities and a novel formulation of indeterminism Azhar, Feraz and Namjoo, Mohammad Hossein (2021) Spacetime singularities and a novel formulation of indeterminism. [Preprint] The Concept of Time: A Grand Unified Reaction Platform Hamidreza, Simchi (2019) The Concept of Time: A Grand Unified Reaction Platform. [Preprint] Demystifying mysteries. How metaphors and analogies extend the reach of the human mind Boudry, Maarten and Vlerick, Michael and Edis, Taner (2021) Demystifying mysteries. How metaphors and analogies extend the reach of the human mind. [Preprint] The Local versus the Global in the History of Relativity: The Case of Belgium 2021年2月2日 星期二 下午3:56 | Philsci-Archive: No conditions. Results ordered -Date Deposited. ten Hagen, Sjang L. (2020) The Local versus the Global in the History of Relativity: The Case of Belgium. 
[Preprint] Clarifying the New Problem for Quantum Mechanics: Reply to Vaidman Meehan, Alexander (2020) Clarifying the New Problem for Quantum Mechanics: Reply to Vaidman. [Preprint] Probing Theoretical Statements with Thought Experiments El Skaf, Rawad (2021) Probing Theoretical Statements with Thought Experiments. [Preprint]
CommonCrawl
Prevalence, antimicrobial susceptibility pattern, and associated factors of Salmonella and Shigella among food handlers in Adigrat University student's cafeteria, northern Ethiopia, 2018 Haftom Legese ORCID: orcid.org/0000-0002-6280-11161, Tsega Kahsay1, Aderajew Gebrewahd1, Brhane Berhe1, Berhane Fseha2, Senait Tadesse3, Guesh Gebremariam1, Hadush Negash1, Fitsum Mardu1, Kebede Tesfay1 & Gebre Adhanom1 Food handlers play a significant role in the transmission of foodborne infections. Salmonella and Shigella are the most common foodborne pathogens and their infections are a major public health problem globally. Thus, this study aimed to determine the prevalence, antimicrobial susceptibility patterns, and associated factors of Salmonella and Shigella colonization among food handlers. A cross-sectional study was conducted from March to August 2018 at Adigrat University student cafeteria, Northern Ethiopia. Data on socio-demographic and associated factors were collected using a structured questionnaire. Fresh stool samples were collected from 301 food handlers and transported to Adigrat University Microbiology Laboratory. Bacterial isolation and antimicrobial susceptibility test were performed using standard bacteriological methods. Data analysis was performed using SPSS version 22 and P < 0.05 where a corresponding 95% confidence interval was considered statistically significant. A total of 301 food handlers were included in this study. The majority of study participants were females 265 (88.0%). About 22 (7.3%) and 11 (3.7%) of food handlers were found to be positive for Salmonella and Shigella respectively. Hand washing after using a bathroom with water only, no hand washing after using the bathroom, no hand washing after touching dirty materials, no hand washing before food handling, and untrimmed fingernails were significant associated factors identified. None of the Salmonella and Shigella isolates were sensitive to ampicillin, yet low resistance against chloramphenicol, ceftriaxone, and ciprofloxacin was found. The present study revealed that the prevalence of Salmonella and Shigella among food handlers was 22 (7.3%) and 11 (3.7%) respectively. Such colonized food handlers can contaminate food, and drinks and could serve as a source of infection to consumers. This indicates that there is a need for strengthened infection control measures to prevent Salmonella and Shigella transmission in the students' cafeteria. Foodborne diseases are a major public health problem globally. The severity is higher among developing countries due to low hygienic food handling practices, lack of environmental sanitation, and poor access to safe drinking water [1]. In developing countries, approximately 70% of cases of diarrheal diseases are associated with the consumption of contaminated food [2]. Salmonella remains a major cause of foodborne infection in humans [3], which leads to approximately 93 million infections every year [4, 5]. The World Health Organization (WHO) estimates that there are around 16 million new cases and 600,000 deaths due to typhoid fever each year worldwide [6]. It causes bacterial bloodstream infections with a fatality rate of 20–25% [7]. The widespread nature of salmonellosis increases antibiotic resistance which in turn increases the treatment cost, hospitalization, morbidity, and mortality [8]. These bacteria are transmitted directly and indirectly through contaminated objects such as food, water, nails, and fingers. 
This indicates those microorganisms can be spread by fecal-oral human-to-human transmission [9, 10]. Compared to other parts of the hand, fingernails harbor the most microorganisms and are difficult to clean. Shigella continues to play a major role in the etiology of inflammatory diarrhea and dysentery in food handlers [11]. The annual incidence of Shigella is estimated to be 164.7 million people, with 69% of all deaths attributable to shigellosis worldwide [12, 13]. The highest prevalence of shigellosis is observed in tropical and subtropical parts of the world [14]. Salmonella and Shigella are a significant cause of severe post-diarrheal complications such as reactive arthritis, sepsis, Reiter's syndrome, myocarditis, inflammatory bowel diseases, irritable bowel syndrome, and peritonitis [8, 15, 16]. The emergence of antimicrobial-resistant Salmonella and Shigella becomes a significant threat to deliver reliable therapies [17, 18]. In Ethiopia, it is difficult to estimate the severity of salmonellosis and shigellosis as well as their antibiotic resistance due to the limited scope of studies, lack of coordinated epidemiological surveillance system, poor reporting system, and limited availability of culture facilities [19]. Determining the prevalence and antimicrobial susceptibility pattern of Salmonella and Shigella is very important for the proper selection of antimicrobial agents to control the spread of infection. However, in the study area, there was a scarcity of data on the carriage of Salmonella and Shigella among food handlers. Therefore, this study aims to assess the prevalence, antimicrobial susceptibility patterns, and associated factors of Salmonella and Shigella among food handlers in Adigrat University, Tigrai, Northern Ethiopia. Study design, area and period A cross-sectional study was conducted among food handlers who participated in food preparation, dispatch and storage at the Adigrat University student cafeteria from March to August 2018. The annual rainfall ranges from 400 to 600 mm and the minimum and maximum temperature range from 6 to 21.8 °C. Currently, the university enrolls more than 15,000 students dining in the student cafeteria. There are six cafeterias, and a total of 700 food handlers are working in the student cafeteria (Adigrat University human resource management and registrar office). Few food handlers in the university received food handling certification and graduated in food handling, preparation, and serving programs. Although there was a periodic medical checkup in the university, it was not consistent. In contrast to the university food handlers, food handlers employed in serving the public and tourists are well-trained and graduate in food preparation and handling practice from college or universities. Sample size determination and sampling technique The sample size was determined by using a single population proportion formula. $$ \mathrm{n}=\frac{{\left(\mathrm{Z}\upalpha /2\right)}^2\mathrm{P}\ \left(1-\mathrm{P}\right)}{{\mathrm{d}}^2} $$ The sample size was determined based on the prevalence of Salmonella among university food handlers done by Mama and Alemu (2016) at Arba Minch University, South Ethiopia (6.9%) [14]. 
Then with a margin of error of 3% (d = 0.03) and a 95% level of confidence (z = 1.96), the sample size was calculated as follows: $$ \mathrm{n}=\frac{(1.96)^2\ast 0.069(0.931)}{(0.03)^2}=274,\mathrm{with}\ 10\%\mathrm{non}\ \mathrm{response}\ \mathrm{rate}=301 $$ Therefore, a total of 301 food handlers were included in the study from all university cafeterias. A simple random sampling technique was employed. The lottery method was used to select the study subjects after a complete list of food handlers was obtained from a roster of cafeteria staff at Adigrat University. Food handlers working in Adigrat University student cafeterias were included in the study. Food handlers who had taken antibiotics and/or antihelminthics within the previous week and those with clinical signs of typhoid fever such as cough, a high temperature of 39 to 40 °C, headache, and general aches and pains were excluded from the study. Data collection and sample processing Socio-demographic and specimen collection, handling and transportation A structured questionnaire was used to collect the data regarding socio-demographic and associated factors. Questionnaires were checked for accuracy and completeness. After proper instruction, about 2 g of fresh stool specimen was collected from each food handler with a labeled wide-mouthed plastic container and a clean wooden applicator stick. Specimens were immediately transported to the laboratory using an icebox. Isolation and identification The stool specimens were collected and transported to Adigrat University Medical Microbiology Laboratory within one hour of collection. Stool specimens were immediately inoculated into Selenite F enrichment broth and incubated at 37 °C for 24 h, and then subcultured onto the selective media xylose-lysine desoxycholate (XLD) agar and Hektoen enteric agar at 37 °C for 18–24 h. The isolated colonies were differentiated and identified based on Gram stain, colonial morphology and pigmentation, hemolysis on blood agar, catalase test, oxidase test, carbohydrate fermentation, H2S production, motility, indole formation, urease production, and citrate utilization. Cultures were incubated for 24 to 48 h at 37 °C. Colonies producing an alkaline slant with an acid butt and hydrogen sulfide production on Triple Sugar Iron agar, positive for lysine decarboxylase, negative for urea hydrolysis, negative for the indole test, and positive for citrate utilization and the motility test were considered to be Salmonella. Colonies which were urease negative, indole positive/negative, produced a pink-red slope and yellow butt with no blackening on Triple Sugar Iron agar, and were lysine decarboxylase negative and citrate negative were identified as Shigella species. Finally, all of the confirmed Salmonella and Shigella isolates were examined for antimicrobial susceptibility. Antimicrobial susceptibility tests Antimicrobial susceptibility testing was performed using the modified Kirby-Bauer disc diffusion method according to Clinical and Laboratory Standards Institute (CLSI) guidelines, 2016 [20]. Using a sterile wire loop, 3–5 well-isolated colonies of the test organism were emulsified in a tube of 3–4 ml sterile physiological saline to obtain a bacterial inoculum equivalent to the 0.5 McFarland turbidity standard. The standardized suspension (test organism) was then uniformly swabbed, within 15 min, onto Mueller-Hinton agar using a sterile cotton swab and allowed to dry.
After that, the antibiotic discs were placed manually on the medium and incubated at 37 °C for about 18 h, and the zones of inhibition were measured using a caliper. The results were interpreted as sensitive, intermediate, or resistant based on the CLSI criteria [20]. The following antimicrobials were prioritized by considering local prescribing practice: gentamicin (10 μg), ampicillin (30 μg), amoxicillin (30 μg), ciprofloxacin (5 μg), clarithromycin (30 μg), chloramphenicol (30 μg), cotrimoxazole (25 μg), amoxicillin-clavulanic acid (30 μg), and ceftriaxone (30 μg) [20]. Data quality assurance Data quality was ensured at each stage of the study by following a prepared standard operating procedure (SOP). Questionnaires were prepared in a clear and precise way and translated into the local language and back-translated to English to ensure the consistency of the questionnaires. A pretest was done on 5% of food handlers and modifications were made accordingly. To ensure general safety, universal bio-safety precautions were followed. American Type Culture Collection (ATCC) strains P. aeruginosa (ATCC 27853) and E. coli (ATCC 25922) were used as control strains for the culture and antimicrobial susceptibility testing. After the collection of socio-demographic characteristics, associated factors, and laboratory data using a structured questionnaire and laboratory report format, data were edited, cleaned, entered, and analyzed using the Statistical Package for Social Sciences (SPSS) version 22. Descriptive statistics and bivariate and multivariate logistic regressions were performed. Bivariate logistic regression was employed to determine the association between the outcome variable and each independent variable. A binary logistic regression analysis was used to calculate the odds ratios (OR), the crude odds ratio (COR) and the adjusted odds ratio (AOR), to ascertain the degree of association between dependent and independent variables. All variables with a p-value < 0.20 in the bivariate logistic regression were transferred to the multivariate logistic regression analysis to compute the AOR. The regression model was first examined by the Hosmer-Lemeshow goodness-of-fit test to determine whether the model adequately fitted the data. In this study, multi-collinearity among independent variables was detected using the standard errors of the regression coefficients. Finally, variables with a p-value < 0.05 with a corresponding 95% confidence interval (CI) were considered statistically significant. Operational definition Dirty materials: soiled, unclean, or impure objects. Fingernail status: whether the fingernails were trimmed or untrimmed; untrimmed fingernails can serve as a vehicle for the transmission of food-contaminating pathogens. Socio-demographic characteristics A total of 301 food handlers were included in the study. Out of the total respondents, 265 (88.0%) were females. The age of study participants ranged from 19 to 38 years (23.51 ± 3.186 years). The majority of the participants, 241 (80.1%), were between the ages of 21 and 30 years. One hundred fifty-six (51.8%) were enrolled in secondary school, and participants had an average of 3.7 years of work experience in the cafeteria. Out of the total study participants, 37 (12.3%) were certified for training in food handling and 265 (88.0%) had previously undergone a medical checkup with a stool microscopy examination (Table 1).
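The two-step modelling strategy described above (bivariate screening at p < 0.20, then a multivariate logistic model to obtain adjusted odds ratios) was carried out in SPSS. Purely to illustrate the workflow, a rough equivalent in Python with pandas/statsmodels is sketched below; the column names and the simulated data are hypothetical and not taken from the study.

```python
# Sketch of the two-step analysis described above (bivariate screening at
# p < 0.20, then a multivariate logistic model for adjusted odds ratios).
# The study itself used SPSS; the column names and data here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 301
df = pd.DataFrame({
    "carrier": rng.integers(0, 2, n),               # Salmonella/Shigella carriage (0/1)
    "no_handwash_bathroom": rng.integers(0, 2, n),
    "untrimmed_nails": rng.integers(0, 2, n),
    "years_of_service": rng.normal(3.7, 1.5, n),
})

candidates = ["no_handwash_bathroom", "untrimmed_nails", "years_of_service"]

# Step 1: bivariate (crude) models, keep predictors with p < 0.20.
kept = []
for var in candidates:
    crude = smf.logit(f"carrier ~ {var}", data=df).fit(disp=0)
    if crude.pvalues[var] < 0.20:
        kept.append(var)

# Step 2: multivariate model on the screened predictors -> adjusted ORs.
if kept:
    full = smf.logit("carrier ~ " + " + ".join(kept), data=df).fit(disp=0)
    aor = np.exp(full.params.drop("Intercept"))
    ci = np.exp(full.conf_int().drop("Intercept"))
    print(pd.DataFrame({"AOR": aor, "2.5%": ci[0], "97.5%": ci[1]}))
else:
    print("no predictor passed the p < 0.20 screen in this random data")
```

On real data, the exponentiated coefficients and confidence intervals from the second step are what appear as the AOR columns of a multivariate table such as Table 3.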
Table 1 Socio-demographic characteristics of food handlers in Adigrat University student cafeteria, Tigrai, North Ethiopia March to August 2018 (N = 301) Prevalence and associated factors of Salmonella and Shigella carriers The prevalence of Salmonella and Shigella in this study was 22 (7.3%) and 11 (3.7%) respectively. In the current study, 13 independent variables were considered during the analysis of associated factors for Salmonella and Shigella carriers (Table 2). Accordingly in the multivariate analysis, hand washing after using the bathroom with water only (AOR = 23.24, 95% CI: 2.13–254.17, P < 0.01), no handwashing after using the bathroom (AOR = 2.25, 95% CI: 5.11–77.34, P < 0.001), no hands washing after touching dirty materials (AOR = 37.19, 95% CI: 5.66–244.45, P < 0.001), no handwashing before food handling (AOR = 33.1, 95% CI: 4.96–220.52, P < 0.001), untrimmed fingernail (AOR = 13.97, 95% CI: 3.40–57.36, P < 0.001) were significant factors associated with Salmonella and Shigella carriers. However, no significant association was found between years of service, medical check-up, workers blowing their noses, touching food with bare hands, and certification in food preparation and handling, with Salmonella and Shigella colonization in this study (Table 3). Table 2 Bivariate logistic regression analysis of factors associated with Salmonella and Shigella infections among food handler's working at Adigrat University Students' Cafeteria, Tigrai, Northern Ethiopia, March to August 2018 (N = 301) Table 3 Multivariate logistic regression analysis of factors associated with Salmonella and Shigella isolates among food handler's working at Adigrat University Students' Cafeteria, Tigrai, Northern Ethiopia, March to August 2018 (N = 301) Antimicrobial susceptibility patterns of Salmonella and Shigella isolates Antimicrobial susceptibility patterns were performed for 22 Salmonella and 11 Shigella isolates against 9 antimicrobial agents. All Salmonella and Shigella isolates were resistant to ampicillin. 22(100%) Salmonella and 10(90.90%) Shigella isolates were resistant to gentamicin; and 21(95.50% Salmonella and 11(100%) Shigella isolates were resistant to amoxicillin. However, all Salmonella and Shigella isolates were susceptible to ciprofloxacin. Additionally, low resistance was observed for ceftriaxone, chloramphenicol and cotrimoxazole for Salmonella and Shigella. None of the isolates showed intermediate resistance (Table 4). Multidrug resistance in this study is defined as resistance to at least three classes of antimicrobial agents and out of the thirty-three isolates 12 (54.5%) Salmonella and 10(90.90%) Shigella species were multidrug- resistant isolates (Table 5). Table 4 Antimicrobial susceptibility patterns of Salmonella and Shigella isolated from food handlers Of Adigrat University Students' Cafeteria, Tigrai, Northern Ethiopia, March to August 2018 (N = 33) Table 5 Multidrug-resistant of Salmonella and Shigella isolated from food handler's working at Adigrat University Students' Cafeteria, Tigrai, Northern Ethiopia, March to August 2018 N = 33 In this study, the prevalence of Salmonella among food handlers was 22 (7.3%). This was similar to studies carried out in Southern Ethiopia, Arba Minch University (6.9%) [14], and Nigeria, Abeokuta (5.5%) [21]. However, it was higher than the studies reported from Ethiopia, Addis Ababa (3.5%) [22], Bahir Dar (1.6%) [23], and Gondar (3.1%) [24]. 
On the other hand, our result was lower than the studies reported from Addis Ababa, Ethiopia (10.5%) [25], and Nigeria (42.3%) [26]. The variation might be attributed to differences in personal hygiene and environmental sanitation among the study areas. The prevalence of Shigella (3.7%) in our study is consistent with studies done in Arba Minch University, Southern Ethiopia (3%) [14]; Addis Ababa, Ethiopia (4.5%) [25]; and Gondar, Ethiopia (3.1%) [26]. However, our finding was lower than a study conducted in Nigeria (15.5%) [27]. These differences might be due to inconsistent training on food preparation and handling and to differences in the hygiene practices of the food handlers. In the present study, not practicing handwashing after using the bathroom was significantly associated with Salmonella and Shigella carriage among food handlers. Food handlers who had not washed their hands after using the bathroom were more likely to be colonized with Salmonella and Shigella compared to those who washed with water and soap after using the bathroom. This finding was similar to studies conducted in Mekelle, Ethiopia [28], Gondar, Ethiopia [29], and Bahir Dar, Ethiopia [30]. The acquisition of Salmonella and Shigella is facilitated by poor sanitary conditions, poor toilet facilities, and limited availability of handwashing facilities. The majority of food handlers of the university reported that they washed their hands with only water, and some of them did not wash their hands at all after using the bathroom. Our findings also revealed a statistically significant association between not washing hands after touching dirty materials and Salmonella and Shigella carriage among food handlers. Food handlers who did not wash their hands after touching dirty materials were twenty-eight fold more likely to be colonized with Salmonella and Shigella than those who washed with water and soap after touching dirty materials. This finding is consistent with a study conducted in Bahir Dar, Ethiopia [23]. This might be due to the absence of handwashing facilities within proximity of the food handlers' workplace. Our study showed that food handlers who washed their hands with soap and water before touching food were less likely to be colonized with Salmonella and Shigella than food handlers who did not wash their hands with soap and water before food preparation. This is in line with the finding of a similar study reported from Yebu Town, Ethiopia [31]. In this study, the majority of food handlers practiced handwashing before handling food. However, a very large proportion (47.1%) washed their hands only with water. There are food handlers who apply some hygiene practices, though many of them do not use soap nor appreciate or understand the need for handwashing [32]. Furthermore, in this study, untrimmed fingernails were significantly associated with Salmonella and Shigella colonization among food handlers. This is similar to studies conducted in Yebu Town, Ethiopia [31], and Arba Minch, Ethiopia [14]. This result might be due to the lifestyle of the food handlers. Examination of the fingernail contents of food handlers for Salmonella or Shigella is one way of indicating a source of possible food contamination [31]. However, the current study did not assess the Salmonella and Shigella carriage of fingernail contents. Salmonella and Shigella carriers who are preparing and handling food daily can act as sources of infection to the university community via the food chain.
Therefore, regular training, medical check-up programs and accessibility of personal hygiene guidelines with intensive health education could be important to prevent and control the carriage. Antimicrobial susceptibility pattern data showed that ciprofloxacin, ceftriaxone, gentamicin, chloramphenicol, and cotrimoxazole were effective against the Shigella isolates. Our finding was comparable with studies reported from Ethiopia, Haramaya University on ceftriaxone (16.7%) [33], Jimma on gentamicin (1.3%) [34], and Harar on gentamicin (3.6%) [35]. Whereas our result showed lower resistance patterns compared to the studies conducted in Ethiopia, Addis Ababa on gentamicin (75.6%) [36], and Gondar on ciprofloxacin (8.9%) and cotrimoxazole (73.4%) [37, 38]. This increase of resistance from those reports indicates that there are differences in the geographical area, study period and study design. Increased resistance was observed in our finding which is in line with a study reported from Harar on ampicillin (100%) [35], and Arba Minch on amoxicillin (100%) [14]. In the current study, isolates of Salmonella species were sensitive to gentamicin, ciprofloxacin, chloramphenicol, ceftriaxone, cotrimoxazole, and clarithromycin. This is consistent with reports from Gondar University, Ethiopia [24, 25, 38]. Increased resistance was observed in our findings for amoxicillin-clavulanic, amoxicillin, and ampicillin which were supported by studies reported from Ethiopia in Arba Minch, Jimma, and Bahir Dar [12, 14, 23, 35]. This might be due to misuse or inappropriate use of antibiotics and the use of clinical diagnosis for treatment by physicians may lead to the emergence of drug-resistant bacterial strains and replacement of sensitive strains by resistant strains. The prevalence of multidrug resistance towards Salmonella and Shigella were observed. Out of all the isolates, 54.54% Salmonella and 90.9% Shigella species were resistant to at least three antimicrobials. One isolate of Shigella was resistant to six classes of antimicrobial agents. This study is supported by a study conducted in Ethiopia in Butajira [25], Addis Ababa [22], Haramaya University [33], and Gondar [24]. This increased multidrug resistance might be due to genetic variation by mutations, irrational use of antimicrobials, and less hygienic practice of the food handlers. Salmonella and Shigella species are becoming resistant to most antimicrobials, indicating that there might be easy availability, irrational use of common antimicrobials from different governmental and private pharmacies. Fingernail content examination could not be identified. This may be supportive to know whether the contamination is due to poor fingernail hygiene or poor food handling practices. Despite this limitation, the methods used to isolate and characterize the antimicrobial susceptibility pattern of Salmonella and Shigella spp. are comprehensive. In the current study, because of the self-reported nature of the study, recall bias was a limitation but it is not reflected in the findings. We also do not know what biases have the greatest impact on self-reports. In addition to that, even though this study used a random sampling technique to select study participants, it was facility-based. The Hosmer-Lemeshow test is used for overall calibration error but not used for a particular lack of fitness, so it does not properly take overfitting into account. Therefore generalizability might be hardly achieved. 
Additionally, since there is variation in seasons, geography, and in the definition of antimicrobial resistance guidelines among different studies and across regions we couldn't infer the external validity. The overall prevalence of Salmonella and Shigella in the study area was found to be 7.3 and 3.7% respectively. The Salmonella and Shigella carriage was significantly associated with not washing hands after touching dirty materials, not washing hands after using the bathroom, and untrimmed fingernails. No resistance against chloramphenicol, ceftriaxone, and ciprofloxacin was identified. The majority of Salmonella and Shigella were multidrug-resistant. To improve food handling safety within the university, regular medical checkups, increased handwashing, and environmental sanitation should be practiced. Consistent training on food preparation and handling for the food handlers of Adigrat University is very important to prevent the risk of infection for the university community having close contact with those carriers. Additionally, this study suggested that physicians should prescribe based on the laboratory result. Drug dispensing by different governmental and private pharmacies should be according to the prescription of the physicians. All data collected and analyzed during this study were included in the manuscript. But if the full paper is needed, it will be shared upon request by the editor from the corresponding author. ATCC: American Type Culture Collection CLSI: Clinical and Laboratory Standards Institute MDR: Multi-drug resistant Multi-Drug Resistant SOPs: Centers for Disease Control and Prevention (CDC). Surveillance for foodborne disease outbreaks-United States, 2008. MMWR 2010; 59(31):1277–1280. World Health Organization (WHO). (2007). Fact Sheet Number 237: Food safety and foodborne illness. Geneva, Switzerland: World Health Organization. Available at http://www. Who.int/media centre/factsheets/fs237/en/. accessed June 26, 2017. B. Coburn, G. A. Grassl, and B. B. Finlay, "Salmonella, the host and disease: a brief review," Immunology and Cell Biology 2007;85;2:112–118. Majowicz SE, Musto J, Scallan E, Angulo FJ, O'Brien SJ, Jones TF. The global burden of nontyphoidal Salmonella gastroenteritis. Clin Infect Dis. 2010;50:882–9. Centers for Disease Control and Prevention. Surveillance for foodborne disease outbreaks the United States. MMWR. Morb Mortal Wkly Rep 2011. 2008;60(35):1197–202. World Health Organization (WHO) laboratory identification and antimicrobial susceptibility testing of bacterial pathogens of public health concern in the developing world Geneva, Switzerland: World Health Organization; 2003. Smith SI, Fowora M. A, Goodluck H.A, Nwaokorie F.O, Aboaba O.O, Opere B. Molecular typing of Salmonella spp isolated from food handlers and animals in Nigeria Int J Mol Epidemiol Genet 2011; 2(1):73–77. Feasey NA, Dougan G, Kingsley RA, Heyderman RS, Gordona MA. Invasive non-typhoidal salmonella disease: an emerging and neglected tropical disease in Africa. Lancet. 2012;379(9835):2489–99. Pala K, Ozakin C, Akis N, Sinirtas M, Gediko S, Aytekin H: Asymptomatic carriage of bacteria in food workers in Nilüfer district, Bursa, TurkeyTurk J Med Sci 2010; 40(1):133–139. Khurana S, Taneja N, Thapar R, Sharma M, Malla N: Intestinal bacterial and parasitic infections among food handlers in a tertiary care hospital of North India. Trop Gastroenterol 2008; 29:207–209. Ao TT, Feasey NA, Gordon MA, Keddy KH, Angulo FJ, Crump JA. 
Global Burden of Invasive Nontyphoidal Salmonella Disease Emerg. Infect Dis 2015 ;21(6):441–449 DOI: https://doi.org/10.3201/eid2106.140999. Beyene G, Tasew H. Prevalence of intestinal parasite, Shigella and Salmonella species among diarrheal children in Jimma health center, Jimma southwest Ethiopia: a cross-sectional study. Ann Clin Microbiol Antimicrob. 2014; 13:10. Published 2014 Feb 5. doi:https://doi.org/10.1186/1476-0711-13-10. Mokhtari W, Nsaibia S, Majouri D, Ben Hassen A, Gharbi A, Aouni M. Detection and characterization of Shigella species isolated from food and human stool samples in Nabeul, Tunisia, by molecular methods and culture techniques. J Applied Microbiol. 2012;113:209–22. https://doi.org/10.1111/j.1365-2672.2012.05324. Mama M, Getaneh A. Prevalence, antimicrobial susceptibility patterns and associated risk factors of Shigella and Salmonella among food handlers in Arba Minch University, South Ethiopia. BMC Infect Dis. 2016; 16:686. Scallan E, Hoekstra RM, Angulo FJ, Tauxe RV, Widdowson MA, Roy SL, Jones JL, Griffin PM. Foodborne illness acquired in the United States-major pathogens. Emerg Infect Dis. 2011;17(1):7–15. Bonkoungou IJO, Haukka K, Österblad M, Hakanen AJ, Traoré AS, Barro N, et al. Bacterial and viral etiology of childhood diarrhoea in Ouagadougou, Burkina Faso. BMC Pediat. 2013;13(36):1–6. Afeworki G, Lirneneh Y. Multiple drug resistance within Shigella serogroups. Ethiop Med J. 1980;18:7–11. Roma B, Worku S. T/Mariam S, Langeland N. antimicrobial susceptibility pattern of Shigella isolates in Awassa. Ethiop J of Health Dev. 2000;14:149–54. Beyene G, Asrat D, Mengistu Y, Aseffa A, Wain J. Typhoid fever in Ethiopia. J Infect Developing Countries. 2008;2:448–53. Clinical and Laboratory Standards Institute (CLSI): Performance standards for antimicrobial susceptibility testing; Twenty-sixth Informational Supplement. Wayne PA 19087 USA 2016; 36(1). Mobolaji OA, Olubunmi OF. Assessment of the hygienic practices and the incidence of enteric bacteria in food handlers in small businesses in an urban area in Abeokuta. Int J Microbiol Res. 2014;5(3):41–9. Aklilu A, Kahase D, Dessalegn M, Tarekegn N, Gebremichael S, Zenebe S, et al. Prevalence of intestinal parasites, salmonella and shigella among apparently health food handlers of Addis Ababa university student's cafeteria, Addis Ababa, Ethiopia. BMC Res Notes. 2015;8(17):1–6. Abera B, Biadegelgen F, Bezabih B. Prevalence of Salmonella typhi and intestinal parasites among food handlers in Bahir Dar town, Northwest Ethiopia. Ethiop J Health Dev. 2010;24(1):46–50. Garedew-Kifelew L, Wondafrash N, Feleke A. Identification of drug-resistant Salmonella from food handlers at the University of Gondar, Ethiopia. BMC Res Notes. 2014;7(1):545. Mengistu G, Mulugeta G, Lema T, Aseffa A. Prevalence and antimicrobial susceptibility patterns of Salmonella serovars and Shigella species. J Microb Biochem Technol. 2014;S2:006. https://doi.org/10.4172/1948-5948.S2-006. Ifeadike C, Ironkwe O, Adogu P, Nnebue C, Emelumadu O, Nwabueze S, Ubajaka C. Prevalence and pattern of bacteria and intestinal parasites among food handlers in the Federal Capital Territory of Nigeria. Niger Med J. 2012;53(3):166. Andargie G, Kassu A, Moges F, Tiruneh M, Huruy K. Prevalence of bacteria and intestinal parasites among food handlers in Gondar town, Northwest Ethiopia. J Health Popul Nutri. 2008;26(4):451–5. Nigusse D and Kumie A. 
Quantifying causal effects from observed data using quasi-intervention

Jinghua Yang1,4, Yaping Wan1,2, Qianxi Ni3,4, Jianhong Zuo5, Jin Wang1, Xiapeng Zhang1 & Lifang Zhou1

Causal inference is a crucial element of medical decision-making. Many methods for investigating potential causal relationships between disease and treatment options have been developed in recent years, and they can be categorized into two main types: observational studies and experimental studies. However, because of the nature of experimental studies, financial resources, human resources, and patients' ethical considerations, researchers cannot fully control the exposure of the research participants. Furthermore, most existing observational research designs are limited to determining whether a causal relationship exists; they cannot intervene on observational data, let alone determine the dosages needed for medical research. This paper presents a new experimental strategy called quasi-intervention for quantifying the causal effect between disease and treatment options in observed data by using a causal inference method, which converts the potential effect of different treatment options on disease into differences in conditional probabilities. We evaluated the accuracy of the quasi-intervention by quantifying the impact of adjusting Chinese patients' neutrophil-to-lymphocyte ratio (NLR) on their overall survival (OS) (169 lung cancer patients and 79 controls). The results agree with the literature reviewed in this study, which consists of nine cohort studies on the NLR and the prognosis of lung cancer patients, supporting the correctness of our method. Taken together, the results imply that quasi-intervention is a promising method for quantifying the causal effect between disease and treatment options without clinical trials, and it could improve confidence about treatment options' efficacy and safety.

In biomedicine, causal inference often relies on the framework of counterfactual reasoning. For example, given an observed target image with lesions and a reference image without lesions in the corresponding region, what would the features of the target image look like if the lesions were removed? Through such comparisons, researchers can quickly estimate the causal relationship, find the answer to the question, and relieve the suffering of the patient. Counterfactuals sit at the top of the ladder of causation [1], Judea Pearl's ladder of three levels of cognitive ability, the remaining two levels being association and intervention. Under counterfactual theory, every individual has a potential outcome in each possible state, and by comparing the outcomes of individuals across states, the causal effect of treatment on the outcome can be obtained [2]. In practice, however, counterfactuals are never observed, because a single person (or group) cannot be in different states at the same time and place, so how to use observational and experimental data to extract information about counterfactual scenarios has become a focus of scholarly research. The most common experimental strategy is to conduct a randomized controlled trial (RCT). Because of the randomized nature of RCTs, each subject and their "counterfactual" counterpart have the same or similar values for the confounding variables, except for the relevant condition variables, which approximates the potential outcome [2]. However, RCTs are expensive, time-consuming, and ethically fraught, making many experiments a luxury [3].
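In the potential-outcomes notation commonly used for this framework (the symbols below are added here for illustration and are not taken from the paper), the individual-level and average treatment effects are

$$\mathrm{ITE}_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = \mathbb{E}\left[\,Y(1) - Y(0)\,\right],$$

where \(Y_i(1)\) and \(Y_i(0)\) are the outcomes of individual i with and without treatment; the practical difficulty is that only one of the two is ever observed for any individual.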
In statistics, researchers often view the counterfactual causal inference problem as a missing data problem, i.e., as solving for the potential outcomes of different individuals in different states. Common methods for inferring missing data include matching [4] and linear regression. Matching methods find pairs of individuals who match well on all variables other than the target variable, and the missing data can then be calculated from this matching relationship. However, there will always be cases in the data that cannot be matched. Linear regression methods assume that the data are generated by some underlying random process, use standard statistical methods to find the best-fitting line, and then fill in the missing values from that fit. Although this cleverly produces an approximation of the missing outcome, the number obtained is not a potential outcome and cannot be used to make counterfactual causal inferences. The reasons are as follows: on the one hand, the method is data-driven rather than model-driven by nature; on the other hand, and more importantly, a method that operates on the first rung of the ladder of causation can never solve a counterfactual (third-rung) problem. In the ladder of causation, the three levels correspond to increasingly demanding causal questions, and each level answers questions that are beyond the reach of the levels below it. Thus, the data alone cannot tell us what would happen in a counterfactual or fictional world.

In this paper, we present a new experimental strategy called quasi-intervention for quantifying the causal effect between disease and treatment options in observed data by using a causal inference method, which converts the potential effect of different treatment options on disease into differences in conditional probabilities. Given the observed data, quasi-intervention takes advantage of a quasi-experimental design (QED) [5,6,7] to determine the causal relationship between variables and uses a sign test to ensure the reliability of the results. To quantify the causal effect between disease and treatment options, with the help of hypothetical interventions [8,9,10,11], we hypothetically applied different treatment options to the patients and compared the difference between the means. We evaluated the accuracy of the quasi-intervention by quantifying the causal effect between the NLR and OS (169 lung cancer patients and 79 controls) among Chinese patients. Our results showed that quasi-intervention estimated an average causal effect of the NLR on OS of 34.4% under the hypothetical intervention conditions. The result agrees with the literature findings [12,13,14,15,16,17,18,19,20], which consist of nine cohort studies on the NLR and the prognosis of lung cancer patients, supporting the correctness of our method.

Determination of causal relationships between variables

Correct causality is the primary premise of this study and the guarantee of a correct conclusion. If the intervening variable (X) and the outcome variable (Y) have a purely correlational relationship rather than a causal one, then conclusions drawn from them will lead to poor decisions. In this method, we used a QED to infer causality from the observed data. We suppose we have a pre-processed (e.g., factorized) dataset Q, which consists of epidemiological information, such as age, sex, and clinical records, for a set of patients whose records do not overlap.
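The exact-matching step described in the next paragraph, which pairs each treated patient with an untreated patient of similar age, the same sex, and so on, can be sketched in a few lines of Python. This is an illustrative sketch only: the field names and the choice of matching covariates are ours, not taken from the study's dataset Q.

```python
# Illustrative sketch only: exact matching of "intervention" patients to
# comparable non-intervention patients on a few covariates, as assumed by the
# matching step described below. Field names and covariates are hypothetical.
import random

def match_pairs(treated, untreated, keys=("sex", "age_band", "cancer_type")):
    """Pair each treated patient with a randomly chosen untreated patient
    that agrees on all matching keys; unmatched patients are skipped."""
    pool = list(untreated)
    pairs = []
    for u in treated:
        candidates = [v for v in pool if all(v[k] == u[k] for k in keys)]
        if not candidates:
            continue                      # some patients cannot be matched
        v = random.choice(candidates)
        pool.remove(v)                    # match without replacement
        pairs.append((u, v))
    return pairs
```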
To make better counterfactual causal inferences, the following assumptions were made about the study population: there is no crossover treatment effect between individuals; all individuals are treated to the same extent; the assignment of treatment is independent of the potential outcome; and the probability of assignment of treatment is nondeterministic for all individuals. Here we consider in detail the case in which the sample size is larger than 20. The specific experimental steps are as follows. We defined a matched set of pairs P as follows: let T (T ⊆ Q) be the set of all patients who have been treated; we then picked an intervention patient u (u ∈ T) and paired them with a patient v picked uniformly at random from the non-intervention patient set D, such that u and v have similar age, the same sex, similar clinical notes and so on. For each pair (u, v) ∈ P, we assigned outcome (u, v) the value +1 if Yu (the outcome Y of patient u) was larger than Yv, −1 if Yu was smaller than Yv, and 0 otherwise. The matching algorithm's net outcome (δ) can be viewed as the average difference between Yu and Yv across matched pairs. A positive value of δ provides strong evidence of causality between X and Y, while a negative value provides negative evidence.

$$value\left(\delta\right)=\frac{\sum_{\left(u,v\right)\in P}outcome\left(u,v\right)}{\left|P\right|}\times 100\%$$

The QED obtains a causal effect between X and Y by controlling for observable confounders in the data. For the accuracy and completeness of the trial, the subsequent hypothetical intervention continues to explore the causal effect of the two variables under unknown confounding conditions. The fusion of the two methods overcomes their respective limitations and enhances the credibility of the experimental results. To confirm the reliability of the result, we also used the sign test to determine whether the result was statistically significant. We formulated H0, the null hypothesis that X has no significant impact on Y, and let H1 be the alternative hypothesis that X has an impact on Y. The numbers of positive and zero values of outcome (u, v) are denoted m and n, respectively. After removal of the matched pairs with the same treatment effect, the sample size (s) is |P| − n. The count m approximately follows a normal distribution with mean (μ) of \(\frac{1}{2}s\) and standard deviation (σ) of \(\frac{\sqrt{s}}{2}\). The significance level was set at α = 0.05, and the statistic (Z) is \(\frac{m-\mu }{\sigma }\). The null hypothesis H0 is rejected when Z > Z\(\alpha /2\).

Evaluating the effect of interventions

The chi-square test was used to compare the clinicopathological data between groups, with a given state x of X as the cut-off value. Kaplan–Meier univariate survival analysis and the log-rank test were used to analyse the survival of different patients, and the factors with statistical significance (P < 0.05) in the univariate analysis were taken as independent factors affecting the prognosis of patients. There are three basic types of junctions in a causal graph: chain, fork, and collider. Through analysis, we can see that X and Y can only form the following three types of causal diagram (the "fork" can be divided into two types, and the "collider" cannot form a bivariate causal diagram). These disturbance terms (e.g., Ux, Uy) in Fig.
1, which are mutually independent, arbitrarily distributed random disturbances, represent exogenous factors that the investigator chose not to include in the analysis. A potential confounding factor is identified from the observed data, which means that the confounder blocks all backdoor paths from X to Y and is not a descendant of X. This is illustrated in Fig. 1a. If there are no obvious confounding factors in the observed data but a mediator (W) can be found to transmit the effect of X on Y, which means that all causal paths from X to Y pass through W, there is no unobstructed backdoor path from X to W, and all backdoor paths from W to Y are blocked by X. This is illustrated in Fig. 1b. If we are willing to accept the assumption of linearity or monotonicity, then an instrumental variable can be used to estimate the intervention effect (assuming the variable can be present in the data). Instrumental variables are required to affect X and not (directly) affect Y, as illustrated in Fig. 1c. Three types of causal graphs. A confounding factor in the data (a), X can exert an indirect effect on Y through an intermediary variable (b); an instrumental variable is found for replacement studies (c). Ux and Uy are exogenous variables, representing any location or random effect that can affect the relationship between endogenous variables Supposing that we have the structure of a causal graph G, where some nodes are observable and others are not. Our main goal is to progressively reduce the expression \(P(y|do(x))\) to an equivalent expression containing the standard probabilities of the observations. Notably, \(P(y|do(x))\) stands for the probability of achieving a yield level of \(Y = y\) given that the treatment is set to level \(X = x\) by external intervention. It can be further stated that evaluating the effect of intervention involves computing the average causal effect (ACE): $$P(Y |do(X = x^{\prime})) - P(Y |do(X = x))$$ where do(.) set X to a value, e.g., (x + 1). This intervention is equivalent to removing X from the influence of the old functional mechanism \(X=f({pa}_{x},\varepsilon )\) and placing it under the influence of a new mechanism that sets its value to x + 1 while keeping all other mechanisms undisturbed. Clearly, an intervention \(do(x)\) can affect only the descendants of X in G. The do operation allows the intervention effect to be obtained without the actual intervention, the counterfactual answer to be obtained, and thus the causal effect to be ascertained. The intervention not only replaced the causal mechanism linking X to its preintervention parents with a new mechanism X = x but also gave us a new manipulated graph. Interventional distributions (such as P (Y |do(X = x)) are conceptually quite different from the observational distributions (such as \(P (Y |X=x)\)). Because the latter does not have the do-operator, we can observe data from the dataset without carrying out any experiment. With the aid of the manipulated graph and the do-algorithm [8, 10, 11, 21], we eliminate the do operation in \(P(Y |do(X = x))\), which represents hypothetical intervention and cannot be obtained from the dataset. A causal relationship model characterized by graph G is identifiable, which has been demonstrated in Ref. [3]. This means that in a finite sequence of transformations, the causal relationship Q can be reduced to a check-free, probabilistic expression involving the observed quantity according to the do-algorithm. 
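For the confounder structure in Fig. 1a, for instance, this reduction is the familiar backdoor adjustment, which expresses the interventional distribution entirely in terms of observed quantities (shown here only as an illustration of what a do-free expression looks like):

$$P\left(Y=y|do\left(X=x\right)\right)=\sum_{z}P\left(Y=y|X=x,Z=z\right)P\left(Z=z\right)$$

where Z denotes the observed confounder that blocks all backdoor paths from X to Y.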
The derivation process is as follows: the probability distribution is first expanded according to a Bayesian formula, then terms in the expression are added, deleted, or replaced according to the do-algorithm, and the process is iterated until the expression no longer contains the do operation. Note that the experiment assumes that interventions are local and that the global Markov assumption holds in the causal graph. The do-algorithm is described as follows: G is a directed acyclic graph, X, Y, Z, W are arbitrary sets of variables in G, and P is the probability distribution. We write G\(\overline{x}\) (respectively G\(\underline{x}\)) for the graph obtained by deleting from G all arrows pointing to (respectively emerging from) the nodes in X, and Z(W) is the set of Z-nodes that are not ancestors of any W-node in G\(\overline{x}\).

Rule 1) Insertion/deletion of observations
$$P\left(y|do\left(x\right),z,w\right)=P\left(y|do\left(x\right),w\right) \quad \mathrm{if}\ \left(Y \perp\!\!\!\perp Z \mid X,W\right)_{G_{\overline{X}}}$$
Rule 2) Action/observation exchange
$$P\left(y|do\left(x\right),do(z),w\right)=P\left(y|do\left(x\right),z,w\right) \quad \mathrm{if}\ \left(Y \perp\!\!\!\perp Z \mid X,W\right)_{G_{\overline{X}\underline{Z}}}$$
Rule 3) Insertion/deletion of actions
$$P\left(y|do\left(x\right),do(z),w\right)=P\left(y|do\left(x\right),w\right) \quad \mathrm{if}\ \left(Y \perp\!\!\!\perp Z \mid X,W\right)_{G_{\overline{X}\overline{Z(W)}}}$$

If we cannot find a way to estimate \(P (Y | do(X))\) from the data using rules 1 to 3, then no solution exists for this problem. In this case, we realize that we have no choice but to run an RCT. In addition, the calculus tells us what additional hypotheses or experiments could turn the causal effect from nonestimable to estimable for a particular problem. According to the derived causal diagram and the do-algorithm, we can eliminate the do operation in the ACE and quantify the effect of interventions of X on Y. The experimental workflow is shown in Fig. 2.

Fig. 2 Experimental flowchart of the quasi-intervention

Context of the study

Lung cancer is the most common form of cancer, with the highest morbidity and mortality in most countries [22,23,24,25,26,27,28]. The neutrophil-to-lymphocyte ratio (NLR) has been confirmed as an important indicator of cancer prognosis and a risk factor for metastasis in patients with lung cancer, and a high NLR is associated with poor overall survival (OS) [29,30,31,32]. However, most current studies reveal only a correlational relationship between the NLR and OS rather than a causal effect. Our study aimed to identify a causal relationship of the NLR with OS by quasi-intervention and to quantify the impact of the NLR on OS, which contributes to elucidating the cause of cancer and to clinical treatment. Lung cancer patients who were treated in the Affiliated Nanhua Hospital, University of South China, and the First Affiliated Hospital of the University of South China from January 2012 to December 2017 were selected from the experimental dataset as the research participants. A summary of the demographics of the 169 Chinese lung cancer patients is shown in Tables 1 and 2. Table 3 shows the peripheral leukocyte levels in lung cancer patients and normal subjects. No patient had a history of other malignant disease, and samples were collected before any treatment, such as chemoradiotherapy or radiotherapy, was administered.
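Because the analysis that follows dichotomizes patients by their NLR, a minimal illustration of how the ratio is formed from the collected blood counts may be helpful. This is a sketch only; the cut-off of 5 is the one used in the next subsection, and the function and variable names are ours.

```python
# Illustrative only: form the neutrophil-to-lymphocyte ratio (NLR) from
# peripheral blood counts and dichotomize it at a prespecified cut-off.
def nlr(neutrophil_count: float, lymphocyte_count: float) -> float:
    return neutrophil_count / lymphocyte_count

def nlr_group(neutrophil_count: float, lymphocyte_count: float, cutoff: float = 5.0) -> str:
    return "high" if nlr(neutrophil_count, lymphocyte_count) >= cutoff else "low"

print(nlr_group(4.2, 1.5))   # NLR = 2.8 -> "low"
```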
One week before surgery, we determined each patient's Karnofsky Performance Status (KPS) score. In addition, 2–3 mL of fasting peripheral venous blood was collected from each eligible patient into an anticoagulant tube, stored at 4 °C, and examined within 1 h. In addition to the patient demographics (including age, sex, date of diagnosis, smoking status, clinical stage, neutrophil count, KPS score, and lymphocyte count), data were collected for 79 healthy controls with normal lung findings from the physical examination centre of the Affiliated Nanhua Hospital, University of South China. All patients were followed until December 2018 through regular outpatient reviews and by telephone. We extracted anonymized patient records from the electronic patient files. All patients who participated in the present study signed informed consent before the experiment, which was approved by the South China Ethics Committee. The experiment assumed that each patient's neutrophil and lymphocyte counts reflected the patient's own condition; the researchers did not deliberately interfere with their changes.

Table 1 Comparison of clinicopathological characteristics of lung cancer patients in the high NLR group and low NLR group
Table 2 Univariate analysis of patient survival
Table 3 Comparison of peripheral leukocyte levels in lung cancer patients and normal controls

Most patients were aged > 55 years (74.56%, 126/169); 24.26% (41/169) were female, and 26.63% (45/169) were smokers. Based on the standard tumour, node, metastasis (TNM) staging, 12 patients (7.1%) were in stage I + II and 157 patients (92.9%) were in stage III + IV. Among the lung cancer samples there were 73 cases of lung adenocarcinoma, 77 cases of lung squamous cell carcinoma, 17 cases of small cell carcinoma, and two cases of other histological types. Patients were dichotomized according to a prespecified cut-off value of NLR ≥ 5 vs. < 5, as an NLR ≥ 5 has previously been validated as being associated with overall survival (OS) in patients with lung cancer [33]. In addition, the cut-off value of OS was set to the median value of 27.

Determination of causal relationships between the NLR and OS

To test the causal relationship between the variables, patients were subdivided into two groups (n = 130 and n = 39). The higher NLR group had an NLR ≥ 5 (n = 39, 23.08%), and the lower NLR group had an NLR < 5 (n = 130, 76.92%). We took a patient u at random from the lower NLR group and selected a patient v with similar conditions, meaning similar age, the same sex, the same cancer type and so on, from the control set for pairing. Then, the outcome (u, v) variable and the overall evaluation parameter δ of each matched pair were calculated to determine whether there was a causal relationship between the research variables. The matched patients formed 35 pairs, including 22 positive pairs, 10 negative pairs, and 3 zero pairs. Therefore, the value of δ was 34.286%, which provided strong evidence of causality between the NLR and OS. In addition, we used the sign test (95% confidence interval) to ensure the credibility and reliability of the results. The mean (μ) and standard deviation (σ) were 34.5 and 4.153, respectively. The model's Z statistic (2.04656) was larger than Z\(\alpha /2\) (1.96), implying a causal relationship between the two.

Evaluation of the causal effect

Based on the previous results, the NLR should vary systematically with OS, whereas our data analysis (Fig. 3) appeared contradictory.
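The matched-pair statistic and sign test applied in the preceding paragraph can be sketched in a few lines of Python. The sketch reproduces the reported δ from the pair counts; the Z value it prints is computed directly from these counts with the formulas given in the Methods, so it will not necessarily coincide with the Z statistic reported above, which is based on the full matching data.

```python
import math

def delta_and_sign_test(n_pos: int, n_neg: int, n_zero: int):
    """Net outcome delta and sign-test Z for matched pairs, following the
    Methods formulas (mu = s/2, sigma = sqrt(s)/2 after dropping tied pairs)."""
    n_pairs = n_pos + n_neg + n_zero
    delta = (n_pos - n_neg) / n_pairs * 100          # in percent
    s = n_pairs - n_zero                             # pairs after removing ties
    mu, sigma = s / 2, math.sqrt(s) / 2
    z = (n_pos - mu) / sigma
    return delta, z

delta, z = delta_and_sign_test(n_pos=22, n_neg=10, n_zero=3)
print(f"delta = {delta:.3f}%  Z = {z:.3f}")          # delta = 34.286%
```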
Therefore, following the method set out above, we analysed the observed data from different perspectives.

Relationship between baseline NLR and OS

From Table 3 we can draw several conclusions. The peripheral white blood cell count, neutrophil count, and NLR of lung cancer patients were significantly higher than those of healthy controls (P < 0.05), while the lymphocyte count and basophil count were lower than those of healthy controls (P < 0.05), and these differences were statistically significant. For the high NLR group and the low NLR group, we counted the number of patients in each clinicopathological category and compared the groups with the χ2 test. The results showed that the KPS score of lung cancer patients before treatment differed significantly between the NLR groups (P < 0.05), whereas there was no significant difference in the other clinical data, such as histological classification (P > 0.05). In addition, we performed univariate analysis of the peripheral blood leukocyte counts and clinicopathological data of the lung cancer patients. The results showed that smoking, tumour stage, KPS score, and NLR were all factors affecting the survival of lung cancer patients, whereas age, sex, cancer type, white blood cell count, neutrophil count, lymphocyte count, and basophil count were not associated with survival and prognosis.

Combining Tables 1, 2 and 3 with Fig. 4, we find that OS decreases significantly as the NLR increases (Fig. 4e). This can be explained on theoretical grounds. The NLR is an inflammatory marker with high sensitivity and specificity, and it represents the balance between neutrophils, which activate inflammation, and lymphocytes, which regulate it. An elevated NLR essentially reflects an increase in neutrophils and a decrease in lymphocytes; the higher the NLR, the more pronounced the imbalance and the more serious the inflammation. Severe inflammation may reduce the patient's mobility, worsen the disease, and limit the patient's self-care ability, which leads to a lower KPS score. This conclusion is also supported by the literature [20, 34,35,36]. The causal graph between the NLR and OS is shown in Fig. 5a. The modified graphical model (denoted G\(\overline{x}\)), which is needed to quantify the causal relationship between them and represents an intervention on the model in Fig. 5a, is shown in Fig. 5b.

Fig. 4 Relationship between baseline NLR and OS by cancer stage (a, g), cancer type (b, f), age (c, h), sex (d, i), and KPS score (e). Panels a–d show the confounding factor analysis and panels e–i the mediator analysis. Owing to missing data, SPSS was used to impute missing values; for categories with few observations (stage II of cancer), omission and merging were adopted

Fig. 5 Causal graph between the NLR and survival time. a A graphical model representing the causal effects of the NLR on OS; the confounder is an unknown element, and KPS score is a mediator. b An intervention on the model in Fig. 5a that changes the NLR in the population

In this study, X = 1 stands for the higher NLR group and X = 0 for the lower NLR group (as defined above), Z stands for the KPS score of a patient, and Y = 1 stands for the higher OS (defined by the median OS). To evaluate the effect of interventions in this study, we need to eliminate the do operation in \(P (Y |do (X = x))\) and estimate the difference \(P(Y = 1|do(X = 1)) - P(Y = 1|do(X = 0))\).
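As a concrete illustration of how such a difference can be evaluated once the do-operator has been eliminated (the formal derivation follows in the next paragraph), a minimal Python sketch of the resulting mediator-based adjustment, using purely hypothetical probability tables rather than the study data, might look like this:

```python
# Illustrative only: evaluate  P(Y=1 | do(X=x))  via the mediator-based
# adjustment  sum_z P(z | x) * sum_x' P(Y=1 | x', z) * P(x')  derived below.
# The probability tables here are made up for demonstration, not study data.
p_x = {0: 0.77, 1: 0.23}                      # P(X = x)
p_z_given_x = {0: {0: 0.2, 1: 0.8},           # P(Z = z | X = x)
               1: {0: 0.7, 1: 0.3}}
p_y1_given_xz = {(0, 0): 0.45, (0, 1): 0.75,  # P(Y = 1 | X = x, Z = z)
                 (1, 0): 0.15, (1, 1): 0.40}

def p_y1_do_x(x: int) -> float:
    return sum(
        p_z_given_x[x][z] * sum(p_y1_given_xz[(xp, z)] * p_x[xp] for xp in p_x)
        for z in (0, 1)
    )

ace = p_y1_do_x(0) - p_y1_do_x(1)
print(f"P(Y=1|do(X=0)) = {p_y1_do_x(0):.3f}, "
      f"P(Y=1|do(X=1)) = {p_y1_do_x(1):.3f}, ACE = {ace:.3f}")
```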
The derivation process is as follows:

$$P\left(Y=y|do\left(X=x\right)\right) \quad (1)$$
$$=\sum_{Z}P\left(Y|do\left(X\right),Z\right)P\left(Z|do\left(X\right)\right) \quad (2)$$
$$=\sum_{Z}P\left(Y|do\left(X\right),do\left(Z\right)\right)P\left(Z|do\left(X\right)\right) \quad (3)$$
$$=\sum_{Z}P\left(Y|do\left(X\right),do\left(Z\right)\right)P\left(Z|X\right) \quad (4)$$
$$=\sum_{Z}P\left(Y|do\left(Z\right)\right)P\left(Z|X\right) \quad (5)$$
$$=\sum_{X^{\prime}}\sum_{Z}P\left(Y|do\left(Z\right),X^{\prime}\right)P\left(X^{\prime}|do\left(Z\right)\right)P\left(Z|X\right) \quad (6)$$
$$=\sum_{X^{\prime}}\sum_{Z}P\left(Y|Z,X^{\prime}\right)P\left(X^{\prime}|do\left(Z\right)\right)P\left(Z|X\right) \quad (7)$$
$$=\sum_{X^{\prime}}\sum_{Z}P\left(Y|Z,X^{\prime}\right)P\left(X^{\prime}\right)P\left(Z|X\right) \quad (8)$$
$$P\left(Y=y|do\left(X=x\right)\right)=\sum_{z}P\left(Z=z|X=x\right)\sum_{x^{\prime}}P\left(Y=y|X=x^{\prime},Z=z\right)P\left(X=x^{\prime}\right) \quad (9)$$

Formulas (2) and (6) follow from the Bayesian formula; formulas (3), (4) and (7) follow from Rule 2; and formulas (5) and (8) follow from Rule 3. Substituting the experimental data into Eq. (9) gives:

$$P\left(Y=1|do\left(X=0\right)\right)=0.173077\times0.047337+0.453963\times0.195266+0.699023\times0.426036+0.864253\times0.100592=0.481582$$
$$P\left(Y=1|do\left(X=1\right)\right)=0.173077\times0.017751+0.453963\times0.071006+0.699023\times0.12426+0.864253\times0.017751=0.137509$$

Thus, comparing the effect of the higher NLR (X = 1) with the effect of the lower NLR (X = 0), we obtain:

$$ACE=P\left(Y=1|do\left(X=0\right)\right)-P\left(Y=1|do\left(X=1\right)\right)=0.344073$$

giving a clear positive advantage to the lower NLR. The causal effect of the NLR on OS is 34.4%; that is, under the same survival environment, patients with lower NLRs have a higher survival rate.

Accuracy of the result

In medicine, a cohort study is often undertaken to obtain evidence with which to refute a suspected association between cause and effect, and failure to refute a hypothesis often strengthens confidence in it. Crucially, the cohort is identified before the appearance of the disease under investigation, which aids greatly in studying causal associations [37, 38]. In survival analysis, the hazard ratio (HR) [39] is the ratio of the hazard rates corresponding to the conditions described by the two levels of the explanatory variable. In addition to capturing information about the entire Kaplan–Meier (KM) survival curve, the HR provides an estimate of the relative efficacy between treatment groups (e.g., an HR of 0.75 for the OS endpoint means that the mortality rate of the experimental group is reduced by approximately 25% compared with the control group). We therefore selected nine cohort studies on the NLR and the prognosis of lung cancer patients. The relative range of the NLR–OS causal effect determined from the reported HRs was (0.2291, 0.6487). Our result agrees with these literature findings and with real-world data, which further supports the correctness of our method.

There have been many methods for investigating potential causal relationships between disease and treatment options in recent years, which can be categorized into two main types: experimental studies and observational studies. In experimental studies, researchers control the experimental conditions and evaluate the intervention effects. Owing to the nature of experimental studies, financial resources, human resources, and patients' ethical considerations, researchers cannot fully control the exposure of the research participants. Therefore, many findings come from observational studies, specifically from case–control studies. Regardless of the method adopted, the results in most cases only determine causal relationships.
They cannot intervene with observational data, let alone answer the questions needed for medical research. This work presents a new experimental strategy called quasi-intervention for evaluating the effects of specific treatments without clinical trials by using a causal inference method. The quasi-intervention consisted of a QED [5,6,7], sign test [40] and hypothetical intervention [8,9,10,11]. We used the QED to establish the causal association between the intervening and outcome variables and used a sign test to ensure the reliability of the results. Hypothetical intervention can quantify the causal effect without simulating the intervention, which saves money and is easy to implement to evaluate the accuracy of the quasi-intervention by quantifying the causal effect between the NLR and OS. Our results showed that a low or decreased NLR leads to a significant improvement in OS. This result was consistent with a previous study, proving that our method is correct. Compared with other observational studies, our study is unique in the following aspects: The method incorporates as many confounding factors as possible into the study, making the experiment more rigorous. A QED considers known confounders in the data, and hypothetical interventions consider potential confounders. The method relaxes the conditions of the research environment, uses a series of ingenious, intelligent observation methods to simulate the actual experiment, and combines the cause-and-effect diagram to obtain the actual intervention effect. This method can complete some intervention experiments that cannot actually be completed for factors such as patient's obesity, blood pressure, and smoking status. It allows us to determine causal effects in nonexperimental studies. There are some limitations to this study. First, our data was retrospectively collected and selected from the hospital, so there might be selection bias or recall bias. Second, a causal graph critically influences the obtained results, and it is affected by assumptions and confounding factors. Although we excluded some confounders, unmeasured confounders still impacted the results. These factors would introduce more bias and limit the method's generalizability to a broader patient population. In summary, this work provides a new method for evaluating the effect of interventions that can be applied in the fields of clinical medicine. The presented results from our method could provide a causal effect between disease and treatment options. We believe that the proposed method can be applied to clinically relevant research to obtain more results. The datasets generated and/or analysed during the current study are not publicly available due to the sensitive nature of the questions asked in this study, but are available from the corresponding author on reasonable request. Average causal effect QED: Quasi-experimental design NLR: Neutrophil-to-lymphocyte ratio KPS: Karnofsky performance status Pearl J, Mackenzie D. The book of why: the new science of cause and effect. Science. 2018;361(6405):855.852-855. Rubin DB. B: Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol. 1974;66:688. Dimasi JA, Grabowski HG, Hansen RW. Innovation in the pharmaceutical industry: new estimates of R&D costs. J Health Econ. 2016;47:20–33. Brady H, Collier D, Sekhon JS. The Neyman–Rubin model of causal inference and estimation via matching methods. 2008. Harris AD, Bradham DD, Baumgarten M, Zuckerman IH, Perencevich EN. 
The use and interpretation of quasi-experimental studies in infectious diseases. Clin Infect Dis. 2004;38(11):1586–91. Marinescu IE, Lawlor PN, Kording KP. Quasi-experimental causality in neuroscience and behavioural research. Nat Hum Behav. 2018;2(12):891–8. Harris AD, Lautenbach E, Perencevich E. A systematic review of quasi-experimental study designs in the fields of infection control and antibiotic resistance. Clin Infect Dis. 2005;41(1):77–82. Pearl J. Lord's paradox revisited—(Oh Lord! Kumbaya!). J Causal Inference. 2016;4(2):1–13. Robins MJ. Causal models for estimating the effects of weight gain on mortality. Int J Obes. 2008;32(Suppl 3):S15-41. Pearl J. Graphs, causality, and structural equation models. Sociol Methods Res. 1998;27(2):226–84. Pearl J. Interpretation and identification of causal mediation. Psychol Methods. 2014;19(4):459–81. Jin F, Han AQ, Shi F, Kong L, Yu JM. The postoperative neutrophil-to-lymphocyte ratio and changes in this ratio predict survival after the complete resection of stage I non-small cell lung cancer. Oncotargets Ther. 2016;9:6529–37. Xie XH, Liu JJ, Yang HT, Chen HJ, Zhou SJ, Lin H, Liao ZY, Ding Y, Ling LT, Wang XW. Prognostic value of baseline neutrophil-to-lymphocyte ratio in outcome of immune checkpoint inhibitors. Cancer Investig. 2019;37(6):265–74. Forget P, Machiels JP, Coulie PG, Berliere M, Poncelet AJ, Tombal B, Stainier A, Legrand C, Canon JL, Kremer Y, et al. Neutrophil: lymphocyte ratio and intraoperative use of ketorolac or diclofenac are prognostic factors in different cohorts of patients undergoing breast, lung, and kidney cancer surgery. Ann Surg Oncol. 2013;20:S650–60. Abravan A, Salem A, Price G, Faivre-Finn C, van Herk M. Effect of systemic inflammation biomarkers on overall survival after lung cancer radiotherapy: a single-center large-cohort study. Acta Oncol. 2013. https://doi.org/10.1245/s10434-013-3136-x. Lan H, Zhou L, Chi D, Zhou Q, Tang X, Zhu D, Yue J, Liu B. Preoperative platelet to lymphocyte and neutrophil to lymphocyte ratios are independent prognostic factors for patients undergoing lung cancer radical surgery: a single institutional cohort study. Oncotarget. 2017;8(21):35301–10. Liu D, Jin J, Zhang L, Li L, Song J, Li W. The neutrophil to lymphocyte ratio may predict benefit from chemotherapy in lung cancer. Cell Physiol Biochem. 2018;46(4):1595–605. Seong YW, Han SJ, Jung W, Jeon JH, Cho S, Jheon S, Kim K. Perioperative change in neutrophil-to-lymphocyte ratio (NLR) is a prognostic factor in patients with completely resected primary pulmonary sarcomatoid carcinoma. J Thorac Dis. 2019;11(3):819–26. Cedrés S, Torrejon D, Martínez A, Martinez P, Navarro A, Zamora E, Mulet-Margalef N, Felip E. Neutrophil to lymphocyte ratio (NLR) as an indicator of poor prognosis in stage IV non-small cell lung cancer. Clin Transl Oncol. 2012;14(11):864–9. Diem S, Schmid S, Krapf M, Flatz L, Born D, Jochum W, Templeton AJ, Früh M. Neutrophil-to-Lymphocyte ratio (NLR) and Platelet-to-Lymphocyte ratio (PLR) as prognostic markers in patients with non-small cell lung cancer (NSCLC) treated with nivolumab. Lung Cancer. 2017;111:176–81. Morgan MS, Hendry DF. The foundations of econometric analysis: the foundations of econometric analysis. 1995. Carozzi FM, Bisanzi S, Carrozzi L, Falaschi F, Lopes-Pegna A, Mascalchi M, Picozzi G, Peluso M, Sani C, Greco L. Multimodal lung cancer screening using the ITALUNG biomarker panel and low dose computed tomography. Results of the ITALUNG biomarker study. Int J Cancer. 2017;141:94–101. 
Chunshan S, Haiyang Y, Dejun S, Lili M, Zhaohui T. Cisplatin-loaded polymeric nanoparticles: characterization and potential exploitation for the treatment of non-small cell lung carcinoma. Acta Biomater. 2015;18:68–76. Su Y, Hu Y, Wang Y, Xu X, Yuan Y, Li Y, Wang Z, Chen K, Zhang F, Ding X. A precision-guided MWNT mediated reawakening the sunk synergy in RAS for anti-angiogenesis lung cancer therapy. Biomaterials. 2017;139:75–90. Freddie B, Jacques F, Isabelle S, Rebecca SL. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68:394–424. Zhang Z, Zeng K, Zhao S, Zhao Y, Hou X, Luo F, Lu F, Zhang Y, Zhou T, Ma Y, et al. Pemetrexed/carboplatin plus gefitinib as a first-line treatment for EGFR-mutant advanced nonsmall cell lung cancer: a Bayesian network meta-analysis. Ther Adv Med Oncol. 2019;11:1758835919891652. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2018. CA Cancer J Clin. 2018;60(suppl 12):277–300. Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, Bray F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021. https://doi.org/10.3322/caac.21660. Diem S, Schmid S, Krapf M, Flatz L, Born D, Jochum W, Templeton AJ, Fruh M. Neutrophil-to-Lymphocyte ratio (NLR) and Platelet-to-Lymphocyte ratio (PLR) as prognostic markers in patients with non-small cell lung cancer (NSCLC) treated with nivolumab. Lung Cancer. 2017;111:176–81. Bagley SJ, Kothari S, Aggarwal C, Bauml JM, Alley EW, Evans TL, Kosteva JA, Ciunci CA, Gabriel PE, Thompson JC, et al. Pretreatment neutrophil-to-lymphocyte ratio as a marker of outcomes in nivolumab-treated patients with advanced non-small-cell lung cancer. Lung Cancer. 2017;106:1–7. He JR, Shen GP, Ren ZF, Qin H, Cui C, Zhang Y, Zeng YX, Jia WH. Pretreatment levels of peripheral neutrophils and lymphocytes as independent prognostic factors in patients with nasopharyngeal carcinoma. Head Neck. 2012;34(12):1769–76. Templeton AJ, McNamara MG, Šeruga B, Vera-Badillo FE, Aneja P, Ocaña A, Leibowitz-Amit R, Sonpavde G, Knox JJ, Tran B, et al. Prognostic role of neutrophil-to-lymphocyte ratio in solid tumors: a systematic review and meta-analysis. J Natl Cancer Inst. 2014;106(6):dju124. Sarraf KM, Belcher E, Raevsky E, Nicholson AG, Goldstraw P, Lim E. Neutrophil/lymphocyte ratio and its association with survival after complete resection in non–small cell lung cancer. J Thorac Cardiovasc Surg. 2009;137(2):425–8. Mandaliya H, Jones M, Oldmeadow C, Nordman II. Prognostic biomarkers in stage IV non-small cell lung cancer (NSCLC): neutrophil to lymphocyte ratio (NLR), lymphocyte to monocyte ratio (LMR), platelet to lymphocyte ratio (PLR) and advanced lung cancer inflammation index (ALI). Transl Lung Cancer Res. 2019;8(6):886–94. Russo A, Russano M, Franchina T, Migliorino MR, Aprile G, Mansueto G, Berruti A, Falcone A, Aieta M, Gelibter A, et al. Neutrophil-to-lymphocyte ratio (NLR), platelet-to-lymphocyte ratio (PLR), and outcomes with nivolumab in pretreated non-small cell lung cancer (NSCLC): a large retrospective multicenter study. Adv Ther. 2020;37(3):1145–55. Liu J, Li S, Zhang S, Liu Y, Ma L, Zhu J, Xin Y, Wang Y, Yang C, Cheng Y. Systemic immune-inflammation index, neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio can predict clinical outcomes in patients with metastatic non-small-cell lung cancer treated with nivolumab. J Clin Lab Anal. 2019;33(8):e22964. 
Power C. Elliott, Jane: Cohort profile: 1958 British birth cohort (National Child Development Study). Int J Epidemiol. 2006;35(1):34–41. Schlesselman J. Sample size requirements in cohort and case-control studies of disease. Am J Epidemiol. 1974;99:381. Spruance SL, Reid JE, Grace M, Samore M. Hazard ratio in clinical trials. Antimicrob Agents Chemother. 2004;48(8):2787–92. Diebold FX, Mariano RS. Comparing predictive accuracy. J Bus Econ Stat. 1995;13(3):134–44. This research was supported by grants from Innovation Special Zone Project (NO. 17-163-15-XJ-002-002-04); Hunan Province's 2020 Innovative Province Construction Special Project to Fight the New Coronary Pneumonia Epidemic Response Support (2020SK3010); Hunan Provincial Education Department Key Project (17A185); Hunan Province Graduate Student Research and Innovation Project Funding (CX20200936); Innovation Special Zone Project (NO. 18-163-15-LZ-001-002-09); Postgraduate Scientific Research Innovation Project of Hunan Province(CX20210938). School of Computer Science, University of South China, Hengyang, China Jinghua Yang, Yaping Wan, Jin Wang, Xiapeng Zhang & Lifang Zhou Hunan Provincial Base for Scientific and Technological Innovation Cooperation, Hengyang, China Yaping Wan Hunan Cancer Hospital, Changsha, China Qianxi Ni School of Nuclear Science and Technology, University of South China, Hengyang, China Jinghua Yang & Qianxi Ni The Third Affiliated Hospital of South China University, Hengyang, China Jianhong Zuo Jinghua Yang Jin Wang Xiapeng Zhang Lifang Zhou JHY and YPW contributed to the study design; QXN provides research data; JHY analysed the data; JHY, YPW, JW, LFZ, and XPZ interpreted the results; JHY prepared the figures; JHY and YPW drafted the manuscript; JHY, YPW, QXN, JW, LFZ, and XPZ edited and revised the manuscript. All authors read and approved the final manuscript. Correspondence to Jinghua Yang, Yaping Wan or Qianxi Ni. All patients who participated in the present study signed informed consent before the experiment, which was approved by the South China Ethics Committee. All methods involved in the collection of these data were performed in accordance with the relevant guidelines and regulations. Yang, J., Wan, Y., Ni, Q. et al. Quantifying causal effects from observed data using quasi-intervention. BMC Med Inform Decis Mak 22, 337 (2022). https://doi.org/10.1186/s12911-022-02086-z Causal effect Do-algorithm Quasi-intervention
Find the center, radius and intercepts of the circle

The standard form of the equation of a circle with center (h, k) and radius r is (x − h)² + (y − k)² = r². When an equation is given in general form, x² + y² + Dx + Ey + F = 0, it can be rewritten in standard form by completing the square: group the x terms and the y terms, then add (half the x coefficient)² and (half the y coefficient)² to each side of the equation. For example, if the x coefficient is 12, add (12/2)² = 36 to both sides. Once the equation is in standard form, the center (h, k) and the radius r can be read off directly.

An x-intercept is a point where the graph touches or crosses the x-axis, and a y-intercept is a point where it touches or crosses the y-axis. To find the x-intercepts, set y = 0 in the equation and solve for x; to find the y-intercepts, set x = 0 and solve for y. A circle may have two, one, or no intercepts on each axis. Online circle calculators (Tiger Algebra, Mathway-style solvers and similar tools) apply exactly these steps: given either the general or the standard form they report the center, radius, diameter, circumference, area, and the x- and y-intercepts, and given a center (h, k) and radius r they return the equation.

Typical practice problems of this kind ask for the center, radius, and intercepts of circles such as x² + y² + 10y − 24 = 0, x² + y² + 8x + 2y + 8 = 0, x² + y² + 10x + 8y + 16 = 0, x² + y² + 6x − 12y + 20 = 0, x² + y² − 4x − 8y + 19 = 0, 3x² + 36x + 3y² = 0, x² + y² + 6x − 6y − 46 = 0, x² + y² + 4x + 2y − 20 = 0, and x² + y² − 4x + 2y − 11 = 0. Worked answers given for some examples include: the circle with center (7, −3) and radius 7 has equation (x − 7)² + (y + 3)² = 49; the circle with center (−3, −5) and radius 6 has equation (x + 3)² + (y + 5)² = 36; (x + 1)² + (y − 2)² = 36 has center (−1, 2); one standard-form example reads off h = 2, k = 1 and r = 2; the circle with center (3, −5) and radius 7, and the circle (x − 6)² + (y + 9)² = 12 with center (6, −9), are treated the same way.

A circle is the set of all points in a plane that lie a fixed distance from a fixed point; the fixed distance is the radius and the fixed point is the center. The diameter is twice the radius, so the radius can also be recovered from the circumference: r = C/(2π) (for example, a circumference of 15 gives r ≈ 15/(2 × 3.14) ≈ 2.39). For a circle centred at the origin, the relationship between the x- and y-coordinates of a point on the circle and the radius follows from a right triangle: x² + y² = r². To graph a circle, put the equation in standard form, find the center and radius, mark points r units up, down, left, and right of the center, find and plot the intercepts, and draw the circle through these points.

Arc length is related to the radius and the central angle (in radians) by S = rθ, and dividing an arc length by the radius gives the angular displacement in radians (for example, 1.72414 rad ≈ 0.549π). With radius 3.6 and a central angle of 63.8°, the arc length is 3.6 × (63.8 × π/180) ≈ 4.0087. Conversely, if a central angle of 60° intercepts an arc of length 37.4 cm, then θ = 60 × π/180 = π/3 rad, and r = l/θ = 37.4 × 3/π = 37.4 × 21/22 = 35.7 cm (using π = 22/7).

Several related problems also appear. The perpendicular from the center of a circle bisects a chord, so the length of an intercepted chord can be found by computing one half of it from the radius and the perpendicular distance and then doubling. A line meets a circle when the perpendicular distance from the center to the line is no greater than the radius; for a horizontal line y = c crossing a circle of radius r centred at the origin, the intersection points lie at x = ±√(r² − c²). Two circles overlap when the distance between their centers, d = |B − A| = √((Bx − Ax)² + (By − Ay)²), is less than the sum of their radii, and an online two-circle calculator finds the intersection points from the two centers and radii. If the endpoints of a diameter are known (for example (−3, 2) and (1, 4)), the center is their midpoint and the radius is half the distance between them. If the y-axis is tangent to the circle, the radius equals the absolute value of the x-coordinate of the center. Other practice questions ask for the equation of a circle given its center and radius (for example center (2, −3) with radius 4, center (−2, 5) with radius 3, or center (3, −7) with radius 5), for the radius of a circle with center (2, 3) and an x-intercept at −2, for the locus of points 5 units from (−3, 6), which is the circle with center (−3, 6) and radius 5, and for the equation of a circle with center (3, −1) that cuts off a chord of length 6 on the line 2x − 5y + 18 = 0.
Write the equation of the circle in general form. Show your work. 3. Write the equation of a parabola with focus (-2, 4) and directrix y = 2. Show your work, including a sketch. Answer: a) Find the measurement of the central angle labeled x°. b) Express the central angle in radians. 72 8 8) a) Find the length of the radius. b) Find the central angle in radians and degrees. 9) In a circle whose radius measures 5 feet, a central angle intercepts an arc of length 12 feet. Find the radian measure of the central angle. Find the standard form of the equation of the circle with the given characteristics. 1. Center: (0, 0); point on circle: (8, -15). 2. Endpoints of diameter: (-2, 3) and (6, -…) Center: -2. Write the equation of the circle in standard form. Sketch the circle and then identify its center, radius, x-intercepts, and y-intercepts. Finding the Center-Radius Form of a Circle Given the Endpoints of the Diameter. The technique of completing the square is used to turn a quadratic into the sum of a squared binomial and a number: (x − a)² + b. The center-radius form of the circle equation is in the format (x − h)² + (y − k)² = r², with the center being at the point (h, k) and the radius being r. Ex 11.1, 12 (Method 1): Find the equation of the circle with radius 5 whose centre lies on the x-axis and passes through the point (2, 3). We know that the equation of a circle is (x − h)² + (y − k)² = r². The centre of the circle is denoted by (h, k). Since it lies on the x-axis, k = 0, hence the centre is (h, 0). Sketch the graph of each circle. Find the center, radius, x- and y-intercepts (as points). Leave values in simplified radical form and as approximations (this will help you when you graph). If there are no x- or y-intercepts, write none. = 25 Center: Radius: x-intercepts: y-intercepts: Sketch the graph of each ellipse. State the major axis, = 1. If the radius of a circle is not given, it will need to be determined before an equation can be written. Write the equation of a circle with a center at (-3, 6) which passes through (5, -1). Find intercepts and domain and range. NOTE: If the equation of a circle is written in standard form, its center and radius can easily be read. Radius, r: 7. 10 inches 8. 5 feet 9. 6 yards 10. 8 yards 11. 1 meter 12. 1 meter 1. 1350 3. 83.1350 2. 1770 4. 87.1770 In Exercises 7-12, find the radian measure of the central angle of a circle of radius r that intercepts an arc of length s. A circle of radius 1 unit touches the positive x-axis and positive y-axis at A and B, respectively. A variable line passing through the origin intersects the circle in two points D and E. The area of the triangle DEB is maximum when the slope of the line is m. Find the value of m.
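As a worked illustration of the completing-the-square technique described above (an added example, not part of the original exercises), take the equation posed earlier, x² + y² − 4x + 2y − 11 = 0. Grouping and completing the square in x and y gives (x² − 4x + 4) + (y² + 2y + 1) = 11 + 4 + 1, i.e. (x − 2)² + (y + 1)² = 16, so the center is (2, −1) and the radius is 4. Setting y = 0 gives the x-intercepts x = 2 ± √15, and setting x = 0 gives the y-intercepts y = −1 ± 2√3.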
CommonCrawl
1-D inversion analysis of a shallow landslide triggered by the 2018 Eastern Iburi earthquake in Hokkaido, Japan Jun Kameda ORCID: orcid.org/0000-0002-2963-24021 & Atsushi Okamoto2 Earth, Planets and Space volume 73, Article number: 116 (2021) Cite this article Destructive landslides were triggered by the 6.7 Mw Eastern Iburi earthquake that struck southern Hokkaido, Japan, on 6 September 2018. In this study, we carried out 1-D inversion analysis of one of the shallow landslides near the epicenter using a Bing debris-flow model. At this site, the slope failure comprised cover soil with an initial down-slope length of ~ 80 m and a thickness of ~ 7 m on a slope with < 20° dip. The landslide moved southeastward with a run-out distance of ~ 100 m. Inversion analysis of the post-failure deposit geometry was conducted with the Markov Chain Monte Carlo method (MCMC) to optimize the Bingham rheological parameters of the debris. The analysis reproduced several features of the deposit geometry with a yield stress of ~ 1500 Pa and dynamic viscosity of 800–3000 Pa s. The results suggest that the shallow landslide can be approximated by the flow of a viscoplastic fluid with high-mobility debris and a maximum frontal velocity of 6–9 m/s, with a flow duration of 2–4 min. The Eastern Iburi earthquake (Mw = 6.7) occurred in southern Hokkaido, Japan, on 6 September 2018 (Fig. 1a). The resulting strong ground motion had a maximum intensity of 7 on the Japan Meteorology Agency (JMA) intensity scale and caused shallow landslides as well as a few large-scale deep-seated landslides near the epicenter (Yamagishi and Yamazaki 2018; Osanai et al. 2019). Most of the shallow landslides show long runouts of debris on relatively gentle slopes (< 30°; Kasai and Yamada 2019; Osanai et al. 2019) and are classified as earthflows or earth slides according to the scheme of Varnes (1978). The widespread hills in southern Hokkaido are covered by volcanic soils derived from material produced by several nearby volcanos. The soils are thought to have been weakened by heavy rainfall from a typhoon that passed through the area the day before the earthquake (Osanai et al. 2019). a Location of the epicenter of the Eastern Iburi earthquake (red cross) and the studied landslide in the Asahi district (42° 44′ 10.6″ N, 141° 52′ 39.8″ E; black arrow). b Aerial view of the landslide (taken by the Geospatial Information Authority of Japan 2018). c Cross-section of the landslide based on the field survey by Li and Wang (2020). Soils before and after the failure are marked by light blue and green, and yellow and green, respectively Field observations of the shallow landslides revealed that the volcanic soil close to the slip zone is clay-rich and contains the clay mineral halloysite (Chigira et al. 2019; Kameda et al. 2019). Halloysite is a typical alteration product of volcanic glass (Wada 1977) and is commonly observed on the slip surfaces of other landslide sites in volcanic areas of Japan (Chigira and Yokoyama 2005; Chigira et al. 2012). The field survey also found evidence of liquefaction of volcanic soil (Kameda et al. 2019). Geotechnical experiments using a triaxial apparatus have demonstrated that the volcanic soils around the slip zone have a low shear resistance and are more susceptible to liquefaction than are other layers (Li and Wang 2020). The earthquake possibly caused liquefaction and fluidization of the water-saturated soils, resulting in multiple shallow landslides in this area. 
In addition to the works mentioned above, other studies have analyzed the landslides by field surveys, laboratory experiments, and image analysis (e.g., Kawamura et al. 2019; Wang et al. 2019; Zhang, et al. 2019; Ito et al. 2020), and they reported on the geological history and triggering mechanisms. To complement these studies, we conducted numerical modeling of the landslide and discuss its post-failure behavior. We selected one of the shallow landslides for our study, whose geometry was investigated in a previous field survey (Li and Wang 2020). We employ a Bing debris-flow model (Imran et al. 2001) for inversion analysis of the landslide geometry and discuss the rheological properties of the collapse debris to interpret its post-failure flow behavior. The studied landslide is located in the Asahi district of the Atsuma town in Hokkaido (42° 44′ 10.6″ N, 141° 52′ 39.8″ E; Fig. 1a). The volcanic soil slid southeastward on a slope that dips 19° at its highest point and decreases in dip down-slope. The run-out distance of the landslide is ~ 100 m (Fig. 1b, c; Li and Wang 2020). This landslide was selected for numerical analysis because of its simple geometry. Figure 1c shows a cross-section through the slope. Due to poor field exposure, we assume the depth of the slip plane on the slope based on the thickness of the post-failure debris, with the depth increasing down-slope. Despite making this assumption, our model successfully reproduces the whole geometry of the deposit including the slope. Forward modeling We used a Bing debris-flow model (Imran et al. 2001) to conduct a numerical analysis of the landslide. This model assumes two layers of debris (i.e., an upper plug-layer and a lower shear-layer) that flow laminarly with rheological soil properties described by a viscoplastic Herschel–Bulkley model: $$\left| {\frac{\gamma }{{\gamma_{{\text{r}}} }}} \right|^{n} = \left\{ {\begin{array}{*{20}ll} {0 \quad {\text{if}}\; \left| \tau \right| < \tau_{{\text{y}}} } \\ {\frac{\tau }{{\tau_{{\text{y}}} {\text{sgn}} \left( \gamma \right)}} - 1 \quad {\text{if}} \; \left| \tau \right| \ge \tau_{{\text{y}}} } \\ \end{array} } \right.,$$ where \(\tau\) is the shear stress, \(\tau_{{\text{y}}}\) is the yield stress, and \(\gamma\) is the strain rate. A reference strain rate, denoted by \(\gamma_{{\text{r}}}\), is defined as follows: $$\gamma_{{\text{r}}} = \left( {\frac{{\tau_{{\text{y}}} }}{\mu }} \right)^{\frac{1}{n}} ,$$ where \(\mu\) is the dynamic viscosity. The volcanic soil of the studied landslide is assumed to behave as a Bingham fluid, which is a limiting case for the Herschel–Bulkley model with n = 1. The Bingham model is the most commonly used viscoplastic model that describes the behavior of debris or mud flows (Jiang and LeBlond 1993; Wan and Wang 1994; Julien 1995; Naruse 2016). The integral equations of the debris flow on a Lagrangian framework are described by the following three conservation equations for mass and momentum (Jiang and LeBlond 1993; Huang and Garcia 1998; Pratson et al. 2001; Imran et al. 
2001; Naruse 2016): $$\frac{\partial D}{{\partial t}} + \frac{\partial }{\partial x}\left[ {U_{{\text{P}}} \left( {D_{{\text{P}}} + \frac{2}{3}D_{{\text{S}}} } \right)} \right] = 0,$$ $$\frac{\partial }{\partial t}\left( {U_{{\text{P}}} D_{{\text{P}}} } \right) + U_{{\text{P}}} \frac{{\partial D_{{\text{S}}} }}{\partial t} + \frac{\partial }{\partial x}\left( {U_{{\text{P}}}^{2} D_{{\text{P}}} } \right) + \frac{2}{3}U_{{\text{P}}} \frac{\partial }{\partial x}U_{{\text{P}}} D_{{\text{S}}} = - gD_{{\text{P}}} \left[ {1 - \frac{\rho }{{\rho_{{\text{m}}} }}} \right]\frac{\partial D}{{\partial x}} + gD_{{\text{P}}} \left[ {1 - \frac{\rho }{{\rho_{{\text{m}}} }}} \right]S - \frac{\mu }{{\rho_{{\text{m}}} }},$$ $$\frac{2}{3}\frac{\partial }{\partial t}\left( {U_{{\text{P}}} D_{{\text{S}}} } \right) - U_{{\text{P}}} \frac{{\partial D_{{\text{S}}} }}{\partial t} + \frac{8}{15}\frac{\partial }{\partial x}\left( {U_{{\text{P}}}^{2} D_{{\text{S}}} } \right) - \frac{2}{3}U_{{\text{P}}} \frac{\partial }{\partial x}U_{{\text{P}}} D_{{\text{S}}} = - gD_{{\text{S}}} \left[ {1 - \frac{\rho }{{\rho_{{\text{m}}} }}} \right]\frac{\partial D}{{\partial x}} + gD_{{\text{S}}} \left[ {1 - \frac{\rho }{{\rho_{{\text{m}}} }}} \right]S - 2\frac{\mu }{{\rho_{{\text{m}}} }}\frac{{U_{{\text{P}}} }}{{D_{{\text{S}}} }},$$ where \(U_{{\text{P}}}\) is the plug-layer velocity, \(D_{{\text{P}}}\), \(D_{{\text{S}}}\), and D are the plug-layer, shear-layer, and total thickness of the debris, respectively, S is the slope angle, t is time, x is the position along the slope (downward is positive), \(g\) is gravitational acceleration, \(\rho\) is the density of air, and \(\rho_{{\text{m}}}\) is the density of the debris material. According to Li and Wang (2020), the clay-rich volcanic soil near the base of the landslide (known as Ta-d pumice MS) has a specific gravity of 2.631 (\(\rho_{{\text{m}}}\) = 1177 kg/m3) and a water content of 206% at a degree of saturation (Sr) of 92.8%. The upper volcanic soil (Ta-d pumice with volcanic ash) has a specific gravity of 2.553 (\(\rho_{{\text{m}}}\) = 1122 kg/m3) and a water content of 121% at a degree of saturation (Sr) of 76.2%. If these soils were fully saturated at failure (i.e., Sr = 100%), \(\rho_{{\text{m}}}\) is estimated to be 1240 kg/m3 for the upper layer and 1310 kg/m3 for the lower layer. Therefore, we assume \(\rho_{{\text{m}}}\) = 1300 kg/m3 in our model. We developed a MATLAB code to numerically solve Eqs. (3)–(5). The equations were assembled in a staggered lattice using a finite difference method with a deformable moving grid system. Following Imran et al. (2001), a numerical viscosity of 0.001 was used to prevent the solution from becoming unstable. More detailed information on the solution procedure can be found in Imran et al. (2001). The calculation was stopped when the frontal velocity of the debris flow decreased to 1 cm/s. Inversion analysis Based on the geometry of the landslide deposit, inversion analysis was conducted to optimize the parameters \(\tau_{{\text{y}}}\) and \(\gamma_{{\text{r}}}\) in Eq. (2). Since our analysis is 1-D, lateral flow is ignored. However, the volume of debris before and after failure is not equal along the constructed cross-section (Fig. 1c), with the latter being larger than the former by 17%. This implies that soils from outside the survey line contributed to the final geometry of the deposit. 
For this reason, we use a volume of the post-failure deposit that is artificially reduced (shortened vertically), so that it is identical to the pre-failure volume. Due to these limitations, we aim at reproducing key features of the debris flow such as run-out distance and position of the hump of the deposit. In the inversion analysis, we defined an objective function as a residual sum of squares between the observed (corrected) and modeled deposit thickness $$F = \mathop \sum \limits_{i = 1}^{n} \left( {D_{{{\text{o}}i}} - D_{i} } \right)^{2} ,$$ where n is the number of grid points and \(D_{{{\text{o}}i}}\) is the observed thickness of the deposit at i grid point. Based on the result of the forward model, the thickness of the modeled deposit at each grid point (\(D_{i}\)) was calculated using the one-dimensional data interpolation method. The analysis was done over the length of the deposit, from x = 76.5 to 184.68 m, which consisted of 58 grid points. The source area was excluded from the inversion analysis (x = 0 to 76.5 m), because we assumed the thickness of the deposit on this area as described above. Since the F function is non-linearly dependent on the fitting parameters, a Markov Chain Monte Carlo (MCMC) method was applied to optimize the fitting parameters over a range of values. Details of this method are outlined by Metropolis et al. (1953). For an MCMC simulation, a candidate value for one of the unknown parameters is tested by adding a random number (nr = [− 1,1]) to the old candidate, namely log10(Pi,can) = log10(Pi) + nr, where Pi and Pi,can are the old and current candidates, respectively. Therefore, the candidate parameter vector for one local step can be described as \(\varphi_{{{\text{can}}}} = \left\{ {P_{{1,{\text{can}}}} , P_{2} , \ldots , P_{N} } \right\}\). \(F\left( {\varphi_{{{\text{can}}}} } \right)\) is then calculated from these parameters and \(\Delta F\) is evaluated from \(\Delta F = F\left( \varphi \right) - F\left( {\varphi_{{{\text{can}}}} } \right).\) The candidate vector \(\varphi_{{{\text{can}}}}\) is accepted if the probability is \(\min \left( {1, \exp \left( { - \Delta F\beta_{N} } \right)} \right)\), where \(\beta_{N}\) is the "inverse temperature", which controls the acceptance or rejection of the candidate values. At low \(\beta_{N}\), the candidate value is updated when \(- \Delta F\) > 0. At high \(\beta_{N}\), this is not the case. The appropriate \(\beta_{N}\) is problem-dependent, and in this work, \(\beta_{N}\) was set to 0.5–1.0 by trial-and-error (βN was fixed during one series of calculations). After one local update, the next unknown parameter is tested in a similar way. One Monte Carlo Step (MCS) is defined as N trials (i.e., the number of unknown parameters examined) of the local update, and Esum is obtained by the arithmetic sum of F of each MCS. Schematic flowchart of the MCMC method for our inversion analysis is shown in Fig. 2. Schematic flowchart of the Markov Chain Monte Carlo (MCMC) method for our inversion analysis. MCS Monte Carlo step The results of MCMC inversion are shown on a yield stress–viscosity (\(\tau_{{\text{y}}} - \mu\)) plot (Fig. 3). The parameters in the simulation are treated in logarithmic form (log10(\(\tau_{{\text{y}}}\)) and log10(\(\gamma_{{\text{r}}}\))), but to clearly illustrate the results, Esum values are plotted against \(\tau_{{\text{y}}}\) and \(\mu\). We used several initial value conditions indicated by the red crosses in Fig. 3. 
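To make the local update procedure described above concrete, a minimal sketch in Python is given below (the published analysis used a MATLAB code, so this is an illustration rather than the authors' implementation; the forward-model helper run_bing_model, the observation arrays x_obs and D_obs, and the use of the standard Metropolis acceptance rule are assumptions made here for illustration).

```python
import numpy as np

def misfit(log_params, x_obs, D_obs):
    """Residual sum of squares F (Eq. 6) between observed and modelled thickness."""
    tau_y, gamma_r = 10.0 ** log_params            # parameters are handled in log10 form
    x_mod, D_mod = run_bing_model(tau_y, gamma_r)  # hypothetical forward-model call (Eqs. 3-5)
    D_interp = np.interp(x_obs, x_mod, D_mod)      # 1-D interpolation onto the observation grid
    return np.sum((D_obs - D_interp) ** 2)

def mcmc(log_params0, x_obs, D_obs, n_mcs=8500, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    log_params = np.asarray(log_params0, dtype=float)
    F_old = misfit(log_params, x_obs, D_obs)
    chain, E_sum = [], []
    for _ in range(n_mcs):                 # one Monte Carlo step (MCS) = N local updates
        F_mcs = 0.0
        for i in range(log_params.size):   # N = 2 unknowns: log10(tau_y), log10(gamma_r)
            cand = log_params.copy()
            cand[i] += rng.uniform(-1.0, 1.0)      # random number nr in [-1, 1]
            F_new = misfit(cand, x_obs, D_obs)
            # standard Metropolis rule: accept improvements always, otherwise
            # accept with probability exp(-(F_new - F_old) * beta)
            if F_new <= F_old or rng.random() < np.exp(-(F_new - F_old) * beta):
                log_params, F_old = cand, F_new
            F_mcs += F_old
        chain.append(log_params.copy())
        E_sum.append(F_mcs)                # sum of F over the MCS
    return np.array(chain), np.array(E_sum)
```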
However, after several MCSs, the resulting viscosity and yield stress values converged to two distinct domains: a high-viscosity domain and a low-viscosity domain. Esum values for these domains are similar, ranging from ~ 17 to 20, which corresponds to an F value of ~ 9 to 10 and a trial number N = 2. Yield stress–viscosity diagram showing the results of the MCMC simulation. Initial values are shown by red crosses. Resulting values are plotted with their corresponding Esum value indicated by the color scale. The inferred boundary between gravel-rich and sand-rich soils (grey dash line) is from Jeong et al. (2010) Figure 4 shows how \(\tau_{{\text{y}}}\) and \(\mu\) are updated with the progress of the MCMC simulation after a converged state is achieved. In a converged state, the accepted yield stress values vary steadily between 1450 and 1800 Pa, while the viscosity values fluctuate between two distinct ranges of values: ~ 1000 Pa s and 1500–3000 Pa s (Fig. 4). Of the > 8000 MCSs, the ~ 6200 MCSs yield viscosity values in the high-viscosity domain, while the remaining ~ 2300 MCSs yields viscosity values in the low-viscosity domain. Based on these results, the probability distributions of log10(\(\tau_{{\text{y}}}\)) and log10(\(\gamma_{{\text{r}}}\)) values in the two domains are individually estimated (Fig. 5). The mean and standard deviation of log10(\(\tau_{{\text{y}}}\)) and log10(\(\gamma_{{\text{r}}}\)) in the high-viscosity domain are 3.192 (1σ = 0.0169) and − 0.151 (1σ = 0.0772), respectively, corresponding to a yield stress of 1554 Pa and viscosity of 2200 Pa s (Fig. 5a, b). The mean and standard deviation of log10(\(\tau_{{\text{y}}}\)) in the low-viscosity domain are comparable to the values in the high-viscosity domain at 3.201 (1σ = 0.0166), which gives a similar yield stress of 1588 Pa (Fig. 5c). However, log10(\(\gamma_{{\text{r}}}\)) in the low-viscosity domain has a larger mean value of 0.225 and lower standard deviation (1σ = 0.0347) than that in the high-viscosity domain and, consequently, a lower viscosity of 945 Pa s (Fig. 5d). Behavior of yield stress and viscosity over 8500 MCSs. Left: fitting parameter log10(\(\tau_{{\text{y}}}\)) recalculated as \(\tau_{{\text{y}}}\). Right: fitting parameter log10(\(\gamma_{{\text{r}}}\)) recalculated as \(\mu\) Probability distributions of log10(\(\tau_{{\text{y}}}\)) and log10(\(\gamma_{{\text{r}}}\)) obtained by the MCMC simulation. Mean values (dashed line) and standard deviations (grey shading) are shown for the high-viscosity (a, b) and low-viscosity (c, d) domains Figure 6 shows the results of the forward modeling based on the two optimized parameter sets obtained from the MCMC simulation. The deposit thickness and frontal velocity are plotted against the distance from the top of the source area for both the high-viscosity and low-viscosity parameter sets. In the case of the high-viscosity parameter set (Fig. 6a), the debris flow accelerates to 6.5 m/s before quickly decelerating to 1 m/s. It then continues to flow with a velocity of < 1 m/s until it stops moving, 252 s after initial failure, at a distance of 175 m (F = 9.185). In the low-viscosity case (Fig. 6b), the debris flow accelerates to a frontal velocity of 9 m/s and keeps this high-velocity state for a certain period. The debris flow then quickly decelerates and stops flowing after 115 s (F = 9.033). Although the low-viscosity debris flow is completed in a shorter timeframe than the high-viscosity debris flow, the final deposit profiles are similar in both cases. 
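As a quick check on the numbers quoted above, the conversion from the fitted parameters log10(\(\tau_{{\text{y}}}\)) and log10(\(\gamma_{{\text{r}}}\)) to the reported yield stress and viscosity follows directly from Eq. (2) with n = 1, so that \(\mu = \tau_{{\text{y}}}/\gamma_{{\text{r}}}\); a minimal sketch:

```python
def to_physical(log10_tau_y, log10_gamma_r):
    """Convert fitted log10 parameters to yield stress (Pa) and Bingham viscosity (Pa s)."""
    tau_y = 10.0 ** log10_tau_y       # yield stress
    gamma_r = 10.0 ** log10_gamma_r   # reference strain rate, Eq. (2) with n = 1
    mu = tau_y / gamma_r              # Bingham (dynamic) viscosity
    return tau_y, mu

print(to_physical(3.192, -0.151))  # ~(1.55e3 Pa, 2.2e3 Pa s): the high-viscosity domain values
print(to_physical(3.201,  0.225))  # ~(1.59e3 Pa, 0.95e3 Pa s): the low-viscosity domain values
```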
Results of forward modeling of the debris flow with optimized viscosity and yield stress parameters. The thickness of the deposit is plotted against distance from the top of the source area. a Results of the high-viscosity debris flow with \(\tau_{{\text{y}}}\) = 1554 Pa and \(\mu\) = 2200 Pa s. b Results of the low-viscosity debris flow with \(\tau_{{\text{y}}}\) = 1588 Pa and \(\mu\) = 945 Pa s. Black dashed line: initial deposit profile; black line: final modeled deposit profile; red crosses: observed deposit profile; grey circles: frontal velocity of the debris flow (m/s) Our 1-D inversion analysis yields rheological parameters that closely reproduce the observed deposit profile (Fig. 6). The parameters are separated into two domains that both lie outside the field of sand-rich soils as classified by Jeong et al. (2010) (Fig. 3). The boundaries between the fields of sandy and silty soils, and between silty and clay-rich soils occur at lower viscosities than those obtained for the studied landslide (Locat 1997; Jeong et al. 2010). Although this suggests that the rheology of the volcanic soil is more comparable to that of a gravel-rich soil, care should be taken in making this comparison. The basal volcanic soil of a similar landslide (Ta-d) triggered by the same earthquake contains the clay mineral halloysite (Kameda et al. 2019), and suspensions of halloysite generally exhibit shear-rate-dependent resistance as assumed in this study (Theng and Wells 1995), but the rheological properties of halloysite-bearing soils are poorly defined. Moreover, the parameters obtained in this study likely represent the bulk rheological properties of the collapsed soil rather than of the sorted fine-grained materials that are generally used in laboratory experiments (Locat 1997; Jeong et al. 2010). On the other hand, several studies have reported the rheological properties of soils involved in flow-like landslides based on numerical modeling (Malet et al. 2004; Carrière et al. 2018). Using a Bing model, Carrière et al. (2018) conducted back analyses of several flow-like landslides in Pont-Bourquin, Switzerland, that have a geometry (maximum thickness of 3 m, length of 100 m, run-out distance of 130–160 m) similar to that of the present study. They reported possible yield stresses of 550–1300 Pa. Their results are not directly comparable to our study, since they used the Herschel–Bulkley model with n = ~ 0.3 and the debris is composed mainly of quartz and mica (Carrière et al. 2018), which are absent in the volcanic soil of the Atsuma region (Kameda et al. 2019). Nevertheless, the obtained yield stress is in the same order of magnitude as our results. Further experiments are necessary to test the validity of our results and assess which viscosity value gives the best fit to the flow of the volcanic soil. Although the observed deposit profile is closely reproduced by our numerical model, there are some minor differences. For instance, the observed debris profile shows ~ 1 m of uplift at a distance of 130 m (Figs. 1 and 6). In our numerical model, this is observed as a transient phenomenon that gradually disappears with the progression of the debris flow. Figure 7 shows time snapshots of the debris profile to show temporal changes in the morphology of the debris flow in the high-viscosity case. During the first 2 min (Fig. 7a, b), uplift of the debris occurs at a distance of 130 m, which resembles the uplift feature in the observed debris profile (Fig. 6). 
During the period after 2 min, the uplift at 130 m ceases to occur (Fig. 7c, d). At 130 m from the top of the source area, the slide slope is steeper (14° dip) over a distance of ~ 5 m (Fig. 1c). This area corresponds to the sloping ground between the road and the field, which might explain the transient uplift (i.e., a stack of debris behind the slope). The conservation of this feature in the debris-flow deposit may be a result of the local variation in rheological soil properties and/or water saturation state. Such variation could arise from a vertical layering of different volcaniclastic strata, which is not incorporated in our model. Profiles of the modeled debris flow during the intervals from a 10 to 60 s, b 60 to 120 s, c 120 to 180 s, and d 180 to 252 s. The profiles were calculated using the optimized high-viscosity parameter set (\(\tau_{{\text{y}}}\) = 1554 Pa, \(\mu\) = 2200 Pa s). Grey arrows indicate the movement direction of the debris surface with time. The final deposit geometry is indicated by the red profile in d Another discrepancy between our modeled profile and the observed profile is the V-shaped depression directly behind the uplift at 130–145 m (Fig. 6). One explanation for this depression is a lateral crack in the debris deposit. To discuss such features, it would be necessary to use numerical modeling techniques that are capable of reproducing flow processes in discontinuous media (e.g., the discrete element method). However, another interpretation is a local erosion of a small drainage channel located south of the road. In fact, it seems there is a small impoundment on the right and it is possible to see a little stream on the left (Fig. 1b). If the latter interpretation is correct, then our model seems to reproduce the observed deposit profile quite well, including the microtopographic features. In recent years, 3D simulations of landslides considering various rheological models have been conducted (Kelfoun and Druitt 2005; Pirulli and Mangeney 2008), but in most of these studies, the model parameters are calibrated by trial-and-error. In addition, seismic signals associated with landslides rather than deposit profiles are often used in inversions analysis of landslides (Yamada et al. 2018; Moretti et al. 2020). Although there are still some uncertainties in our analysis, the model successfully fits the observed deposit profile with reasonable yield stress and viscosity values, and therefore, the MCMC-based inversion analysis presented here is useful as a new methodology for reproducing landslide processes and inferring material properties. The results of our analysis suggest that the shallow landslide can be approximated by the flow of a viscoplastic fluid over a period of several minutes. The morphological features, geological background, and rheological properties of the landslide are similar to those of other shallow landslides triggered by the 2018 earthquake (Osanai et al. 2019; Li and Wang 2020). The many slope failures in the region might therefore have collapsed in a similar manner to that outlined in the present study. Data are available on request by contacting JK. Carrière SR, Jongmans D, Chambon G, Bièvre G, Lanson B, Bertello L, Berti M, Jaboyedoff M, Malet JP, Chambers JE (2018) Rheological properties of clayey soils originating from flow-like landslides. Landslides 15:1615–1630. 
https://doi.org/10.1007/s10346-018-0972-6 Chigira M, Yokoyama O (2005) Weathering profile of non-welded ignimbrite and the water infiltration behavior within it in relation to the generation of shallow landslides. Eng Geol 78:187–207 Chigira M, Nakasuji A, Fujiwara S, Sakagami M (2012) Catastrophic landslides of pyroclastics induced by the 2011 off the Pacific Coast of Tohoku Earthquake. In: Earthquake-induced landslides, Proc. Int. Symp, Kiryu, Japan. Springer, Berlin, pp 139–147 Chigira M, Tajika J, Ishimaru S (2019) Landslides of pyroclastic fall deposits induced by the 2018 Eastern Iburi earthquake with special reference to the weathering of pyroclastics. DPRI Annu 62:348–356 Geospatial Information Authority of Japan (2018) https://maps.gsi.go.jp/#12/42.770442/141.985660/&base=std&ls=std%7C20180906hokkaido_atsuma_0906do%7Cexperimental_anno&blend=0&disp=111&lcd=20180906hokkaido_atsuma_0906do&vs=c1j0h0k0l0u0t0z0r0s0m0f1&d=vl Huang X, Garcia MH (1998) A Herschel–Bulkley model for mud flow down a slope. J Fluid Mech 374:305–333 Ito Y, Yamazaki S, Kurahashi T (2020) Geological features of landslides caused by the 2018 Hokkaido Eastern Iburi earthquake in Japan. Geol Soc Lond Special Publ. https://doi.org/10.1144/SP501-2019-122 Imran J, Harff P, Parker G (2001) A numerical model of submarine debris-flow with graphical user interface. Comput Geosci 27:717–729 Jeong SW, Locat J, Leroueil S, Malet JP (2010) Rheological properties of fine-grained sediment: the roles of texture and mineralogy. Can Geotech J 47:1085–1100. https://doi.org/10.1139/T10-012 Jiang L, LeBlond PH (1993) Numerical modeling of an underwater Bingham plastic mudslide and the waves which it generates. J Geophys Res 98:10303–10317 Julien PY (1995) Erosion and sedimentation. Cambridge University Press, Cambridge Kameda J, Kamiya H, Masumoto H, Morisaki T, Hiratsuka T, Inaoi C (2019) Fluidized landslides triggered by the liquefaction of subsurface volcanic deposits during the 2018 Iburi-Tobu earthquake, Hokkaido. Sci Rep 9:13119. https://doi.org/10.1038/s41598-019-48820-y Kawamura S, Kawajiri S, Hirose W, Watanabe T (2019) Slope failures/landslides over a wide area in the 2018 Hokkaido Eastern Iburi earthquake. Soils Found 59:2376–2395. https://doi.org/10.1016/j.sandf.2019.08.009 Kasai M, Yamada T (2019) Topographic effects on frequency-size distribution of landslides triggered by the Hokkaido Eastern Iburi earthquake in 2018. Earth Planets Space 71:89. https://doi.org/10.1186/s40623-019-1069-8 Kelfoun K, Druitt TH (2005) Numerical modeling of the emplacement of Socompa rock avalanche. Chile J Geophys Res 110:B12202. https://doi.org/10.1029/2005JB003758 Li R, Wang F (2020) Zhang S (2020) Controlling role of Ta-d pumice on the coseismic landslides triggered by 2018 Hokkaido Eastern Iburi earthquake. Landslides 17:1233–1250. https://doi.org/10.1007/s10346-020-01349-y Locat J (1997) Normalized rheological behavior of fine muds and their flow properties in a pseudoplastic regime. Debris-flow hazards mitigation: mechanics, prediction, and assessment. Water Resources Engineering Division, American Society of Civil Engineers, New York, pp 260–269 Malet JP, Maquaire O, Remaître A (2004) Assessing debris flow hazards associated with slow moving landslides: methodology and numerical analyses. Landslides 1:83–90 Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E (1953) Equation of state calculations by fast computing machines. 
J Chem Phys 21:1087–1092 Moretti L, Mangeney A, Walter F, Capdeville Y, Bodin T, Stutzmann E, Le Friant A (2020) Constraining landslide characteristics with Bayesian inversion of field and seismic data. Geophys J Int 221:1341–1348 Naruse H (2016) Origins of Lobate Landforms on Mars: preliminary examination from an inverse analysis of debris-flow deposits. J Geogr (chigaku Zasshi) 125:163–170 Osanai N, Yamada T, Hayashi S, Kastura S, Furuichi T, Yanai S, Murakami Y, Miyazaki T, Tanioka Y, Takiguchi S, Miyazaki M (2019) Characteristics of landslides caused by the 2018 Hokkaido Eastern Iburi earthquake. Landslides 16:1517–1528. https://doi.org/10.1007/s10346-019-01206-7 Pirulli M, Mangeney A (2008) Result of back-analysis of the propagation of rock avalanches as a function of the assumed rheology. Rock Mech Rock Eng 41:59–84 Pratson L, Imran J, Hutton E, Parker G, Syvitski JPG (2001) BANG1D: a one-dimensional Lagrangian model of turbidity current mechanics. Comput Geosci 26(7):705–720 Theng BKG, Wells N (1995) The flow characteristics of halloysite suspensions. Clay Miner 30:99–106 Varnes DJ (1978) Slope movement types and processes. In: Schuster, RJ, Krizek RJ (eds) Landslides: analysis and control. Transportation Research Board, Special Report No. 176, pp 11–33 Wada K (1977) Minerals in soil environments. Soil Science Society of America, Madison, p 603 Wan Z, Wang Z (1994) Hyperconcentrated flow. IAHR monograph. Balkema, Rotterdam Wang FR, Fan XM, Yunus AP, Subramanian SS, Alonso-Rodriguez A, Dai LX, Xu Q, Huang RQ (2019) Coseismic landslides triggered by the 2018 Hokkaido, Japan (Mw 6.6), earthquake: spatial distribution, controlling factors, and possible failure mechanism. Landslide 16:1551–1566. https://doi.org/10.1007/s10346-019-01187-7 Yamada M, Mangeney A, Matsushi Y, Matsuzawai T (2018) Estimation of dynamic friction and movement history of large landslides. Landslides 15:1963–1974 Yamagishi H, Yamazaki F (2018) Landslides by the 2018 Hokkaido Iburi-Tobu earthquake on September 6. Landslides 15:2521–2524 Zhang S, Li R, Wang F, Lio A (2019) Characteristics of landslides triggered by the 2018 Hokkaido Eastern Iburi earthquake, northern Japan. Landslides 16:1691–1708. https://doi.org/10.1007/s10346-019-01207-6 We acknowledge Giulia Bossi, two anonymous reviewers, handling editor Tadashi Yamasaki, and editor Takeshi Sagiya for their constructive comments, which greatly improved the manuscript. This work was supported by a JSPS Grant-in-Aid for Scientific Research (18H0129508). Department of Earth and Planetary Sciences, Faculty of Science, Hokkaido University, N10W8, Kita-ku, Sapporo, 060-0810, Japan Jun Kameda Graduate School of Environmental Studies, Tohoku University, Sendai, Japan Atsushi Okamoto JK and AO designed the study, JK carried out the forward modeling, and JK and AO carried out inversion analysis and constructed the manuscript. Both authors read and approved the final manuscript. Correspondence to Jun Kameda. Kameda, J., Okamoto, A. 1-D inversion analysis of a shallow landslide triggered by the 2018 Eastern Iburi earthquake in Hokkaido, Japan. Earth Planets Space 73, 116 (2021). https://doi.org/10.1186/s40623-021-01443-y 2018 Hokkaido Eastern Iburi earthquake Shallow landslide
CommonCrawl
What's the real fundamental definition of energy? Some physical quantities like position, velocity, momentum and force, have precise definition even on basic textbooks, however energy is a little confusing for me. My point here is: using our intuition we know what momentum should be and also we know that defining it as $p = mv$ is a good definition. Also, based on Newton's law we can intuit and define what forces are. However, when it comes to energy many textbooks become a little "circular". They first try to define work, and after some arguments they just give a formula $W = F\cdot r$ without motivating or giving intuition about this definition. Then they say that work is variation of energy and they never give a formal definition of energy. I've heard that "energy is a number that remains unchanged after any process that a system undergoes", however I think that this is not so good for three reasons: first because momentum is also conserved, so it fits this definition and it's not energy, second because recently I've heard that on general relativity there's a loss of some conservation laws and third because conservation of energy can be derived as consequence of other definitions. So, how energy is defined formally in a way that fits both classical and modern physics without falling into circular arguments? classical-mechanics energy $\begingroup$ Possible duplicates: physics.stackexchange.com/q/3014/2451 and links therein. $\endgroup$ – Qmechanic♦ Apr 2 '13 at 5:28 The Lagrangian formalism of physics is the way to start here. In this formulation, we define a function that maps all of the possible paths a particle takes to the reals, and call this the Lagrangian. Then, the [classical] path traveled by a particle is the path for which the Lagrangian has zero derivative with respect to small changes in each of the paths. It turns out, due to a result known as Noether's theorem, that if the Lagrangian remains unchanged due to a symmetry, then the motion of the particles will necessarily have a conserved quantity. Energy is a conserved quantity associated with a time translation symmetry in the Lagrangian of a system. So, if your Lagrangian is unchanged after substituting $t^{\prime} = t + c$ for $t$, then Noether's theorem tells us that the Lagrangian will have a conserved quantity. This quantity is the energy. If you know something about Lagrangians, you can explicitly calculate it. There are numerous googlable resources on all of these words, with links to how these calculations happen. I will answer further questions in edits. Jerry SchirmerJerry Schirmer $\begingroup$ I disagree, for the reasons given in my answer. (I'm not downvoting, because I think the answer still provides valuable insight, but I think the claims being made are too strong.) $\endgroup$ – Ben Crowell Apr 1 '13 at 21:00 $\begingroup$ There is another definition of energy, which is consistent with the above one. It is defined as the negative of the change in the action per unit displacement of the end point of the trajectory. H=-dS/dt(both derivatives are partial). I find this definition particularly comfortable. en.wikipedia.org/wiki/Hamilton%E2%80%93Jacobi_equation $\endgroup$ – Prathyush Apr 2 '13 at 6:33 $\begingroup$ Minor comment: The Lagrangian formalism is a bit of a red herring here. You can simply define energy to be the generator of time translations. This should work in any formalism you choose. 
$\endgroup$ – user1504 Apr 2 '13 at 15:10 $\begingroup$ ...and call this the Lagrangian, for the map $t\mapsto q(t)$ it should read action there, alternatively you can speak about the image of that map. $\endgroup$ – Nikolaj-K Feb 27 '14 at 12:48 $\begingroup$ @Prathyush Your definition seems promising. Can you describe it extensively in an answer to my question(the link to the question is below)? Thank you in advance. physics.stackexchange.com/q/288773/126696 $\endgroup$ – Hamed Begloo Nov 2 '16 at 10:09 The problem here isn't that energy needs to be defined more rigorously like everything else. The problem is that you're making an incorrect assumption that everything else can be rigorously defined for once and for all. For example: "[...]using our intuition we know what momentum should be and also we know that defining it as p=mv is a good definition." Actually this doesn't work. For example, a beam of light has zero mass and nonzero momentum, so p=mv is false for light. If intuition told you p=mv, intuition was wrong. The general way to go about defining a conserved quantity is to pick something that is your standard amount of that quantity, and then use experiments to find out how much of various things can be converted into that standard. For example, if you pick a 1.00 kg mass moving at 1.00 m/s as your definition of the unit of momentum, then you will find through experiments that its momentum can be exchanged for 2.00 m/s worth of motion for a 0.50 kg mass. This naturally leads to the hypothesis that p=mv. Further experiments seem to verify that hypothesis. But then eventually you do experiments with electrons moving at 30% of the speed of light, or with beams of light, and you find out that p=mv was wrong. It was only an approximation valid under some circumstances. You're forced to revise your definition of p. It's a purely empirical process. Same thing for energy. The only approach that fundamentally works is to define something as your standard unit of energy. This could be the energy required to heat 0.24 g of water by 1 degree C. Then experiments would show that you could trade that amount of energy for the kinetic energy of a 2.00 kg object moving at 1.00 m/s. Ultimately, all you can do is proceed empirically. "[...]recently I've heard that on general relativity there's a loss of some conservation laws [...]" Yes, and this is why I don't agree with Jerry Schirmer's answer. He says that energy is the conserved quantity that you get because of time-translation invariance. But this procedure doesn't work in GR. In technical terms, the relevant symmetry becomes diffeomorphism invariance, and that doesn't satisfy the requirements of Noether's theorem. The more fundamental reason it can't work in GR is that in GR, energy-momentum is a vector, not a scalar, and you can't have global conservation of a vector in GR, because parallel transport of vectors in GR is path-dependent and therefore ambiguous. What you can do in GR is define local (not global) conservation of energy-momentum. Even if the technical details are mysterious, I think this counterexample shows although Noether's theorem does provide a deeper insight into where conservation laws come from, the ultimate definition of conserved quantities is still empirical. BTW, there is a good exposition of this philosophical position in the Feynman Lectures. He discusses conservation of energy using the metaphor of a bishop moving on a chess board and always staying on the same color. 
Although that treatment is aimed at people who don't know anything about Noether's theorem or general relativity, I think his philosophical position holds up very well in the full context of what is currently known about all of physics. safesphere Ben CrowellBen Crowell $\begingroup$ This procedure DOES work in GR, when you have a global timelike killing vector in your spacetime, which is the analogue of having a time translation symmetry. Alternately, it will work if you have asymptotic flatness, etc. $\endgroup$ – Jerry Schirmer Apr 1 '13 at 20:45 $\begingroup$ I don't think these issues are obscure at all. For example, undergrads often ask what happens to conservation of energy in cosmology. The answer is that energy simply isn't conserved. "I discussed energy specifically." Sorry to have mischaracterized what you said. However, you made a claim about energy, and I provided a counterexample concerning energy. $\endgroup$ – Ben Crowell Apr 1 '13 at 21:03 $\begingroup$ @BenCrowell: you can still define energy as the Noether charge corresponding to time translations even if it's not conserved $\endgroup$ – Christoph Apr 1 '13 at 21:21 $\begingroup$ But seriously, cosmology? Of course energy conservation fails! There is explicit time dependence in the metric, and in the Ricci tensor, and therefore, also in the action. You don't have conserved energy for a finite square well with a moving wall, either! $\endgroup$ – Jerry Schirmer Apr 1 '13 at 21:35 $\begingroup$ @Christoph: that's exactly my point. If you don't have a special time, there's no hope in getting an energy. Newtonian physics grants this from on high. Special cases of general relativity also have special notions of time. The general case does not. It's meaningless to chug forward, then, since we can always just arbitrarily change coordiantes. $\endgroup$ – Jerry Schirmer Apr 1 '13 at 21:58 Definitions of physical quantities in physics are dependent on context. For example the definition of energy in classical general relativity is different from the definition used in the quantum physics of the standard model. We don't yet have the most "fundamental" theory of physics so we don't know what the fundamental definition of energy will be like, or even if there will be one. Perhaps energy is emergent at the level of quantum gravity so it does not have a fundamental definition. We wont know until we understand quantum gravity better than we do now. However, there is a general theory about energy and its relation to time translation invariance that is embodied in Noether's theorem. The theorem says that there is a conserved quantity associated with any symmetry of nature. Energy is related to time symmetry while momentum is linked to space translations, angular momentum is linked to rotations, charge is linked to electromagnetic gauge invariance etc. Noether's theorem was originally stated and proved for classical systems but there is also a version that works for quantum physics, so it could be said that energy is defined as the quantity that comes out of Noether's theorem that is linked to time invariance. This may be the most fundamental definition we can give now, but it depends on the context of currently known physics and we have no idea if it will survive in some form at more fundamental levels of theory than those currently known. When we speak of time invariance in Noether's theorem we are talking about the fact that the complete laws of physics do not change with time. 
The early universe may have been very different from the one we live in now, but the laws of physics were the same. This means that Noether's theorem works perfectly well in general relativity for example. The universe may expand and cosmology may evolve but Einstein's field equation for gravity is always the same, so energy is conserved. Many people, especially those in this forum dispute this, but they are wrong. The arguments to this effect given in other answers here are flawed. Energy is conserved in GR without caveats about special cases or global meaning. For detailed refutations of individual claims see http://vixra.org/abs/1305.0034 In case anyone thinks I sound like a lone voice contradicting the mainstream view, that is not the case. When I wrote in a recent FQXi essay about how energy is conserved in GR despite claims to the contrary, Carlo Rovelli responded by writing "I do not see anything in what you say that goes beyond what is written in all GR books about energy conservation in GR. There is a vast literature on this." He is mainly right. You will find conservation of energy explained in books on gravitation by Weinberg, Dirac, Landau and Lifschitz, etc. It is covered well in Wikipedia and there was even a Nobel prize awarded for the application of energy conservation in GR to binary pulsars. The idea that energy is not conserved in GR is a meme perpetuated in some blogs and forums like this one. It stems from an article written on the subject in the physics FAQ some years ago which unfortunately I have been unable to get changed. Do not be fooled. Philip Gibbs Precise definitions only ever apply to specific models. One of the more instructive ones for energy comes from special relativity, where space and time are not independent, but rather part of a single entity, space-time. Directional quantities in SR not only have components in the spatial x-, y- and z-directions, but also a fourth (or by convention zeroth) component in time direction. For momentum, that component is (up to a constant factor) the energy. It turns out that $\vec p=m\vec v$ is less fundamental than $\vec p=\gamma m\vec v$ which is the spatial part of the 4-vector $$p^\mu=\gamma m\begin{pmatrix} c \\ v_x \\ v_y \\ v_z \end{pmatrix}$$ where $\gamma = 1/\sqrt{1-(v/c)^2}$. As $$\gamma = 1 + \frac 12 (\frac vc)^2 + \mathcal O((\frac vc)^4)$$ the relativistic definition of $\vec p$ agrees with the classical one for $v\ll c$. Inserting our approximation up to second order into the definition of 4-momentum yields $$ p^0\approx mc + \frac{mv^2}{2c} = \frac 1c \left( mc^2 + \frac 12 mv^2 \right) = \frac 1c \left( E_\mathrm{rest}^\mathrm{Einstein} + E_\mathrm{kin}^\mathrm{Newton} \right) $$ Setting $\vec v = 0$, this also shows that rest energy and mass are essentially the same thing (they only differ by a constant factor). It's important to note that rest energy includes internal binding energy, which leads to the mass defect in nuclear reactions. Christoph
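To complement the answers above with a small worked example (added here for illustration; it is not part of any of the original posts): for a single particle with Lagrangian $L = \frac 12 m\dot q^2 - V(q)$, which has no explicit time dependence, the Noether charge associated with time translations is $$E = \dot q\,\frac{\partial L}{\partial \dot q} - L = \frac 12 m\dot q^2 + V(q),$$ i.e. the familiar kinetic-plus-potential energy, and it is conserved on solutions since $\frac{\mathrm dE}{\mathrm dt} = \dot q\,(m\ddot q + V'(q)) = 0$ whenever the equation of motion $m\ddot q = -V'(q)$ holds.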
CommonCrawl
Which computational model is used to analyse the runtime of matrix multiplication algorithms? Although I have already learned something about the asymptotic runtimes of matrix multiplication algorithms (Strassen's algorithm and similar things), I have never found any explicit and satisfactory reference to a model of computation, which is used to measure this complexity. In fact, I have found three possible answers, neither of which seems to me as absolutely satisfactory: Wikipedia says that the model used here is the Multitape Turing Machine. This does not seem to make much sense to me, since in the analysis of matrix multiplication, scalar multiplication is supposed to have a constant time complexity. This is not the case on Turing Machines. Some texts describe the complexity only vaguely as the number of arithmetic operations used. However, what exactly are arithmetic operations in this context? I suppose that addition, multiplication, and probably subtraction. But what about division, integer division, remainder, etc.? And what about bitwise operations - how do they fit into this setting? Finally, I have recently discovered an article, which uses the BSS machine as the model of computation. However, this also seems little bit strange to me, since for, e.g., integer matrices, it does not make much sense to me to disallow operations such as, e.g., integer division. I would be grateful to anyone, who could help me to sort these things out. algorithm-analysis runtime-analysis matrices machine-models Raphael♦ $\begingroup$ For complexity, we care only about one measure: steps of a TM. In algorithm analysis, you are unlikely to get something more precise than "number of basic operations", which roughly correspond to elementary ALU/memory access operations in processors. I think you are asking for algorithm analysis, not problem complexity? $\endgroup$ – Raphael♦ Feb 3 '14 at 9:51 $\begingroup$ @Raphael "For complexity, we care only about one measure: steps of a TM." Sorry but that's completely false. First off, there are plenty of models of computation that are not Turing machines: circuits, for example. Then you get things like geometric and descriptive complexity. Even within the realms of Turing machines, space is as important a measure as time. And what kind of Turing machine? Deterministic, nondeterministic, alternating and probabilistic machines all have different resource requirements. Random access is significant if you want finer classifications than "polynomial time". $\endgroup$ – David Richerby Feb 3 '14 at 9:59 $\begingroup$ @DavidRicherby: All true. Our statements are compatible; I should have made my scope clearer. "For time complexity as considered in classic classes like P, NP etc, we care...". $\endgroup$ – Raphael♦ Feb 3 '14 at 10:02 $\begingroup$ @Raphael But this isn't a question about P, NP, etc. It's a question about a specific problem. Upper bounds for any problem are going to involve algorithm analysis so I don't think it's really possible to split the two. Having said that, yes, it does seem that the complexity of Strassen and so on is expressed in terms of "arithmetic operations", rather than on any standard model of computation. $\endgroup$ – David Richerby Feb 3 '14 at 10:34 $\begingroup$ Regarding your second approach (counting arithmetic operations): You could simply count the number of each operation (multiplication, addition, bitwise operations, etc) separately. You can find an example where that is done e.g. 
in Sedgewick & Flajolet: Introduction to the Analysis of Algorithms (there they analyze Quicksort quite precisely). With matrix multiplication I believe that the number of multiplications involved dominates the rest, so essentially you're counting that. $\endgroup$ – john_leo Feb 3 '14 at 10:45 Matrix multiplication algorithms are analyzed in terms of arithmetic complexity. The computation model is straight-line programs with instructions of the form $a \gets b \circ c$, where $\circ \in \{ +,-,\times,\div \}$, $a$ is a variable, and $b,c$ could be either variables, inputs or constants. Additionally, certain variables are distinguished as outputs. For example, here is how to multiply two $2\times 2$ matrices using the usual algorithm, with input matrices $a_{ij},b_{ij}$ and output matrix $c_{ij}$: $$ \begin{align*} x_{11} &\gets a_{11} \times b_{11} & y_{11} &\gets a_{12} \times b_{21} & c_{11} &\gets x_{11} + y_{11} \\ x_{12} &\gets a_{11} \times b_{12} & y_{12} &\gets a_{12} \times b_{22} & c_{12} &\gets x_{12} + y_{12} \\ x_{21} &\gets a_{21} \times b_{11} & y_{21} &\gets a_{22} \times b_{21} & c_{21} &\gets x_{21} + y_{21} \\ x_{22} &\gets a_{21} \times b_{12} & y_{22} &\gets a_{22} \times b_{22} & c_{22} &\gets x_{22} + y_{22} \end{align*} $$ The complexity measure is the number of lines in the program. For matrix multiplication, one can prove a normal form for all algorithms. Every algorithm can be converted into an algorithm of the following form, at the cost of only a constant multiplicative increase in complexity: Certain linear combinations $\alpha_i$ of the input matrix $a_{jk}$ are calculated. Certain linear combinations $\beta_i$ of the input matrix $b_{jk}$ are calculated. $\gamma_i \gets \alpha_i \times \beta_i$. Each entry in the output matrix is a linear combination of $\gamma_i$s. This is known as bilinear normal form. In the matrix multiplication algorithm shown above, $x_{jk},y_{jk}$ function as the $\gamma_i$, but in Strassen's algorithm the linear combinations are more interesting; they are the $M_i$'s in Wikipedia's description. Using a tensoring approach (i.e. recursively applying the same algorithm), similar to the asymptotic analysis of Strassen's algorithm, one can show that, given such an algorithm for multiplying $n\times n$ matrices with $r$ products (i.e. $r$ variables $\gamma_i$), arbitrary $N\times N$ matrices can be multiplied in complexity $O(N^{\log_n r})$; thus only the number of products matters asymptotically. In Strassen's algorithm, $n = 2$ and $r = 7$, and so the bound is $O(N^{\log_2 7})$. The problem of finding the minimal number of products needed to compute matrix multiplication can be phrased as finding the rank of a third-order tensor (a "matrix" with three indices rather than two), and this forms the connection to algebraic complexity theory. You can find more information in this book or these lecture notes (continued here). The reason this model is used is twofold: first, it is very simple and amenable to analysis; second, it is closely related to the more common RAM model. Straight-line programs can be implemented in the RAM model, and the complexity in both models is strongly related: arithmetic operations have unit cost in several variants of the model (for example, the RAM model with real numbers), and are otherwise related polynomially to the size of the numbers. In the case of modular matrix multiplication, therefore, arithmetic complexity provides an upper bound on complexity in the RAM model.
In the case of integer or rational matrix multiplication, one can show that for bilinear algorithms resulting from tensorization, the size of the numbers doesn't grow too much, and so arithmetic complexity provides an upper bound for the RAM model, up to logarithmic factors. It could a priori be the case that a RAM machine can pull some tricks that the arithmetic model is oblivious to. But often we want matrix multiplication algorithms to work for matrices over arbitrary fields (or even rings), and in that case a uniform algorithm should use only the arithmetic operations specified by the model. So this model is a formalization of "field-independent" algorithms. Yuval Filmus $\begingroup$ Is there a Turing machine model? $\endgroup$ – 1.. Oct 2 '16 at 8:43 $\begingroup$ It turns out you don't need one. Usually one introduces the Turing machine to introduce uniformity – one piece of code that works for all $n$. While the model I have described is non-uniform, it turns out that you can approach the optimal exponent $\omega$ even uniformly, by brute-forcing a good algorithm on some matrix size much smaller than $n$. $\endgroup$ – Yuval Filmus Oct 2 '16 at 13:02 $\begingroup$ But one may still want to double-check that one doesn't rely heavily on random-access memory. Undoubtedly the best MM algorithms are fine in the Word RAM model, but are they definitely fine in the (say) multitape TM model? $\endgroup$ – Ryan O'Donnell Aug 26 '20 at 15:17
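To make the arithmetic-operation counting concrete, here is a small Python sketch (not part of the original question or answer) that multiplies 2×2 matrices with Strassen's seven products and checks the result against the usual algorithm; the scalar multiplications M1..M7 play the role of the $\gamma_i$ charged by the bilinear model.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 scalar multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # the seven products (the gamma_i of the bilinear normal form)
    M1 = (a11 + a22) * (b11 + b22)
    M2 = (a21 + a22) * b11
    M3 = a11 * (b12 - b22)
    M4 = a22 * (b21 - b11)
    M5 = (a11 + a12) * b22
    M6 = (a21 - a11) * (b11 + b12)
    M7 = (a12 - a22) * (b21 + b22)
    # each output entry is a linear combination of M1..M7
    return np.array([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

A = np.random.randint(-5, 5, (2, 2))
B = np.random.randint(-5, 5, (2, 2))
assert np.array_equal(strassen_2x2(A, B), A @ B)  # 7 multiplications instead of 8
```

Applying the same scheme recursively to n×n blocks instead of scalars is exactly the tensoring argument that yields the $O(N^{\log_2 7})$ bound mentioned above.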
CommonCrawl
Joint distribution of k-tuple statistics in zero-one sequences of Markov-dependent trials Anastasios N. Arapis1, Frosso S. Makri1 & Zaharias M. Psillakis2 Journal of Statistical Distributions and Applications volume 4, Article number: 26 (2017) Cite this article We consider a sequence of n, n≥3, zero (0) - one (1) Markov-dependent trials. We focus on k-tuples of 1s; i.e. runs of 1s of length at least equal to a fixed integer number k, 1≤k≤n. The statistics denoting the number of k-tuples of 1s, the number of 1s in them and the distance between the first and the last k-tuple of 1s in the sequence, are defined. The work provides, in a closed form, the exact conditional joint distribution of these statistics given that the number of k-tuples of 1s in the sequence is at least two. The case of independent and identical 0−1 trials is also covered in the study. A numerical example illustrates further the theoretical results. Run counting statistics defined on a sequence of binary (zero (0) - one (1)) random variables (RVs), along with their exact and approximate distributions, have been extensively studied in the literature. Their popularity is due to the fact that such statistics appear as useful theoretical models in many research areas including statistics (e.g. hypothesis testing), engineering (e.g. system reliability and quality control), biology (e.g. population genetics and DNA sequence analysis), computer science (e.g. encoding/decoding/transmission of digital information) and financial engineering (e.g. insurance and risk analysis). In such applications, a key point is the understanding how 1s and 0s are distributed and combined as elements of a 0−1 sequence (finite or infinite, memoryless or not) and eventually forming runs of 1s or 0s which are enumerated according to certain counting schemes. Each scheme defines how runs of same symbols or strings (patterns) of both symbols are formed and consequently are enumerated. A counting scheme may depend on, among other considerations, whether overlapping counting is allowed or not as well as if the counting starts or not from scratch when a run/string of a certain size has been so far enumerated. The counting scheme as well as the intrinsic uncertainty of a 0−1 sequence are often suggested by the applications. Probabilistic models, in common use, for the internal structure of a 0−1 sequence include the model of a sequence with elements independent of each other or a model for which it is assumed some kind of dependence among the elements of it. The methods used to derive exact/approximating, marginal/joint probability distributions include combinatorial analysis, generating functions, finite Markov chain imbedding technique, recursive schemes as well as normal, Poisson and large deviation approximations. For extensive reviews of the recent literature on the distribution theory of runs and patterns we refer to Balakrishnan and Koutras (2002) and Fu and Lou (2003). Current works on the subject include, among others, those of Antzoulakos and Chadjiconstantinidis (2001); Eryilmaz (2006, 2015, 2016, 2017); Eryilmaz and Yalcin (2011); Johnson and Fu (2014); Koutras (2003); Koutras et al. (2016); Makri and Psillakis (2015); Makri et al. (2013) and Mytalas and Zazanis (2013, 2014). In this article we derive expressions for a conditional distribution of a trivariate statistic. 
Its components denote the number of runs of 1s of length exceeding a fixed threshold number, the number of 1s in such runs of 1s and the length of the minimum sequence's segment in which these runs are concentrated. The study is developed on a sequence of two-state (0−1) Markov-dependent trials. The runs are enumerated according to Mood's (1940) counting scheme. More specifically, the manuscript is organized as follows. In Section 2 we present some preliminary material, including notation and definitions, necessary to develop our results which are obtained in Section 4. In Section 3 we give a motivation along with a statement of the aim of the work. A numerical example, showed in Section 5, clarifies the theoretical results of Section 4. A discussion on the results as well as a note on a future work are given in Section 6. Throughout the article, for integers, n, m, \({n\choose m}\) denotes the extended binomial coefficient (see, Feller (1968), pp. 50, 63), ⌊x⌋ stands for the greatest integer less than or equal to x and δ ij denotes the Kronecker delta fuction of the integer arguments i and j. Further, for α>β, we apply the conventions \(\sum _{i=\alpha }^{\beta }y_{i}=0\), \(\prod _{i=\alpha }^{\beta }y_{i}=1\), \(\sum _{i=\alpha }^{\beta }\mathbf {Y}^{(i)}=\mathbf {O}\equiv {\scriptsize \left (\begin {array}{cc} 0 &0\\ 0 & 0 \end {array}\right)}\), \(\prod _{i=\alpha }^{\beta }\mathbf {Y}^{(i)}=\mathbf {I}\equiv {\scriptsize \left (\begin {array}{cc} 1 &0\\ 0 & 1 \end {array}\right)}\), where y i and Y (i) are scalars and 2×2 matrices, respectively. 2.1 Run counting statistics Let \(\{X_{t}\}_{t=1}^{n}\), n≥1, be the first n trials of a binary (0−1) sequence of RVs, X t =x t ∈{0,1}. A run of 1s, is a (sub)sequence of \(\{X_{t}\}_{t=1}^{n}\) consisting of consecutive 1s, the number of which is referred to as its length, preceded and succeeded by 0s or by nothing. Given a fixed integer k, 1≤k≤n, a k-tuple of 1s is a run of 1s of length k or more. In the paper we will deal with the following statistics defined on a \(0-1 \{X_{t}\}_{t=1}^{n}\). For details see, e.g. Makri et al. (2015) and the references therein. (I) G n,k denoting the number of k-tuples of 1s, 1≤k≤n. In particular, G n,1 denotes the number of 1-tuples of 1s, i.e. it represents the number R n ≡G n,1 of all runs of 1s in the sequence. Using the convention X 0=X n+1≡0, we can define G n,k as $$ G_{n,k}=\sum_{i=k}^{n}E_{n,i},\, 1\leq k\leq n, $$ $$E_{n,i}=\sum_{j=i}^{n}J_{j},\, J_{j}=\left(1-X_{j-i}\right)\left(1-X_{j+1}\right)\prod_{r=j-i+1}^{j}X_{r}. $$ (II) S n,k denoting the number of 1s in the G n,k k-tuples of 1s; i.e. S n,k represents the sum of lengths of the G n,k k-tuples of 1s, 1≤k≤n. In particular S n,1 represents the number of all 1s in the sequence; hence, the number of 0s, Y n , in the sequence is Y n =n−S n,1. S n,k is formally defined as $$ S_{n,k}=\sum_{i=k}^{n}{iE}_{n,i}, \,1\leq k\leq n. $$ Readily, k G n,k ≤S n,k . (III) L n , n≥1, denoting the length of the longest run of 1s in the sequence. By setting $$\Lambda_{n}=\{i:G_{n,i}>0, 1\leq i\leq n\}, $$ we have that $$ L_{n}=\max \{k: k\in \Lambda_{n}\},\, \text{if}\, \Lambda_{n}\neq \emptyset;\,0,\,\text{otherwise}. $$ Readily L n <k iff G n,k <1. (IV) For G n,k ≥1, 1≤k≤n, D n,k denotes the distance (number of trials) between and including the first 1 of the first k-tuple of 1s and the last 1 of the last k-tuple of 1s in the sequence. If there is only one k-tuple of 1s in the sequence then D n,k denotes its length. 
That is, D n,k represents the size (length) of the minimum (sub)sequence of \(\{X_{t}\}_{t=1}^{n}\) in which all G n,k k-tuple of 1s are concentrated. In particular, D n,1 represents the length of the minimum segment of the sequence containing all R n runs of 1s or in other words all S n,1 1s appearing in the sequence. For G n,k ≥1, 1≤k≤n, D n,k can be formally defined as $$ D_{n,k}=U_{n,k}^{(2)}-U_{n,k}^{(1)}+1, $$ $$U_{n,k}^{(1)}=\min\{j:I_{j}=1, 1\leq j\leq n-k+1\}, $$ $$U_{n,k}^{(2)}=\max\{j:I_{j-k+1}=1, k\leq j\leq n\}, $$ $$I_{j}=\prod_{r=j}^{j+k-1}X_{r},\, 1\leq j\leq n-k+1. $$ Readily, D n,k =S n,k =L n , if G n,k =1 and D n,k >S n,k >L n , if G n,k >1. (V) For G n,k ≥1, 1≤k≤n, set V n,k =(D n,k ,G n,k ,S n,k ). This is the RV we focus on in the article. Example: By way of illustration consider the trials 1110001100010001010011101111001001001001 numbered from 1 to 40. Then, L 40=4 and V 40,1=(40,11,19), V 40,2=(28,4,12), V 40,3=(28,3,10), V 40,4=(4,1,4). 2.2 Internal structure's models A general enough model for the internal structure of a \(0-1 \{X_{t}\}_{t=1}^{n}\), n≥2, is that of the first n trials of a homogeneous 0−1 Markov chain of first order (HMC1). On such a model we will develop our results. Accordingly, we next state the necessary notation/definitions. Let {X t } t≥1 be a HMC1 with state space ={0,1}, one step transition probability matrix $$\mathbf{P}=(p_{ij})=\left(\begin{array}{cc} p_{00} & p_{01} \\ p_{10} & p_{11} \\ \end{array} \right), $$ $$ p_{ij}=P\left(X_{t}=j\mid X_{t-1}=i\right),\,i,j\in {\cal{A}},\,\sum_{j\in \cal{A}}p_{ij}=1,\,i\in {\cal{A}},\,t\geq 2 $$ and probability distribution vector at time t $$\mathbf{p}^{(t)}=\left(p_{0}^{(t)}, p_{1}^{(t)}\right), $$ $$ p_{i}^{(t)}=P(X_{t}=i),\, i\in {\cal{A}},\, \sum_{i\in \cal{A}}p_{i}^{(t)}=1,\, t\geq 1. $$ Readily, because of the homogeneity of {X t } t≥1, it holds $$\mathbf{p}^{(t)}=\mathbf{p}^{(t-1)}\mathbf{P}=\mathbf{p}^{(1)}\mathbf{P}^{t-1 },\,t\geq 2;\,\mathbf{p}^{(1)},\,t=1\,\,\text{and}\,\,\mathbf{P}^{t-1}=\left(p_{ij}^{(t-1)}\right),\, t\geq 2, $$ $$p_{i}^{(t)}=\mathbf{p}^{(t)}\mathbf{e}_{i}^{'},\, i\in {\mathcal{A}},\,t\geq 1, $$ $$ p_{ij}^{(t-1)}=P(X_{t-1+m}=j\mid X_{m}=i)=\mathbf{e}_{i}\mathbf{P}^{t-1}\mathbf{e}_{j}^{'},\,i,j\in {\mathcal{A}},\, t\geq 2,\, m\geq 1, $$ where \(\mathbf {e}_{i}^{'}\) is the transpose (i.e. the column vector) of the row vector e i , \(i\in {\mathcal {A}}\), with e 0=(1,0) and e 1=(0,1). In particular, for p 01+p 10≠0, i.e. P≠I, it holds $$ \mathbf{P}^{t-1}=\left(p_{01}+p_{10}\right)^{-1}\left\{\left(\begin{array}{cc} p_{10} & p_{01} \\ p_{10} & p_{01} \\ \end{array} \right)+(1-p_{01}-p_{10})^{t-1}\left(\begin{array}{cc} p_{01} & -p_{01} \\ -p_{10} & p_{10} \\ \end{array} \right)\right\},\, t\geq 2, $$ $$ p_{0}^{(t)}=p_{0}^{(1)}\left(1-p_{01}-p_{10}\right)^{t-1}+p_{10}\left(p_{01}+p_{10}\right)^{-1}\left[1-\left(1-p_{01}-p_{10}\right)^{t-1}\right],\, t\geq 1. $$ The setup of a 0−1 HMC1 \(\{X_{t}\}_{t=1}^{n}\), n≥2, covers the case of a 0−1 sequence of independent and identically distributed (IID) RVs, too. This is so, because a \(0-1 \{X_{t}\}_{t=1}^{n}\), n≥2, IID sequence with $$ P(X_{t}=1)=1-P(X_{t}=0)=p_{1},\, 1\leq t\leq n, $$ is a particular HMC1 with $$p_{ij}=1-p_{1},\, j=0;\, p_{1}, j=1,\, i\in {\mathcal{A}},\,p_{ij}^{(t-1)}=p_{ij},\,i,j\in {\mathcal{A}},\,t\geq 2, $$ $$ p_{1}^{(t)}=p_{1}=1-p_{0}^{(t)},\, 1\leq t\leq n. $$ 2.3 A combinatorial result In combinatorial analysis which will be used in Section 4, the following result, recalled from Makri et al. 
(2007), is useful. The coefficient $$ H_{m}(\alpha,r,k)=\sum_{j=0}^{\left\lfloor\frac{\alpha}{k+1}\right\rfloor}(-1)^{j}{m\choose j}{\alpha-(k+1)j+r-1\choose \alpha-(k+1)j}, $$ represents the number of allocations of α indistinguishable balls into r distinguishable cells where each of the m, 0≤m≤r, specified cells is occupied by at most k balls. Equivalently, it gives the number of nonnegative integer solutions of the linear equation x 1+x 2+…+x r =α with the restrictions, for m≥1, \(0\leq x_{i_{j}}\leq k\), 1≤j≤m, for some specific m-combination {i 1,i 2,…,i m } of {1,2,…,r}, and no restrictions on x j s, 1≤j≤r, for m=0. Moreover, H r (α,r,k) is Riordan's (1964, p. 104) coefficient $$ C(\alpha,r,k)=\sum_{j=0}^{\left\lfloor\frac{\alpha}{k+1}\right\rfloor}(-1)^{j}{r\choose j}{\alpha-(k+1)j+r-1\choose \alpha-(k+1)j}. $$ Motivation and aim of the work In a study of a 0−1 sequence \(\{X_{t}\}_{t=1}^{n}\), n≥3, it is reasonable for one to be interested in the probabilistic behavior of RV V n,k =(D n,k ,G n,k ,S n,k ). This happens because jointly its components provide a more refined view of the internal clustering structure of the sequence than the information extracted by each one alone. Interpreting a k-tuple of 1s as a cluster of consecutive 1s of size at least k, D n,k represents the size of the minimum segment of \(\{X_{t}\}_{t=1}^{n}\) in which G n,k clusters of size at least k and at most L n are concentrated. The overall density of G n,k clusters, with respect to the number of 1s in them, as well as of the minimum concentration segment is evaluated by S n,k . Large values of D n,k suggest that these G n,k clusters spread over the interval between the left and the right side of the sequence whereas small values of D n,k indicate rather that the clusters are concentrated in a segment of the sequence of small size leaving the rest part(s) of the sequence empty of such clusters. In addition to this information, a large value of S n,k paired with a small value of G n,k indicates the existence of clusters of 1s of a large size and therefore a trend whereas the same value of S n,k paired with a large value of G n,k indicates rather a distribution of clusters of small size in the (sub)sequence in which they are concentrated. Therefore, based on the former interpretation, the motivation for the study as well as the usefulness of the statistic V n,k =(D n,k ,G n,k ,S n,k ) is apparent. In the sequel, we assume that G n,k ≥2 in order to have at least two k-tuples of 1s in the sequence and accordingly the distance D n,k is not a degenerate one. Moreover, this assumption is a common one in an application area of D n,k ; e.g., in detecting pattern (tandem or non-tandem direct) repeats in DNA sequences (Benson 1999). For 1≤k≤n, set $$ {\mathcal{M}}_{n,k}=\{G_{n,k}\geq 2\},\,\alpha_{n,k}=P\left({\mathcal{M}}_{n,k}\right) $$ and for n≥3, 1≤k≤⌊(n−1)/2⌋, define $$ \Omega_{n,k}=\left\{(d,m,s): 2k+1\leq d\leq n, 2k\leq s\leq d-1, 2\leq m\leq \min\left(\lfloor s/k\rfloor, d-s+1\right)\right\} $$ and for (d,m,s)∈Ω n,k , $$h_{n,k}(d,m,s)=P\left(\mathbf{V}_{n,k}=(d,m,s), {\cal {M}}_{n,k}\right), $$ $$ v_{n,k}(d,m,s)=P\left(\mathbf{V}_{n,k}=(d,m,s)\mid {\cal {M}}_{n,k}\right)=h_{n,k}(d,m,s)/\alpha_{n,k}. $$ The paper provides exact closed form expressions for α n,k , h n,k (d,m,s) and eventually for v n,k (d,m,s) when V n,k is defined on a 0−1 HMC1/IID. The expressions are obtained via combinatorial analysis. 
More specifically, closed formulae are established for the first time for h n,k (d,m,s), 1≤k≤⌊(n−1)/2⌋, when V n,k is defined on a 0−1 HMC1 with given P and p (1). Since, the general frame of HMC1 covers as a particular case IID sequences, the so implied expressions for v n,k (d,m,s) are alternative to those obtained for v n,k (d,m,s), 1≤k≤⌊(n−1)/2⌋, by Makri et al. (2015) for IID sequences. Moreover, for n≥3, 1≤k≤⌊(n−1)/2⌋, 2k+1≤d≤n, let $$f_{n,k}(d)=P\left(D_{n,k}=d\mid {\cal {M}}_{n,k}\right). $$ Therefore, since $$ f_{n,k}(d)=\sum_{s=2k}^{d-1}\sum_{m=2}^{\min\left(\lfloor s/k\rfloor,d-s+1\right)}v_{n,k}(d,m,s)=\alpha_{n,k}^{-1}\sum_{s=2k}^{d-1}\sum_{m=2}^{\min\left(\lfloor s/k\rfloor,d-s+1\right)}h_{n,k}(d,m,s), $$ hence, the work provides closed form expressions for determining f n,k (d) for HMC1 and IID \(0-1 \{X_{t}\}_{t=1}^{n}\). These expressions are alternative to those derived, for IID sequences, by Makri et al. (2015) for 1≤k≤⌊(n−1)/2⌋ as well as to those obtained, for HMC1, by Arapis et al. (2016) for k=1 and by Arapis et al. (2017) for 1≤k≤⌊(n−1)/2⌋. In a 0−1 sequence \(\{X_{t}\}_{t=1}^{n}\), n≥2, for 0≤y≤n, 0≤r≤⌊(n+1)/2⌋ and (i,j)∈{0,1}2, define $$B_{n}^{(i,j)}(y,r)=\{X_{1}=i,X_{n}=j,Y_{n}=y,G_{n,1}=r\}, $$ $$\pi_{n}^{(i,j)}(y,r)=P(B_{n}^{(i,j)}(y,r)). $$ Accordingly, for a HMC1 \(\{X_{t}\}_{t=1}^{n}\), n≥2, with given P and p (1), it holds $$ \pi_{n}^{(i,j)}(y,r)=\left(p_{1}^{(1)}\right)^{i}\left(1-p_{1}^{(1)}\right)^{1-i}p_{00}^{y-r-1+i+j}\left(1-p_{00}\right)^{r-i} \left(1-p_{11}\right)^{r-j}p_{11}^{n-y-r}, $$ for 2−(i+j)≤y≤n−(i+j), 1−δ y,0−δ y,n +δ i+j,2≤r≤ min{n−y,y−1+i+j} and \(\pi _{n}^{(i,j)}(y,r)=0\), otherwise. Consequently, \(\pi _{n}^{(i,j)}(y,r)\), for a 0−1 IID sequence, reduces to $$ \pi_{n}^{(i,j)}(y,r)=\pi_{n}(y)=p_{1}^{n-y}(1-p_{1})^{y},\, 0\leq y\leq n. $$ For n≥3, (d,m,s)∈Ω n,1, \(0<p_{1}^{(1)}<1\), it holds $$\begin{array}{@{}rcl@{}} h_{n,1}(d,m,s)&=& {s-1\choose m-1}{d-s-1\choose m-2}\pi_{d}^{(1,1)}(d-s,m)\varepsilon_{n}(d) \end{array} $$ where ε n (d)=1, if n=d; \(p_{00}^{n-d-2}\left \{p_{10}p_{00}+p_{0}^{(1)}(p_{1}^{(1)})^{-1}p_{01}\left [(n-d-1)p_{10}+p_{00}\right ]\right \}\), if n≥d+1. For d=3,…,n−2, i=2,3,…,n−d, s=2,3,…,d−1, m=2,3,…, min{s,d−s+1} an element of the event \(\Gamma _{i,d,m,s}=\{U_{n,1}^{(1)}=i, D_{n,1}=d, R_{n}=m, S_{n,1}=s\}\) is a 0−1 sequence of length n with probability $$p_{0}^{(1)}p_{00}^{i-2}p_{01}\left[\pi_{d}^{(1,1)}(d-s,m)\left(p_{1}^{(1)}\right)^{-1}\right]p_{10}p_{00}^{n-i-d}. $$ Fix i. Then the number of elements of the event Γ i,d,m,s is \({s-1\choose m-1}{d-s-1\choose m-2}\), since the number of allocations of s 1s in m runs of 1s is \({s-1\choose m-1}\) and the number of allocations of d−s 0s in m−1 runs of 0s is \({d-s-1\choose m-2}\), so that $$P\left(\Gamma_{i,d,m,s}\right)={s-1\choose m-1}{d-s-1\choose m-2}p_{0}^{(1)}p_{01}\left[\pi_{d}^{(1,1)}(d-s,m)\left(p_{1}^{(1)}\right)^{-1}\right]p_{10}p_{00}^{n-d-2}. $$ We use similar reasoning for the rest cases. Then summing with respect to i we get the result. □ For a sequence \(\{X_{t}\}_{t=1}^{n}\) of 0−1 IID RVs, h n,1(d,m,s) reduces to the explicit formula given in the next Corollary. 
Corollary 1 For n≥3, (d,m,s)∈Ω n,1, 0<p 1<1, it is true that $$ h_{n,1}(d,m,s)=(n-d+1){s-1\choose m-1}{d-s-1\choose m-2}p_{1}^{s}(1-p_{1})^{n-s}.\quad \diamondsuit $$ In order to derive for HMC1, in the forthcoming Theorem 2, h n,k (d,m,s), 5≤2k+1≤n, we next recall, in Lemma 1, a result from (Makri et al.: On the concentration of runs of ones of length exceeding a threshold in a Markov chain, submitted). Lemma 1 For (i,j)∈{0,1}2, n≥2, set \(\lambda _{n,k}^{(i,j)}(x)=P(G_{n,k}=x,X_{1}=i,X_{n}=j)\), x=0,1. Then, it holds that: (I) For 2≤k≤n−2+i+j, $$\lambda_{n,k}^{(i,j)}(0)=\sum_{y=1}^{n-(i+j)}\sum_{r=i+j}^{y-1+i+j} {y-1\choose r-i-j}C(n-y-r,r,k-2)\pi_{n}^{(i,j)}(y,r), $$ $$ {}\lambda_{n,k}^{(i,j)}(1)=\pi_{n}^{(i,j)}(0,1)\delta_{2,i+j}+\sum_{y=1}^{n-k}\sum_{r=1}^{y-1+i+j} r{y-1\choose r-i-j}H_{r-1}(n-y-r-k+1,r,k-2)\pi_{n}^{(i,j)}(y,r). $$ (II) For k>n−2+i+j, $$\lambda_{n,k}^{(i,j)}(0)=\left(p_{1}^{(1)}\right)^{i}\left(1-p_{1}^{(1)}\right)^{1-i}p_{ij}^{(n-1)}, $$ $$ \lambda_{n,k}^{(i,j)}(1)=0. $$ For n≥5, 2≤k≤⌊(n−1)/2⌋, (d,m,s)∈Ω n,k , \(0<p_{1}^{(1)}<1\), it holds $$\begin{array}{*{20}l} {}h_{n,k}(d,m,s)\,=\,p_{11}^{2k-2}\left(p_{1}^{(1)}\right)^{-1}\!\! \sum_{i=1}^{n-d+1}\!\!\!\ell_{i-1,k}^{(\alpha)}\ell_{n-d-i+1,k}^{(\beta)}\!\!\!\!\!\!\!\!\!\sum_{r=m}^{m+\left\lfloor\frac{d-s-m+1}{2}\right\rfloor}\! \sum_{y=r-1}^{d-s-r+m}\!\!\!\gamma_{d,m,s}(y,\!r)\pi_{d-\!2k+\!2}^{(1,1)}(y,\!r), \end{array} $$ $${}\ell_{n,k}^{(\alpha)}=p_{1}^{(1)},\,\text{for}\,n=0;\quad p_{0}^{(n)}p_{01},\,\text{for}\, 1\leq n\leq k;\quad p_{01}\left[\lambda_{n,k}^{(0,0)}(0)+\lambda_{n,k}^{(1,0)}(0)\right],\,\text{for}\,n\geq k+1, $$ $$ {}\ell_{n,k}^{(\beta)}=1,\,\text{for}\,n=0;\quad p_{10},\,\text{for}\, 1\leq n\leq k;\quad \!p_{10}(p_{0}^{(1)})^{-1}\left[\lambda_{n,k}^{(0,0)}(0)+\lambda_{n,k}^{(0,1)}(0)\right],\,\text{for}\,n\geq k+1 $$ $$\begin{array}{@{}rcl@{}} {}\gamma_{d,m,s}(y,r)\,=\,{y-1\choose r-2}{r-2\choose m-2}{s-mk+m-1\choose m-1}C(d-y-s-r+m,r-m,k\,-\,2). \end{array} $$ For 1≤r 1≤r 2≤n let \(Y_{r_{1},r_{2}}\), \(R_{r_{1},r_{2}}\), \(L_{r_{1},r_{2}}\), \(S_{r_{1},r_{2},k}\), \(D_{r_{1},r_{2},k}\), \(G_{r_{1},r_{2},k}\) be RVs defined on the subsequence \(X_{r_{1}}, X_{r_{1}+1},\ldots,X_{r_{2}}\) of \(\{X_{t}\}_{t=1}^{n}\). For m≥2 define the event $$\begin{array}{@{}rcl@{}} \lefteqn{\Delta_{r_{1},r_{1}+d-1}(d,s,m,y,r)}\\ & & =\{D_{r_{1},r_{1}+d-1,k}=d, G_{r_{1},r_{1}+d-1,k}=m, S_{r_{1},r_{1}+d-1,k}=s, Y_{r_{1},r_{1}+d-1}=y, R_{r_{1},r_{1}+d-1}=r\}. \end{array} $$ An element of this event is a 0 - 1 sequence of length d, starting and ending with a 1, for which y j 's and z j 's, representing the lengths of the failure and success runs, respectively, satisfy the conditions: y 1+y 2+…+y r−1=y, y j ≥1, 1≤j≤r−1. \(\phantom {\dot {i}\!}z_{1}+z_{i_{1}}+z_{i_{2}}+\ldots +z_{i_{m-2}}+z_{r}=s\), z j ≥k, j∈{1,i 1,i 2,…,i m−2,r}, for some specific combination {1,i 1,i 2,…,i m−2,r} of {1,2,…,r−1,r} among the \({r-2\choose m-2}\) ones. \(z_{i_{m-1}}+z_{i_{m}}+\ldots +z_{i_{r-2}}=d-y-s\), \(1\leq z_{i_{j}}\leq k-1\), m−1≤j≤r−2, for {i m−1,…,i r−2}∈{1,2,…,r}−{1,i 1,i 2,…,i m−2,r}. Fix i 1,i 2,…,i m−2. Then the number of such sequences, i.e. the number of solutions of the system (a)-(c), is $${y-1\choose r-2}C(d-y-s-r+m,r-m,k-2){s-mk+m-1\choose m-1} $$ and each such sequence has probability $$p_{1}^{(1)}p_{11}^{k-1}(p_{1}^{(1)})^{-1}\pi_{d-2k+2}^{(1,1)}(y,r)p_{11}^{k-1}=p_{11}^{2k-2}\pi_{d-2k+2}^{(1,1)}(y,r). 
$$ $$\begin{array}{@{}rcl@{}} P(\Delta_{r_{1},r_{1}+d-1}(d,s,m,y,r))&=& p_{11}^{2k-2}\pi_{d-2k+2}^{(1,1)}(y,r){r-2\choose m-2}{y-1\choose r-2}{s-mk+m-1\choose m-1}\\ & & \times C(d-y-s-r+m,r-m,k-2). \end{array} $$ For k+2≤i≤n−k−d, m≥2, we have that $$\begin{array}{@{}rcl@{}} \lefteqn{P\left(U_{n,k}^{(1)}=i, D_{n,k}=d, G_{n,k}=m, S_{n,k}=s, Y_{i,i+d-1}=y, R_{i,i+d-1}=r\right)}\\ &=&P\Big\{\left[(L_{1,i-1}<k, X_{i-1}=0)\cap\left[(X_{1}=0)\cup(X_{1}=1)\right]\right]\cap \Delta_{i,i+d-1}(d,s,m,y,r)\\ & &\cap\left[(L_{i+d,n}<k, X_{i+d}=0)\cap\left[(X_{n}=0)\cup(X_{n}=1)\right]\right]\Big\}\\ &= & \left[\lambda_{i-1,k}^{(0,0)}(0)+\lambda_{i-1,k}^{(1,0)}(0)\right]p_{01}\\ & &\times \left(p_{1}^{(1)}\right)^{-1}P\left(\Delta_{i,i+d-1}(d,s,m,y,r)\right) p_{10}\left[\lambda_{n-i-d+1,k}^{(0,0)}(0)+\lambda_{n-i-d+1,k}^{(0,1)}(0)\right]/p_{0}^{(1)}\\ &=& \left[\lambda_{i-1,k}^{(0,0)}(0)+\lambda_{i-1,k}^{(1,0)}(0)\right]p_{01}\left(p_{1}^{(1)}\right)^{-1}p_{11}^{2k-2}\pi_{d-2k+2}^{(1,1)}(y,r)\\ & & \times{r-2\choose m-2}{y-1\choose r-2}{s-mk+m-1\choose m-1}\\ & & \times C(d-y-s-r+m,r-m,k-2)p_{10}\left(p_{0}^{(1)}\right)^{-1} \left[\lambda_{n-i-d+1,k}^{(0,0)}(0)+\lambda_{n-i-d+1,k}^{(0,1)}(0)\right]. \end{array} $$ By similar reasoning we get the remaining cases of i, i.e. 1≤i≤k+1 and n−d+1−k≤i≤n−d+1. Then summing with respect to i, y and r we get the result. □ Having found h n,k (d,m,s), we next proceed to obtain v n,k (d,m,s). In accomplishing it, the required probabilities α n,k for HMC1 are recalled, in Lemma 2, from Arapis et al. (2016) for k=1, and they are computed via Lemma 1 for 2≤k≤⌊(n−1)/2⌋. For n≥k≥1, the probability α n,k , for HMC1, is computed via the expressions:(I) For k=1, $$\begin{array}{@{}rcl@{}} \alpha_{n,1}&=&1-p_{00}^{n-3}\left\{p_{00}\left(1+(n-2)p_{01}\right)+\frac{(n-1)(n-2)}{2}p_{0}^{(1)}p_{01}^{2}\right\},\quad \text{if}\quad p_{00}=p_{11} \end{array} $$ $$\begin{array}{@{}rcl@{}} \alpha_{n,1}&=& 1-p_{0}^{(1)}p_{00}^{n-1}-p_{11}^{n-2}\left(p_{1}^{(1)}+p_{0}^{(1)}p_{01}\right)-p_{00} \left(p_{0}^{(1)}p_{01}+p_{1}^{(1)}p_{10}\right)\frac{p_{11}^{n-2}-p_{00}^{n-2}}{p_{11}-p_{00}} \\ & &\quad -p_{0}^{(1)}p_{01}p_{10}\frac{p_{11}^{n-1}-p_{00}^{n-2} \left[p_{11}+(n-2)\left(p_{11}-p_{00}\right)\right]}{\left(p_{11}-p_{00}\right)^{2}},\quad \text{if}\quad p_{00}\neq p_{11}. \end{array} $$ (II) For 2≤k≤n, $$\begin{array}{@{}rcl@{}}\alpha_{n,k}=1-\sum_{(i,j)\in \{0,1\}^{2}}\left[\lambda_{n,k}^{(i,j)}(0)+\lambda_{n,k}^{(i,j)}(1)\right]. \end{array} $$ For n≥3, 1≤k≤⌊(n−1)/2⌋, (d,m,s)∈Ω n,k , \(0<p_{1}^{(1)}<1\), the PMF v n,k (d,m,s) for a HMC1, with given P and p (1), is calculated by $$ v_{n,k}(d,m,s)=\alpha_{n,k}^{-1}h_{n,k}(d,m,s), $$ where α n,k and h n,k (d,m,s) are provided by Lemma 2 and Theorems 1 (for k=1) and 2 (for 2≤k≤⌊(n−1)/2⌋), respectively. Remark 1 For IID sequences, in implementing Theorem 3, one has to take into consideration Eqs. (10) - (11), (19) and (21). Moreover, for speeding up calculations, one has to set π n (y) in front of the inner summation in (22). A numerical example In this example we compute some indicative numerics concerning two model (i.e. HMC1 and IID) 0−1 sequences \(\{X_{t}\}_{t=1}^{n}\) which are considered in the paper. The common length of these was taken small, i.e. n=8, so that the required computations can also be carried out by a hand/pocket calculator and thus it is possible to gain insight in the formulae developed in Section Results, and also because of space limitations. The sequences that have been used are as follows. 
Table 1: An IID sequence with p 1=0.5. Table 2: A HMC1 sequence with p 00=p 11=0.9, \(p_{1}^{(1)}=0.5\). Table 1 0−1 IID sequence with p 1=0.5 Table 2 0−1 HMC1 sequence with p 00=p 11=0.9, \(p_{1}^{(1)}=0.5\) Both tables depict for k=1,2,3, v 8,k (d,m,s), (d,m,s)∈Ω 8,k and f 8,k (d), 2k+1≤d≤8 illustrating the numeric values of the involved probabilities. v 8,k (d,m,s) and f 8,k (d) were computed via Eqs. (29) and (17), respectively. Discussion and further study In this article we have derived exact closed form expressions for PMF v n,k (d,m,s), n≥3, 1≤k≤⌊(n−1)/2⌋, (d,m,s)∈Ω n,k , of the RV V n,k ∣ n,k defined on a 0−1 sequence of homogeneous Markov-dependent trials. The method used is a combinatorial one relied on results exploiting the internal structure of such a sequence. As it is noticed in the Introduction the application domain of runs contains a diverse range of fields. Indicative potential ones are next discussed. Encoding, compression and transmission of digital information calls for the understanding the distributions of runs of 1s or 0s. Such a knowledge helps in analyzing, and also in comparing, several techniques used in communication networks. In such networks 0−1 data ranging from a few kilobytes (e.g. e-mails) to many gigabytes of greedy multimedia applications (e.g. video on demand) are highly encoded, decoded and eventually proceeded under security. For details, see e.g., Sinha and Sinha (2009), Makri and Psillakis (2011a) and Tabatabaei and Zivic (2015). An area where the study of runs of 1s and 0s has become increasingly useful is the field of bioinformatics or computational biology. For instance, molecular biologists design similarity tests between two DNA sequences where a 1 is interpreted as a match of the sequences at a given position and everything else as a 0. Moreover, the probabilistic analysis of such sequences according to the form, the length and the number of detected patterns as well as of the positions and the lengths of the segments of the sequence in which they are concentrated, probably suggests a functional reason for the internal structure of the examined sequence. The latter facts might be useful in suggesting a further investigation of the underline sequence(s) by biologists. See, e.g. Avery and Henderson (1999), Benson (1999) and Nuel et al. (2010). Another active area where run statistics, in particular G n,k and S n,k , have interesting statistical applications is that connected to hypothesis testing; e.g., in tests of randomness. For a systematic study of such a topic, we refer among others, the works of Koutras and Alexandrou (1997) and Antzoulakos et al. (2003). Accordingly, it is reasonable for one to use the exact expressions obtained for v n,k (d,m,s) in applications like the ones mentioned above. This is so, because this distribution, as a joint one, is more flexible than each one of its marginals which have been used in such applications. See, e.g. Lou (2003), Makri and Psillakis (2011b) and Arapis et al. (2016). Moreover, in handling 0 - 1 sequences of a large length, with dependent or not elements, a Monte - Carlo simulation, based on Eqs. (1) - (4) would be a useful tool in obtaining approximate values for v n,k (d,m,s). In addition, the general approximating methods, suggested by Johnson and Fu (2014), might be helpful in deriving approximate values for f n,k (d). Antzoulakos, DL, Bersimis, S, Koutras, MV: On the distribution of the total number of run lengths. Ann. Inst. Statist. Math. 55, 865–884 (2003). 
Antzoulakos, DL, Chadjiconstantinidis, S: Distributions of numbers of success runs of fixed length in Markov dependent trials. Ann. Inst. Statist. Math. 53, 559–619 (2001). Arapis, AN, Makri, FS, Psillakis, ZM: On the length and the position of the minimum sequence containing all runs of ones in a Markovian binary sequence. Statist. Probab. Lett. 116, 45–54 (2016). Arapis, AN, Makri, FS, Psillakis, ZM: Distribution of statistics describing concentration of runs in non homogeneous Markov-dependent trials. Commun. Statist. Theor. Meth. (2017). doi:10.1080/03610926.2017.1337144. Avery, PJ, Henderson, D: Fiting Markov chain models to discrete state series such as DNA sequences. Appl. Statist. 48(Part 1), 53–61 (1999). Balakrishnan, N, Koutras, MV: Runs and Scans with Applications. Wiley, New York (2002). Benson, G: Tandem repeats finder: a program to analyze DNA sequences. Nucleic Acids Res. 27, 573–580 (1999). Eryilmaz, S: Some results associated with the longest run statistic in a sequence of Markov dependent trials. Appl. Math. Comput. 175, 119–130 (2006). Eryilmaz, S: Discrete time shock models involving runs. Statist. Probab. Lett. 107, 93–100 (2015). Eryilmaz, S: Generalized waiting time distributions associated with runs. Metrika. 79, 357–368 (2016). Eryilmaz, S: The concept of weak exchangeability and its applications. Metrika. 80, 259–271 (2017). Eryilmaz, S, Yalcin, F: Distribution of run statistics in partially exchangeable processes. Metrika. 73, 293–304 (2011). Feller, W: An Introduction to Probability Theory and Its Applications. 3rd Ed., Vol. I. Wiley, New York (1968). Fu, JC, Lou, WYW: Distribution Theory of Runs and Patterns and Its Applications: A finite Markov chain imbedding approach. World Scientific, River Edge (2003). Johnson, BC, Fu, JC: Approximating the distributions of runs and patterns. J. Stat. Distrib. Appl. 1:5, 1–15 (2014). Koutras, MV: Applications of Markov chains to the distribution of runs and patterns. In: Shanbhag, DN, Rao, CR (eds.)Handbook of Statistics, pp. 431–472. Elsevier, North-Holland (2003). Koutras, MV, Alexandrou, V: Non-parametric randomness tests based on success runs of fixed length. Statist. Probab. Lett. 32, 393–404 (1997). Koutras, VM, Koutras, MV, Yalcin, F: A simple compound scan statistic useful for modeling insurance and risk management problems. Insur. Math. Econ. 69, 202–209 (2016). Lou, WYW: The exact distribution of the k-tuple statistic for sequence homology. Statist. Probab. Lett. 61, 51–59 (2003). Makri, FS, Philippou, AN, Psillakis, ZM: Success run statistics defined on an urn model. Adv. Appl. Prob. 39, 991–1019 (2007). Makri, FS, Psillakis, ZM: On success runs of a fixed length in Bernoulli sequences: Exact and asymptotic results. Comput. Math. Appl. 61, 761–772 (2011a). Makri, FS, Psillakis, ZM: On runs of length exceeding a threshold: normal approximation. Stat. Papers. 52, 531–551 (2011b). Makri, FS, Psillakis, ZM: On ℓ-overlapping runs of ones of length k in sequences of independent binary random variables. Commun. Statist. Theor. Meth. 44, 3865–3884 (2015). Makri, FS, Psillakis, ZM, Arapis, AN: Counting runs of ones with overlapping parts in binary strings ordered linearly and circularly. Intern. J. Statist. Probab. 2, 50–60 (2013). Makri, FS, Psillakis, ZM, Arapis, AN: Length of the minimum sequence containing repeats of success runs. Statist. Probab. Lett. 96, 28–37 (2015). Mood, AM: The distribution theory of runs. Ann. Math. Statist. 11, 367–392 (1940). 
Mytalas, GC, Zazanis, MA: Central limit theorem approximations for the number of runs in Markov-dependent binary sequences. J. Statist. Plann. Infer. 143, 321–333 (2013). Mytalas, GC, Zazanis, MA: Central limit theorem approximations for the number of runs in Markov-dependent multi-type sequences. Commun. Statist. Theor. Meth. 43, 1340–1350 (2014). Nuel, G, Regad, L, Martin, J, Camproux, A-C: Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data. Algorithm Mol. Biol. 5, 1–18 (2010). Riordan, AM: An Introduction to Combinatorial Analysis. Second Ed. John Wiley, New York (1964). Sinha, K, Sinha, BP: On the distribution of runs of ones in binary trials. Comput. Math. Appl. 58, 1816–1829 (2009). Tabatabaei, SAH, Zivic, N: A review of approximate message authentication codes. In: Zivic, N (ed.)Robust Image Authentication in the Presence of Noise, pp. 106–127. Springer International Publishing AG, Cham (ZG), Switzerland (2015). The authors wish to thank the Editor for the thorough reading, and the anonymous reviewers for useful comments and suggestions which improved the article. Department of Mathematics, University of Patras, Patras, 26500, Greece Anastasios N. Arapis & Frosso S. Makri Department of Physics, University of Patras, Patras, 26500, Greece Zaharias M. Psillakis Search for Anastasios N. Arapis in: Search for Frosso S. Makri in: Search for Zaharias M. Psillakis in: The authors, ANA, FSM and ZMP with the consultation of each other carried out this work and drafted the manuscript together. All authors read and approved the final manuscript. Correspondence to Frosso S. Makri. Arapis, A., Makri, F. & Psillakis, Z. Joint distribution of k-tuple statistics in zero-one sequences of Markov-dependent trials. J Stat Distrib App 4, 26 (2017) doi:10.1186/s40488-017-0080-5 Exact Distributions Binary trials Markov chain AMS Subject Classification Primary 60E05, 62E15 Secondary 60J10, 60C05 International Conference on Statistical Distributions and Applications, ICOSDA 2016
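As a complement to the article (this sketch is not part of it), the statistics defined in Section 2.1 are straightforward to compute directly for a given 0−1 sequence. The following Python function reproduces the values of V 40,k quoted in the worked example and could serve as the basis for the Monte Carlo checks mentioned in the Discussion.

```python
import re

def k_tuple_stats(seq, k):
    """Return V_{n,k} = (D_{n,k}, G_{n,k}, S_{n,k}) for a 0-1 string `seq`,
    or None if the sequence contains no run of 1s of length >= k."""
    # locate all maximal runs of 1s and keep those of length at least k
    runs = [(m.start(), m.end()) for m in re.finditer('1+', seq)
            if m.end() - m.start() >= k]
    if not runs:
        return None
    G = len(runs)                                # number of k-tuples of 1s
    S = sum(end - start for start, end in runs)  # number of 1s contained in them
    D = runs[-1][1] - runs[0][0]                 # span from first to last k-tuple
    return D, G, S

seq = "1110001100010001010011101111001001001001"
for k in (1, 2, 3, 4):
    print(k, k_tuple_stats(seq, k))
# expected: (40, 11, 19), (28, 4, 12), (28, 3, 10), (4, 1, 4)
```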
CommonCrawl
Comparing two means (3) Now that we have seen different ways of calculating the standard deviation and standard error, we can move to the next step of calculating the t statistic. We do so using the following formula: $$t = \frac{\bar{x}_1 - \bar{x}_2 - 0}{se}$$ \(\bar{x}_1\) represents the sample mean of the first sample, \(\bar{x}_2\) represents the sample mean of the second sample, the 0 represents the null hypothesis that the difference between the two sample means is zero, and \(se\) represents the standard error of the mean difference. Let's go back to our example where we had a sample of 100 males who do sports on average 4.2 hours per week and a sample of 150 females who do sports on average 5.8 hours per week. The standard deviation of the male sample was 2.3 hours and that of the female sample was 3.1 hours. All these variables are available in the sample code. Calculate the t score and assign it to the variable t_score.
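The exercise's sample code and language are not shown here; purely as an illustration, the following Python sketch assumes that the standard error from the previous step is the unpooled standard error of the difference in means.

```python
import math

# sample statistics quoted in the exercise
n_m, mean_m, sd_m = 100, 4.2, 2.3   # males
n_f, mean_f, sd_f = 150, 5.8, 3.1   # females

# unpooled standard error of the difference in means (assumed from the earlier step)
se = math.sqrt(sd_m**2 / n_m + sd_f**2 / n_f)

# t statistic under the null hypothesis of no difference between the means
t_score = (mean_m - mean_f - 0) / se
print(round(se, 3), round(t_score, 2))   # roughly 0.342 and -4.68
```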
CommonCrawl
A multiple kernel learning algorithm for drug-target interaction prediction André C. A. Nascimento1,2,3, Ricardo B. C. Prudêncio1 & Ivan G. Costa1,3,4 BMC Bioinformatics volume 17, Article number: 46 (2016) Cite this article Drug-target networks are receiving a lot of attention in recent years, given their relevance for pharmaceutical innovation and drug lead discovery. Different in silico approaches have been proposed for the identification of new drug-target interactions, many of which are based on kernel methods. Despite technical advances in recent years, these methods are not able to cope with large drug-target interaction spaces and to integrate multiple sources of biological information. We propose KronRLS-MKL, which models the drug-target interaction problem as a link prediction task on bipartite networks. This method allows the integration of multiple heterogeneous information sources for the identification of new interactions, and can also work with networks of arbitrary size. Moreover, it automatically selects the most relevant kernels by returning weights indicating their importance in the drug-target prediction at hand. Empirical analysis on four data sets using twenty distinct kernels indicates that our method achieves predictive performance higher than or comparable to that of 18 competing methods in all prediction tasks. Moreover, the predicted weights reflect the predictive quality of each kernel in exhaustive pairwise experiments, which indicates the method's success in automatically revealing relevant biological sources. Our analysis shows that the proposed data integration strategy is able to improve the quality of the predicted interactions, and can speed up the identification of new drug-target interactions as well as identify relevant information for the task. The source code and data sets are available at www.cin.ufpe.br/~acan/kronrlsmkl/. Drug-target networks are receiving a lot of attention in recent years, given their relevance for pharmaceutical innovation and drug repositioning purposes [1–3]. Although the number of known interactions between drugs and target proteins has been increasing, the number of targets for approved drugs is still only a small proportion (<10 %) of the human proteome [1]. Recent advances in high-throughput methods provide ways to produce large data sets about molecular entities such as drugs and proteins. There is also an increase in the availability of reliable databases integrating information about interactions between these entities. Nevertheless, as the experimental verification of such interactions does not scale with the demand for innovation, the use of computational methods for large-scale prediction is mandatory. There is also a clear need for systems-based approaches to integrate these data for drug discovery and repositioning applications [1]. Recently, an increasing number of methods have been proposed for drug-target interaction (DTI) prediction. They can be categorized as ligand-based, docking-based, or network-based methods [4]. The docking approach, which can provide accurate estimates of DTIs, is computationally demanding and requires a 3D model of the target protein. Ligand-based methods, such as the quantitative structure activity relationship (QSAR), are based on a comparison of a candidate ligand to the known ligands of a biological target [5]. However, the utility of these ligand-based methods is limited when there are few ligands for a given target [2, 4, 6].
Alternatively, network based approaches use computational methods and known DTIs to predict new interactions [4, 5]. Even though ligand-based and docking-based methods are more precise when compared to network based approaches, the latter are more adequate for the estimation of new interactions from complete proteomes and drugs catalogs [1]. Therefore, it can indicate novel candidates to be evaluated by more accurate methods. Most network approaches are based on bipartite graphs, in which the nodes are composed of drugs (small molecules) and biological targets (proteins) [3, 7, 8]. Edges between drugs and targets indicate a known DTI (Fig.1). Given a known interaction network, kernel based methods can be used to predict unknown drug-target interactions [2, 9–11]. A kernel can be seen as a similarity matrix estimated on all pairs of instances. The main assumption behind network kernel methods is that similar ligands tend to bind to similar targets and vice versa. These approaches use base kernels to measure the similarity between drugs (or targets) using distinct sources of information (e.g., structural, pharmacophore, sequence and function similarity). A pairwise kernel function, which measures the similarity between drug-target pairs, is obtained by combining a drug and a protein base kernel via kernel product. Overview of the proposed method. a The drug-target is a bipartite graph with drugs (left) and proteins (right). Edges between drugs and proteins (solid line) indicates a known drug-protein interaction. The drug-protein interaction problem is defined as finding unknown edges (dashed lines) with the assumption that similar drugs (or proteins) should share the same edges. b KronRLS-MKL uses several drugs (and protein) kernels to solve the drug-target interaction problem. Distinct Kernels are obtained by measuring similarities of drugs (or proteins) using distinct information sources. c KronRLS-MKL provides not only novel predicted interactions as it indicates the relevance (weights) of each kernel used in the predictions The majority of previous network approaches use classification methods, as Support Vector Machines (SVM), to perform predictions over the drug-target interaction space [2, 4]. However, such techniques have major limitations. First, they can only incorporate one pair of base kernels at a time (one for drugs and one for proteins) to perform predictions. Second, the computation of the pairwise kernel matrix for the whole interaction space (all possible drug-target pairs) is computationally unfeasible even for a moderate number of drugs and targets. Moreover, most drug target interaction databases provide no true negative interaction examples. The common solution for these issues is to randomly sample a small proportion of unknown interactions to be used as negative examples. While this approach provides a computationally trackable small drug-target pairwise kernel, it generates an easier but unreal classification task with balanced class size [12]. An emerging machine learning (ML) discipline focused on the search for an optimal combination of kernels, called Multiple Kernel Learning (MKL) [13]. MKL-like methods have been previously proposed to the problem of DTI prediction [14–16] and the closely related protein-protein interaction (PPI) prediction problem [17, 18]. This is extremely relevant, as it allows the use of distinct sources of biological information to define similarities between molecular entities. 
However, since traditional MKL methods are SVM-based [13, 19], they are subject to memory limitations imposed by the pairwise kernel, and are not able to perform predictions in the complete drugs vs. protein space. Moreover, MKL approaches used in PPI prediction problem [17, 18] and protein function prediction [20, 21] can not be applied to bipartite graphs, as the problem at hand. Currently, we are only aware of two recent works [19, 22] proposing MKL approach to integrate similarity measures for drugs and targets. Drug-target prediction fits a link prediction problem [4], which can be solved by a Kronecker regularized least squares approach (KronRLS) [10]. A single kernel version of this method has been recently applied to drug-target prediction problem [10, 11]. A recent survey indicated that KronRLS outperforms SVM based methods in DTI prediction [2]. KronRLS uses Kronecker product algebraic properties to be able to perform predictions on the whole drug-target space, without the explicit calculation of the pairwise kernels. Therefore, it can cope with problems on large drugs vs. proteins spaces. However, KronRLS can not be used on a MKL context. In this work, we propose a new MKL algorithm to automatically select and combine kernels on a bipartite drug-protein prediction problem, the KronRLS-MKL algorithm (Fig 1). For this, we extend the KronRLS method to a MKL scenario. Our method uses L2 regularization to produce a non-sparse combination of base kernels. The proposed method can cope with large drug vs. target interaction matrices; does not requires sub-sampling of the drug-target network; and is also able to combine and select relevant kernels. We perform an empirical analysis using drug-target datasets previously described [23] and a diverse set of drug kernels (10) and protein kernels (10). In our experiments, we considered three different scenarios in the DTI prediction [2, 11, 24]: pair prediction, where every drug and target in the training set have at least one known interaction; or the 'new drug' and 'new target' setting, where some drugs and targets are present only in the test set, respectively. A comparative analysis with top performance single kernel approaches [2, 8, 10, 25–27] and all competing integrative approaches [14, 15, 22] demonstrates that our method is better or competitive in the majority of evaluated scenarios. Moreover, KronRLS-MKL was able to select and also indicate the relevance of kernels, in the form of weights, for each problem. In this work, we propose an extension of the KronRLS algorithm under recent developments of the MKL framework [28] to address the problem of link prediction on bipartite networks with multiple kernels. Before introducing our method, we will describe the RLS and the KronRLS algorithms (for further information, see [10, 11]). RLS and KronRLS Given a set of drugs \(\phantom {\dot {i}\!}D = \{ d_{1}, \ldots, d_{n_{d}}\}\), targets \(\phantom {\dot {i}\!}T = \{ t_{1}, \ldots, t_{n_{t}}\}\), and the set of training inputs x i (drug-target pairs) and their binary labels \(y_{i} \in \mathbb {R}\) (where 1 stands for a known interaction and 0 otherwise), with 1<i≤n, n=|D||T| (number of drug-target pairs). 
The RLS approach minimizes the following function [29]: $$ J(f) = \frac{1}{2n}\sum\limits_{i=1}^{n}(y_{i} - f(x_{i}))^{2} + \frac{\lambda}{2} \parallel f {\parallel_{K}^{2}} \;, $$ where ∥f∥ K is the norm of the prediction function f on the Hilbert space associated to the kernel K, and λ>0 is a regularization parameter which determines the compromise between the prediction error and the complexity of the model. According to the representer theorem [30], a minimizer of the above objective function admits a dual representation of the following form $$ f(x) = \sum\limits_{i=1}^{n} a_{i} K(x,x_{i}) \;, $$ where \(K: |D||T| \times |D||T| \rightarrow \mathbb {R}\) is named the pairwise kernel function and a is the vector of dual variables corresponding to each separation constraint. The RLS algorithm obtains the minimizer of Eq. 1 solving a system of linear equations defined by (K+λ I)a=y, where a and y are both n-dimensional vectors consisting of the parameters a i and labels y i . One can construct such pairwise kernel as the product of two base kernels, namely K((d,t),(d ′,t ′))=K D (d,d ′)K T (t,t ′), where K D and K T are the base kernels for drugs and targets, respectively. This is equivalent to the Kronecker product of the two base kernels [4, 31]: K=K D ⊗K T . The size of the kernel matrix makes the model training computationally unfeasible even for moderate number of drugs and targets [4]. The KronRLS algorithm is a modification of RLS, and takes advantage of two specific algebraic properties of the Kronecker product to speed up model training: the so called vec trick [31] and the relation of the eigendecomposition of the Kronecker product to the eigendecomposition of its factors [11, 32]. Let \(K_{D} = Q_{D} \Lambda _{D} {Q_{D}^{T}}\) and \(K_{T} = Q_{T} \Lambda _{T} {Q_{T}^{T}}\) be the eigendecomposition of the kernel matrices K D e K T . The solution a can be given by solving the following equation [11]: $$ \boldsymbol{a} = vec(Q_{T} C {Q_{D}^{T}}) \;, $$ where v e c(·) is the vectorization operator that stacks the columns of a matrix into a vector, and C is a matrix defined as: $$ C = (\Lambda_{D} \otimes \Lambda_{T} + \lambda I)^{-1}vec({Q_{T}^{T}} Y^{T} Q_{D}) \;. $$ The KronRLS algorithm is well suited for the large pairwise space involved on the DTI prediction problem, since the estimation of vector a using Eqs. 3 and 4 is a much faster solution compared to the original RLS estimation process in such scenario. However, it does not support the use of multiple kernels. KronRLS MKL In this work, a vector of different kernels is considered, i.e., \(\boldsymbol {k}_{D} = ({K_{D}^{1}}, {K_{D}^{2}},\ldots, K_{D}^{P_{D}})\) and \(\boldsymbol {k}_{T} = ({K_{T}^{1}}, {K_{T}^{2}}, \ldots, K_{T}^{P_{T}})\), P D and P T indicate the number of base kernels defined over the drugs and target set, respectively. In this section, we propose an extension of KronRLS to handle multiple kernels. The kernels can be combined by a linear function, i.e., the weighted sum of base kernels, corresponding to the optimal kernels \(K_{D}^{*}\) and \(K_{T}^{*}\): $$K_{D}^{*} = \sum\limits_{i=1}^{P_{D}} {\beta_{D}^{i}} {K_{D}^{i}} \; \;, \; \; K_{T}^{*} = \sum\limits_{j=1}^{P_{T}} {\beta_{T}^{j}} {K_{T}^{j}}, $$ where \(\boldsymbol {\beta }_{D} = \left \{{\beta _{D}^{1}},\ldots,\beta _{D}^{P_{D}}\right \}\) and \(\boldsymbol {\beta }_{T} = \left \{{\beta _{T}^{1}},\ldots,\beta _{T}^{P_{T}}\right \}\), correspond to the weights of drug and protein kernels, respectively. 
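As an illustration (a minimal NumPy sketch following the notation above, not the authors' released implementation), the closed-form KronRLS solution of Eqs. (3)-(4) can be computed from the eigendecompositions of the two base kernels alone, reading Eq. (4) with an implicit unvec so that the inverse of the diagonal factor becomes an element-wise division. Given fixed weights, the combined kernels K_D* and K_T* defined above would simply replace K_D and K_T in the call.

```python
import numpy as np

def combine(kernels, weights):
    # weighted sum of base kernels, e.g. K_D* = sum_i beta_D^i * K_D^i
    return sum(w * K for w, K in zip(weights, kernels))

def kron_rls(KD, KT, Y, lam=1.0):
    """Solve the KronRLS system for the pairwise kernel K = K_D (x) K_T
    without ever forming the Kronecker product.
    KD: (n_d, n_d) drug kernel; KT: (n_t, n_t) target kernel;
    Y:  (n_d, n_t) binary interaction matrix.
    Returns the (n_d, n_t) matrix of predicted interaction scores."""
    lam_d, QD = np.linalg.eigh(KD)          # K_D = Q_D Lambda_D Q_D^T
    lam_t, QT = np.linalg.eigh(KT)          # K_T = Q_T Lambda_T Q_T^T
    # element-wise division replaces (Lambda_D (x) Lambda_T + lam*I)^{-1}, cf. Eq. (4)
    C = (QT.T @ Y.T @ QD) / (np.outer(lam_t, lam_d) + lam)
    A = QT @ C @ QD.T                       # dual variables, a = vec(A), cf. Eq. (3)
    return (KT @ A @ KD.T).T                # predictions f = K a, reshaped to drugs x targets
```

In the multiple kernel setting described next, such a solver would be called with the current K_D* and K_T*, and the weights beta_D and beta_T would then be re-estimated between successive calls.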
In [28], the author demonstrated that MKL can be interpreted as a particular instance of a kernel machine with two layers, in which the second layer is a linear function. His work provides the theoretical basis for the development of a MKL extension for the closely related KronRLS algorithm in our work. The classification function of Eq. 2 can be written in matricial form, f a =K a [29] and applying the well known property of the Kronecker product, (A⊗B)v e c(X)=v e c(B X A T)[32], we have: $$\begin{array}{*{20}l} f_{a}(X) &= K \boldsymbol{a} \\ &= \left(K_{D}^{*} \otimes K_{T}^{*}\right) vec\left(Q_{T} C {Q_{D}^{T}}\right) \\ &= \left(K_{T}^{*} \left(Q_{T} C {Q_{D}^{T}}\right) \left(K_{D}^{*}\right)^{T}\right). \end{array} $$ This way, we can rewrite the classification function as \(\left (K_{T}^{*} A \left (K_{D}^{*}\right)^{T}\right)\), where A=u n v e c(a). Using the same iterative approach considered in previous MKL strategies [13], we propose the use of a two step optimization process, in which the optimization of the vector a is interleaved with the optimization of the kernel weights. Given two initial weight vectors, \(\boldsymbol {\beta }_{D}^{0}\) and \(\boldsymbol {\beta }_{T}^{0}\), an optimal value for the vector a, using Eq. 3 is found, and with such optimal a, we can proceed to find optimal β D and β T . More specifically, Eq. 1 can be redefined when a is fixed, and knowing that \(\parallel f {\parallel _{F}^{2}}=\boldsymbol {a}^{T}K\boldsymbol {a}\) [28], we have: $$\boldsymbol{u} = \left(\boldsymbol{y} - \frac{\lambda \boldsymbol{a}}{2}\right), $$ then, $$ J(f_{a}) = \frac{1}{2 \lambda n}\parallel \boldsymbol{u} - K\boldsymbol{a} {\parallel_{2}^{2}} + \frac{1}{2}\boldsymbol{a}^{T} (y-\lambda\boldsymbol{a}). $$ Since the second term does not depend on K (and thus does not depend on the kernel weights), and, as y and a are fixed, it can be discarded from the weights optimization procedure. Note that we are not interested in a sparse selection of base kernels as in [28], therefore we introduce a L2 regularization term to control sparsity [33] of the kernel weights, also known as a ball constraint. This term is parameterized by the σ regularization coefficient. Additionally, we can convert u to its matrix form by the application of the unvec operator, i.e., U=u n v e c(u), and also use a more appropriate matrix norm (Frobenius, ∥A∥2≤∥A∥ F [32]). In this way, for any fixed values of a and β T , the optimal value for the combination vector is obtained by solving the optimization problem defined as: $$\begin{array}{*{20}l} & \underset{\boldsymbol{\beta}_{D}}{\text{min}} \;\;\;\frac{1}{2\lambda n}\parallel U - \boldsymbol{m}_{D} \boldsymbol{\beta}_{D} \parallel_{F} + \; \sigma \parallel \boldsymbol{\beta}_{D} {\parallel_{2}^{2}} \end{array} $$ $$\begin{array}{*{20}l} & \boldsymbol{m}_{D} = \left(K_{T}^{*} A \left({K_{D}^{1}}\right)^{T}, K_{T}^{*} A \left({K_{D}^{2}}\right)^{T},\ldots, K_{T}^{*} A\left(K_{D}^{P_{A}}\right)^{T}\right) \end{array} $$ while the optimal β T can be found fixing the values of a and β D , according to: $$\begin{array}{*{20}l} & \underset{\boldsymbol{\beta}_{T}}{\text{min}} \;\;\;\frac{1}{2\lambda n}\parallel U - \boldsymbol{\beta}_{T} \boldsymbol{m}_{T} \parallel_{F} + \; \sigma \parallel \boldsymbol{\beta}_{T} {\parallel_{2}^{2}} \end{array} $$ $$\begin{array}{*{20}l} & \boldsymbol{m}_{T} = \left({K_{T}^{1}} A \left(K_{D}^{*}\right)^{T}, {K_{T}^{2}} A \left(K_{D}^{*}\right)^{T},..., K_{T}^{P_{T}} A \left(K_{D}^{*}\right)^{T} \right). 
The optimization method used here is the interior-point optimization algorithm [34] implemented in MATLAB [35]. The datasets considered were first proposed by [23] and have been used by most competing methods [2, 10, 11, 15, 25]. Each dataset consists of a binary matrix containing the known interactions of a given set of drug targets, namely Enzyme (E), Ion Channel (IC), GPCR and Nuclear Receptors (NR), based on information extracted from the KEGG BRITE [36], BRENDA [37], SuperTarget [38] and DrugBank [39] databases. All four datasets are extremely unbalanced if we consider the whole drug-target interaction space, i.e., the number of known interactions is far lower than the number of unknown interactions, as presented in Table 1. Table 1 Number of drugs, targets and positive instances (known interactions) vs. the number of negative (or unknown) interactions in each dataset In order to analyze each type of entity from different points of view, we extracted 20 distinct kernels (10 for targets and 10 for drugs) from chemical structures, side effects, amino acid sequences, biological function, PPI interactions and network topology (a summary of the base kernels is presented in Table 2). Table 2 Network entities and respective kernels considered for combination purposes
Protein kernels
Here we use the following information sources about target proteins: amino acid sequence, functional annotation and proximity in the protein-protein network. Concerning sequence information, we consider the normalized score of the Smith-Waterman alignment of the amino acid sequences (SW) [23], as well as different parametrizations of the Mismatch (MIS) [40] and the Spectrum (SPEC) [41] kernels. For the Mismatch kernel, we evaluated four combinations of distinct values for the k-mer length (k=3 and k=4) and the maximal number of mismatches per k-mer (m=1 and m=2), namely MIS-k3m1, MIS-k3m2, MIS-k4m1 and MIS-k4m2; for the Spectrum kernel, we varied the k-mer length (k=3 and k=4, SPEC-k3 and SPEC-k4, respectively). Both the Mismatch and Spectrum kernels were calculated using the R package KeBABS [42]. The Gene Ontology semantic similarity kernel (GO) was used to encode functional information. GO terms were extracted from the BioMART database [43], and the semantic similarity scores between the GO annotation terms were calculated using the csbl.go R package [44] with the Resnik algorithm [45]. We also extracted a similarity measure from the human protein-protein interaction network (PPI), obtained from the BioGRID database [46]. The similarity between each pair of targets was calculated based on the shortest distance in the corresponding PPI network, according to: $$S(p,p') = A e^{b D(p,p')}, $$ where the A and b parameters were set as in [14] (A=0.9, b=1), and D(p,p′) is the shortest hop distance between proteins p and p′.
Drug kernels
As drug information sources, we consider 6 distinct chemical structure kernels and 3 side-effect kernels. Chemical structure similarity between drugs was obtained by applying the SIMCOMP algorithm [47] (obtained from [23]), defined as the ratio of common substructures between two drugs based on chemical graph alignment. We also computed the Lambda-k kernel (LAMBDA) [48], the Marginalized kernel (MARG) [49], the MINMAX kernel [50], the Spectrum kernel (SPEC) [48] and the Tanimoto kernel (TAN) [50]. These latter kernels were calculated with the R package Rchemcpp [48] with default parameters.
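To make the sequence-based protein kernels concrete, here is a toy Python version of the Spectrum kernel described above (the paper computes it with the KeBABS R package; the cosine normalization below, chosen so that self-similarity equals 1, is our own assumption).

```python
from collections import Counter

def spectrum_kernel(seq_a, seq_b, k=3):
    """Unnormalized spectrum kernel: inner product of k-mer count vectors."""
    counts_a = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
    counts_b = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
    return sum(c * counts_b.get(kmer, 0) for kmer, c in counts_a.items())

def spectrum_similarity(seq_a, seq_b, k=3):
    """Cosine-normalized spectrum kernel, so that K(x, x) = 1."""
    kab = spectrum_kernel(seq_a, seq_b, k)
    kaa = spectrum_kernel(seq_a, seq_a, k)
    kbb = spectrum_kernel(seq_b, seq_b, k)
    return kab / (kaa * kbb) ** 0.5 if kaa and kbb else 0.0

# e.g. SPEC-k3 between two short peptide fragments:
print(spectrum_similarity("MSTNPKPQRKTKRNTN", "MSTNAKPQRKTKRNSN", k=3))
```

The Mismatch kernel generalizes this idea by also counting k-mers that differ in at most m positions.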
Two distinct side-effect data sources were also considered. The first is the FDA adverse event reporting system (AERS), from which drug similarities based on side-effect (adverse event) keywords were first derived by [51]. The authors introduced two types of pharmacological profiles for drugs, one based on the frequency of side-effect keywords in adverse event reports (AERS-freq) and another based on the binary information (presence or absence) of a particular side effect in adverse event reports (AERS-bit). Since not every drug in the Nuclear Receptors, Ion Channel, GPCR and Enzyme datasets is also present in the AERS-based data, we extracted the similarities of the drugs present in AERS and assigned zero similarity to drugs not present. The second side-effect resource was the SIDER database1 [52]. This database contains information about commercial drugs and their recorded side effects or adverse drug reactions. Each drug is represented by a binary profile, in which the presence or absence of each side-effect keyword is coded as 1 or 0, respectively. Both the AERS- and SIDER-based profile similarities were obtained by the weighted cosine correlation coefficient between each pair of drug profiles [51].
Network topology information
We also use the drug-target network structure, in the form of a network interaction profile, as a similarity measure for both proteins and drugs. The idea is to encode the connectivity behavior of each node in the underlying network. The Gaussian Interaction Profile kernel (GIP) [10] was calculated for both drugs and targets.
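The GIP kernel can be computed directly from the known interaction matrix; a minimal sketch is given below (the bandwidth normalization follows the usual description of [10], and the function name and the small epsilon guard are our own).

```python
import numpy as np

def gip_kernel(Y, gamma_tilde=1.0):
    """Gaussian Interaction Profile kernel between the rows of an interaction matrix.

    Y: binary matrix with one interaction profile per row
       (e.g. rows = drugs, columns = targets).
    """
    sq_norms = np.sum(Y ** 2, axis=1)
    gamma = gamma_tilde / (np.mean(sq_norms) + 1e-12)   # normalize the bandwidth
    # Squared Euclidean distances between all pairs of interaction profiles.
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * Y @ Y.T
    return np.exp(-gamma * np.clip(d2, 0.0, None))

# Drug GIP kernel: gip_kernel(Y); target GIP kernel: gip_kernel(Y.T)
```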
Competing methods
We compare the predictive performance of the KronRLS-MKL algorithm against other MKL approaches, as well as in a single-kernel context (one kernel for drugs and one for targets). In the latter, we evaluate the performance of each possible combination of base kernels (Table 2) with the KronRLS algorithm, recently reported as the best method for predicting drug-target pairs with single paired kernels [2]. This resulted in a total of 10×10=100 different combinations. The best performing pairs were then used as baselines in our method evaluation, selected according to two distinct criteria: the kernel pair that achieved the largest area under the precision-recall curve (AUPR) on the training set, and, as a more optimistic approach, the pair with the largest AUPR on the testing set. Besides the combination of single kernels for drugs and targets, two different kinds of methods were adopted to integrate multiple kernels: (1) standard non-MKL kernel methods for DTI prediction, trained on the average of multiple kernels (respectively for drugs and targets); (2) actual MKL methods specifically proposed for DTI prediction.
Non-MKL approaches
We extend state-of-the-art methods [8, 10, 25–27] for the DTI prediction problem to a multiple kernel context. For this, we initially average multiple kernels to produce a single kernel (respectively for drugs and targets). Once we have a single average kernel (one for drugs and one for targets), we adopt a standard kernel method for DTI prediction, i.e., the base learner. In our experiments, two distinct combination strategies are used: the mean of base kernels and the kernel alignment (KA) heuristic previously proposed by [53]. We will briefly describe the base learners, followed by a short overview of the two combination strategies considered. The Bipartite Local Model (BLM) [26] is a machine learning based algorithm in which drug-target pairs are predicted by constructing so-called 'local models': an SVM classifier is trained for each drug in the training set, and the same is done for targets. Then, the maximum of the drug and target scores is used to predict new drug-target interactions. Since BLM demonstrated superior performance to the Kernel Regression Method (KRM) [23] in previous studies [2, 26], we did not consider KRM in our experiments. The Network-based Random Walk with Restart on the Heterogeneous network (NRWRH) [8] algorithm predicts new interactions between drugs and targets by simulating a random walk on the network of known drug-target interactions as well as on the drug-drug and protein-protein similarity networks. LapRLS and NetLapRLS were both proposed in [25]. Both are based on the RLS learning algorithm and perform similarity normalization by applying the Laplacian operator. Predictions are made for drugs and targets separately, and the final prediction scores are obtained by averaging the prediction results from the drug and target spaces. As noted previously, most previous SVM-based methods found in the literature can be reduced to the Pairwise Kernel Method (PKM) [27], with the distinction being made by the kernels used and the adopted combination strategy. PKM starts with the construction of a pairwise kernel, computed from the drug and target similarities. Given two drug-target pairs, (d,p) and (d′,p′), and the respective drug and target similarities, K_D and K_P, the pairwise kernel is given by K((d,p),(d′,p′))=K_D(d,d′)×K_P(p,p′). Once the pairwise matrix is computed, it is used to train an SVM classifier. The PKM [27], KronRLS, BLM, NRWRH, LapRLS and NetLapRLS algorithms cannot cope with multiple kernels. For this reason, we consider two simple methods available for kernel combination: the mean of base kernels and the kernel alignment (KA) heuristic [53]. The mean drug kernel is computed as \(K_{D}^{*} = 1 / P_{D} \sum _{i=1}^{P_{D}}{K_{D}^{i}}\), and the same can be done analogously for targets. KA is a heuristic for the estimation of kernel weights based on the notion of kernel alignment [54]. More specifically, the weight vector, β_D for instance, can be obtained by: $$ {\beta_{D}^{i}} = \frac{A\left({K_{D}^{i}},\boldsymbol{yy}^{T}\right)}{\sum\limits_{h=1}^{P_{D}} A\left({K_{D}^{h}},\boldsymbol{yy}^{T}\right)}, $$ where yy^T stands for the ideal kernel and y is the label vector. The alignment A(K,yy^T) of a given kernel K and the ideal kernel yy^T is defined as: $$ A\left(K,\boldsymbol{yy}^{T}\right) = \frac{\left\langle K,\boldsymbol{yy}^{T} \right\rangle_{F}}{n\sqrt{\langle K,K \rangle_{F}}}, $$ where \(\left \langle K,\boldsymbol {yy}^{T} \right \rangle _{F} = \sum \limits _{i=1}^{n}\sum \limits _{j=1}^{n} (K)_{\textit {ij}} \left (\boldsymbol {yy}^{T}\right)_{\textit {ij}}\). Once such combinations are performed, the resulting drug and protein kernels are used as input to the learning algorithm. We refer to the mean and KA variants by appending -MEAN and -KA, respectively, to each base learner.
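A direct transcription of the KA heuristic into NumPy might look as follows. This is only a sketch with our own names; how the label vector y is matched to the dimension of the drug (or target) kernels in practice, e.g. by building the ideal kernel from the interaction matrix, is not spelled out here, and we simply implement the alignment formula as stated.

```python
import numpy as np

def alignment(K, ideal):
    """Alignment A(K, yy^T) between a kernel and the ideal kernel, as defined above."""
    n = K.shape[0]
    return np.sum(K * ideal) / (n * np.sqrt(np.sum(K * K)))

def ka_weights(kernels, y):
    """Kernel-alignment weights for a list of base kernels sharing the label vector y."""
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    ideal = y @ y.T                                  # ideal kernel yy^T
    scores = np.array([alignment(K, ideal) for K in kernels])
    return scores / scores.sum()                     # normalized weights beta^i

def combine(kernels, beta):
    """Weighted sum of base kernels with the estimated weights."""
    return sum(b * K for b, K in zip(beta, kernels))
```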
Multiple kernel approaches
Similarity-based Inference of drug-TARgets (SITAR) [14] constructs a feature vector from the similarity values, where each feature is based on one drug-drug and one gene-gene similarity measure, resulting in a total of P_D×P_T features. Each feature is calculated by combining the drug-drug similarities between the query drug and other drugs with the gene-gene similarities between the query gene and other target genes across all true drug-target associations. The method also performs a feature selection procedure and yields the final classification scores using a logistic regression classifier. Gönen and Kaski [22] proposed the Kernelized Bayesian Matrix Factorization with Twin Multiple Kernel Learning (KBMF2MKL) algorithm, extending a previous work [55] to handle multiple kernels. KBMF2MKL factorizes the drug-target interaction matrix by projecting the drugs and the targets into a common subspace, where the projected drug and target kernels are multiplied. Normally distributed kernel weights for each subspace-projected kernel are then estimated without any constraints. The product of the final combined matrices is then used to make predictions. Wang et al. [15] propose a simple heuristic to first combine the drug and target similarities, and then use an SVM classifier to perform the predictions. Only the maximum similarity values of the drug and target kernel matrices are selected, resulting in two distinct kernels. These are then used to construct a pairwise kernel, computed from the drug and target similarities; once the pairwise matrix is computed, it is used to train an SVM classifier. This procedure is also known as the Pairwise Kernel Method (PKM) [27]. For this reason, we refer to the approach proposed by [15] as PKM-MAX. The authors in [15] suggest a weighted-sum approach as further work. They suggest learning the optimal convex combination of data sources by maximizing the correlation of the obtained kernel matrix with the topology of the drug-protein network. This objective can be achieved by solving a linear programming problem, as follows: $$\underset{\boldsymbol{\beta}_{D}}{\text{max}} \;\;\; \left|corr(K_{D}^{*}, dist)\right|, $$ where \(K_{D}^{*}\) corresponds to the optimal combination of drug kernel matrices with weight vector β_D, dist is the drug-drug distance matrix in the DTI network, and corr represents the correlation coefficient. Analogously, the same can be done for targets. We call this method WANG-MKL.
Experimental setup
Previous work [2, 11, 24] suggests that, in the context of paired-input problems, one should consider separately the experiments where the training and test sets share common drugs or proteins. In order to achieve a clear notion of the performance of each method, all competing approaches were evaluated under 5 runs of three distinct 5-fold cross-validation (CV) procedures (illustrated in the sketch following this list):
'new drug' scenario: this simulates the task of predicting targets for new drugs. In this scenario, the drugs in a dataset were divided into 5 disjoint subsets (folds); the pairs associated with 4 folds of drugs were then used to train the classifier and the remaining pairs were used for testing.
'new target' scenario: this corresponds to predicting interacting drugs for new targets. It is analogous to the above scenario, but considers 5 folds of targets.
Pair prediction: this consists of predicting unknown interactions between known drugs and targets. All drug-target interactions were split into five folds, from which 4 were used for training and 1 for testing.
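The three scenarios differ only in which pairs are held out; the sketch below illustrates the corresponding test masks (our own illustration; the orientation of the interaction matrix, rows = targets and columns = drugs, is an assumption).

```python
import numpy as np

def cv_test_masks(n_drugs, n_targets, scenario, n_folds=5, seed=0):
    """Yield boolean test masks over the (n_targets x n_drugs) interaction matrix."""
    rng = np.random.default_rng(seed)
    if scenario == "pair":
        # Hold out random drug-target pairs.
        for fold in np.array_split(rng.permutation(n_drugs * n_targets), n_folds):
            mask = np.zeros(n_drugs * n_targets, dtype=bool)
            mask[fold] = True
            yield mask.reshape(n_targets, n_drugs)
    elif scenario == "new_drug":
        # Hold out whole drugs: every pair involving a held-out drug is a test pair.
        for fold in np.array_split(rng.permutation(n_drugs), n_folds):
            mask = np.zeros((n_targets, n_drugs), dtype=bool)
            mask[:, fold] = True
            yield mask
    elif scenario == "new_target":
        # Hold out whole targets.
        for fold in np.array_split(rng.permutation(n_targets), n_folds):
            mask = np.zeros((n_targets, n_drugs), dtype=bool)
            mask[fold, :] = True
            yield mask
```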
Some of the competing methods (the PKM-based methods, WANG-MKL and SITAR) were trained with sub-sampled datasets, i.e., we randomly sampled from the unknown interaction set as many negatives as there are known interactions, since these methods cannot be executed on large networks [2, 4, 14, 15]. Although balanced classes are unlikely in real scenarios, we also performed experiments in context (3) using a sub-sampled test set, obtained by sampling as many negative examples as positive examples [14, 15] from the test fold. This experiment is relevant for comparison to previous work, since most previous studies on drug-target prediction performed under-sampling to evaluate predictive performance (see Additional file 1: Table S1).2 The hyperparameters of each competing method were optimized under a nested CV procedure, using the following values: for the SVM-based methods (PKM, BLM and WANG-MKL), the SVM cost parameter was evaluated over the interval {2^−1,…,2^3}; for the KronRLS-based methods, the λ parameter was evaluated over the interval {2^−15,2^−10,…,2^30}. The σ regularization coefficient of the KRONRLS-MKL algorithm was also optimized over the interval {0,0.25,0.5,0.75,1}. The number of components in KBMF2MKL was varied in the interval R∈{5,10,…,40}, and for LapRLS and NetLapRLS we varied β_d,β_t∈{0.25,0.50,…,1}. In NetLapRLS we also considered two distinct values for γ_d2,γ_t2∈{0.01,0.1}. For NRWRH the restart probability was evaluated in the set {0.1,0.2,…,0.9}. After the hyperparameters were selected for each method, the outer loop evaluated the predictive performance on the test set partition with the model built using the selected hyperparameters. The evaluation metric considered was the AUPR, as it allows a good quantitative estimate of the ability to separate the positive interactions from the negative ones. According to [56], this metric provides a better quality estimate for highly unbalanced data, since it punishes the existence of false positives (FP) more heavily. This is especially true for the datasets considered, as shown in Table 1, in which all datasets are extremely unbalanced.
Paired kernel experiments
As a base study, we evaluate the performance of KronRLS on all pairs of kernels (10×10 pairs). The AUPR results of all pairs of kernels for the Nuclear Receptors, GPCR, Ion Channel and Enzyme datasets are shown in more detail in the supplementary material (see Additional file 1). The performance of KronRLS varies drastically with the kernel choice, as clearly demonstrated by the average performance of each kernel in the single-kernel experiments (Fig. 2). For Nuclear Receptors, the best kernel pair combination was SPEC-k4 and GIP, while GIP and SW performed best in all other datasets. It is also important to notice the impact of different parametrizations of the Mismatch sequence kernel: its performance decreases as more mismatches are allowed inside a k-mer. Overall, both versions of AERS and the SIMCOMP, GIP, MINMAX and SIDER drug kernels showed better performance, while LAMBDA, MARG, SPEC and TAN performed worse. For targets, the GIP, GO, MIS-k4m1, SPEC and SW kernels performed better than the other target kernels.
Average performance of each single kernel with the KronRLS algorithm as base learner. The boxplots show the AUPR performance of drug and protein kernels across different kernel combinations
In this section, we compare the competing methods in terms of AUPR for all datasets.
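For reference, the AUPR can be computed as follows (a sketch using scikit-learn; the toy data are ours and only illustrate the kind of class imbalance the metric is meant to handle).

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, precision_recall_curve

def aupr(y_true, scores):
    """Area under the precision-recall curve."""
    precision, recall, _ = precision_recall_curve(y_true, scores)
    return auc(recall, precision)

# Toy example with roughly 2% positives, mimicking the imbalance in Table 1.
rng = np.random.default_rng(1)
y = (rng.random(5000) < 0.02).astype(int)
scores = rng.random(5000) + 0.5 * y      # noisy scores slightly favoring positives
print(aupr(y, scores), average_precision_score(y, scores))
```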
Concerning KronRLS, we use the best kernel pair (Best Pair) with the largest AUPR, as described in the previous section. This serves as a baseline to evaluate the MKL approaches. Results are presented in Table 3. In the pair prediction scenario, KRONRLS-MKL obtained the highest AUPR in all datasets. Its results are even superior to those of the best kernel pair under the optimistic selection. The results of KRONRLS-MKL in pair prediction are statistically significant against all other methods (at α=0.05), except for KRONRLS-KA and KRONRLS-MEAN, according to the Wilcoxon rank sum test (Additional file 2). Concerning the subsampled pair prediction, KRONRLS-MKL achieved the highest AUPR in the NR and IC datasets, while SITAR performed best in the GPCR and Enzyme data, where KRONRLS-MKL ranked second, just after SITAR (see Additional file 3: Table S1). The higher AUPR values obtained on the subsampled datasets in comparison to the unbalanced datasets clearly indicate that performing predictions on the complete data is a more difficult task. Moreover, the number of positive examples was negatively correlated with the dataset size for the complete datasets. Table 3 Results on MKL experiments on 5 × 5 cross-validation experiments In the 'new target' scenario, BLM-KA performed best in 3 of 4 datasets, followed closely by BLM-MEAN and KRONRLS-MKL, demonstrating that the local SVM model is more effective in such a scenario. BLM-KA performed better than all evaluated methods with the exception of BLM-MEAN, KBMF-MKL, KRONRLS-KA, KRONRLS-MEAN and KRONRLS-MKL (α=0.05; Additional file 2). In the 'new drug' problem, KRONRLS-MKL obtained higher AUPR in the NR and GPCR datasets, while BLM-KA had higher AUPR values in the IC and Enzyme data. Both KRONRLS-MKL and BLM-KA had statistically significantly higher AUPR (at α=0.05; Additional file 2) than all other competing methods. In order to give an overview of the performance of the evaluated methods, an average ranking of the AUPR values obtained by all methods across the four datasets is presented in Table 4. Table 4 Average ranking over all four datasets The methods also displayed distinct computational requirements. Memory usage was stable across all methods, except for the SVM-based algorithms (BLM, PKM, WANG-MKL), which demonstrated quadratic growth of the memory used in relation to the size of the dataset. This is in part due to the construction of the explicit pairwise kernel (see Additional file 3: Table S3). This fact makes such methods inadequate for contexts in which subsampling of pairs is undesirable. We now discuss computational time in the pair prediction scenario. The precomputed-kernel approaches (MEAN and KA) were overall the fastest on average, with the PKM-based methods requiring the least time to train and test the models (∼1 min), followed by the KronRLS-based and LapRLS-based algorithms (∼20 and 27 min, respectively). KBMF2MKL and BLM were the slowest, requiring more than 100 min on average for the same task. The lower computation time of the heuristic-based methods is explained by the absence of complex optimization procedures to find the kernel weights. KronRLS-MKL took a little less time than KBMF2MKL, taking an average of 74 min over the four datasets (see Additional file 3: Table S4).
Predictions on new drug-target interactions
In order to evaluate the quality of the final predictions in a more realistic scenario, we performed an experiment similar to that described by [10, 26].
We treated the most highly ranked drug-target pairs as the most likely true interactions, and performed a search in the current releases of four major databases (DrugBank [39], MATADOR [38], KEGG [57] and ChEMBL [58]). As the training datasets were generated almost eight years ago, new interactions included in these databases serve as an external validation set. We excluded interactions already present in the training data. We trained all methods with all interactions present in the original datasets. In the specific case of BLM and NRWRH, one model for drugs and another for targets were trained, and the maximum score for each DT pair was then considered for prediction. We then calculated the AUPR for each dataset separately, discarding already known interactions (see Additional file 3: Table S2). The low AUPR values of all methods indicate the difficulty of performing predictions in such a large search space. An average ranking (Fig. 3) of each method across all databases indicates the KronRLS methods as the best performing algorithms, followed by the single-kernel approaches. It is also important to highlight the poor performance of BLM-KA and BLM-MEAN in this task. This indicates a poor generalization capacity of the BLM framework for the drug-target prediction problem (see Table 3).
Mean AUPR ranking of each method when compared to the new interactions found in the updated databases. The KronRLS-based methods achieved superior performance when compared to other integration strategies
Next, a more practical assessment of the predictive power of KRONRLS-MKL was made by looking at the top 5 ranked interactions predicted by our method (Table 5). We observe that the great majority of interactions (14 out of 20) have already been described in ChEMBL, DrugBank or Matador. We focus our discussion on selected novel interactions. For example, in the Nuclear Receptor database, the 5th ranked prediction indicates the association of Tretinoin with the nuclear factor RAR-related orphan receptor A (RORa). Tretinoin is a drug currently used for the treatment of acne [59]. Interestingly, its molecular activity is associated with the activation of nuclear receptors of the closely related RAR family. Table 5 Top five predicted interactions by KRONRLS-MKL This is also a good example to illustrate the benefits of incorporating multiple sources of data. RORa and Tretinoin do not share nodes in the training set. All targets of Tretinoin have a high GO similarity to RORa (mean value of 0.8368) despite their low sequence similarity (SW mean value of 0.1563). In addition, one of the known targets of Tretinoin is NR0B1 (nuclear receptor subfamily 0, group B, member 1), a protein that is very close to RORa in the PPI network (similarity score of 0.90). Concerning the Ion Channel models, the predictions ranked 2 and 3 indicate the interaction of Verapamil and Diazoxide with ATP-binding cassette sub-family C member 8 (ABCC8). ABCC8 is one of the proteins encoding the sulfonylurea receptor (SUR1) and is associated with calcium regulation and diabetes type I [60]. Interestingly, there are positive reports of Diazoxide treatment preventing diabetes in rats [61].
Evaluation of kernel weights
The kernel weights given by KBMF2MKL, KRONRLS-MKL and WANG-MKL, as well as the KA heuristic, can be used to analyze the ability of such methods to identify the most relevant information sources. As there is no guideline or gold standard for this, we resort to a simple approach: compare the kernel weights (Fig. 4)
with the average performance of each kernel in the single-kernel experiments (Fig. 2). First, it is noticeable that the KA weights are very similar to a uniform (average) selection (0.10), which indicates that no clear kernel selection is performed. WANG-MKL and KRONRLS-MKL give low weights to the drug kernels LAMBDA, MARG, MINMAX, SPEC and TAN and to the protein kernel MIS-k3m2. These kernels have the overall worst AUPR in the single-kernel experiments, which indicates an agreement between both selection procedures. Although the weights assigned by KBMF2MKL are not subject to convex constraints, as indicated by the larger weights assigned to all kernels, they also provide a notion of the quality of the base kernels. We can observe a stronger preference for the GIP kernel in all datasets, even though the algorithm assigned a high weight to the lower-quality MIS-k3m2 kernel in three of the four datasets.
Comparison of the average final weights obtained by the Kernel Alignment (KA) heuristic, KBMF2MKL, KronRLS-MKL and WANG-MKL algorithms. As one can note, the KA heuristic produced weights close to the mean, while KRONRLS-MKL and WANG-MKL effectively discarded the most irrelevant kernels
We have presented a new Multiple Kernel Learning algorithm for the bipartite link prediction problem, which is able to identify and select the most relevant information sources for DTI prediction. Most previous MKL methods mainly solve the problem of MKL when kernels are built over the same set of entities, which is not the case for the bipartite link prediction problem, e.g. drug-target networks. Regarding predictions in drug-target networks, the sampling of negative/unknown examples, as a way to cope with large datasets, is a clear limitation [2]. Our method takes advantage of the KronRLS framework to efficiently perform link prediction on data of arbitrary size. In our experiments, the KronRLS-MKL algorithm demonstrated an interesting balance between accuracy and computational cost in relation to other approaches. It performed best in the "pair" prediction problem and the "new target" problem. In the 'new drug' and 'new target' prediction tasks, BLM-KA was also top ranked. This method, however, has a high computational cost, which arises from the fact that it requires a classifier for each DT pair [2]. Moreover, it obtained poor results in the evaluation scenario for predicting novel drug-protein interactions. The convex-constraint estimation of kernel weights correlated well with the accuracy of a brute-force paired kernel search. This non-sparse combination of kernels possibly increased the generalization of the model by reducing the bias towards a specific type of kernel. This usually leads to better performance, since the model can benefit from different heterogeneous information sources in a systematic way [33]. Finally, the algorithm's performance was not sensitive to class unbalance, and it can be trained over the whole interaction space without sacrificing performance. 1 http://sideeffects.embl.de/. 2 NRWRH cannot be applied to the pair prediction scenario [8], which is why this method was not considered in that context.
Csermely P, Korcsmáros T, Kiss HJM, London G, Nussinov R. Structure and dynamics of molecular networks: a novel paradigm of drug discovery: a comprehensive review. Pharmacol Ther. 2013; 138(3):333–408. doi:10.1016/j.pharmthera.2013.01.016. Ding H, Takigawa I, Mamitsuka H, Zhu S. Similarity-based machine learning methods for predicting drug-target interactions: a brief review. Brief Bioinform. 2013.
doi:10.1093/bib/bbt056. Chen X, Yan CC, Zhang X, Zhang X, Dai F. Drug–target interaction prediction: databases, web servers and computational models. Brief Bioinform. 2015:1–17. doi:10.1093/bib/bbv066. Yamanishi Y. Chemogenomic approaches to infer drug–target interaction networks. Data Min Syst Biol. 2013; 939:97–113. doi:10.1007/978-1-62703-107-3. Dudek AZ, Arodz T, Gálvez J. Computational methods in developing quantitative structure-activity relationships (QSAR): a review. Comb Chem High Throughput Screen. 2006; 9(3):213–8. Sawada R, Kotera M, Yamanishi Y. Benchmarking a wide range of chemical descriptors for drug-target interaction prediction using a chemogenomic approach. Mol Inform. 2014; 33(11-12):719–31. doi:10.1002/minf.201400066. Cheng F, Liu C, Jiang J, Lu W, Li W, Liu G, et al. Prediction of drug-target interactions and drug repositioning via network-based inference. PLoS Comput Biol. 2012; 8(5):1002503. doi:10.1371/journal.pcbi.1002503. Chen X, Liu MX, Yan GY. Drug-target interaction prediction by random walk on the heterogeneous network. Mol BioSyst. 2012; 8(7):1970–8. doi:10.1039/c2mb00002d. Yamanishi Y, Kotera M, Kanehisa M, Goto S. Drug-target interaction prediction from chemical, genomic and pharmacological data in an integrated framework. Bioinformatics (Oxford, England). 2010; 26(12):246–54. doi:10.1093/bioinformatics/btq176. van Laarhoven T, Nabuurs SB, Marchiori E. Gaussian interaction profile kernels for predicting drug-target interaction. Bioinformatics (Oxford, England). 2011; 27(21):3036–43. doi:10.1093/bioinformatics/btr500. Pahikkala T, Airola A, Pietila S, Shakyawar S, Szwajda A, Tang J, et al. Toward more realistic drug-target interaction predictions. Brief Bioinform. 2014. doi:10.1093/bib/bbu010. Pahikkala T, Airola A, Stock M, Baets BD, Waegeman W. Efficient regularized least-squares algorithms for conditional ranking on relational data. Mach Learn. 2013; 93:321–356. arXiv:1209.4825v2. Gönen M, Alpaydın E. Multiple kernel learning algorithms. J Mach Learn Res. 2011; 12:2211–268. Perlman L, Gottlieb A, Atias N, Ruppin E, Sharan R. Combining drug and gene similarity measures for drug-target elucidation. J Comput Biol. 2011; 18(2):133–45. doi:10.1089/cmb.2010.0213. Wang YC, Zhang CH, Deng NY, Wang Y. Kernel-based data fusion improves the drug-protein interaction prediction. Comput Biol Chem. 2011; 35(6):353–62. doi:10.1016/j.compbiolchem.2011.10.003. Wang Y, Chen S, Deng N, Wang Y. Drug repositioning by kernel-based integration of molecular structure, molecular activity, and phenotype data. PLoS ONE. 2013; 8(11):78518. doi:10.1371/journal.pone.0078518. Ben-Hur A, Noble WS. Kernel methods for predicting protein-protein interactions. Bioinformatics (Oxford, England). 2005; 21 Suppl 1:38–46. doi:10.1093/bioinformatics/bti1016. Hue M, Riffle M, Vert J-P, Noble WS. Large-scale prediction of protein-protein interactions from structures. BMC Bioinforma. 2010; 11:144. Ammad-Ud-Din M, Georgii E, Gönen M, Laitinen T, Kallioniemi O, Wennerberg K, et al. Integrative and Personalized QSAR Analysis in Cancer by Kernelized Bayesian Matrix Factorization. J Chem Inf Model. 2014; 1. doi:10.1021/ci500152b. Lanckriet GR, Deng M, Cristianini N, Jordan MI, Noble WS. Kernel-based data fusion and its application to protein function prediction in yeast. In: Pacific Symposium on Biocomputing. World Scientific: 2004. p. 300–11. Yu G, Zhu H, Domeniconi C, Guo M. Integrating multiple networks for protein function prediction. BMC Syst Biol.
2015; 9(Suppl 1):3. doi:10.1186/1752-0509-9-S1-S3. Gönen M, Kaski S. Kernelized Bayesian Matrix Factorization. IEEE Trans Pattern Anal Mach Intell. 2014; 36(10):2047–2060. Yamanishi Y, Araki M, Gutteridge A, Honda W, Kanehisa M. Prediction of drug-target interaction networks from the integration of chemical and genomic spaces. Bioinformatics (Oxford, England). 2008; 24(13):232–40. doi:10.1093/bioinformatics/btn162. Park Y, Marcotte EM. Flaws in evaluation schemes for pair-input computational predictions. Nat Methods. 2012; 9(12):1134–6. doi:10.1038/nmeth.2259. Xia Z, Wu LY, Zhou X, Wong STC. Semi-supervised drug-protein interaction prediction from heterogeneous biological spaces. BMC Syst Biol. 2010; 4 Suppl 2(Suppl 2):6. doi:10.1186/1752-0509-4-S2-S6. Bleakley K, Yamanishi Y. Supervised prediction of drug-target interactions using bipartite local models. Bioinformatics (Oxford, England). 2009; 25(18):2397–403. doi:10.1093/bioinformatics/btp433. Jacob L, Vert JP. Protein-ligand interaction prediction: an improved chemogenomics approach. Bioinformatics (Oxford, England). 2008; 24(19):2149–56. doi:10.1093/bioinformatics/btn409. Dinuzzo F. Learning functions with kernel methods. 2011. PhD thesis, University of Pavia. Rifkin R, Yeo G, Poggio T. Regularized least-squares classification. Nato Science Series Sub Series III Computer and Systems Sciences. 2003; 190:131–54. Kimeldorf G, Wahba G. Some results on Tchebycheffian spline functions. J Math Anal Appl. 1971; 33(1):82–95. Kashima H, Oyama S, Yamanishi Y, Tsuda K. On pairwise kernels: an efficient alternative and generalization analysis. Adv Data Min Knowl Disc. 2009; 5476:1030–7. Laub AJ. Matrix Analysis for Scientists and Engineers. Davis, California: SIAM; 2005, pp. 139–44. Kloft M, Brefeld U, Laskov P, Sonnenburg S. Non-sparse multiple kernel learning. In: NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels (Vol. 4): 2008. Byrd RH, Hribar ME, Nocedal J. An interior point algorithm for large-scale nonlinear programming. SIAM J Optim. 1999; 9(4):877–900. doi:10.1137/S1052623497325107. MATLAB. version 8.1.0 (R2013a). Natick, Massachusetts: The MathWorks Inc.; 2013. Kanehisa M, Araki M, Goto S, Hattori M, Hirakawa M, Itoh M, et al. KEGG for linking genomes to life and the environment. Nucleic Acids Res. 2008; 36(suppl 1):480–4. Schomburg I, Chang A, Ebeling C, Gremse M, Heldt C, Huhn G, et al. BRENDA, the enzyme database: updates and major new developments. Nucleic Acids Res. 2004; 32(suppl 1):431–3. Günther S, Kuhn M, Dunkel M, Campillos M, Senger C, Petsalaki E, et al. SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res. 2008; 36(suppl 1):919–22. Wishart DS, Knox C, Guo AC, Cheng D, Shrivastava S, Tzur D, et al. DrugBank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Res. 2008; 36(suppl 1):901–6. Eskin E, Weston J, Noble WS, Leslie CS. Mismatch String Kernels for SVM Protein Classification. In: Advances in neural information processing systems-NIPS: 2002. p. 1417–1424. Leslie CS, Eskin E, Noble WS. The spectrum kernel: a string kernel for SVM protein classification. In: Pac Symp Biocomput vol. 7: 2002. p. 566–575. Palme J, Hochreiter S, Bodenhofer U. KeBABS - an R package for kernel-based analysis of biological sequences. Bioinformatics. 2015; 31(15):2574–2576. doi:10.1093/bioinformatics/btv176. Smedley D, Haider S, Durinck S, Al E. The BioMart community portal: an innovative alternative to large, centralized data repositories. Nucleic Acids Res. 2015. 
doi:10.1093/nar/gkv350. Ovaska K, Laakso M, Hautaniemi S. Fast Gene Ontology based clustering for microarray experiments. BioData Min. 2008; 1(1):11. Resnik P. Semantic Similarity in a Taxonomy: An Information Based Measure and Its Application to Problems of Ambiguity in Natural Language. J Artif Intell Res. 1999; 11:95–130. Stark C, Breitkreutz BJ, Reguly T, Boucher L, Breitkreutz A, Tyers M. BioGRID: a general repository for interaction datasets. Nucleic Acids Res. 2006; 34(suppl 1):535–9. Hattori M, Okuno Y, Goto S, Kanehisa M. Development of a chemical structure comparison method for integrated analysis of chemical and genomic information in the metabolic pathways. J Am Chem Soc. 2003; 125(39):11853–65. Klambauer G, Wischenbart M, Mahr M, Unterthiner T, Mayr A, Hochreiter S. Rchemcpp: a web service for structural analoging in ChEMBL, Drugbank and the Connectivity Map. Bioinformatics. 2015. Advance access doi:10.1093/bioinformatics/btv373. Kashima H, Tsuda K, Inokuchi A. Marginalized kernels between labeled graphs. In: ICML, vol. 3: 2003. p. 321–328. Ralaivola L, Swamidass SJ, Saigo H, Baldi P. Graph kernels for chemical informatics. Neural Netw. 2005; 18(8):1093–110. doi:10.1016/j.neunet.2005.07.009. Takarabe M, Kotera M, Nishimura Y, Goto S, Yamanishi Y. Drug target prediction using adverse event report systems: A pharmacogenomic approach. Bioinformatics. 2012; 28(18):611–8. doi:10.1093/bioinformatics/bts413. Kuhn M, Campillos M, Letunic I, Jensen LJ, Bork P. A side effect resource to capture phenotypic effects of drugs. Mol Syst Biol. 2010; 6(1):343. Qiu S, Lane T. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction. IEEE/ACM Trans Comput Biol Bioinf. 2009; 6(2):190–9. Cristianini N, Kandola J, Elisseeff A, Shawe-Taylor J. On kernel-target alignment. In: Advances in Neural Information Processing Systems 14. Cambridge MA: MIT Press: 2002. p. 367–73. Gönen M. Predicting drug-target interactions from chemical and genomic kernels using Bayesian matrix factorization. Bioinformatics (Oxford, England). 2012; 28(18):2304–10. doi:10.1093/bioinformatics/bts360. Davis J, Goadrich M. The relationship between Precision-Recall and ROC curves. In: Proceedings of the 23rd international conference on Machine learning - ICML '06. New York, NY, USA: ACM: 2006. p. 233–40. doi:10.1145/1143844.1143874. Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000; 28(1):27–30. Bento AP, Gaulton A, Hersey A, Bellis LJ, Chambers J, Davies M, et al. The ChEMBL bioactivity database: an update. Nucleic Acids Res. 2014; 42(D1):1083–90. doi:10.1093/nar/gkt1031. Webster GF. Topical tretinoin in acne therapy. J Am Acad Dermatol. 1998; 39(2):38–44. Reis A, Velho G. Sulfonylurea receptor-1 (SUR1): genetic and metabolic evidences for a role in the susceptibility to type 2 diabetes mellitus. Diabetes Metab. 2002; 28(1):14–19. Huang Q, Bu S, Yu Y, Guo Z, Ghatnekar G, Bu M, et al. Diazoxide prevents diabetes through inhibiting pancreatic β-cells from apoptosis via bcl-2/bax rate and p38-β mitogen-activated protein kinase. Endocrinology. 2007; 148(1):81–91. The authors thank the authors of the studies by [23] for making their data publicly available. This work was supported by the Interdisciplinary Center for Clinical Research (IZKF Aachen), RWTH Aachen University Medical School, Aachen, Germany; DAAD; and Brazilian research agencies: FACEPE, CAPES and CNPq.
Center of Informatics, UFPE, Recife, Brazil André C. A. Nascimento, Ricardo B. C. Prudêncio & Ivan G. Costa Department of Statistics and Informatics, UFRPE, Recife, Brazil André C. A. Nascimento IZKF Computational Biology Research Group, Institute for Biomedical Engineering, RWTH Aachen University Medical School, Aachen, Germany André C. A. Nascimento & Ivan G. Costa Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen University, Aachen, Germany Ivan G. Costa Ricardo B. C. Prudêncio Correspondence to André C. A. Nascimento. Conceived and designed the experiments: AN RP IC. Performed the experiments: AN. Analyzed the data: AN RP IC. All authors read and approved the final manuscript. Figure. Single kernel experiments on the Nuclear Receptor dataset with the KronRLS algorithm as base learner. The heatmap shows the AUPR performance of different kernel combinations; red means higher AUPR. (PDF 460 kb) Spreadsheet. p-values under pairwise Wilcoxon Rank Sum statistical tests of all competing methods in pair, drug and target prediction tasks. (XLS 24 kb) Supplementary Tables. AUPR Results of competing methods under pair prediction setting considering subsampled test sets (S1); AUPR results of predicted scores against new interactions found on current release of KEGG, Matador, Drugbank and ChEMBL databases (S2); Average memory (MB) usage during training and testing of competing methods (S3); Average time (minutes) required to train and test the models with the competing methods (S4). (PDF 89.7 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Nascimento, A.C., Prudêncio, R.B. & Costa, I.G. A multiple kernel learning algorithm for drug-target interaction prediction. BMC Bioinformatics 17, 46 (2016). https://doi.org/10.1186/s12859-016-0890-3 Supervised machine learning Kernel methods Multiple kernel learning Networks analysis
Paper-based in vitro tissue chip for delivering programmed mechanical stimuli of local compression and shear flow Kattika Kaarj ORCID: orcid.org/0000-0002-0054-65191, Marianne Madias2, Patarajarin Akarapipad2, Soohee Cho1 & Jeong-Yeol Yoon ORCID: orcid.org/0000-0002-9720-64721,2 Journal of Biological Engineering volume 14, Article number: 20 (2020) Mechanical stimuli play important roles in the growth, development, and behavior of tissue. A simple and novel paper-based in vitro tissue chip was developed that can deliver two types of mechanical stimuli—local compression and shear flow—in a programmed manner. Rat vascular endothelial cells (RVECs) were patterned on collagen-coated nitrocellulose paper to create a tissue chip. Localized compression and shear flow were introduced by simply tapping and bending the paper chip in a programmed manner, utilizing an inexpensive servo motor controlled by an Arduino microcontroller and powered by batteries. All electrical components and the paper-based tissue chip were enclosed in a single 3D-printed enclosure, allowing the whole device to be placed independently within an incubator. This simple device effectively simulated in vivo conditions and induced successful RVEC migration in as early as 5 h. The developed device provides an inexpensive and flexible alternative for delivering mechanical stimuli to other in vitro tissue models. Mechanical stimuli are key parameters for reproducing the cellular microenvironment in vitro [1]. The most common mechanical stimuli found in the human body are shear flow, compression, and stretch/strain [2]. Among these stimuli, shear flow has been demonstrated extensively in in vitro tissue models, as it is a fundamental mechanism for delivering medium and solutions of interest throughout tissues and organs. In addition, shear flow can easily be applied to microfluidic tissue models, where the microfluidic channels mimic the network of vessels found in human tissues and organs. Traditionally, the flow control in microfluidic tissue models can be classified into three categories: external pumping (66%), internal pumping (15%), and passive delivery (19%) [3]. The simplest and most popular method is external pumping, typically utilizing a syringe pump or a peristaltic pump. Such external pumping has frequently been demonstrated in various silicone-based microfluidic tissue models to evaluate the effects of shear flow within the model [4,5,6]. However, those pumps are still bulky, expensive, and typically require a separate AC power supply. To overcome these limitations, a pumping mechanism or component has been integrated within microfluidic tissue models, i.e., internal pumping [7, 8]. However, the integration of a pump into a microfluidic device is quite challenging and not appropriate for the long-term operation necessary for in vitro tissue models (at least several hours and often several days). As a low-cost and simpler alternative to these external and internal pumping methods, passive flow control has recently emerged, specifically in the past three years. For example, microtiter plate-based tissue models were tilted to generate passive flows, where neuron tissues were used to investigate stem cell differentiation or neurite outgrowth [9, 10]. Hydrostatic pressure was also used to generate passive flow for studying angiogenesis (the formation of new blood vessels from pre-existing vessels) in a vasculature tissue model [11].
While passive flow controls have gained popularity in recent years, the precise control of flow rate can be challenging, and the platform may fail (typically by leaking) in long-term experiments [12]. Besides shear flow, compression is also an important mechanical stimulus in many tissues, including heart, bone, and blood vessels [13,14,15]. Such compression has typically been demonstrated using pistons on 3D, gel-based cell culture models [13], which can be bulky and costly. Compression has rarely been demonstrated on microfluidic tissue models. In fact, shear flow and compression stimuli require totally different tissue model platforms: shear flow has typically been demonstrated on silicone-based microfluidic tissue models, while compression has been demonstrated on 3D gel-based cell culture models. However, some tissues are exposed to both types of mechanical stimuli; e.g., vascular endothelial cells experience both shear flow and compression [16]. Application of stretch and/or strain to in vitro tissue models also requires 3D gel-based cell culture models that are stretchable and/or compressible, which is inappropriate for silicone-based microfluidic models. In this work, we propose a novel, simple, small, and cost-effective device, where both compression and shear flow can be applied to a tissue model in a programmed manner (Fig. 1). This device was tested on a vasculature tissue model to evaluate the effects of compression and shear flow on the migration of endothelial cells, i.e., the initial stage of angiogenesis, which is involved in normal tissue development as well as disease progression. The inhibition, induction, or normalization of angiogenesis has been widely investigated for potential therapeutic strategies [17]. Specifically, tumor growth utilizes angiogenesis to create a network of blood vessels that surround tumor tissue, supplying nutrients and oxygen, and removing waste. Angiogenesis also plays an important role in metastasis, the migration of cancer cells to secondary tissue [18]. Therefore, the inhibition of angiogenesis is a widely accepted therapy for cancer, depriving tumors of nutrients and oxygen, and subsequently hindering the progression of metastasis.
Graphical illustrations of the construction of the paper chip (left), the application of various mechanical stimuli (center), and the resulting cell migration (right)
Various materials, for example, hydrogels and paper-biomaterial hybrids, have been used as substrates for 3D cell culture, together with the addition of mechanical cues, to create an appropriate microenvironment [19]. The mechanical properties of a hydrogel can be modulated by adjusting its composition; however, hydrogels still lack the capability to provide the in vivo-like spatiotemporal physical cues mimicking the complexity and heterogeneity of native tissue [20]. On the other hand, papers can provide an ECM-like fibrous structure and porosity. Additionally, papers are flexible and can easily be modified (e.g., cut, folded, patterned with hydrophobic barriers, or even used as a support structure for hydrogel), are inexpensive, and can be used to deliver flow, an approach referred to as paper-based microfluidic technology [21,22,23]. In this work, paper coated with collagen was used as a "flexible" substrate for cell culture, effectively delivering both compression and shear flow on a single platform within our device. Both types of mechanical stimuli could be applied by simply tapping and bending the paper substrate (not possible with silicone-based microfluidic models).
Both mechanical stimuli were delivered to this cell model in a programmed manner using a servo motor and an Arduino microcontroller running custom code. All components, including batteries, were enclosed in a small, 3D-printed housing. To the best of our knowledge, there are no currently published studies demonstrating vascular endothelial cell migration in response to both types of mechanical stimuli on a single platform. Under the influence of stimuli (e.g., mechanical stimuli, chemical stimuli, and micropatterned structures), cells can polarize and migrate in a certain direction [24]. Endothelial cells were easily patterned directly on a paper substrate, which offered a gel-like environment, enabling fast and affordable patterning and fabrication. Cells were initially seeded and patterned on the peripheral sides of the collagen-coated nitrocellulose paper, without using lithographic or wax printing methods. Paper type and coating material were optimized for successful cell patterning. This patterning mimicked pre-existing capillary vessels and was subjected to local compression or pulsatile flow by tapping and bending the paper substrate under an optimized incubation time (supplementary MOV files are included in the supplementary information; Additional files 1 and 2). Cells were free to migrate in between these two patterns. Mechanical stimuli were also accompanied by chemical induction, through applying vascular endothelial growth factor (VEGF) to the center area between the two cell patterns, or by tumor induction, where one of the patterns was seeded with human breast cancer cells. The developed device and findings of this study may inspire innovative strategies in delivering multiple types of mechanical stimuli to in vitro cell models in a programmed manner. Additional file 1 Video clip showing the pulsatile local compression (18.5 times/min) applied to the paper chip, as controlled by a servo motor and an Arduino microcontroller. Additional file 2 Video clip showing the pulsatile parallel flow (7 cm/s) applied to the paper chip, as controlled by a servo motor and an Arduino microcontroller.
Optimization of paper type, coating, and assay time
While paper fibers offer a 3D, extracellular matrix (ECM)-like microenvironment, adhesion of endothelial cells on paper fibers is poor due to the electrostatic repulsion between the paper's negative charge and the cell membrane's phosphate groups. This requires the optimization of paper type (cellulose vs. nitrocellulose, NC) and coating (RGD-containing peptide vs. collagen). The RGD peptide sequence should promote cell adhesion through focal adhesion [25]. Collagen-coated NC paper showed a significantly higher number of adhered cells than GRGDSPK-coated NC paper, bare NC paper, and cellulose paper (GRGDSPK-coated, collagen-coated, and bare cellulose), due to the smaller pore size and stronger negative polarity of NC over cellulose (Fig. 2a). In addition, the auto-fluorescence was lower with NC paper than with cellulose paper (Fig. 2b), and the plastic backing of NC paper prevents the vertical flow of coating reagents to the other side of the paper [26]. GRGDSPK (RGD-containing peptide) coating was not successful in promoting endothelial cell adhesion. Collagen coating, on the other hand, was much more successful in accommodating endothelial cell adhesion, presumably due to its more robust coating over the paper fibers compared to GRGDSPK (Fig. 2c), as well as collagen's better representation of the natural ECM [27].
Consequently, collagen-coated NC paper was chosen as the optimum substrate for the vascular endothelial chip model. Even though endothelial cell migration has been observed in other collagen-based scaffolds in previous studies [28], the major benefit of collagen-coated NC paper is the flexibility of the paper, which can be hole-punched and bent, controlled by an inexpensive servo motor and Arduino microcontroller, to generate the shear flow. This cannot be done with collagen or any other gel-based systems [29, 30].
Optimization of the type of paper, coating materials on paper, and chemical stimuli. a Average numbers of RVECs in the field-of-view (FOV) on various paper types and coatings, hole-punched into 5-mm diameter circles and cultured on a 96-well plate. b Background fluorescence intensities of two types of paper substrates. c Amount of coating materials on NC paper after washing. d-e Average numbers of migrating cells and the length of migration on the RVEC chip under static condition, with VEGF or S1P added to the central area of each chip. Averages of 3 experiments, each from 6 different images of a different paper substrate or a paper chip. Error bars represent standard errors. * represents statistical difference with p < 0.05 between two data sets (in b and c) or from control (in d and e)
Rat vascular endothelial cells (RVECs) were cultured on collagen-coated NC paper for 24 h, resulting in confluent and healthy adhesion behavior. Additionally, RVECs were patterned only within the peripheral sides of the paper substrate using a PDMS block (Fig. 1), without using any conventional lithography or wax printing techniques. This patterning mimicked pre-existing capillary vessels, and the RVECs were free to migrate in between these two simulated vessels, as demonstrated in the following sets of experiments. The assay time for sufficient cell migration was optimized using these cell-patterned paper chips. The migration of RVECs on the paper chip was induced by VEGF added at the center of the unoccupied central area of each paper chip, followed by incubation for 1, 3, or 5 h. Both the number of migrating RVECs and the length of migration were used as measures of migration (Fig. 2d and Fig. 2e). The 5 h incubation time showed the most successful migration before detachment of the cultured cells from the paper surface occurred, and was therefore chosen for the rest of the experiments. VEGF and S1P were also compared by monitoring the migration of RVECs towards the central area of the chip after 5 h of incubation. VEGF plays significant roles in the proliferation, differentiation, sprouting, and migration of endothelial cells during angiogenesis. S1P is known to primarily enhance the migration of immune cells, for example, T- and B-cells, while it can also induce the migration of endothelial cells. Both the number of migrating cells and the length of migration with VEGF were significantly different (p < 0.05) from those without induction and those with S1P induction. Since the use of S1P did not result in sufficient angiogenic behavior on the paper chip while VEGF did, VEGF was selected as the optimal chemical stimulant for the rest of the experiments.
Mechanical induction
Various types and rates of mechanical stimuli were investigated with the single platform developed in this study. Local compression was applied to the paper chip by moving a 3D-printed hammer up and down (Fig. 3) without physically touching the chip surface.
Parallel and perpendicular flows were simulated by replacing the 3D-printed hammer with metal wires and lifting the paper up and down. This created a relative pulsatile flow of the media over the cells cultured on the paper chip. Such flows were made in either a parallel or perpendicular direction to the peripherally patterned RVECs (Fig. 1). The rates of the servo motor movement were set to 10 or 15 RPM (100 and 50 ms delay time, respectively), which corresponded to compression rates of 18.5 or 35.3 times/min or flow rates of 7 or 15 cm/s. These flow rates correspond with the average arterial flow rate [31].
Still images from the video clips showing local compression (a) and parallel flow (b) applied to the paper chips. Cells are patterned at the top and bottom sides within the black-colored rectangle on a paper chip
The compression force delivered to the vascular endothelial cells on the paper chip was calculated using a simple force balance [32], as shown in Eq. 1: $$ \mathrm{Compression}\ \mathrm{force}\ F= ma-\rho Va $$ The hammer mass (m) was 921 mg; the acceleration (a) from the hammer movement was 5 m/s2; the medium density (ρ) at 37 °C was 1 g/cm3; the total volume of the submerged hammer (V) was 150 mm3. The resulting compression force (F) was 3.86 g·m/s2. Compression pressure was calculated using Eq. 2: $$ \mathrm{Compression}\ \mathrm{pressure}\ P=F/A $$ With the hammer area (A) of 75 mm2 and the compression force (F) calculated above, the compression pressure experienced by the vascular endothelial cells was 51.4 kg/(m·s2) = 51.4 Pa (Table 1), which corresponds to the pressure of 68 Pa in veins [32]. Table 1 Calculation of compression pressure Wall shear stress was calculated using Eq. 3 [33]: $$ \mathrm{Shear}\ \mathrm{stress}\ \tau =6\mu Q/{H}^2W $$ The dynamic viscosity (μ) of the media was 9.4 × 10−4 N·s/m2; the volumetric flow rates (Q) were calculated as 3.85 × 10−6 m3/s (for the 7 cm/s flow rate) and 8.25 × 10−6 m3/s (for the 15 cm/s flow rate). The height of the media from the paper surface, 5 mm, was used for H, and the channel width on the paper chip, 2 mm, was used for W. The resulting wall shear stresses were 4.34 dyn/cm2 for the 7 cm/s flow rate and 9.31 dyn/cm2 for the 15 cm/s flow rate, respectively (Table 2), which also correspond with the average shear stress of 4–30 dyn/cm2 within arteries [34] (a short script reproducing these calculations is given below). The effect of paper bending on the cells can also be considered: the surface stress and stiffness resulting from the bent paper chip were negligible, since, according to thin-plate theory, the stiffness of a thin sheet (the paper chip in our case) is independent of its bending state within the framework of linear elasticity [35]. Table 2 Calculation of shear stress
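To make the numbers in Tables 1 and 2 easy to check, the short Python script below reproduces Eqs. 1-3 with the parameter values stated above (the unit conversions to SI are our own).

```python
# Compression force and pressure (Eqs. 1 and 2), using the stated values.
m = 0.921e-3        # hammer mass, kg (921 mg)
a = 5.0             # hammer acceleration, m/s^2
rho = 1000.0        # medium density, kg/m^3 (1 g/cm^3)
V = 150e-9          # submerged hammer volume, m^3 (150 mm^3)
A = 75e-6           # hammer area, m^2 (75 mm^2)

F = m * a - rho * V * a                 # Eq. 1: compression force, N
P = F / A                               # Eq. 2: compression pressure, Pa
print(F, P)                             # ~3.86e-3 N and ~51 Pa

# Wall shear stress (Eq. 3) for both volumetric flow rates.
mu = 9.4e-4                             # dynamic viscosity, N*s/m^2
H, W = 5e-3, 2e-3                       # media height and channel width, m
for Q in (3.85e-6, 8.25e-6):            # volumetric flow rates, m^3/s
    tau = 6 * mu * Q / (H ** 2 * W)     # shear stress in Pa
    print(tau, tau * 10)                # Pa and dyn/cm^2 (1 Pa = 10 dyn/cm^2)
```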
The highest number of migrating cells and length of migration were 197 cells and 0.741 mm, respectively, which are comparable to those with VEGF (chemical induction), 215 cells and 0.895 mm, respectively. These results are consistent with the study of Hsu et al. on the effects of different flow patterns on vascular endothelial cell migration without any chemical induction [36]. Fig. 4 Average numbers of RVECs in FOV and average lengths of RVEC migration on the paper chips with a mechanical stimuli, b mechanical and chemical stimuli (VEGF), and c mechanical stimuli and tumor induction, where one peripheral channel was replaced with MCF7 (breast cancer cells). Averages of 3 experiments, each from 6 different images of a different paper chip. Error bars represent standard errors. * represents statistical difference with p < 0.05. The higher compression rate (35.3 times/min) induced a significantly higher number of migrating cells, 197 cells, than the 111 cells observed with the lower compression rate (18.5 times/min) (p < 0.05); however, the increase in migration length, 0.741 mm vs. 0.59 mm, was not statistically significant (p > 0.05). With the perpendicular flow, the faster flow rate (15 cm/s) significantly increased (p < 0.05) both the number of migrating cells and the migration length (136 cells and 0.63 mm) compared with the slower flow rate (7 cm/s; 36 cells and 0.29 mm). In contrast, the parallel flow rate did not induce statistically significant differences (p > 0.05) in the number of migrating cells or the migration length: 125 cells and 0.671 mm migration length at 15 cm/s vs. 89 cells and 0.52 mm migration length at 7 cm/s. Mechanical and chemical induction The most physiologically relevant method of induction would be the combination of mechanical and chemical stimuli. For chemical induction, VEGF was chosen considering its major role in increasing vascular permeability and cellular migration [37]. VEGFR-2, a major receptor mediating most angiogenic functions, is upregulated in endothelial cells when they are seeded in a 3D collagen matrix, increasing the endothelial cells' sensitivity to VEGF [38]. We hypothesized that the fibrous and porous structure of NC paper with collagen coating could provide a 3D microenvironment similar to the collagen matrix, leading to successful migration of endothelial cells in response to VEGF. Three different mechanical stimuli were applied to the RVEC-patterned NC paper chips, with VEGF added to the central channel area. Representative fluorescence images of RVEC migration in response to the combined mechanical and chemical stimuli are shown in Fig. 5 (other images are available in the supplementary figures in Additional file 3). In general, both the number of migrating cells and the migration length increased to 89–321 cells and 0.525–0.823 mm, from 36–197 cells and 0.291–0.741 mm without VEGF (89–197 cells and 0.516–0.741 mm excluding the perpendicular flow at the low flow rate) (Fig. 4a and b, as well as Table S1 in Additional file 3). These numbers are, of course, higher than the 31 cells and 0.089 mm observed under the static condition. These effects of mechanical and chemical induction on RVEC migration are consistent with those of VEGF-induced human umbilical vein endothelial cell migration under 2.09 dyn/cm² shear force [5]. Fig. 5 Representative fluorescence images of RVEC migration under the combined chemical (VEGF) and mechanical stimuli: a local compression at the rate of 35.3 times/min and b perpendicular flow at the rate of 15 cm/s. Grey dashed line represents the border where the cells were initially seeded. 
A hammer in (a) delivers the compression stimuli, and the light blue arrow in (b) represents the direction of perpendicular shear flow Specifically, significant increases in the number of migrating cells were observed at lower compression rate (111 vs. 248 cells) and at slower perpendicular flow rate (36 vs. 321 cells). A similar trend could be observed in the migration length at a slower perpendicular flow rate (0.29 vs. 0.712 mm length). At a higher compression rate and faster perpendicular flow rate, however, the number of migrating cells and the migration length did not increase further or even decreased compared to those without VEGF. No significant improvement with VEGF was observed for parallel flow. Taken together, the numbers shown in Fig. 4b could potentially be the maximum extent of migration. Mechanical and tumor induction Tumor cells release VEGF to induce migration of endothelial cells and subsequently angiogenesis to provide nutrients and oxygen for their growth and persistence [39]. Such tumor-induced endothelial cell migration was also demonstrated on the developed paper chip and device. MCF7, breast cancer cells, were seeded on one side of the paper chip and RVECs on the other side. RVECs and MCF7s were separately cultured under static conditions. Cells were patterned using the aforementioned PDMS block. After that, the block was removed, and the MCF7 cells were expected to release VEGF to induce migration of the RVECs on the other side of the paper chip. No additional VEGF was added in these experiments. While no migration was observed after 3 h static incubation, RVECs started migrating toward the opposite MCF7 side after 5 h static incubation. These experiments were repeated by adding mechanical stimuli (local compression, parallel flow, and perpendicular flow) to further enhance the migration of RVECs (Fig. 4c). Again, no additional VEGF was added. Overall, the numbers of migrating cells with both tumor induction and mechanical stimuli significantly increased from those with mechanical stimuli only and were comparable to those with combined chemical and mechanical stimuli (Fig. 4a, b and c, as well as Table S1 in Additional file 3). In addition, a higher compression rate and higher flow rate did not substantially increase the number of migrating cells with both tumor induction and mechanical stimuli, similar to the results with combined chemical and mechanical stimuli. The only difference was that the parallel flow was preferred with tumor induction and mechanical stimuli while the perpendicular flow was preferred with chemical and mechanical stimuli. The lengths of migration with tumor induction and mechanical stimuli were very similar to those with chemical and mechanical induction, while the differences were more pronounced with tumor induction (Fig. 4c and Table S1 in Additional file 3). The device presented in this work is able to deliver two different mechanical stimuli of local compression and shear flow to an in vitro tissue chip, as well as simultaneous introduction of chemical stimuli and tumor induction. It demonstrates the advantages of both gel-based 3D cell culture models and silicone-based microfluidic tissue models. Vascular endothelial cells' migration was demonstrated as an example of dynamic tissue development in response to mechanical stimuli. This aim was achieved by utilizing paper-based cell culture that was low cost, easy to fabricate, porous, and most importantly flexible. 
Mechanical stimuli were delivered in a programmed manner using an Arduino microcontroller. The cost for equipment and supplies is also low, with ~US$17 for an Arduino microcontroller, ~US$12 for a servo motor, and a total equipment/supplies cost of < US$50 at the time of writing. The device is also small enough to be placed on the shelves of typical CO₂ incubators with its own battery supply. Most importantly, a large number of experimental conditions can be evaluated using this single device, e.g., many different combinations of mechanical and chemical stimuli (including stretch/strain through bending the paper to much larger degrees) as well as tumor induction, a large number of rate combinations for mechanical stimuli, and unrestricted growth of tissue structures (not limited by pre-defined channels). With these possibilities, the proposed device can be used for high-throughput studies and big data analyses, which can be potentially useful in screening and optimizing drugs and therapeutic strategies. Device housing The housing of the device was designed and 3D-printed with acrylonitrile-butadiene-styrene (ABS) co-polymer using a MakerBot Z18 (MakerBot Industries, Brooklyn, NY, USA), with total dimensions of 11.7 cm × 9.7 cm × 9.7 cm. There were three shelves in the housing as shown in Fig. 6a; four AA batteries and an Arduino Uno microcontroller in the top shelf (configurations for pin connections shown in Fig. 6b and Fig. 6c), a servo motor with 38 mm arm length and 180° rotation (Kookye MG 995; Pinetree Electronics Ltd., Richmond, BC, Canada) in the middle shelf, and the paper chip submerged in culture media within Petri dishes in the bottom shelf. Fig. 6 The device housing (a) incorporates batteries, Arduino microcontroller, and a servo motor (b), along with the paper chip submerged in media. Detailed pin configurations are shown in (c). Optimization of paper and coating materials To optimize cell adhesion on paper, two paper types, cellulose (GE Healthcare, Maidstone, Kent, UK) and nitrocellulose (NC; EMD Millipore, Hayward, CA, USA), were tested and compared. In addition, coatings of an RGD-containing peptide (GRDGSPK; 50 μg/mL; AnaSpec Inc., Fremont, CA, USA) and collagen (type I rat tail; 50 μg/mL; BD, Franklin Lakes, NJ, USA) were tested to compare a cell-binding peptide with a whole protein. The following paper-coating combinations were tested: RGD-cellulose, collagen-cellulose, RGD-NC, collagen-NC, cellulose only, nitrocellulose only, and a standard 96-well tissue culture plate (TCP; Corning Inc., Corning, NY, USA). Paper substrates were hole-punched into 5-mm diameter circles, placed individually into a 96-well plate, and UV sterilized prior to experiments. 100 μL of rat vascular endothelial cells at 5000 cells/mL (RVECs; ATCC, Manassas, VA, USA) were added to each paper-coating combination, incubated for 1 h, then washed twice with phosphate-buffered saline (PBS; Sigma-Aldrich, St. Louis, MO, USA). Fluorescence images of each paper substrate were collected using a benchtop fluorescence microscope (Nikon Eclipse TS100, Minato, Tokyo, Japan) with UV filter attachments (A.G. Heinze, Lake Forest, CA, USA). Cells were stained with DAPI (UV excitation and blue emission) to count the number of cells in the field of view (FOV) using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Details of fluorescence imaging are described in the later Fluorescence imaging section. 
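As an aside, the per-FOV nuclei counting described above (performed with ImageJ in the original work) can also be scripted; the sketch below is a rough scikit-image equivalent and is purely illustrative: the file name, size filter, and Otsu threshold are assumptions, not part of the published workflow.

```python
# Rough scripted equivalent of counting DAPI-stained nuclei in one field of view.
# The published work used ImageJ; this scikit-image sketch is illustrative only.
from skimage import io, filters, measure, morphology

img = io.imread("dapi_fov.tif", as_gray=True)                       # hypothetical file name
thresh = filters.threshold_otsu(img)                                # global Otsu threshold (assumed approach)
mask = morphology.remove_small_objects(img > thresh, min_size=20)   # drop small debris
labels = measure.label(mask)                                        # connected-component labeling
print(f"nuclei counted in FOV: {labels.max()}")
```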
Rat vascular endothelial cells (RVECs; ATCC, Manassas, VA, USA) were maintained in Dulbecco's Modified Eagle Medium (DMEM; Corning) supplemented with 10% v/v fetal bovine serum (FBS; Fisher Scientific, Pittsburgh, PA, USA), 0.2% v/v of 250 μg/mL Amphotericin B (GE Healthcare), and 0.1% v/v of 50 mg/mL Gentamycin sulfate on T-75 cell culture flasks (Greiner Bio-One, Monroe, NC, USA) under static conditions at 37 °C with 5% CO₂ (HERAcell 150i; Thermo Scientific, Waltham, MA, USA) until they reached 90% confluency. They were re-suspended at a final concentration of 2 × 10⁶ cells/mL. MCF7 cells were maintained and cultured in the same manner as RVECs, except that DMEM was replaced with Eagle's Minimum Essential Medium (Corning) and 0.01 mg/mL human recombinant insulin (Sigma-Aldrich) was added. MCF7 cells were patterned in the same manner as RVECs. Cell patterning Prior to cell patterning, NC paper (thickness = 2 mm; average pore size = 14.53 μm) was cut into 11 mm wide and 15 mm long pieces. 0.1 mL of 50 μg/mL collagen solution (= 5 μg of collagen) was added to cover the entire surface of the NC paper and left for 1 h to allow uniform coating, followed by washing twice with PBS. The type (NC) and coating (collagen) of paper were determined from the experiments described in the previous section. A fluorescamine protein assay (Thermo Fisher Scientific, Waltham, MA, USA) was used to verify the collagen coating on NC paper following the vendor's protocol. A 15 mm × 5 mm × 5 mm PDMS block was placed on the paper chip and held by two 12 mm × 4 mm × 2 mm neodymium block magnets (one placed on the PDMS block and one underneath the NC paper), exposing two rectangular areas, each 3 mm wide and 15 mm long (Fig. 1). 10 μL of the cell solution was seeded on these 3 mm-wide peripheral sides of the paper chip. Cells were allowed to anchor on the surface for 15 min, 3 mL of endothelial growth media was added to cover the paper, and the cells were cultured under static conditions for 24 h, until the cells formed a monolayer covering the peripheral channel, mimicking the monolayer of vascular endothelial cells lining a blood vessel in vivo. Optimization of assay time and chemical stimuli Following the 24 h static cell culture, the collagen-coated NC paper chips were washed twice with DPBS. Either 0.1% v/v of 50 ng/mL vascular endothelial growth factor (VEGF; Invitrogen, Carlsbad, CA, USA) or 0.1% v/v of 50 ng/mL sphingosine-1-phosphate (S1P; Echelon Biosciences Inc., Salt Lake City, UT, USA) was loaded onto the central areas of the paper chips as a chemical stimulus. The paper chips were dipped into 3 mL endothelial growth media in a Petri dish as shown in Fig. 6, and the cells were incubated under static conditions at 37 °C within a 5% CO₂ incubator over the time course (1, 3, and 5 h). After incubation, RVECs on each paper chip were imaged using a benchtop fluorescence microscope, where the cells' nuclei were stained with DAPI and the cells' actin filaments were stained with TRITC-phalloidin. These two images were stacked onto each other to represent the entire cells. Since the entire migration pattern cannot be captured in a single frame, multiple images were captured and connected by using the overlap area of each individual image to represent the whole length and pattern of cell migration. Red arrow symbols were added to all images to indicate the points where the images were connected. The grey dotted lines represent the border of the channel where the cells were initially seeded. 
To measure the length of migrating cells in 2D, a straight line was drawn to measure the length in pixels, which was then converted to millimeters. The paper chips with patterned cells were exposed to three different types of mechanical stimuli (Fig. 1). Local compression was introduced by placing the paper chips underneath a plastic hammer. These hammers were designed using SolidWorks software (SolidWorks Corp., Waltham, MA, USA), 3D-printed with ABS co-polymer using a Zortrax M200 (Zortrax, Olsztyn, Poland), and subsequently sterilized by dipping in 70% ethanol and drying under UV light for 30 min prior to cell experiments. The paper chip was cut to a size corresponding to the diameter of the Petri dish in order to allow unidirectional movement of the paper and distribution of the delivered local compression along the length of the patterned RVECs. Parallel and perpendicular flows were introduced by connecting a metal wire (sterilized following the above-mentioned protocol) through a hole punched at one end of the NC paper chips; the wire moved the paper up and down to create relative flow of media over the chips. The hammers and metal wires were connected to a 180-degree range servo motor, which was connected to and controlled by an Arduino Uno microcontroller (Fig. 6). The servo motor was programmed to rotate over a very narrow range of angles, from 22° to 25°, either to move the hammer toward the paper chip surface without physically touching it (delivering local compression) or to lift the paper chip up and down over a short distance (delivering relative shear flow), at rotation rates of 10 or 15 RPM. The video clips of local compression and shear flow were recorded (MOV files are included in the supplementary information as Additional files 1 and 2; still images are shown in Fig. 3) and then used to calculate the compression rates (18.5 times/min or 35.3 times/min) and the relative flow rates (7 cm/s or 15 cm/s). This setup was maintained for 5 h at 37 °C and 5% CO₂. Mechanical stimuli added with chemical stimuli or tumor induction In addition to the mechanical stimuli, chemical stimuli and tumor induction were also applied. As previously described in the Optimization of assay time and chemical stimuli section, VEGF was pre-loaded onto the central areas of the paper chips as the optimum chemical stimulus, as determined from the experiments described in that section. For tumor induction, one side of the channel was seeded with MCF7, a human breast cancer cell line, rather than seeding both channels with RVECs. MCF7 cells were maintained, cultured, and patterned in the same manner as RVECs (differences are addressed in the supplementary information, Additional file 3). Waste media was removed from the paper chips, which were then fixed with a 4% solution of paraformaldehyde (Affymetrix, Santa Clara, CA, USA) for 15 min. Paraformaldehyde was removed and the paper chips were rinsed twice with washing buffer solution (1X PBS with 0.05% Tween-20). Cell membranes were permeabilized with 0.1% Triton X-100 (Fisher Scientific) for 5 min. Triton X-100 was removed and the paper chips were rinsed twice with washing buffer solution. Paper chips were treated with blocking buffer solution (1% bovine serum albumin, BSA, in 1X PBS) and incubated for 30 min. The cells' nuclei were stained with DAPI and actin filaments with TRITC-conjugated phalloidin (EMD Millipore, Burlington, MA, USA). 
Fluorescence images were collected using the ISCapture software on a personal computer connected to a benchtop fluorescence microscope (Nikon Eclipse TS100, Minato, Tokyo, Japan) with UV and TRITC filter attachments (A.G. Heinze, Lake Forest, CA, USA). Since the entire migration pattern cannot be captured in a single frame, multiple images were captured and connected by using the overlap area of each individual image to represent the whole length and pattern of cell migration. Red arrow symbols were added to all images to indicate the points where the images were connected. The grey dotted lines represent the border of the channel where the cells were initially seeded. The greyscale images were taken from each filter cube and transferred to ImageJ software (National Institutes of Health, Bethesda, MD, USA). The pseudo-colors were then added: blue to DAPI (cells' nuclei) and red to TRITC-phalloidin (cells' actin filaments). These two images were stacked onto each other to represent the entire cell images. To measure the length of sprouting cells in 2D, the straight line was used to measure the length in pixel then converted to the length in millimeter. All data were derived from at least three replicates, each using a different paper chip with seeded cells. Statistical analyses were performed using analyses of variance (ANOVA). Differences at p < 0.05 were considered statistically significant. Movie clips of device operation are available in the Additional files 1 and 2. Fluorescence microscopic images used to generate Figs. 2, 3 and 4 are available in the Additional file 3. Additional data can be requested to the authors upon reasonable request. Polacheck WJ, Li R, Uzel SGM, Kamm RD. Microfluidic platforms for mechanobiology. Lab Chip. 2013;13:2252–67. Waters CM, Roan E, Navajas D. Mechanobiology in lung epithelial cells: measurements, perturbations, and responses. Compr Physiol. 2012;2:1–29. Junaid A, Mashaghi A, Hankemeier T, Vulto P. An end-user perspective on organ-on-a-Chip: assays and usability aspects. Curr Opin Biomed Eng. 2017;1:15–22. Shin SR, Zhang YS, Kim D-J, Manbohi A, Avci H, Silvestri A, et al. Aptamer-based microfluidic electrochemical biosensor for monitoring cell-secreted trace cardiac biomarkers. Anal Chem. 2016;88:10019–27. Tourovskaia A, Fauver M, Kramer G, Simonson S, Neumann T. Tissue-engineered microenvironment systems for modeling human vasculature. Exp Biol Med. 2014;239:1264–71. Seo H-R, Jeong HE, Joo HJ, Choi S-C, Park C-Y, Kim J-H, et al. Intrinsic FGF2 and FGF5 promotes angiogenesis of human aortic endothelial cells in 3D microfluidic angiogenesis system. Sci Rep. 2016;6:28832. Chang J-Y, Wang S, Allen JS, Lee SH, Chang ST, Choi Y-K, et al. A novel miniature dynamic microfluidic cell culture platform using electro-osmosis diode pumping. Biomicrofluidics. 2014;8:044116. Maschmeyer I, Hasenberg T, Jaenicke A, Lindner M, Lorenz AK, Zech J, et al. Chip-based human liver–intestine and liver–skin co-cultures – a first step toward systemic repeated dose substance testing in vitro. Eur J Pharm Biopharm. 2015;95:77–87. Moreno EL, Hachi S, Hemmer K, Trietsch SJ, Baumuratov AS, Hankemeier T, et al. Differentiation of neuroepithelial stem cells into functional dopaminergic neurons in 3D microfluidic cell culture. Lab Chip. 2015;15:2419–28. Wevers NR, van Vught R, Wilschut KJ, Nicolas A, Chiang C, Lanz HL, et al. High-throughput compound evaluation on 3D networks of neurons and glia in a microfluidic platform. Sci Rep. 2016;6:38856. Kim S, Chung M, Ahn J, Lee S, Jeon NL. 
Interstitial flow regulates the angiogenic response and phenotype of endothelial cells in a 3D culture model. Lab Chip. 2016;16:4189–99. Kaarj K, Yoon J-Y. Methods of delivering mechanical stimuli to organ-on-a-chip. Micromachines. 2019;10:700. Shachar M, Benishti N, Cohen S. Effects of mechanical stimulation induced by compression and medium perfusion on cardiac tissue engineering. Biotechnol Prog. 2012;28:1551–9. Farahat WA, Wood LB, Zervantonakis IK, Schor A, Ong S, Neal D, et al. Ensemble analysis of angiogenic growth in three-dimensional microfluidic cell cultures. PLoS One. 2012;7:e37333. Shin Y, Han S, Jeon JS, Yamamoto K, Zervantonakis IK, Sudo R, et al. Microfluidic assay for simultaneous culture of multiple cell types on surfaces or within hydrogels. Nat Protoc. 2012;7:1247–59. Ambrosi CM, Wille JJ, Yin FC. Reorientation response of endothelial cells to cyclic compression: comparison with cyclic stretching. Proceedings of the Second Joint 24th Annual Conference and the Annual Fall Meeting of the Biomedical Engineering Society. Engin Med Biol. 2002;1:386–7. Nguyen DHT, Stapleton SC, Yang MT, Cha SS, Choi CK, Galie PA, et al. Biomimetic model to reconstitute angiogenic sprouting morphogenesis in vitro. Proc Natl Acad Sci U S A. 2013;110:6712–7. Gupta MK, Qin R-Y. Mechanism and its regulation of tumor-induced angiogenesis. World J Gastroenterol. 2003;9:1144–55. Huang G, Li F, Zhao X, Ma Y, Li Y, Lin M, et al. Functional and biomimetic materials for engineering of the three-dimensional cell microenvironment. Chem Rev. 2017;117:12764–850. Ma Y, Lin M, Huang G, Li Y, Wang S, Bai G, et al. 3D spatiotemporal mechanical microenvironment: a hydrogel-based platform for guiding stem cell fate. Adv Mater. 2018;30:1705911. Derda R, Laromaine A, Mammoto A, Tang SKY, Mammoto T, Ingber DE, et al. Paper-supported 3D cell culture for tissue-based bioassays. Proc Natl Acad Sci U S A. 2009;106:18457–62. Njus Z, Kong T, Kalwa U, Legner C, Weinstein M, Flanigan S, et al. Flexible and disposable paper- and plastic-based gel micropads for nematode handling, imaging, and chemical testing. APL Bioeng. 2017;1:016102. Ng K, Azari P, Nam HY, Xu F, Pingguan-Murphy B. Electrospin-coating of paper: a natural extracellular matrix inspired design of scaffold. Polymers. 2019;11:650. Jiang X, Bruzewicz DA, Wong AP, Piel M, Whitesides GM. Directing cell migration with asymmetric micropatterns. Proc Natl Acad Sci U S A. 2005;102:975–8. Jokinen J, Dadu E, Nykvist P, Käpylä J, White DJ, Ivaska J, et al. Integrin-mediated cell adhesion to type I collagen fibrils. J Biol Chem. 2004;279:31956–63. Ulep T-H, Yoon J-Y. Challenges in paper-based fluorogenic optical sensing with smartphones. Nano Converg. 2018;5(1):14. Leszczak V, Baskett DA, Popat KC. Endothelial cell growth and differentiation on collagen-immobilized polycaprolactone nanowire surfaces. J Biomed Nanotechnol. 2015;11:1080–92. Li X, Dai Y, Shen T, Gao C. Induced migration of endothelial cells into 3D scaffolds by chemoattractants secreted by pro-inflammatory macrophages in situ. Regen Biomater. 2017;4:139–48. Laiva AL, Raftery RM, Keogh MB, O'Brien FJ. Pro-angiogenic impact of SDF-1α gene-activated collagen-based scaffolds in stem cell driven angiogenesis. Int J Pharm. 2018;544:372–9. Peterson AW, Caldwell DJ, Rioja AY, Rao RR, Putnam AJ, Stegemann JP. Vasculogenesis and angiogenesis in modular collagen-fibrin microtissues. Biomater Sci. 2014;2:1497–508. Klarhöfer M, Csapo B, Balassy C, Szeles JC, Moser E. 
High-resolution blood flow velocity measurements in the human finger. Magn Reson Med. 2001;45:716–9. Weiss D, Avraham S, Guttlieb R, Gasner L, Lotman A, Rotman OM, et al. Mechanical compression effects on the secretion of vWF and IL-8 by cultured human vein endothelium. PLoS One. 2017;12:e0169752. Battiston KG, Santerre JP, Simmons CA. Mechanobiological stimulation of tissue engineered blood vessels. Integrative mechanobiology: micro- and nano- techniques in cell mechanobiology: Cambridge University Press; 2015. p. 227–44. Gray KM, Stroka KM. Vascular endothelial cell mechanosensing: new insights gained from biomimetic microfluidic models. Semin Cell Dev Biol. 2017;71:106–17. Lachut MJ, Sader JE. Effect of surface stress on the stiffness of thin elastic plates and beams. Phys Rev B. 2012;85:085440. Hsu P-P, Li S, Li Y-S, Usami S, Ratcliffe A, Wang X, et al. Effects of flow patterns on endothelial cell migration into a zone of mechanical denudation. Biochem Biophys Res Commun. 2001;285:751–9. Hoeben A, Landuyt B, Highley MS, Wildiers H, Van Oosterom AT, De Bruijn EA. Vascular endothelial growth factor and angiogenesis. Pharmacol Rev. 2004;56:549–80. Haspel HC, Scicli GM, McMahon G, Scicli AG. Inhibition of vascular endothelial growth factor-associated tyrosine kinase activity with SU5416 blocks sprouting in the microvascular endothelial cell spheroid model of angiogenesis. Microvasc Res. 2002;63:304–15. Boudreau N, Myers C. Breast cancer-induced angiogenesis: multiple mechanisms and the role of the microenvironment. Breast Cancer Res. 2003;5:140–6. This work was supported by the pilot interdisciplinary grant from the BIO5 Institute at the University of Arizona and the U.S. National Institute of Health, grant number T32HL007955. K.K. acknowledges the scholarship from the Development and Promotion of Science and Technology Talents Project (DPST) of Thailand. M.M. acknowledges the support from Maximizing Access to Research Careers (MARC) at the University of Arizona. P.A. acknowledges the scholarship from One District One Scholarship (ODOS) of Thailand. Department of Biosystems Engineering, The University of Arizona, Tucson, AZ, USA Kattika Kaarj, Soohee Cho & Jeong-Yeol Yoon Department of Biomedical Engineering, The University of Arizona, Tucson, AZ, USA Marianne Madias, Patarajarin Akarapipad & Jeong-Yeol Yoon Kattika Kaarj Marianne Madias Patarajarin Akarapipad Soohee Cho Jeong-Yeol Yoon K.K., S.C., and J.-Y.Y. conceived and designed the study. K.K., M.M., and P.A. conducted the experiments. K. K and J.-Y.Y analyses the data. K. K and J.-Y.Y drafted the manuscript with the input from M. M, P.A., and S.C. The author(s) read and approved the final manuscript. Correspondence to Jeong-Yeol Yoon. Additional file 3. Supplementary figures and table. Fig. S1. Supplementary figures for optimization of paper and coating. Fig. S2. Supplementary figures for assay time optimization and chemical induction. Fig. S3. Supplementary figures for mechanical induction. Fig. S4. Supplementary figures for mechanical plus chemical induction. Fig. S5. Supplementary figures for tumor induction. Table S1. The number of migrating cells and length of the migrating pattern for each figure. Kaarj, K., Madias, M., Akarapipad, P. et al. Paper-based in vitro tissue chip for delivering programmed mechanical stimuli of local compression and shear flow. J Biol Eng 14, 20 (2020). https://doi.org/10.1186/s13036-020-00242-5 Automated flow control Paper-based cell culture Vascular endothelial cell Cell migration
Statistical significance: Is my new bot really better than the previous one? Ethaniel. Part 3/4: The formulas. A test statistic (generically named T) indirectly measures the unlikeliness that some observed result can be explained by random deviation from a simple reference case where nothing special happens (examples: the coin is fair, the die is fair, the two bots are as good as each other). Within the scope of this article, the test statistic we'll calculate is named $\chi_1^2$; however, we'll use a signed version of it. Use scope The formulas below are for comparing two bots only: games with more than two bots (1 vs. 1 vs. 1, for instance) are not covered here. The formulas allow for draws, and you don't need to know how the game behaves for a match between two identical bots. Formula of statistical significance $$T = \operatorname{sign}(W-L) \times \left( \frac{\left[ D + \sqrt{2 (W^2 + L^2)} \right]^2}{N} - N \right) $$ with W the number of matches won by the new bot, D the number of drawn matches, L the number of matches lost by the new bot, and N = W + D + L the number of matches played. A test statistic compares the actual observation with what we would expect if nothing special happens: that expected outcome is the null hypothesis H₀, and T = 0 if the actual observation exactly fits H₀. As the outcomes to compare are triplets (values being labeled respectively "wins", "draws" and "losses" for the new bot) for which we increment one of the values at each match result, we're comparing trinomial distributions. We'll thus use Pearson's χ² test of goodness of fit, which is an asymptotic test statistic, not exact but precise enough for our needs (calculating an exact test statistic would cost a lot of computing resources for a slight and useless gain in precision). In our case the null hypothesis is that the two bots are as good as each other (if not identical), so the expected distribution is an equal number of wins and losses; the expected number of draws is left unspecified because it depends on the game rules and on the bot (especially if it does not play perfectly), so it is a free parameter for H₀. So, for a given number N of matches played, the actual observation is a (W, D, L) triplet while the expected outcome is a (α, N−2α, α) triplet with α ∈ [0, N/2] (otherwise the triplet would contain negative values). We have 2 degrees of freedom for the actual observation (as N = W + D + L is fixed, we can choose freely only two of the three values) and 1 free parameter for H₀ (namely α), so the remaining number of degrees of freedom for the test statistic is 2 − 1 = 1: Pearson's χ² test statistic is thus $\chi_1^2$ in our case. Whatever its degrees of freedom, Pearson's χ² test statistic is: $$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i} = \sum_i \frac{O_i^2}{E_i} - N $$ with Oᵢ the actual observations and Eᵢ the expected values, the sum running over the categories (three in our case, since ΣOᵢ = ΣEᵢ = N). So, in our case: $$\chi_1^2 = \frac{W^2}{\alpha} + \frac{D^2}{N - 2 \alpha} + \frac{L^2}{\alpha} - N = \frac{D^2}{N - 2 \alpha} + \frac{W^2 + L^2}{\alpha} - N $$ The free parameter α of H₀ is estimated by minimizing $\chi_1^2$ in the [0, N/2] domain. 
Its first derivative is: $$\begin{align} \frac{\mathrm{d}\chi_1^2}{\mathrm{d}\alpha} & = 2 \frac{D^2}{(N - 2 \alpha)^2} - \frac{W^2 + L^2}{\alpha^2} \\ & = \frac{W^2 + L^2}{(N - 2 \alpha)^2} \left[ \frac{2 D^2}{W^2 + L^2} - \frac{(N - 2 \alpha)^2}{\alpha^2} \right] \\ & = \frac{W^2 + L^2}{(N - 2 \alpha)^2} \left[ \frac{2 D^2}{W^2 + L^2} - \left( \frac{N - 2 \alpha}{\alpha} \right)^2 \right] \end{align} $$ We have an extremum when the first derivative vanishes: $$\left. \frac{\mathrm{d}\chi_1^2}{\mathrm{d}\alpha} \right|_{\alpha=\alpha_0} = 0 \Leftrightarrow \frac{N - 2 \alpha_0}{\alpha_0} = \pm \sqrt{\frac{2 D^2}{W^2 + L^2}} = \pm \frac{2 D}{\sqrt{2 (W^2 + L^2)}} $$ As we want α₀ ∈ [0, N/2], N−2α₀ and α₀ are both positive, so we use the positive root: $$\frac{N}{\alpha_0} - 2 = \frac{2 D}{\sqrt{2 (W^2 + L^2)}} \Rightarrow \begin{cases} \alpha_0 = \frac{N}{\frac{2 D}{\sqrt{2 (W^2 + L^2)}} + 2} = \frac{N}{2} \frac{\sqrt{2 (W^2 + L^2)}}{D + \sqrt{2 (W^2 + L^2)}} \\ N - 2 \alpha_0 = \alpha_0 \frac{2 D}{\sqrt{2 (W^2 + L^2)}} \end{cases} $$ As we have taken the positive root, N−2α₀ and α₀ are either both positive or both negative, but that latter case is impossible as N > 0, so α₀ ∈ [0, N/2] as expected. $\chi_1^2$ has a unique extremum in the [0, N/2] domain, at α₀; We then need to check that this extremum is a minimum thanks to the second derivative: $$\frac{\mathrm{d}^2\chi_1^2}{\mathrm{d}\alpha^2} = 8 \frac{D^2}{(N - 2 \alpha)^3} + 2 \frac{W^2 + L^2}{\alpha^3} $$ $$\begin{align} \left. \frac{\mathrm{d}^2\chi_1^2}{\mathrm{d}\alpha^2} \right|_{\alpha=\alpha_0} & = 8 \frac{D^2}{\alpha_0^3} \left( \frac{\sqrt{2 (W^2 + L^2)}}{2 D} \right)^3 + 2 \frac{W^2 + L^2}{\alpha_0^3} \\ & = \frac{2 (W^2 + L^2)}{\alpha_0^3} \frac{\sqrt{2 (W^2 + L^2)}}{D} + 2 \frac{W^2 + L^2}{\alpha_0^3} \\ & = 2 \frac{W^2 + L^2}{\alpha_0^3} \left( \frac{\sqrt{2 (W^2 + L^2)}}{D} + 1 \right) \\ & > 0 \end{align} $$ $\chi_1^2$ is thus minimal at α₀, and its value is: $$\begin{align} \left. \chi_1^2 \right|_{\alpha = \alpha_0} & = \frac{D^2}{\alpha_0} \frac{\sqrt{2 (W^2 + L^2)}}{2 D} + \frac{W^2 + L^2}{\alpha_0} - N \\ & = \frac{2}{N} \frac{D + \sqrt{2 (W^2 + L^2)}}{\sqrt{2 (W^2 + L^2)}} \frac{D \sqrt{2 (W^2 + L^2)}}{2} + \frac{2}{N} \left[ \frac{D}{\sqrt{2 (W^2 + L^2)}} + 1 \right] (W^2 + L^2) - N \\ & = \frac{D}{N} \left[ D + \sqrt{2 (W^2 + L^2)} \right] + \frac{1}{N} \left[ D \sqrt{2 (W^2 + L^2)} + 2 (W^2 + L^2) \right] - N \\ & = \frac{D^2 + 2 D \sqrt{2 (W^2 + L^2)} + 2 (W^2 + L^2)}{N} - N \\ & = \frac{\left[ D + \sqrt{2 (W^2 + L^2)} \right]^2}{N} - N \end{align} $$ A χ² test statistic tells only how far we are from H₀, without a notion of direction: Its value is the same by exchanging W and L. As we have only 1 degree of freedom, we can introduce a direction by adding a sign to the test statistic (Note: It has no meaning for more than 1 d.f.). As we want to evaluate the new bot against the old one, we want the test statistic to be positive if W > L and negative if W < L (given H₀, it is already 0 if W = L): $$T = \operatorname{sign}(W-L) \times \left. \chi_1^2 \right|_{\alpha = \alpha_0} $$ Note: You don't need to bother about sign(0), as the right-hand multiplier's value is 0 if W = L. So you can simply implement the sign function as sign(W, L) := W > L ? 1 : -1. 
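As an illustration, the signed statistic can be implemented directly from the final formula; the short Python sketch below is mine (the article itself is language-agnostic), with an arbitrary example record:

```python
from math import sqrt

def t_statistic(wins: int, draws: int, losses: int) -> float:
    """Signed chi^2 (1 d.f.) statistic; equals 0 when wins == losses."""
    n = wins + draws + losses
    magnitude = (draws + sqrt(2 * (wins**2 + losses**2)))**2 / n - n
    return magnitude if wins > losses else -magnitude   # sign(0) is irrelevant: magnitude is 0 then

print(t_statistic(620, 60, 520))   # example record: 620 wins, 60 draws, 520 losses
```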
Numerical interpretation Below is a table giving the relation between the absolute value of T and the inverse of the asymptotic one-tailed p-value, i.e., of the probability p₁ that mere luck could have given the observed result or something more extreme, for some values of T. Graphs above show that p₁ is not about "better than" but about "more extreme than", i.e., "farther from 0", hence the flip when W < L. Values for 1/p₁ are rounded to 2 significant digits only: the test statistic is asymptotic so the probability is not exact, and the precise probability is not important, its order of magnitude is enough for a good feeling of what it represents.
|T|    1/p₁
20     260ᴇ3
30     46ᴇ6
40     7.9ᴇ9
50     1.3ᴇ12
60     210ᴇ12
70     34ᴇ15
100    130ᴇ21
|T| is $\chi_1^2$, Pearson's χ² test statistic with 1 degree of freedom. The χ² distribution with 1 degree of freedom has PDF: $$\chi_1^2(t) = \frac{\operatorname{e}^{-t/2}}{\sqrt{2 \pi t}} $$ Note: The symbol "$\chi_1^2$" is shared by the test statistic value and the distribution function, even though they are two different mathematical objects: they must not be confused. By definition of Pearson's χ² test statistic with 1 d.f., the asymptotic two-tailed p-value p₂ for |T| verifies: $$\begin{align} 1 - p_2 & = \int_0^{|T|} \chi_1^2(t) \,\mathrm{d}t \\ & = \int_0^{|T|} \frac{\operatorname{e}^{-t/2}}{\sqrt{2 \pi t}} \,\mathrm{d}t \\ & \underset{t = x^2}{=} \int_0^{\sqrt{|T|}} \frac{\operatorname{e}^{-\frac{x^2}{2}}}{\sqrt{2 \pi} x} 2 x \,\mathrm{d}x \\ & = 2 \int_0^{\sqrt{|T|}} \frac{1}{\sqrt{2 \pi}} \operatorname{e}^{-\frac{x^2}{2}} \,\mathrm{d}x \\ & = \int_{-\sqrt{|T|}}^{\sqrt{|T|}} \frac{1}{\sqrt{2 \pi}} \operatorname{e}^{-\frac{x^2}{2}} \,\mathrm{d}x \\ & = \operatorname{erf}\left( \frac{\sqrt{|T|}}{\sqrt{2}} \right) \end{align} $$ with "erf" the error function. So $$p_2 = 1 - \operatorname{erf}\left( \sqrt{\frac{|T|}{2}} \right) = \operatorname{erfc}\left( \sqrt{\frac{|T|}{2}} \right) $$ with "erfc" the complementary error function. As p₁ = p₂/2, we finally have 1/p₁ = 2/erfc(√(|T|/2)). For instance, as the world population will soon reach 8 billion people, we can expect that, at any given time, someone on the planet is reaching a statistical significance of |T| ≈ 40 for some fair random process: probably someone else, but possibly you while measuring your new bot's performance (very unlikely, but still reasonably possible). Formula of win rate There are several ways to define a win rate taking drawn matches into account; the simplest one is to count draws as half wins and half losses: $$\rho = \frac{W + \frac{D}{2}}{N} $$ Use a run length limit of 5000 to 10,000 matches (depending on the time it would take) and a threshold for |T| around 50: no less than 40 (below that, 1/p₁ would be smaller than the world population), and no more than 60 (above that, the statistical significance is already more than high enough). Once the length limit or the threshold is reached, launch a new independent run in order to verify that the result is similar. Note: in case of the win rate being precisely 50 % (identical bots) or extremely close to that, T will vary randomly in the [−10, 10] range (with possible temporary spikes outside that range), so the first run may end with T ≈ 10 when the length limit is reached (after a lot of movement within the range) and the second run with T ≈ −10; they will nevertheless be "similar" in the sense that |T| does not increase visibly on average.
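Putting the pieces above together, the conversion from |T| to 1/p₁ and the win rate ρ can be computed with the standard library's erfc; the Python sketch below is purely illustrative (the article does not prescribe a language):

```python
from math import erfc, sqrt

def inverse_p1(t: float) -> float:
    """1/p1 = 2 / erfc(sqrt(|T| / 2)), the inverse of the asymptotic one-tailed p-value."""
    return 2.0 / erfc(sqrt(abs(t) / 2.0))

def win_rate(wins: int, draws: int, losses: int) -> float:
    """rho = (W + D/2) / N, with draws counted as half wins and half losses."""
    return (wins + draws / 2.0) / (wins + draws + losses)

for t in (20, 30, 40, 50, 60):
    print(f"|T| = {t:3d}  ->  1/p1 ~ {inverse_p1(t):.2g}")   # reproduces the table above
```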
Dynamic Regulation of JAK-STAT Signaling Through the Prolactin Receptor Predicted by Computational Modeling Ryland D. Mortlock ORCID: orcid.org/0000-0001-9666-43941, Senta K. Georgia2 & Stacey D. Finley ORCID: orcid.org/0000-0001-6901-36921,3,4 Cellular and Molecular Bioengineering volume 14, pages 15–30 (2021) The expansion of insulin-producing beta cells during pregnancy is critical to maintain glucose homeostasis in the face of increasing insulin resistance. Prolactin receptor (PRLR) signaling is one of the primary mediators of beta cell expansion during pregnancy, and loss of PRLR signaling results in reduced beta cell mass and gestational diabetes. Harnessing the proliferative potential of prolactin signaling to expand beta cell mass outside of the context of pregnancy requires quantitative understanding of the signaling at the molecular level. A mechanistic computational model was constructed to describe prolactin-mediated JAK-STAT signaling in pancreatic beta cells. The effect of different regulatory modules was explored through ensemble modeling. A Bayesian approach for likelihood estimation was used to fit the model to experimental data from the literature. Including receptor upregulation, with either inhibition by SOCS proteins, receptor internalization, or both, allowed the model to match experimental results for INS-1 cells treated with prolactin. The model predicts that faster dimerization and nuclear import rates of STAT5B compared to STAT5A can explain the higher STAT5B nuclear translocation. The model was used to predict the dose response of STAT5B translocation in rat primary beta cells treated with prolactin and reveal possible strategies to modulate STAT5 signaling. JAK-STAT signaling must be tightly controlled to obtain the biphasic response in STAT5 activation seen experimentally. Receptor up-regulation, combined with SOCS inhibition, receptor internalization, or both is required to match experimental data. Modulating reactions upstream in the signaling can enhance STAT5 activation to increase beta cell survival. Metabolic diseases impair the body's ability to properly convert nutrients into energy. Diabetes is a particularly harmful metabolic disease that affects over 30 million people in the United States alone.33 While multiple factors contribute to the pathogenesis of diabetes, a deficit of functional insulin-secreting beta cells underlies all forms of diabetes. In cases of Type 1 diabetes, an autoimmune attack destroys the majority of beta cells, thus leaving patients unable to produce insulin, the key hormone that regulates the transport of glucose from the blood to the cells where it is used to produce energy. Patients with Type 2 or gestational diabetes can produce some insulin, but not enough to properly regulate blood glucose levels in the context of insulin resistance. Recent advances in the study of pancreatic beta cells have shed light on the body's ability to adapt and expand in response to changes in metabolic demand.36 For example, in cases of high insulin resistance, such as pregnancy or obesity, the body maintains glucose homeostasis by increasing beta cell mass in the pancreas. 
In fact, studies have shown that over the approximately 20-day time course of pregnancy in mice, pancreatic beta cells both replicate and grow in size, resulting in an increased beta cell mass.36 The ability to induce beta cell expansion could be a powerful way to increase the number of functioning beta cells in diabetes patients. Beta cell expansion is driven by signaling through the prolactin receptor4,19,25,51 (PRLR). Signaling by the lactogenic hormones prolactin and placental lactogen through PRLR stimulates the JAK-STAT signaling cascade.35 Specifically, Janus Kinase 2 (JAK2) is constitutively associated with the PRLR7,17,38 and once the JAK2 kinase is activated, it recruits and phosphorylates Signal Transducer and Activator of Transcription 5 (STAT5). STAT5 regulates the expression of several target genes in the nucleus, including genes related to the cell cycle20,45 and survival.21,26,50 Although initial discoveries were made in rodent models, human prolactin has been shown to increase beta cell survival as well.50 In this work, we investigate the mechanisms by which the pregnancy-related hormone prolactin (PRL) drives JAK-STAT signaling in pancreatic beta cells using a mathematical model of the signaling pathway. We focus our model on JAK2-STAT5 signaling that promotes beta cell survival mediated by the protein Bcl-xL. Experimental studies performed with the beta cell line INS-1, as well as primary cells from rodents and humans, demonstrate that signaling through JAK2-STAT5 promotes cell survival via Bcl-xL.21,26 For example, Fujinaka et al. demonstrated that Bcl-xL up-regulation induced by JAK2-STAT5 signaling promotes beta cell survival. They demonstrated this in both INS-1 cells and primary beta cells, and showed that siRNA knockdown of Bcl-xL inhibits lactogen-mediated protection from cell death. In addition, Silva et al.41 show that nuclear localization of STAT5 promotes Bcl-xL gene expression: they found direct binding of STAT5 to the Bcl-xL promoter. Since beta cell mass depends on the balance between cell apoptosis and survival, and Bcl-xL is required to mediate pro-survival effects in INS-1 cells and primary cells, there is a relationship between Bcl-xL and beta cell mass. Mathematical models have been used to elucidate the balance between replication and apoptosis in beta cells,30 but no molecularly detailed computational model exists for the adaptive expansion of beta cells in response to pregnancy. Here, we use a systems biology approach to quantitatively analyze the beta cell response to hormone stimulation. In particular, we use mathematical modeling to explore the effects of various regulatory mechanisms that control signaling. Experimental data shows that when insulin-secreting cells of the INS-1 cell line are treated with a constant concentration of PRL in vitro, the amount of phosphorylated STAT5 (pSTAT5) has multiple peaks within six hours of stimulation.10,11 The presence of these peaks is influenced by Suppressors of Cytokine Signaling (SOCS) genes, which are transcribed in response to STAT signaling and exert negative feedback on the system. Modeling the cytokine IFN-γ in liver cells, Yamada et al. found that the presence of a nuclear phosphatase, in addition to SOCS negative feedback, is sufficient to cause a decrease in phosphorylated STAT after the initial peak, leading to multiple peaks in phosphorylated STAT dimer in the nucleus.52 The role of SOCS protein in inhibiting JAK-STAT signaling was further elucidated by Singh et al. 
through joint modeling of JAK-STAT and MAPK pathways in hepatocytes in response to IL-6.42 Particular to our system of study, JAK-STAT signaling through the prolactin receptor (PRLR) has been shown to include positive feedback as nuclear STAT5 promotes transcription of PRLR mRNA.22,28,34,36 We hypothesize that positive feedback could play a role in explaining the initial peak, subsequent decline, then prolonged activation of STAT5 activity in INS-1 cells discovered by Brelje et al. Although these regulatory mechanisms significantly influence beta cell signaling, no model to our knowledge explores the interplay between SOCS feedback and positive regulation of PRL signaling. Therefore, we built upon prior work in the field to create a computational model of signaling that promotes adaptive expansion of beta cell mass in response to pregnancy driven by JAK-STAT signaling in pancreatic beta cells through PRLR. Our work is distinct from previous research because we focus on a different cell type and calibrate our model using experimental data. Since the kinetics of the signaling pathways and the importance of different regulatory mechanisms are cell type-dependent, it is important to fit models to data from the cell type of interest. We fit our model directly to experimental data for STAT5 phosphorylation and localization in the INS-1 cell line. Additionally, we explored up-regulation of the prolactin receptor due to transcriptional activity of STAT5, a control mechanism that is particularly relevant to pancreatic beta cells and is shown in the experimental data from Brelje et al.10,11 This regulatory mechanism has not been explored in any previous computational models. We applied the model to investigate the influence of these regulation mechanisms, individually and in combination, and found that model structures that include both positive and negative regulation produce multiple peaks in STAT5 phosphorylation within a tight range of parameter values. By fitting to experimental data using a Bayesian approach for likelihood estimation of parameter values, we show that the model can simultaneously predict STAT5 phosphorylation and nuclear translocation. The model predicts a faster dimerization and nuclear import rate for STAT5B dimers than STAT5A, which can explain their different activation profiles observed experimentally. Our experimentally-derived mathematical model provides a framework to quantitatively understand lactogenic signaling that mediates the adaptive expansion of beta cell mass during pregnancy. Mechanistic Model of JAK-STAT Signaling in Beta Cells A mechanistic model of JAK2-STAT5 signaling through the prolactin receptor was constructed based on known reactions from the literature. The model builds on the prior work of Yamada et al. 2003 modeling control mechanisms in JAK-STAT signal transduction52 and Finley, et al. 2011, which analyzed IL-12 mediated JAK-STAT signaling in T cells.18 The mechanistic model includes a core network representing the canonical JAK-STAT signaling cascade, which includes 31 reactions and 24 molecular species (Fig. 1). Three regulatory modules are included or excluded from the network in order to consider their effect on STAT5 activation. These include (a) SOCS exerting negative feedback on STAT5 phosphorylation, (b) receptor up-regulation due to transcriptional action of phosphorylated STAT5, and (c) internalization of the prolactin receptor induced by ligand binding. 
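To give a concrete sense of the kind of mass-action system involved, the sketch below is a deliberately reduced toy model (four species, arbitrary rate constants), not the published 47-reaction network: an active receptor phosphorylates STAT5, a phosphatase reverses it, and phosphorylated STAT5 drives both an inhibitor (SOCS-like negative feedback) and new receptor (positive feedback), with receptor loss lumping turnover and internalization.

```python
# Deliberately reduced toy sketch of the signaling motifs described above -- four species,
# arbitrary rate constants -- NOT the published 47-reaction model.
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    R, S, pS, I = y                                   # receptor, STAT5, pSTAT5, inhibitor
    act = k["kact"] * R * S / (1.0 + k["ki"] * I)     # receptor-driven phosphorylation, blunted by inhibitor
    deact = k["kdeph"] * pS                           # phosphatase action
    dR = k["ksynR"] * pS - k["kdegR"] * R             # pSTAT5-induced receptor up-regulation; loss lumps turnover/internalization
    dI = k["ksynI"] * pS - k["kdegI"] * I             # delayed SOCS-like negative feedback
    return [dR, deact - act, act - deact, dI]

k = dict(kact=0.5, kdeph=0.3, ksynR=0.05, kdegR=0.02, ki=5.0, ksynI=0.02, kdegI=0.01)
sol = solve_ivp(rhs, (0.0, 360.0), [1.0, 10.0, 0.0, 0.0], args=(k,), max_step=1.0)  # 6 h, in minutes
pstat5 = sol.y[2]
print(f"peak pSTAT5 = {pstat5.max():.2f} at t = {sol.t[pstat5.argmax()]:.0f} min")
```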
Including each regulatory module individually and in all possible combinations leads to eight model structures to explore. The full signaling network with all three regulatory modules included has 47 reactions and 32 molecular species. A full list of reactions is included in the supplementary material File S1. Model schematic of JAK-STAT signaling in pancreatic beta cells. PRL binds to the PRLR:JAK2 complex (RJ), which induces receptor dimerization and activation by JAK2 kinase activity. The activated receptor PRL:RJ2* phosphorylates STAT5, which dimerizes and transports into the nucleus, where it promotes transcription of target genes. Phosphatases attenuate the signaling at the membrane (SHP-2), in the cytosol (PPX), and in the nucleus (PPN). Signaling modules for ensemble modeling include (a) STAT5-induced SOCS negative feedback, (b) STAT5-induced receptor up-regulation, and (c) ligand-induced receptor internalization. Green indicates positive feedback; red indicates inhibition of signaling. ECM, extracellular matrix. Effect of Different Regulatory Modules on Qualitative Shape of pSTAT5 Activation We defined eight model structures based on inclusion or exclusion of the three regulatory modules from Fig. 1 and ran Monte Carlo simulations for each structure to explore model predictions across a wide area of the parameter space. Here, we varied all parameter values (i.e., the kinetic reaction rates) and non-zero initial conditions (see "Methods"). This enabled us to efficiently explore the parameter space and characterize the simulated dynamics. Each simulation was classified as "No peak", "Single Peak", or "Multiple Peaks" based on the predicted time course of STAT5 phosphorylation (Fig. 2a). Within the model structures with only one regulatory module included, the structure that included SOCS feedback was most likely to show multiple peaks in STAT5 phosphorylation (Fig. 2b). The structure that included receptor internalization was most likely to produce a single peak in STAT5 phosphorylation. Overall, the likelihood of randomly sampled parameter sets producing a time course of STAT5 phosphorylation with multiple peaks was very low for all model structures. Of the 8 × 10⁵ simulations we performed, only 1614 (0.2% of simulations) exhibited multiple peaks. This indicates that tight control of the reaction rates is necessary to achieve the right balance of activation and attenuation. Ensemble Modeling Predicts the Number of Peaks in STAT5 Phosphorylation. (a) Simulated time courses were classified into three shapes based on the number of peaks in STAT5 phosphorylation over 6 hours of PRL stimulation. (b) Bar chart shows the percentage of Monte Carlo simulations from each model structure that were classified into each shape shown in panel A. Row labels correspond to the inclusion or exclusion of regulatory modules shown in Fig. 1. The data labels in red show the number of simulations that were classified as "Multiple Peaks" for each structure. n = 100,000 simulations per structure (800,000 total). MP Multiple Peaks. From the Monte Carlo simulations, model predictions that had multiple peaks in STAT5 phosphorylation showed wide variation in the magnitude and time course of phosphorylation. Therefore, we set out to define a more detailed classification to understand which model structures could give rise to STAT5 dynamics matching those observed in INS-1 cells, which show a defined profile for phosphorylated STAT5. 
Specifically, Brelje and colleagues showed that STAT5 reaches an initial peak at approximately 30 min following the initial stimulation, followed by attenuation between 1 and 3 h, which reduces phosphorylation to below 70% of its initial peak. A second increase is observed after three hours, where phosphorylated STAT5 reaches or exceeds the initial level of phosphorylation.10,11 We first filtered the Monte Carlo simulations, retaining those that resulted in an appreciable level of STAT5 phosphorylation (at least 1% of the initial STAT5 becomes phosphorylated), assuming a minimum amount of phosphorylation is required to promote downstream signaling and cell response. We then defined eight qualitative shapes of STAT5 activation based on the number of peaks and the time at which the peaks occur. The decision tree used to classify predicted time courses is shown in Fig. S1. This classification enabled a detailed characterization of the dynamics of phosphorylated STAT5. A large number of simulations had no peak in STAT5 phosphorylation (Fig. 3a) or attenuation of the initial STAT5 activation leading to a single peak in pSTAT5 (Fig. 3b). A select few parameter sets (0.09%) produced multiple peaks in STAT5 phosphorylation that did not match the qualitative shape of experimental data, such as having more than one oscillation within 6 h (Fig. 3c) or showing a smaller second peak characteristic of damped oscillation (Fig. 3d). This damped oscillation profile has been shown in prior modeling of JAK-STAT signaling42,52 but does not explain the activation profile observed in INS-1 cells treated with prolactin. Positive feedback can lead to unstable systems, and some simulations (0.05%) had an early peak in STAT5 phosphorylation followed by a large increase in phosphorylation due to strong positive feedback (Fig. 3e). Over 2000 (0.36%) simulations had an initial peak followed by minimal attenuation before reactivation (Fig. 3f). These simulations are grouped into early, intermediate, and late simulations to preserve the qualitative shape when pooling simulations together. Another small group of simulations (0.03%) had multiple peaks in pSTAT5 but did not match the time course of the experimental data, either because the reactivation was too fast (< 3 h) or the initial peak was too slow (> 1 h) (Fig. 3g). Classification of simulations into qualitative shapes. Simulated time course of STAT5 phosphorylation for each shape shows the mean (solid line) and 95% confidence interval (shaded area) of all Monte Carlo simulations (800,000 total) classified into that shape. All shapes are mutually exclusive, that is, all simulations were uniquely assigned to one shape (see Fig. S1 for decision tree). Simulations that did not reach a threshold level of 1% of STAT5 phosphorylated were labeled as "weak activation" and filtered out, n = 436,731. Finally, a small number of simulations (101) matched the qualitative shape of the experimental data (Fig. 3h). Simulations classified as having this desired shape comprise just over 0.01% of the 800,000 total simulations, pointing to the necessity of tightly controlled balance of positive feedback and negative feedback, both in terms of the strength and timescale of feedback. The eight distinct model structures contributed differently to the fraction of simulations that match the desired shape (Fig. 4). Although SOCS inhibition was sufficient to cause multiple peaks in STAT5 phosphorylation (Fig. 
2), SOCS inhibition alone was not sufficient to cause an early peak followed by prolonged activation (Fig. 4, row 2). Model structures that included receptor up-regulation, combined with either SOCS inhibition, receptor internalization, or both, had the highest likelihood of matching the desired qualitative shape (Fig. 4, rows 5, 7, and 8). We found that the likelihood of these three model structures matching the qualitative shape of STAT5 activation was similar even with noise in parameter values (Fig. S2). Breakdown of simulations matching desired shape by structure. The y-axis shows the eight model structures defined by the inclusion or exclusion of the regulatory modules. Horizontal bars show the percentage contribution of each model structure to the 101 simulations that matched the desired shape shown in Fig. 3h. Effect of Parameter Values on Time Course of STAT5 Activation Kinetic parameter values affect the strength of STAT5 activation, the strength of feedback, and the timescale of feedback. Several parameters from Monte Carlo simulations were strongly correlated with characteristics of the predicted time course of phosphorylated STAT5 (Fig. 5a). These characteristics include the activation strength, the strengths of the negative and positive feedback, and the times of attenuation and reactivation (see "Methods" section for more detail). The Pearson correlation coefficients for each statistically significant association are shown in Figs. 5b to 5f. For ease of viewing, we labeled the five parameters in each panel that had the highest absolute value of correlation coefficient, and include the correlations and p-values for all parameters in Supplemental File S1. The ratio of the ligand-bound receptor degradation rate to the unbound receptor degradation rate (deg_ratio) was highly correlated with four of the five defined characteristics of the pSTAT5 time course. As expected, higher values of deg_ratio decreased the activation strength (Fig. 5b), increased the strength of negative feedback (Fig. 5c), and decreased the strength of positive feedback (Fig. 5d). Increased values of deg_ratio also led to a shorter timescale of attenuation (Fig. 5e) because the active receptor complex had a shorter half-life in the cell and therefore less time to phosphorylate STAT5. The parameter k2, the ligand-receptor binding rate, had a similar effect as deg_ratio on the strength and timescale of feedback (Figs. 5c to 5f). However, it was positively correlated with the strength of activation (see Supplemental File S1). A faster rate of ligand binding leads to a stronger activation but stronger negative feedback due to increased internalization of ligand-bound receptors. Parameters Correlated with STAT5 phosphorylation. Pearson correlation between each kinetic parameter or initial value and five quantitative characteristics of the STAT5 phosphorylation time course. (a) Illustration of five characteristics. (b) Activation strength. (c) Negative feedback strength. (d) Positive feedback strength. (e) Time of attenuation. (f) Time of reactivation. Only parameters with statistically significant (p < 0.05) correlations are shown in the waterfall plots. The five parameters most highly correlated with each characteristic are labeled. 
RJ initial value of PRLR:JAK2 complex, k6 phosphorylation rate of STAT5, k5 activation rate of JAK2, k4 dimerization rate of PRLR:JAK2 complexes, deg_ratio ratio of degradation rate of ligand-bound receptor complexes to unbound complexes, k2 ligand binding on rate, k12 rate of dephosphorylation of pSTAT5 by cytoplasmic phosphatase, PPX initial value of cytoplasmic phosphatase, k11 binding rate of cytoplasmic phosphatase to pSTAT5, k_3 receptor complex dimerization off rate, k3 receptor complex dimerization on rate. The full list of correlated parameters and their Pearson correlation values are given in Supplemental File S1. Increased values of the initial concentration of the receptor:JAK2 complex (RJ) increased the activation strength (Fig. 5b) and shortened the time of reactivation (Fig. 5f). With more receptor complexes at the surface, a larger fraction of STAT5 can be phosphorylated initially and upon reactivation after attenuation of the initial signal. Predictably, parameters that govern the rate of interactions critical to STAT5 activation (k4, k5, and k6, corresponding to the rate at which the ligand-bound receptor complex is activated, binds STAT5, and phosphorylates STAT5, respectively) were positively correlated with the activation strength (Fig. 5b). Additionally, increases in k12, the rate at which cytosolic phosphatase dephosphorylates STAT5, led to stronger negative feedback (Fig. 5c), weaker positive feedback (Fig. 5d), and a faster timescale of attenuation (Fig. 5e). Overall, this analysis provides mechanistic insight into how specific biochemical reactions influence key features of STAT5 dynamics. Such results can guide experimental studies to modulate the signaling network to enhance STAT5 response. Model Calibration to STAT5 Dynamics in INS-1 Cells The results presented thus far provide a detailed analysis of the model features that give rise to STAT5 dynamics that qualitatively agree with experimental data. We next aimed to produce a predictive model that quantitatively matches the data by calibrating the computational model to the experimental data. Specifically, we fit the model to measurements for the time course of phosphorylation of JAK2, STAT5A and STAT5B,11 translocation of STAT5A and STAT5B from the cytoplasm to the nucleus,11 and the fold change in protein level of the pro-survival response protein Bcl-xL.21 We chose to fit the three model structures that produced a reasonable number of simulations (> 5%) matching the qualitative shape of STAT5 activation from ensemble modeling. This included the full model with all regulatory mechanisms (Fig. 4, row 8) as well as the model structure that did not include receptor internalization (Fig. 4, row 5) and that did not include SOCS negative regulation (Fig. 4, row 7). The model structure that included all three regulatory modules had the lowest minimum Sum of Squared Errors (SSE) and median SSE (Table 1). In addition to using the SSE to evaluate the model fits, we also use the Akaike Information Criterion (AIC), which allows for comparison of model structures with different number of fitted parameters, penalizing structures that have more parameters.37,44 A lower value of AIC indicates a better fit. The model structure without SOCS negative feedback had the lowest AIC. This structure fit the data similarly well as the full model (Fig. S4) and has a lower number of fitted parameters, leading to a lower AIC. 
Our modeling predicts that SOCS negative regulation is not necessary for early activation, attenuation, and reactivation of STAT5 in pancreatic beta cells treated with prolactin. Other sources of negative regulation such as phosphatase action and internalization of ligand-bound receptors, combined with positive regulation due to STAT5-induced receptor upregulation, can drive the experimentally observed activation profile. Table 1 Comparison of model structures. We focus our analysis on the model that included all three regulatory modules, as that model structure produced the lowest SSE and allowed us to probe each different regulatory mechanism. Model predictions for this full model are shown in Fig. 6, illustrating that this structure effectively captured the phosphorylation dynamics of JAK2 (Fig. 6a), STAT5A (Fig. 6b), and STAT5B (Fig. 6c) as well as the nuclear import (Fig. 6d) of both STAT5A and STAT5B on the six-hour timescale. The dynamics of STAT5 phosphorylation and nuclear import share a similar qualitative shape because phosphorylation is necessary for shuttling of STAT5 to the nucleus. Model calibration. Model predictions for (a) Phosphorylated JAK2, normalized to the 10-min time point, (b) Phosphorylated STAT5A, normalized to the 30-min time point, (c) Phosphorylated STAT5B, normalized to the 30-min time point, (d) Ratio of nuclear to cytosolic STAT5A and STAT5B, and (e) Fold change of Bcl-xL. Lines show mean value of model predictions with shading indicating the standard deviation across the 1000 parameter sets from the posterior distribution. Squares show experimental data points from Brelje et al. for panels A, B, C, and D or from Fujinaka et al. for panel E. Error bars are included for experimental data points that had error bars shown in the previously published work. All experimental data are for INS-1 cells treated with PRL at 200 ng/mL. Thirty-three parameters were fit simultaneously to the six data sets using a Bayesian likelihood estimation approach. Dark blue, STAT5A; light blue, STAT5B in (d). Interestingly, although STAT5A and STAT5B show a similar time course of phosphorylation, they differ in the amount that is translocated into the nucleus.10,11 The model accounts for separate STAT5A and STAT5B species and allows for homo- and hetero-dimerization with separate rate constants for dimerization, import of phosphorylated dimers into the nucleus, and export of dephosphorylated STATs from the nucleus. The fitted model predicts that STAT5B homodimers form faster than STAT5A homodimers, with a ratio of 4.24 ± 0.03 as compared to the STAT5A dimerization rate. The model also predicts the nuclear import rate to be faster for STAT5B homodimers than STAT5A homodimers, with a ratio of 4.94 ± 0.03 as compared to the STAT5A nuclear import rate. The faster dimerization rate and nuclear import rate predicted by the model provide a potential hypothesis for greater STAT5B nuclear localization as compared to STAT5A, which has been observed experimentally. In addition to predicting the upstream dynamics, the model also predicts the fold change of Bcl-xL, a response protein that is induced by pSTAT5 activity in the nucleus (Fig. 6e).
The model predicts that the fold change of the pro-survival protein Bcl-xL increases through 18 h of stimulation with prolactin before decreasing after 18 h, matching experimental observations from Fujinaka et al.21 This result captures how a single oscillation in STAT5 activation on the six-hour timescale can lead to a smooth increase in the concentration of a response protein on a longer timescale (Fig. 6e). Taken as a whole, the fitting results suggest that multiple feedback mechanisms could explain the observed time courses in STAT5 phosphorylation, nuclear translocation, and protein response. However, receptor upregulation is required, and it must be combined with at least one of the other regulatory mechanisms (SOCS negative feedback or receptor internalization). The calibrated model containing all three regulatory modules produces the best fit to the data and generates consistent parameter estimates (Fig. S6). Dose Response Predictions for Beta Cells Treated with Prolactin We next aimed to use the parameterized model to make new predictions for STAT5 signaling through the prolactin receptor. We tested six concentrations of prolactin used by Brelje et al.11 to treat rat primary beta cells in vitro and found that higher concentrations of prolactin lead to a greater magnitude of STAT5B translocation to the nucleus and an earlier peak in STAT5B translocation (Fig. 7a). We quantified the amount of STAT5B translocation at the 30-min time point in order to compare to experimental measurements using immunohistochemistry from Brelje et al. We found that the model predictions match the qualitative shape of the experimentally determined dose response curve (Fig. 7b), showing a biphasic response, in which the STAT5B level increases with increasing stimulation before decreasing. However, there are differences between the model predictions and experimental data. Specifically, the model predicts an increase in STAT5B translocation at the 30-min timepoint (Fig. 7b, blue bars) with increasing hormone concentration, with the maximal response occurring at a dose of 500 ng/mL. In comparison, the peak response occurs at the 1000 ng/mL dose in the experimental data (Fig. 7b, grey bars). Given that the model produces the full time-course of STAT5 levels, we can investigate why there is this difference between model and experiments. The model predicts that the attenuation of the initial STAT5 activation occurs more rapidly for higher doses of PRL such that attenuation has already reduced STAT5 levels by the 30-min timepoint (Fig. 7a). This difference in the timing may be due to having calibrated the model using data from INS-1 cells treated with prolactin rather than rat primary beta cells. Multiple studies point to differences in enzyme catalytic rates in different biological settings.13,54 Dose Response predictions. (a) Model predicted time course of STAT5B import into the nucleus under various concentrations of PRL ligand, simulated for 60 minutes. The red dotted line emphasizes the values at the 30-min time point, which is plotted in the bar chart in panel B. (b) Model predicted dose response data for the 30-min timepoint (blue) compared to experimental data from Brelje et al. treating rat primary beta cells with PRL (grey). Values are normalized to the amount of STAT5B in the nucleus with no PRL stimulation (0 ng/mL dose). Error bars for model predictions show standard deviation of predictions across the 1,000 posterior parameter sets.
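For readers who want to reproduce this kind of analysis, a dose-response sweep can be sketched in MATLAB as below. This is an illustration only: simulate_nuc_cyt_ratio is a hypothetical stand-in for the calibrated ODE model, and the dose values and normalization shown are placeholders rather than the concentrations used by Brelje et al.

```matlab
% Illustrative dose-response sweep (hypothetical names and doses, not the study's values).
doses = [20 50 100 200 500 1000];                 % ng/mL PRL, placeholder concentrations
resp30 = zeros(size(doses));
for j = 1:numel(doses)
    % simulate_nuc_cyt_ratio is a hypothetical wrapper around the calibrated ODE model
    % that returns the time vector (in hours) and the STAT5B nucleus:cytoplasm ratio.
    [t, ratio] = simulate_nuc_cyt_ratio(doses(j));
    resp30(j) = interp1(t, ratio, 0.5);           % value at the 30-min timepoint
end
bar(resp30 / resp30(1));                          % normalized to the lowest dose (illustrative)
xticklabels(string(doses)); ylabel('STAT5B nuclear import at 30 min (relative)');
```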
Perturbing the Fitted Model Next, we examined the influence of varying individual parameters and initial values on model predictions. We individually varied each parameter or initial value that ensemble modeling determined to have a large impact on one of the aspects of STAT5 activation (large correlation values in Figs. 5b to 5f), within two orders of magnitude of its fitted value. Changing individual parameter and initial values altered both the strength of activation and the feedback dynamics, suggesting that the feedback system can be modulated. We chose to investigate in detail two of the highest ranking influential kinetic parameters and initial values, based on their ability to strongly modulate multiple aspects of STAT5 activation. We predicted the time course of model species in response to changes in parameter values (Fig. S12) and quantified the initial peak in the STAT5B nucleus to cytoplasm ratio, which represents the strength of activation of the system (Fig. 8). When varying the PRL ligand binding rate (k2) and the cytosolic phosphatase dephosphorylation rate (k12) over two orders of magnitude, we found that the activation of STAT5 was more sensitive to changes in the ligand binding rate, as indicated by the increase in activation along the y-axis (Fig. 8a). The phosphatase did modulate activation, with higher values of k12 leading to lower activation, but the effect is less pronounced than that of k2. A similar result was obtained when varying the initial value of the receptor complex (RJ) and the cytosolic phosphatase (PPX). The activation was increased greatly when the initial value of RJ approached ten times its fitted value (Fig. 8b). Higher initial values of PPX decreased the strength of the initial activation, but again, this effect is less pronounced than modulating signaling at the receptor level. In this case, when the initial concentration of RJ is too low or that of PPX is too high, there is no distinct peak for the nuclear to cytoplasmic ratio of STAT5B (indicated by the value 0 in Fig. 8b). This can be explained, as low RJ would prevent the prolactin input signal from being transduced, and high PPX would strongly attenuate the signal. Model perturbations. (a) The effect of varying the ligand binding rate k2 and the cytosolic phosphatase dephosphorylation rate k12 between 0.1-fold and 10-fold of the fitted parameter values. (b) Varying the initial values of the receptor-JAK2 complex RJ and the cytosolic phosphatase PPX between 0.1- and 10-fold of the fitted values. Coloring of the heat map indicates the initial peak in the STAT5B nucleus to cytoplasm ratio averaged across the 1000 posterior parameter sets. Based on these simulation results, we conclude that targeting reactions upstream in the signaling network has a larger impact on the activation of STAT5, as compared to directly targeting regulatory reactions in the signaling cascade. Interestingly, changing these two sets of parameters produces nonlinear effects, as indicated by the curved isoclines in Fig. 8. Overall, the model is useful in predicting how altering kinetic parameters and species' concentrations influences signaling dynamics that directly mediate pro-survival signaling. Our mechanistic model of JAK-STAT signaling in pancreatic beta cells captures key dynamics of STAT5 activation via phosphorylation by the PRLR-JAK2 complex, followed by import of phosphorylated STAT5 dimers into the nucleus.
The model differentiates between STAT5A and STAT5B and identifies the kinetic rate parameters that are able to explain experimentally observed differences in the amounts of STAT5A and STAT5B entering the nucleus under prolactin stimulation.10,11 Specifically, the model shows that the rates of dimerization and translocation can account for the experimental measurements. This mechanistic insight is relevant, as it has been hypothesized that this differential expression of STAT5A and STAT5B in the nuclear and cytosolic forms may be a form of tissue-specific regulation of JAK-STAT signaling,2 arising from their differential affinity for STAT5 target genes.6 The model simultaneously predicts experimental data for upstream activation of STAT5 and fold change of the response protein Bcl-xL to the same hormonal stimulus. In addition, the model was used to predict the dose response of STAT5 nuclear import under different concentrations of prolactin. This demonstrates the predictive capability of the model since the simulated dose response curve qualitatively matched the experimentally observed dose response data, which was not used for parameter estimation. In addition, we establish that although the model was calibrated using data from INS-1 cells, it can reproduce observations obtained using rat primary pancreatic beta cells. This is a particularly important point since INS-1 cells, while used as a model of primary beta cells, exhibit quantitative differences in their metabolism46 and insulin secretion in response to glucose,32 as compared to healthy beta cells. The model includes reactions known to drive JAK-STAT signaling in pancreatic beta cells. There are a multitude of feedback modules affecting the signal transduction pathway,35,40 and we chose to explicitly explore the role of different feedback modules on the activation of STAT5 through ensemble modeling. We hypothesized that positive feedback through STAT5-induced receptor up-regulation could explain the reactivation of STAT5 in INS-1 cells to a magnitude greater than the initial activation.10,11 Classifying Monte Carlo simulated time courses by their qualitative shapes revealed that model structures with both receptor up-regulation and an inhibitory module (whether that be SOCS feedback or receptor internalization) were most likely to show reactivation of STAT5 matching the shape of the experimental data. Quantifying the impact of different parameter values on the time course of STAT5 activation helped us define which parameters drive the dynamics. An increased ligand-bound receptor degradation rate, for example, decreased the strength of activation and timescale of feedback while increasing the negative feedback strength. We followed up on the most influential parameters from ensemble modeling by varying them within the fitted model. Our simulations predict that modulating signaling at the receptor level produces larger increases in STAT5 activation than altering the effect of an individual feedback mechanism (cytosolic phosphatase). This information is relevant for researchers aiming to enhance beta cell survival through activation of the JAK-STAT pathway. Ultimately, we found that multiple model structures could fit the data well (Table 1), but there were emergent properties that were consistent across model structures, such as a faster rate of STAT5B dimerization and nuclear import, as compared to STAT5A. We acknowledge some limitations of our work.
Although the model predictions reproduce key aspects of the activation profile of STAT5 in INS-1 cells treated with prolactin, the model does not match experimental data for the 4-h time point of pSTAT5A and the 4 and 6-h time points of pSTAT5B (Figs. 6b and 6c). Although error bars were not included in the literature-derived data,11 we expect that there is greater uncertainty in phosphorylation measurements at later time points, as indicated by different quantitative values for pSTAT5A and pSTAT5B when the authors repeated the experiment in INS-1 cells in later work.10,11 Additionally, the model predicts a larger decrease in STAT5B upon initial attenuation than the experimental data imply and a larger upward trajectory between 4 and 6 h (Fig. 6d). These discrepancies between model predictions and experimental data are likely due to the fact that in our model, STAT5 cannot be shuttled into the nucleus unless phosphorylated first, so dynamics of nuclear import are tied to the phosphorylation dynamics. In a living cell, additional factors likely affect the nuclear import of STAT5, such as other signaling pathways and the concentration of importin proteins, which were not accounted for in our computational model. Our model predicts JAK-STAT signaling within the cell. Further work is required to integrate signaling across many cells to understand how JAK-STAT signaling can drive changes in beta cell mass. Lastly, we only consider JAK-STAT signaling through the PRLR. Other pathways, such as PI3K and MAPK cascades, have been shown to be important in beta cell signaling.8,27,47 Future work can build on our model of JAK-STAT signaling to encompass other signaling pathways as well, as has been done for JAK-STAT and MAPK activation in response to IL-6 in hepatocytes.42 Despite these limitations, our model motivates new experiments that can better elucidate the role of regulatory elements in JAK-STAT signaling. Previous work has demonstrated that using principles of optimal experimental design can reduce uncertainty in parameter estimation.3,43 Based on our findings, multiple possible inhibitory mechanisms could explain the observed time course of STAT5 phosphorylation. By designing a time course stimulus of PRL on INS-1 cells that aims to discriminate between these different model structures, one could experimentally test which mechanism is most likely to occur within the cell. This in-depth exploration of signal transduction would benefit pre-clinical researchers trying to design a therapy aimed at increasing beta cell mass in model organisms of diabetes. Taken as a whole, our work points to the importance of regulatory modules in JAK-STAT signaling within pancreatic beta cells. Our model predicts that positive feedback combined with inhibition, be that through negative feedback or an enhanced degradation rate, can drive a single oscillation in STAT5 phosphorylation within 6 h, followed by a second peak that is higher than the first. Based on the rarity of this behavior occurring within the wide parameter space sampled, we contend that the kinetic rate parameters within the cell must be well constrained to balance positive and negative feedback and achieve this behavior. In line with this hypothesis, the kinetic parameters predicted by our model when fitting to experimental data were tightly constrained (Figs. S5–S7). Excitingly, the mechanistic insight as to the detailed effects of the regulatory modules provides quantitative information needed to identify strategies to increase beta cell survival.
The ability to increase the beta cell mass in vivo could be a powerful new therapy for the treatment of diabetes.14 Hormonal stimulus seeks to recapitulate the islet adaptation to pregnancy5 and has already achieved beta cell proliferation in rodent models9 in both female and male rodents.24 Despite these advances, potential therapies have failed to realize the same gains in beta cell proliferation in humans,8,12,27,47 pointing to a need for better understanding of regulatory mechanisms through the PRLR-JAK-STAT pathway.12 Here, we provide evidence that feedback modules play a key role in regulation of JAK-STAT signaling within a computational model relevant to the pancreatic beta cell. We also show that modulating upstream parameters such as the ligand binding rate and the initial value of receptor complexes can increase PRL-mediated STAT5 activation. We acknowledge the dependence of our model predictions on the accuracy of the model structure, and therefore explored several potential structures through ensemble modeling. The inclusion and exclusion of different regulatory modules gives insight into their relative importance and helps us understand the important predicted behaviors that emerge across multiple model structures. A mathematical model was constructed to describe the reaction kinetics of JAK2 and STAT5 signaling in pancreatic beta cells. The model is comprised of ordinary differential equations, which describe how the concentrations of the molecular species in the reaction network evolve over time. Our model builds on the reactions and kinetic parameters from the work of Yamada et al., who modeled control mechanisms of the JAK-STAT pathway in response to interferon-γ (IFN- γ) signaling.52 The model was adapted to include 2:1 ligand to receptor stoichiometry, which has been shown for the binding of prolactin (PRL) to the prolactin receptor (PRLR).7,17 Literature evidence shows that in humans16 and rats,15 prolactin has cyclic dynamics and rhythmic secretion. The timescale of these dynamics is likely different in the in vitro setting; however, to account for a decrease in prolactin levels over the timescale considered here, we included prolactin degradation. In the absence of experimental data for the half-life of prolactin, we assumed it is similar to that of estrogen (5–6 h) in MCF-7 cells.39 The receptor is assumed to be pre-associated with JAK2 (represented by the species RJ) since JAK2 is constitutively associated with the prolactin receptor.7,17,38 Once two RJ complexes are bound to one PRL hormone, the complex becomes activated. The receptor complex RJ has degradation and synthesis rates corresponding to a half-life on the cell membrane of 45 min.10 Once the ligand is bound, the receptor has a higher degradation rate, which represents internalization of the ligated receptor to the endosome.1,7,10 The activated receptor complex binds to the cytosolic form of STAT5 reversibly, and once bound, releases a phosphorylated form of STAT5 due to the kinase activity of JAK2. The pSTAT5 molecules dimerize in the cytosol and are transported into the nucleus. 
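To make the reaction scheme concrete, the following MATLAB sketch implements a stripped-down version of the receptor module as mass-action ODEs. It is not the authors' full model: the rate constants, initial values, and species list are illustrative assumptions, downstream STAT5 reactions are omitted, and only the 2:1 PRL:receptor binding, constitutive receptor turnover, and enhanced degradation of ligand-bound complexes described above are represented.

```matlab
function sketch_receptor_module()
    % Illustrative rate constants and initial values (placeholders, not fitted values)
    k1   = 0.1;           % PRL + RJ -> PRL:RJ, binding on-rate
    k3   = 0.05;          % PRL:RJ + RJ -> PRL:RJ2, second receptor recruited
    ksyn = 0.01;          % constitutive RJ synthesis
    kdeg = log(2)/45;     % RJ degradation, 45 min half-life on the membrane (per text)
    deg_ratio = 5;        % assumed enhanced degradation of ligand-bound complexes
    y0 = [1; 10; 0; 0];   % [PRL; RJ; PRL:RJ; PRL:RJ2], arbitrary units
    [t, y] = ode45(@(t, y) rhs(y, k1, k3, ksyn, kdeg, deg_ratio), [0 360], y0);
    plot(t, y(:, 4)); xlabel('time (min)'); ylabel('active PRL:RJ2 complex');
end

function dydt = rhs(y, k1, k3, ksyn, kdeg, deg_ratio)
    PRL = y(1); RJ = y(2); C1 = y(3); C2 = y(4);
    v1 = k1*PRL*RJ;       % first binding step
    v2 = k3*C1*RJ;        % second RJ binds, giving the active 2:1 complex
    dydt = [ -v1;                              % PRL consumed by binding
             ksyn - kdeg*RJ - v1 - v2;         % free receptor:JAK2 turnover
             v1 - v2 - deg_ratio*kdeg*C1;      % bound complexes degrade faster
             v2 - deg_ratio*kdeg*C2 ];
end
```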
Three phosphatases are included in the model, which serve to attenuate the signaling after initial ligand binding: SH2 domain-containing tyrosine phosphatase 2 (SHP-2) dephosphorylates the activated receptor-JAK complex, and phosphatases in the cytosol and nucleus (termed PPX and PPN, respectively) dephosphorylate STAT5 species.52 The phosphatase action is a form of negative feedback shown to be necessary for attenuation of STAT activation.52 pSTAT molecules are shuttled out of the nucleus when they are not dimerized with another molecule. The phosphorylated STAT5 dimer promotes transcription of several target genes once in the nucleus. Specifically, we include SOCS, the prolactin receptor, and the pro-survival protein Bcl-xL as STAT5 targets. It has been shown that SOCS proteins bind competitively to the receptor JAK complexes and also target the receptors for ubiquitination-based degradation.7,53 These mechanisms were incorporated in the model rather than the non-competitive binding used by Yamada et al.52 STAT5 dimers promote transcription of mRNA for the prolactin receptor. This has been shown in vitro in INS-1 cells22 and in vivo during pregnancy in mice. This positive feedback mechanism may play a role in the islet response to pregnancy22 and has not been explored computationally before. The phosphorylated STAT5 dimer in the nucleus also promotes transcription of cell-cycle genes such as cyclin D proteins20,45 and anti-apoptotic species such as Bcl-family proteins.21,50 We included a module for the STAT5-mediated transcription and translation of the response protein Bcl-xL. A full list of reactions is included in Supplementary File S1. MATLAB was used to carry out model simulations, and statistical analyses of the simulated results were performed using the R statistical computing language.49 All of the code necessary to run the simulations and produce all figures is publicly available at: https://github.com/FinleyLabUSC/JAK-STAT-Regulation-CAMB. Ensemble Modeling The three optional modules (Fig. 1) were included or excluded from the core model. The induction of SOCS in response to STAT5 activation and its subsequent negative feedback on JAK-STAT signaling was the first optional module. The positive regulation due to up-regulation of the PRL receptor in response to activated STAT5 was the second optional module. The third optional module was receptor internalization, as represented by an enhanced degradation rate for ligand-bound receptors. The three optional modules were included in different combinations to produce eight possible model structures. For each model structure, 100,000 Monte Carlo simulations were performed by sampling all free parameters and initial values from a log-uniform distribution. The parameters and initial values were varied two orders of magnitude above and below the initial guess (taken from previous models and literature evidence—see Supplementary File S1 "Parameters" and "Initial Values" spreadsheets).
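The log-uniform sampling step described above can be sketched in a few lines of MATLAB. The nominal parameter vector p0 and the simulate_model routine below are hypothetical placeholders; only the sampling scheme itself (two orders of magnitude above and below the nominal values) follows the text.

```matlab
% Sketch of log-uniform Monte Carlo sampling, two orders of magnitude
% above and below nominal values (placeholder numbers shown).
p0   = [0.1 0.05 0.01 2.0];                 % nominal parameter values (illustrative)
nSim = 1e5;                                  % 100,000 samples per model structure
lo   = log10(p0) - 2;                        % lower bound, 100-fold below nominal
hi   = log10(p0) + 2;                        % upper bound, 100-fold above nominal
samples = 10.^(lo + (hi - lo) .* rand(nSim, numel(p0)));   % log-uniform draws
% Each row of samples would then be passed to the ODE solver, e.g.:
% pSTAT5(i, :) = simulate_model(samples(i, :));   % hypothetical simulator
```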
The total amount of phosphorylated STAT5 was calculated by summing together all forms of pSTAT5 and multiplying by two if the molecule included a STAT dimer with both STAT molecules phosphorylated. We analyzed the features of the pSTAT5 concentration over time. The definitions of the characteristics of the pSTAT activation illustrated in Fig. 5a are as follows: $$\text{Activation strength} = \frac{\text{Maximum pSTAT}}{\text{Total pSTAT}}$$ $$\text{Negative FB strength} = 1 - \frac{\text{Minimum pSTAT after first peak}}{\text{pSTAT at first peak}}$$ $$\text{Positive FB strength} = \frac{\text{Maximum pSTAT after first peak}}{\text{pSTAT at first peak}}$$ $$\text{Time of attenuation} = \text{Time (h) of first peak}$$ $$\text{Time of reactivation} = \text{First time (h) at which pSTAT goes from decreasing to increasing}$$ The number of peaks in total pSTAT5 was quantified using the MATLAB findpeaks function, which returned the value of total pSTAT5 at local maxima as well as the time of the peak in hours. Thresholds for the findpeaks function were defined to have a minimum distance between peaks of 20 min and a minimum peak prominence of 0.1% to avoid identifying noise in the data as peaks (see MATLAB findpeaks documentation). A detailed shape classification was performed based on the decision tree in Fig. S1, implemented through if statements in our MATLAB script. Parameter correlations were calculated in R using the cor function. The correlations shown in Fig. 5 are calculated using Monte Carlo simulations from the full model structure that included all three regulatory modules. Correlations with activation strength were calculated using all 100,000 simulations. Correlations with negative FB strength, positive FB strength, and time of attenuation could only be calculated for simulations that had a peak, n = 58,265. Correlations with the time of reactivation could only be calculated for simulations that had reactivation, n = 11,123.
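The peak detection and characteristic definitions above translate directly into MATLAB. In the sketch below, t (in hours), pSTAT, and totalSTAT are assumed variable names for the simulation time vector, the total phosphorylated STAT5 time course, and the total STAT5 pool; the distance threshold mirrors the text, and the prominence threshold is assumed to be 0.1% of the total STAT5 pool.

```matlab
% Sketch of peak detection and time-course characteristics (assumed variable names).
[pks, locs] = findpeaks(pSTAT, t, ...
    'MinPeakDistance', 20/60, ...               % at least 20 min between peaks (t in hours)
    'MinPeakProminence', 0.001 * totalSTAT);    % assumed 0.1% prominence threshold
activationStrength = max(pSTAT) / totalSTAT;
if ~isempty(pks)
    tAttenuation = locs(1);                     % time (h) of the first peak
    after  = pSTAT(t > locs(1));                % trajectory after the first peak
    negFB  = 1 - min(after) / pks(1);           % negative feedback strength
    posFB  = max(after) / pks(1);               % positive feedback strength
    rising = find(diff(after) > 0, 1);          % first index where pSTAT turns upward
    tAfter = t(t > locs(1));
    if ~isempty(rising), tReactivation = tAfter(rising); end
end
```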
Model Calibration A total of 33 parameters were chosen to fit to the 37 experimental data points based on a global sensitivity analysis. We used the extended Fourier Amplitude Sensitivity Test (eFAST) to determine which parameters significantly influence the model predictions.31 The eFAST method uses variance decomposition to determine the sensitivity of model outputs to model inputs. The first-order sensitivity Si quantifies the fraction of variance in model output that is explained by the input variance in the parameter i. $$S_{i} = \frac{\sigma_{i}^{2}}{\sigma_{\text{total}}^{2}}$$ We calculated the first-order sensitivity of each kinetic parameter and non-zero initial value, with the output being all species' concentrations predicted by our model. We also estimated the total-order sensitivity STi for each kinetic parameter and initial value. STi is calculated as one minus the summed sensitivity index of the complementary parameters SCi, which is defined as all parameters except parameter i. $$S_{\text{Ti}} = 1 - S_{\text{Ci}}$$ In order to determine which parameters to fit to experimental data, we compared the total-order sensitivity index for all kinetic parameters and initial values on the predicted model outputs: phosphorylated STAT5A (pSTATA), phosphorylated STAT5B (pSTATB), nuclear to cytoplasm ratio of STAT5A (STAT5An/STAT5Ac), and the nuclear to cytoplasm ratio of STAT5B (STAT5Bn/STAT5Bc). Although we calculated STi for each parameter on all model outputs, we chose to focus on the effect of each parameter on those four model predictions because they are used in the objective function in model calibration (see below). We took the mean STi for each parameter or initial value over each of the four model outputs listed above at each timepoint for which we had experimental data from the literature. These sensitivity indices are included in the Supplementary File S1 on the sheet "Sensitivity Analysis." The parameters and initial values that had a mean STi greater than that of the dummy variable, a factitious input which has no effect on model structure, were chosen as parameters to be fitted. In addition, the parameter k30a, which is the maximal rate of transcription of the PRLR in response to STAT5 binding, was added to the parameter list because no kinetic parameter affecting the positive feedback module emerged from sensitivity analysis. In order to deconvolute the fact that the dimerization and shuttling rates of the different forms of STAT5 would likely be correlated, we defined the following multiplicative factors: $$mult8B = \frac{k8B}{k8A}, \quad mult8AB = \frac{k8AB}{k8A}$$ $$mult14B = \frac{k14B}{k14A},\quad mult14AB = \frac{k14AB}{k14A}$$ $$mult17B = \frac{k17B}{k17A}$$ The parameters k8A and k8B describe the rate of homodimerization of STAT5A and STAT5B, respectively, while k8AB represents the rate of heterodimerization. The parameters k14A, k14B, and k14AB represent the rate of nuclear import of STAT5A homodimers, STAT5B homodimers, and heterodimers, respectively. The parameters k17A and k17B represent the nuclear export rate of unphosphorylated STAT5A and STAT5B, respectively. Parameter fitting was performed by fitting the model simultaneously to all of the experimental data used for likelihood estimation. The amount of phosphorylated JAK2, phosphorylated STAT5A, and phosphorylated STAT5B at the 10 min, 30 min, 1 h, 2 h, 4 h, and 6 h timepoints were quantified using Plot Digitizer (Java) from Brelje et al. Fig. 7.11 The nucleus to cytoplasm ratio of STAT5A and STAT5B at the 30 min, 1 h, 1.5 h, 3 h, and 6 h timepoints and the 5 min, 15 min, 30 min, 1 h, 2 h, 3 h, 4 h, 5 h, and 6 h timepoints, respectively, were quantified using Plot Digitizer from Brelje et al. Fig. 6 results for INS-1 cells.11 The fold change of the anti-apoptotic protein Bcl-xL at the 2, 4, 6, 8, 12, 18, and 24 h timepoints was quantified using Plot Digitizer on Fujinaka et al. Fig. 7e.21 All experimental data from both papers were for INS-1 cells treated with 200 ng/mL of PRL. A total of 50 independent fits were performed for each model structure using a Bayesian approach for likelihood estimation.23,48 Our group recently used this approach to calibrate a model of Natural Killer cell signaling,29 and we implemented the same algorithm in the current study. We assume the parameters are random variables and estimate the distributions of their values using a Bayesian approach. Thus, we maximized the posterior density \(f(\theta|y)\) of the parameters, θ, given the available experimental data, y, using the Metropolis-Hastings (MH) algorithm. In brief, Bayes' theorem describes the relationship between the posterior distribution to be maximized and the known (or assumed) prior distribution $$f(\theta|y) = \frac{f(y|\theta)\,f(\theta)}{f(y)} \propto f(y|\theta)\,f(\theta),$$ where \(f(y|\theta)\) represents the data likelihood function, \(f(\theta)\) is our prior knowledge on θ, and \(f(y)\) is the probability of the data. Here, \(f(y)\) is constant, as the experimental measurements are known. The likelihood function estimates the goodness of fit of a model given the unknown parameter values.
It captures the error, \(\epsilon\), between the model predictions and the experimental data: \(\epsilon = y - \mathcal{M}(\theta)\) (both \(y\) and \(\mathcal{M}(\theta)\) are vectors). Thus, the likelihood function is directly related to the error. We make the assumption that the error \(\epsilon\) is normally distributed with mean equal to zero and variance equal to \(\sigma^{2}\). We can marginalize out the noise from \(f(y|\theta,\sigma^{2})\) by assuming an inverse gamma distribution over \(\sigma^{2}\) and integrating with respect to \(\sigma^{2}\) to attain $$f(y|\theta) = \int_{0}^{\infty} f(y|\theta,\sigma^{2})\, f(\sigma^{2})\, \mathrm{d}\sigma^{2}$$ The density of the likelihood function is at its maximum when \(y = \mathcal{M}(\theta)\) since \(\epsilon\) is centered at zero. Therefore, maximizing the posterior density is equivalent to minimizing the error between the model prediction and the experimental data. We cannot solve for \(\theta\) analytically since \(\mathcal{M}\) is a nonlinear operator, so we employ the Metropolis–Hastings (MH) algorithm23,48 to sample from the posterior distribution, which is the target distribution. The prior distribution remains fixed over all iterations while the proposal distribution re-centers around parameters \(\theta^{*}\) that minimize the error between the model and the data. Since this parameter estimation approach is probabilistic, we simulated the MH algorithm 50 independent times with a random initial guess for the parameter values. Within each independent fit, 10,000 iterations on the parameter values were performed to effectively sample from the posterior distribution for each parameter value. The first several thousand iterations of the MH algorithm serve to maximize the posterior density, thereby converging the initial estimate of the posterior distribution closer to the true posterior distribution. This is known as the burn-in phase. Once the algorithm converges, each \(\theta^{(i)}\) (for i sufficiently large) will be a sample from the posterior distribution. We discarded the first 9000 iterations, retaining the last 1000 iterations. We note that we do not make the assumption that the parameters have unique values. Rather, the Bayesian approach assumes that the parameters are random variables, and a distribution is imposed on them (\(f(\theta)\), the prior distribution). Examining the posterior distribution provides an indication of whether the estimated parameters are identifiable, given the experimental data available for fitting. We show the posterior distribution of each parameter in Figs. S5–S7 for model structures 5, 7, and 8, respectively. These figures demonstrate that the parameters are well behaved: the distributions are unimodal, and the values lie within a tight range. In addition, we provide diagnostic information in the form of trace plots to further demonstrate that the parameters are identifiable (Figs. S8–S10 for model structures 5, 7, and 8, respectively). For model structures 5 and 7, the best fit was taken to be the independent fit with the lowest median error within the last 1000 iterations. For model structure 8, the best fit was taken to be the independent fit with the second lowest median error within the last 1000 iterations, since the independent fit with the lowest median error had fluctuations in parameter values within the last 1000 iterations.
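To illustrate the sampling loop, a heavily simplified Metropolis–Hastings sketch is shown below. It assumes a flat prior, a fixed noise variance, and a symmetric Gaussian proposal in log-parameter space, and it omits the marginalization over \(\sigma^{2}\) described above; sse_model, data, theta0, and the step size are hypothetical placeholders rather than the settings used in the study.

```matlab
% Simplified Metropolis-Hastings sketch (flat prior, fixed sigma2, Gaussian proposal).
nIter = 10000; nBurn = 9000; step = 0.05; sigma2 = 1;   % illustrative settings
theta  = log(theta0);                       % random initial guess, in log space
sseOld = sse_model(exp(theta), data);       % hypothetical sum-of-squared-error function
chain  = zeros(nIter, numel(theta));
for i = 1:nIter
    prop   = theta + step * randn(size(theta));      % symmetric proposal
    sseNew = sse_model(exp(prop), data);
    if log(rand) < (sseOld - sseNew) / (2 * sigma2)  % Gaussian log-likelihood ratio
        theta = prop; sseOld = sseNew;               % accept the move
    end
    chain(i, :) = theta;
end
posterior = exp(chain(nBurn+1:end, :));     % keep the last 1000 iterations after burn-in
```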
We used the Akaike Information Criterion (AIC) to compare model structures with various combinations of regulatory mechanisms. For Table 1, AIC was calculated from median sum of squared error (SSE) values for each model structure as: $$AIC = n \times \log \left( {\frac{\text{SSE}}{n}} \right) + 2k$$ where n is the number of data points, SSE is the median error, and k is the number of parameters used to fit the model. To display the results for Fig. 6 and Figs. S3 and S4, the predicted time courses were simulated for each of the 1000 parameter sets from the posterior. The mean and standard deviation of the model predictions were quantified and shown. For dose response predictions (Fig. 7), the dose response data for rat primary beta cells was quantified using Plot Digitizer from Brelje et al. Fig. 13.11 Ali, S., Z. Nouhi, N. Chughtai, and S. Ali. SHP-2 regulates SOCS-1-mediated janus kinase-2 ubiquitination/degradation downstream of the prolactin receptor. J. Biol. Chem. 278:52021–52031, 2003. Ambrosio, R., et al. The structure of human STAT5A and B genes reveals two regions of nearly identical sequence and an alternative tissue specific STAT5B promoter. Gene 285:311–318, 2002. Apgar, J. F., J. E. Toettcher, D. Endy, F. M. White, and B. Tidor. Stimulus design for model selection and validation in cell signaling. PLoS Comput. Biol. 4:e30, 2008. Baeyens, L., S. Hindi, R. L. Sorenson, and M. S. German. β-Cell adaptation in pregnancy. Diabetes Obes. Metab. 18:63–70, 2016. Banerjee, R. R. Piecing together the puzzle of pancreatic islet adaptation in pregnancy: Pregnancy and pancreatic islets. Ann. N. Y. Acad. Sci. 1411:120–139, 2018. Basham, B., et al. In vivo identification of novel STAT5 target genes. Nucleic Acids Res. 36:3802–3818, 2008. Ben-Jonathan, N., C. R. LaPensee, and E. W. LaPensee. What can we learn from rodents about prolactin in humans? Endocr. Rev. 29:1–41, 2008. Bernal-Mizrachi, E., R. N. Kulkarni, D. K. Scott, F. Mauvais-Jarvis, A. F. Stewart, and A. Garcia-Ocana. Human-cell proliferation and intracellular signaling part 2: still driving in the dark without a road map. Diabetes 63:819–831, 2014. Brelje, T. C., N. V. Bhagroo, L. E. Stout, and R. L. Sorenson. Prolactin and oleic acid synergistically stimulate β-cell proliferation and growth in rat islets. Islets 9:e1330234, 2017. Brelje, T. C., L. E. Stout, N. V. Bhagroo, and R. L. Sorenson. Distinctive roles for prolactin and growth hormone in the activation of signal transducer and activator of transcription 5 in pancreatic islets of langerhans. Endocrinology 145:4162–4175, 2004. Brelje, T. C., A. M. Svensson, L. E. Stout, N. V. Bhagroo, and R. L. Sorenson. An immunohistochemical approach to monitor the prolactin-induced activation of the JAK2/STAT5 pathway in pancreatic islets of langerhans. J. Histochem. Cytochem. 50:365–383, 2002. Chen, H., et al. Augmented Stat5 signaling bypasses multiple impediments to lactogen- mediated proliferation in human β-Cells. Diabetes 64:3784–3797, 2015. Davidi, D., et al. Global characterization of in vivo enzyme catalytic rates and their correspondence to in vitro kcat measurements. Proc. Natl. Acad. Sci. USA 113:3401–3406, 2016. Domínguez-Bendala, J., L. Inverardi, and C. Ricordi. Regeneration of pancreatic beta-cell mass for the treatment of diabetes. Expert Opin. Biol. Ther. 12:731–741, 2012. Egli, M., R. Bertram, M. T. Sellix, and M. E. Freeman. Rhythmic secretion of prolactin in rats: action of oxytocin coordinated by vasoactive intestinal polypeptide of suprachiasmatic nucleus origin. 
Endocrinology 145:3386–3394, 2004. Egli, M., B. Leeners, and T. H. C. Kruger. Prolactin secretion patterns: basic mechanisms and clinical implications for reproduction. Reproduction 140:643–654, 2010. Elkins, P. A., et al. Ternary complex between placental lactogen and the extracellular domain of the prolactin receptor. Nat. Struct. Biol. 7:8, 2000. Finley, S. D., D. Gupta, N. Cheng, and D. J. Klinke. Inferring relevant control mechanisms for interleukin-12 signaling in naïve CD4+ T cells. Immunol. Cell Biol. 89:100–110, 2011. Freemark, M., et al. Targeted deletion of the PRL receptor: Effects on islet development, insulin production, and glucose tolerance. Endocrinology 8:1378, 2017. Friedrichsen, B. N., et al. Signal transducer and activator of transcription 5 activation is sufficient to drive transcriptional induction of cyclin D2 gene and proliferation of rat pancreatic β-cells. Mol. Endocrinol. 17:945–958, 2003. Fujinaka, Y., K. Takane, H. Yamashita, and R. C. Vasavada. Lactogens Promote Beta Cell Survival Through JAK2/STAT5 activation and Bcl-XL upregulation. J. Biol. Chem. 282:30707–30717, 2007. Galsgaard, E. D., J. H. Nielsen, and A. Møldrup. Regulation of prolactin receptor (PRLR) gene expression in insulin-producing cells. Prolactin and growth hormone activate one of the rat Prlr gene promoters via Stat5a and Stat5b. J. Biol. Chem. 274:18686–18692, 1999. Ghasemi, O., M. L. Lindsey, T. Yang, N. Nguyen, Y. Huang, and Y.-F. Jin. Bayesian parameter estimation for nonlinear modelling of biological pathways. BMC Syst. Biol. 5:S9, 2011. Goyvaerts, L., et al. Prolactin receptors and placental lactogen drive male mouse pancreatic islets to pregnancy-related mRNA changes. PLoS ONE 10:e0121868, 2015. Huang, C., F. Snider, and J. C. Cross. Prolactin receptor is required for normal glucose homeostasis and modulation of β-cell mass during pregnancy. Endocrinology 150:1618–1626, 2009. Kondegowda, N. G., A. Mozar, C. Chin, A. Otero, A. Garcia-Ocaña, and R. C. Vasavada. Lactogens protect rodent and human beta cells against glucolipotoxicity-induced cell death through Janus kinase-2 (JAK2)/signal transducer and activator of transcription-5 (STAT5) signalling. Diabetologia 55:1721–1732, 2012. Kulkarni, R. N., E.-B. Mizrachi, A. G. Ocana, and A. F. Stewart. Human-cell proliferation and intracellular signaling: Driving in the dark without a road map. Diabetes 61:2205–2213, 2012. Layden, B. T., et al. Regulation of pancreatic islet gene expression in mouse islets by pregnancy. J. Endocrinol. 207:265–279, 2010. Makaryan, S. Z., and S. D. Finley. Enhancing network activation in natural killer cells: Predictions from in silico modeling. Integr. Biol. Quant. Biosci. Nano Macro 12:109–121, 2020. Manesso, E., et al. Dynamics of β-cell turnover: evidence for β-cell turnover and regeneration from sources of β-cells other than β-cell replication in the HIP rat. Am. J. Physiol. Endocrinol. Metab. 297:E323–E330, 2009. Marino, S., I. B. Hogue, C. J. Ray, and D. E. Kirschner. A methodology for performing global uncertainty and sensitivity analysis in systems biology. J. Theor. Biol. 254:178–196, 2008. Montemurro, C., et al. Cell cycle-related metabolism and mitochondrial dynamics in a replication-competent pancreatic beta-cell line. Cell Cycle 16:2086–2099, 2017. National Diabetes Statistics Report, 2017. 20. Pepin, M. E., H. H. Bickerton, M. Bethea, C. S. Hunter, A. R. Wende, and R. R. Banerjee. Prolactin receptor signaling regulates a pregnancy-specific transcriptional program in mouse islets. 
Endocrinology 160:1150–1163, 2019. Rawlings, J. S. The JAK/STAT signaling pathway. J. Cell Sci. 117:1281–1283, 2004. Rieck, S., et al. The transcriptional response of the islet to pregnancy in mice. Mol. Endocrinol. 11:1702, 2009. Rohrs, J. A., S. Z. Makaryan, and S. D. Finley. Constructing predictive cancer systems biology models. Syst. Biol. 2018. https://doi.org/10.1101/360800. Rui, H., R. Kirken, and W. L. Farrar. Activation of receptor-associated tyrosine kinase JAK2 by prolactin. J. Biol. Chem. 269:5364–5368, 1994. Sadarangani, A., et al. In vivo and in vitro estrogenic and progestagenic actions of Tibolone. Biol. Res. 38:245–258, 2005. Shuai, K., and B. Liu. Regulation of JAK–STAT signalling in the immune system. Nat. Rev. Immunol. 3:900–911, 2003. Silva, M., et al. Erythropoietin can induce the expression of Bcl-xL through Stat5 in erythropoietin-dependent progenitor cell lines. J. Biol. Chem. 274:22165–22169, 1999. Singh, A., A. Jayaraman, and J. Hahn. Modeling regulatory mechanisms in IL-6 signal transduction in hepatocytes. Biotechnol. Bioeng. 95:850–862, 2006. Sinkoe, A., and J. Hahn. Optimal experimental design for parameter estimation of an IL-6 signaling model. Processes 5:49, 2017. Smith, T. D., M. J. Tse, E. L. Read, and W. F. Liu. Regulation of macrophage polarization and plasticity by complex activation signals. Integr. Biol. 8:946–955, 2016. Sorenson, R. L., and T. C. Brelje. Prolactin receptors are critical to the adaptation of islets to pregnancy. Endocrinology 150:1566–1569, 2009. Spégel, P., et al. Unique and shared metabolic regulation in clonal β-cells and primary islets derived from rat revealed by metabolomics analysis. Endocrinology 156:1995–2005, 2015. Stewart, A. F., et al. Human β-cell proliferation and intracellular signaling: Part 3. Diabetes 64:1872–1885, 2015. Stuart, A. M. Inverse problems: A Bayesian perspective. Acta Numer. 19:451–559, 2010. R Core Team. R: A language and environment for statistical computing. 2013. Terra, L. F., M. H. Garay-Malpartida, R. A. M. Wailemann, M. C. Sogayar, and L. Labriola. Recombinant human prolactin promotes human beta cell survival via inhibition of extrinsic and intrinsic apoptosis pathways. Diabetologia 54:1388–1397, 2011. Vasavada, R. C., et al. Targeted expression of placental lactogen in the beta cells of transgenic mice results in beta cell proliferation, islet mass augmentation, and hypoglycemia. J. Biol. Chem. 275:15399–15406, 2000. Yamada, S., S. Shiono, A. Joo, and A. Yoshimura. Control mechanism of JAK/STAT signal transduction pathway. FEBS Lett. 534:190–196, 2003. Ye, C., and J. P. Driver. Suppressors of cytokine signaling in sickness and in health of pancreatic β-cells. Front. Immunol. 7:169, 2016. Zotter, A., F. Bäuerle, D. Dey, V. Kiss, and G. Schreiber. Quantifying enzyme activity in living cells. J. Biol. Chem. 292:15838–15848, 2017. The authors would like to thank Dr. Mahua Roy, Dr. Jennifer Rohrs, Dr. Qianhui (Jess) Wu, Min Song, Sahak Makaryan, Ding Li, Colin Cess, and Patrick Gelbach for their suggestions and support. Sahak Makaryan provided valuable guidance regarding likelihood estimation parameter fitting. The work was supported by the USC Provost's Undergraduate Research Fellowship and the Undergraduate Research Associates Program (URAP) awarded to R.D.M. and a USC Diabetes & Obesity Research Institute (DORI) Pilot Grant awarded to S.K.G. and S.D.F. Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA, USA Ryland D.
Mortlock & Stacey D. Finley Departments of Pediatrics and Stem Cell Biology and Regenerative Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA Senta K. Georgia Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA Stacey D. Finley Department of Biological Sciences, University of Southern California, Los Angeles, CA, USA Ryland D. Mortlock Correspondence to Stacey D. Finley. Associate Editor Aleksander S. Popel oversaw the review of this article Below is the link to the electronic supplementary material. Supplementary material 1 (XLSX 67 kb) Supplementary material 2 (PDF 6778 kb) Mortlock, R.D., Georgia, S.K. & Finley, S.D. Dynamic Regulation of JAK-STAT Signaling Through the Prolactin Receptor Predicted by Computational Modeling. Cel. Mol. Bioeng. 14, 15–30 (2021). https://doi.org/10.1007/s12195-020-00647-8 Issue Date: February 2021 Intracellular signaling Feedback control Beta cell biology
Design of experiments with full factorial design (left), response surface with second-degree polynomial (right)
In general usage, design of experiments (DOE) or experimental design is the design of any information-gathering exercises where variation is present, whether under the full control of the experimenter or not. However, in statistics, these terms are usually used for controlled experiments. Formal planned experimentation is often used in evaluating physical objects, chemical formulations, structures, components, and materials. Other types of study, and their design, are discussed in the articles on computer experiments, opinion polls and statistical surveys (which are types of observational study), natural experiments and quasi-experiments (for example, quasi-experimental design). See Experiment for the distinction between these types of experiments or studies. In the design of experiments, the experimenter is often interested in the effect of some process or intervention (the "treatment") on some objects (the "experimental units"), which may be people, parts of people, groups of people, plants, animals, etc. Design of experiments is thus a discipline that has very broad application across all the natural and social sciences and engineering.
Contents
1 History of development
1.1 Controlled experimentation on scurvy
1.2 Statistical experiments, following Charles S. Peirce
1.2.1 Randomized experiments
1.2.2 Optimal designs for regression models
1.3 Sequences of experiments
2 Principles of experimental design, following Ronald A. Fisher
3 Example
4 Avoiding false positives
5 Discussion topics when setting up an experimental design
6 Statistical control
7 Experimental designs after Fisher
8 Human participant experimental design constraints
Controlled experimentation on scurvy
In 1747, while serving as surgeon on HMS Salisbury, James Lind carried out a controlled experiment to develop a cure for scurvy.[1] Lind selected 12 men from the ship, all suffering from scurvy. Lind limited his subjects to men who "were as similar as I could have them", that is, he provided strict entry requirements to reduce extraneous variation. He divided them into six pairs, giving each pair different supplements to their basic diet for two weeks. The treatments were all remedies that had been proposed:
A quart of cider every day
Twenty five gutts (drops) of vitriol (sulphuric acid) three times a day upon an empty stomach
One half-pint of seawater every day
A mixture of garlic, mustard, and horseradish in a lump the size of a nutmeg
Two spoonfuls of vinegar three times a day
Two oranges and one lemon every day
The men given citrus fruits recovered dramatically within a week. One of them returned to duty after six days, and the others cared for the rest. The other subjects experienced some improvement, but nothing compared to the subjects who ate the citrus fruits, which proved substantially superior to the other treatments.
Statistical experiments, following Charles S. Peirce
A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
Randomized experiments
Charles S.
Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.[2][3][4][5] Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.[2][3][4][5]
Optimal designs for regression models
Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876.[6] A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918 Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered[7] by Abraham Wald in the context of sequential tests of statistical hypotheses.[8] Herman Chernoff wrote an overview of optimal sequential designs,[9] while adaptive designs have been surveyed by S. Zacks.[10] One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.[11]
Principles of experimental design, following Ronald A. Fisher
A methodology for designing experiments was proposed by Ronald A. Fisher in his innovative books "The Arrangement of Field Experiments" (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the hypothesis that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup (also known as the "Lady tasting tea" experiment). These methods have been broadly adapted in the physical and social sciences, and are still used in agricultural engineering. The concepts presented here differ from the design and analysis of computer experiments. In some fields of study it is not possible to have independent measurements to a traceable standard. Comparisons between treatments are much more valuable and are usually preferable. Often one compares against a scientific control or traditional treatment that acts as baseline. Random assignment is the process of assigning individuals at random to groups or to different groups in an experiment. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment".[12] There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism such as tables of random numbers, or the use of randomization devices such as playing cards or dice. Assigning units to treatments at random tends to mitigate confounding, which makes effects due to factors other than the treatment appear to result from the treatment. The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units.
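As a small illustration of random allocation, the sketch below assigns experimental units to treatment groups completely at random; the number of units and treatments is arbitrary.

```matlab
% Sketch of completely randomized assignment of N units to k equal-sized treatment groups.
N = 24; k = 3;                         % illustrative sizes (N must be divisible by k)
order  = randperm(N);                  % random permutation of unit indices
groups = reshape(order, N/k, k);       % column j holds the units assigned to treatment j
```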
The results of an experiment can be generalized reliably from the experimental units to a larger population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things. Random does not mean haphazard, and great care must be taken that appropriate random methods are used. Measurements are usually subject to variation and uncertainty. Measurements are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic.[13] However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.[14] Blocking is the arrangement of experimental units into groups (blocks/lots) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality
Example of orthogonal factorial design
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information to the others. If there are T treatments and T – 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
Factorial experiments
Use of factorial experiments instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test. This example is attributed to Harold Hotelling.[9] It conveys some of the flavor of those aspects of the subject that involve combinatorial designs. Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan vs. any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; and errors on different weighings are independent. Denote the true weights by θ1, …, θ8. We consider two different experiments:
1. Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the ith object, for i = 1, ..., 8.
2. Do the eight weighings according to the following schedule and let $Y_i$ be the measured difference for $i = 1, \dots, 8$:

                left pan           right pan
1st weighing:   1 2 3 4 5 6 7 8    (empty)
2nd:            1 2 3 8            4 5 6 7
3rd:            1 4 5 8            2 3 6 7
4th:            1 6 7 8            2 3 4 5
5th:            2 4 6 8            1 3 5 7
6th:            2 5 7 8            1 3 4 6
7th:            3 4 7 8            1 2 5 6
8th:            3 5 6 8            1 2 4 7

Then the estimated value of the weight $\theta_1$ is

$$ \widehat{\theta}_1 = \frac{Y_1 + Y_2 + Y_3 + Y_4 - Y_5 - Y_6 - Y_7 - Y_8}{8}. $$

Similar estimates can be found for the weights of the other items, for example

$$ \widehat{\theta}_2 = \frac{Y_1 + Y_2 - Y_3 - Y_4 + Y_5 + Y_6 - Y_7 - Y_8}{8}. $$

The question of design of experiments is: which experiment is better? The variance of the estimate $X_1$ of $\theta_1$ is $\sigma^2$ if we use the first experiment, but if we use the second experiment the variance of the estimate given above is $\sigma^2/8$. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and it estimates all items simultaneously with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. Note that the eight estimates are all linear combinations of the same eight measurements; because the weighing schedule forms an orthogonal design, their errors are nevertheless uncorrelated. Many problems of the design of experiments involve combinatorial designs, as in this example (a short simulation comparing the two designs is sketched below).

Avoiding false positives

False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields, and experimental designs with undisclosed degrees of freedom are a problem.[15] This can lead to conscious or unconscious "p-hacking": trying multiple analyses until the desired result is obtained. It typically involves manipulating, perhaps unconsciously, the statistical analysis and the degrees of freedom until they return a figure below the p < 0.05 level of statistical significance.[16][17] The design of the experiment should therefore include a clear statement of the analyses to be undertaken. Clear and complete documentation of the experimental methodology is also important in order to support replication of results.[18]

Discussion topics when setting up an experimental design

An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment.[19] An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:

How many factors does the design have, and are the levels of these factors fixed or random?
Are control conditions needed, and what should they be?
Manipulation checks: did the manipulation really work?
What are the background variables?
What is the sample size? How many units must be collected for the experiment to be generalisable and to have enough power?
What is the relevance of interactions between factors?
What is the influence of delayed effects of substantive factors on outcomes?
How do response shifts affect self-report measures?
How feasible is repeated administration of the same measurement instruments to the same units on different occasions, with a post-test and follow-up tests?
What about using a proxy pretest?
Are there lurking variables?
Should the client/patient, researcher or even the analyst of the data be blind to conditions?
What is the feasibility of subsequently applying different conditions to the same units?
How many control and noise factors should be taken into account?

Statistical control

It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.[20] To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., perceptions of source credibility) do not skew the findings of the study. A manipulation check is one example of a control check: manipulation checks allow investigators to isolate the chief variables and to strengthen support that these variables are operating as planned.

One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is then said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)) and antecedent variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero-order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.

Experimental designs after Fisher

Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. At about the same time, C. R. Rao introduced the concept of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries, and were subsequently also embraced by US industry, albeit with some reservations. In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards. Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.
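Returning to the weighing example above, the following minimal Python sketch simulates both designs and compares the empirical variance of the estimate of θ1. The true weights, error scale and number of simulated repetitions are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.2, 0.7, 1.5, 0.9, 1.1, 0.8, 1.3, 1.0])  # assumed true weights
sigma = 0.1                                                   # assumed error SD
n_sim = 20000

# Design 2: Hotelling's weighing schedule; +1 = object in left pan, -1 = right pan.
X2 = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1,  1,  1, -1, -1, -1, -1,  1],
    [ 1, -1, -1,  1,  1, -1, -1,  1],
    [ 1, -1, -1, -1, -1,  1,  1,  1],
    [-1,  1, -1,  1, -1,  1, -1,  1],
    [-1,  1, -1, -1,  1, -1,  1,  1],
    [-1, -1,  1,  1, -1, -1,  1,  1],
    [-1, -1,  1, -1,  1,  1, -1,  1],
])

est1 = np.empty(n_sim)
est2 = np.empty(n_sim)
for i in range(n_sim):
    # Design 1: weigh each object separately, X_i = theta_i + error.
    x = theta + rng.normal(0.0, sigma, size=8)
    # Design 2: each Y_i is the signed combination of all eight weights plus error.
    y = X2 @ theta + rng.normal(0.0, sigma, size=8)
    est1[i] = x[0]
    est2[i] = (y[0] + y[1] + y[2] + y[3] - y[4] - y[5] - y[6] - y[7]) / 8

print("Var(theta1 hat), design 1:", est1.var())   # close to sigma^2
print("Var(theta1 hat), design 2:", est2.var())   # close to sigma^2 / 8
```

The empirical variances should come out close to σ² for the first design and σ²/8 for the second, matching the calculation in the example.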
As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: in evaluating statistical procedures such as experimental designs, frequentist statistics studies the sampling distribution, while Bayesian statistics updates a probability distribution on the parameter space.

Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, C. R. Rao, R. C. Bose, J. N. Srivastava, S. S. Shrikhande, D. Raghavarao, W. G. Cochran, O. Kempthorne, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. A. Nelder, R. A. Bailey, J. Kiefer, W. J. Studden, A. Pázman, F. Pukelsheim, D. R. Cox, H. P. Wynn, A. C. Atkinson, G. E. P. Box and G. Taguchi.[citation needed] The textbooks of D. Montgomery and R. Myers have reached generations of students and practitioners.[21][22][23]

Human participant experimental design constraints

Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints depend on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality, affecting both clinical (medical) trials and behavioral and social science experiments.[24] In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans.[25] Balancing these constraints are views from the medical field.[26] Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "... it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided ..." (p 393)

See also

Adversarial collaboration
Bayesian experimental design
Computer experiment
Controlling for a variable
Experimetrics (application of econometrics to economics experiments)
First-in-man study
Glossary of experimental design
Instrument effect
Law of large numbers
Manipulation checks
Multifactor design of experiments software
Probabilistic design
Protocol (natural sciences)
Quasi-experimental design
Randomized block design
Robust parameter design
Supersaturated design
Survey sampling
Taguchi methods

References

↑ Peirce, C. S. (1876), "Note on the Theory of the Economy of Research", Coast Survey Report, pp. 197–201 (actually published 1879), NOAA PDF Eprint. Reprinted in Collected Papers 7, paragraphs 139–157, and in Writings 4, pp. 72–78.
↑ Johnson, N.L. (1961). "Sequential analysis: a survey." Journal of the Royal Statistical Society, Series A, 124 (3), 372–411 (pp. 375–376).
↑ Wald, A. (1945). "Sequential Tests of Statistical Hypotheses", Annals of Mathematical Statistics, 16 (2), 117–186.
↑ Chernoff, H. (1972). Sequential Analysis and Optimal Design. SIAM Monograph.
↑ Zacks, S. (1996). "Adaptive Designs for Parametric Models". In: Ghosh, S. and Rao, C. R. (Eds), Design and Analysis of Experiments, Handbook of Statistics, Volume 13. North-Holland. ISBN 0-444-82061-2 (pp. 151–180).
↑ Creswell, J.W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Prentice Hall, p. 300. ISBN 0-13-613550-1
↑ Ader, Mellenberg & Hand (2008). Advising on Research Methods: A Consultant's Companion.
↑ Bisgaard, S. (2008). "Must a Process be in Statistical Control before Conducting Designed Experiments?", Quality Engineering, ASQ, 20 (2), pp. 143–176.

Further reading

Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), Popular Science Monthly, vols. 12–13. Relevant individual papers:
(1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604–615. Internet Archive Eprint.
(1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705–718. Internet Archive Eprint.
(1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203–217. Internet Archive Eprint.
(1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470–482. Internet Archive Eprint.
Peirce, C. S. (1883), "A Theory of Probable Inference", Studies in Logic, pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company, ISBN 90-272-3271-7)
Box, G. E. P., & Draper, N. R. (1987). Empirical Model-Building and Response Surfaces. New York: Wiley.
Box, G. E. P., Hunter, W. G., & Hunter, J. S. (2005). Statistics for Experimenters: Design, Innovation, and Discovery (2nd ed.). Wiley. ISBN 0-471-71813-0
Mason, R. L., Gunst, R. F., & Hess, J. L. (1989). Statistical Design and Analysis of Experiments with Applications to Engineering and Science. New York: Wiley.
Pearl, Judea (2000). Causality: Models, Reasoning and Inference. Cambridge University Press.
Peirce, C. S. (1876), "Note on the Theory of the Economy of Research", Appendix No. 14 in Coast Survey Report, pp. 197–201, NOAA PDF Eprint. Reprinted 1958 in Collected Papers of Charles Sanders Peirce 7, paragraphs 139–157, and in 1967 in Operations Research 15 (4): pp. 643–648, abstract at JSTOR.
Taguchi, G. (1987). Jikken keikakuho (3rd ed., Vols. I & II). Tokyo: Maruzen. English translation edited by D. Clausing: System of Experimental Design. New York: UNIPUB/Kraus International.

External links

A chapter from the NIST/SEMATECH Handbook on Engineering Statistics at NIST
Box–Behnken designs from the NIST/SEMATECH Handbook on Engineering Statistics at NIST
Articles on Design of Experiments
Case Studies and Articles on Design of Experiments (DOE)
Czitrom (1999), "One-Factor-at-a-Time Versus Designed Experiments", American Statistician, 53, 2.
Design Resources Server, a mobile library on Design of Experiments; the server is dynamic in nature and new additions are posted from time to time.
Gosset: A General-Purpose Program for Designing Experiments
SAS Examples for Experimental Design
Matlab SUrrogate MOdeling Toolbox (SUMO Toolbox): Matlab code for Design of Experiments + Sequential Design + Surrogate Modeling
Design DB: a database of combinatorial, statistical, experimental block designs
The I-Optimal Design Assistant: a free on-line library of I-Optimal designs
Warning Signs in Experimental Design and Interpretation, by Peter Norvig, chief of research at Google
Knowledge Base, Research Methods: a good explanation of the basic idea of experimental designs
The Controlled Experiment vs. The Comparative Experiment: "how to experiment" for science fair projects
Spall, J. C. (2010), "Factorial Design for Choosing Input Values in Experimentation: Generating Informative Data for System Identification", IEEE Control Systems Magazine, vol. 30(5), pp. 38–53. General introduction from a systems perspective
DOE used for engine calibration reduces fuel consumption by 2 to 4 percent
The spectral signature of interstellar scintillation

Twinkle twinkle little black hole

When the light from stars passes through the Earth's atmosphere, the light waves are distorted, producing the "twinkling" effect we see at night. Analogously, when the radio light from a quasar passes through a cloud of material in the interstellar medium, the radio waves are distorted and produce a signal that varies strongly over days to months as observed on Earth. By studying this variability, we can learn more about the material in the interstellar medium.

(Figure, from left to right: (a) light is emitted from the AGN as plane-parallel waves; (b) this light passes through a cloud of material in the interstellar medium; (c) the light waves emerge from the cloud twisted and distorted, both in space and time; (d) these distorted waves are detected by a radio observatory and are seen as a highly variable lightcurve, as in this example from the source PKS 1257-326 (Bignall et al. 2003).)

During the summer of 2016-2017, I undertook a project at the CSIRO Astronomy and Space Science (CASS) office in Perth, Australia with Dr Cormac Reynolds to study this kind of quasar variability. The properties of scintillation depend on the properties of the medium and on the observing frequency, and can be characterised primarily by the modulation index (the rms of the flux variations divided by the mean flux), alongside the structure function:

$$ M = \frac{\sqrt{\langle F^2 \rangle - \langle F \rangle^2}}{\langle F \rangle} $$

$$ SF(\tau) = \langle |F(t) - F(t+\tau)|^2 \rangle $$

where F is the flux at the observing frequency, the angle brackets denote a time average, and τ is the time lag between measurements. Together these give the typical strength and timescale of the variation.

We used a sample of 2232 quasars from the Australian Telescope Extreme Scattering Events Survey (ATESE) pipeline observed from 2014-2016, across a frequency range of 2-12 GHz and covering a wide range of positions on the sky. Limiting ourselves to those sources with >1% variation on a timescale of 50 days, we found a slight bias towards greater variation in the galactic plane (b = 0).

(Figure: the strength of variability as a function of distance from the galactic plane, where b = 0 denotes the line of the galactic plane. There is a slight, but not statistically significant, preference towards more variability at b = 0.)
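To make these definitions concrete, here is a minimal Python sketch that computes a modulation index and a binned structure function from a lightcurve. The function names, the toy sinusoidal lightcurve and the lag-bin edges are assumptions made for illustration; they are not part of the ATESE pipeline.

```python
import numpy as np

def modulation_index(flux):
    """RMS fractional variation of a lightcurve: sigma_F / <F>."""
    flux = np.asarray(flux, dtype=float)
    return flux.std() / flux.mean()

def structure_function(times, flux, lag_bins):
    """Mean squared flux difference <|F(t) - F(t + tau)|^2>, binned by time lag tau."""
    times = np.asarray(times, dtype=float)
    flux = np.asarray(flux, dtype=float)
    # All pairwise lags and squared flux differences.
    dt = np.abs(times[:, None] - times[None, :])
    df2 = (flux[:, None] - flux[None, :]) ** 2
    iu = np.triu_indices(len(times), k=1)          # unique pairs only
    lags, diffs = dt[iu], df2[iu]
    sf = np.full(len(lag_bins) - 1, np.nan)
    for i in range(len(lag_bins) - 1):
        in_bin = (lags >= lag_bins[i]) & (lags < lag_bins[i + 1])
        if in_bin.any():
            sf[i] = diffs[in_bin].mean()
    return sf

# Toy example: a noisy sinusoidal lightcurve sampled over 200 days.
t = np.linspace(0, 200, 120)
f = 1.0 + 0.05 * np.sin(2 * np.pi * t / 30) + np.random.default_rng(1).normal(0, 0.01, t.size)
print("modulation index:", modulation_index(f))
print("structure function:", structure_function(t, f, lag_bins=np.arange(0, 110, 10)))
```

In practice, the characteristic timescale of the variability can be read off as the lag at which the structure function saturates.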